• 0 Posts
  • 151 Comments
Joined 1 year ago
Cake day: June 20th, 2023


  • Or you really enjoy a hobby, but your hyperfocus makes you research the hobby instead of doing it. E.g. you like photography and your hyperfocus kicks in while researching places to go take photos, or gear to buy… Or you spend hours choosing the best cycling route until it’s too dark or the weather changes, and you go “What happened to my beautiful afternoon??”.

    My hyperfocus tends to kick in whenever the ADHD gremlin inside my brain chooses, not necessarily when I’m doing something I enjoy. I wish that were always the case.


  • In most of Europe, Model 3 prices match those of the Polestar 2 pretty closely. The difference in build quality between the two is night and day. The Tesla feels like a Chrysler/Dodge Neon in comparison, with leather as its only concession to niceness.

    The fact that in Europe they somehow count as “premium” rather than budget cars within their category blows my mind.


  • Jrockwar@feddit.uk to AI@lemmy.ml · How reliable are modern LLMs? · edited · 1 month ago

    The least unreliable LLM I’ve found by far is Perplexity, in Pro mode. (By the way, if you want to try it out, you get a few free uses a day.)

    The reason is that Pro mode doesn’t just retrieve and spit out information from the model’s internal memory; instead, it uses the model to launch multiple search queries, summarises the pages it finds, and then gives you that summary.
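    To make the “search first, then summarise” idea concrete, here’s a minimal sketch of that pattern. It’s not Perplexity’s actual implementation; web_search() and llm_summarise() are hypothetical stand-ins for a real search API and a real LLM call, returning canned data so the example runs on its own.

    ```python
    # Minimal sketch of a "retrieve, then summarise" pipeline, as described above.
    # web_search() and llm_summarise() are hypothetical stand-ins for a real
    # search API and a real LLM call; they return canned data so the script runs.

    def web_search(query: str, max_results: int = 3) -> list[dict]:
        """Stand-in for a search API; returns (title, url, text) records."""
        return [
            {
                "title": f"Result {i} for {query!r}",
                "url": f"https://example.com/{i}",
                "text": f"Placeholder page text about {query}.",
            }
            for i in range(1, max_results + 1)
        ]

    def llm_summarise(question: str, pages: list[dict]) -> str:
        """Stand-in for an LLM call that answers only from the supplied pages."""
        sources = ", ".join(page["url"] for page in pages)
        return f"Answer to {question!r}, grounded in: {sources}"

    def answer(question: str) -> str:
        # 1. Turn the question into one or more search queries.
        #    (A real system would let the model rewrite or expand the question.)
        queries = [question]
        # 2. Retrieve pages for every query.
        pages = [page for q in queries for page in web_search(q)]
        # 3. Have the model summarise the retrieved pages, with citations,
        #    instead of answering from its internal memory.
        return llm_summarise(question, pages)

    if __name__ == "__main__":
        print(answer("How reliable are modern LLMs?"))
    ```

    The key design point is step 3: the model is only asked to summarise what it just retrieved, which is why any remaining hallucinations tend to be limited to the summarisation step.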

    Other LLMs try to answer “from memory” and then add some links at the bottom for fact-checking, whereas Perplexity’s answers come straight from the web, so they’re usually quite good.

    However, depending on how critical the task is, I still check that each tidbit of information has one or two links next to it and that the links actually talk about the right thing, and if it’s genuinely critical to get it right, I verify the data myself. I use it as a beefier search engine, and it works great because it limits the possible hallucinations to the summarisation of the pages. But it doesn’t eliminate the possibility completely, so you still need to do some checking.