I followed these steps, but just so happened to check on my mason jar 3-4 days in and saw tiny carbonation bubbles rapidly rising throughout.

I thought that may just be part of the process but double checked with a Google search on day 7 (when there were no bubbles in the container at all).

Turns out I had just grown a botulism culture, and garlic in olive oil specifically is a fairly common way to grow this biotoxin.

Had I not checked on it 3-4 days in I’d have been none the wiser and would have Darwinned my entire family.

Prompt with care and never trust AI dear people…

  • skillissuer@discuss.tchncs.de · 6 months ago

    oh but you see, it’s “hallucination” when the LLM is wrong and it’s hype-cycle fuel when it’s correct. no, LLMs don’t “hallucinate”; that implies this state is peculiar, isolated, triggered by very specific circumstances. LLMs bullshit all the time, sometimes they are right, sometimes not, and the process that produces both types of response is the same. pushing for “hallucination” tries to obscure that. use of “hallucination” also implies that LLMs know something; they don’t, by design. it just so happens that if they “get” things right, it’s because it appeared in the training material enough times to make an impression on the model.

    • 𝘋𝘪𝘳𝘬@lemmy.ml · 6 months ago

      LLMs bullshit all the time

      Bullshitting to me is giving intentionally wrong statements. LLMs do not generate intentionally wrong statements. Saying they do implies intelligence.

      LLMs know nothing nor are they intelligent. They also are not right or wrong, they generate output based on statistics.

      “Hallucination” as a term for “AIs” making things up has been used since the early 2000s (even if its meaning has changed since then).