image description (contains clarifications on background elements)

Lots of different seemingly random images in the background, including some fries, Mr. Krabs, a girl in overalls hugging a stuffed tiger, a Mark Zuckerberg “big brother is watching” poster, two images of Fluttershy (a pony from My Little Pony), one of them reading “u only kno my swag, not my lore”, a picture of Parkzer from the streamer DougDoug, and a slider gameplay element from the rhythm game osu!. The background is made light so that the text can be easily read. The text reads:

i wanna know if we are on the same page about ai.
if u disagree with any of this or want to add something,
please leave a comment!
smol info:
- LM = Language Model (ChatGPT, Llama, Gemini, Mistral, ...)
- VLM = Vision Language Model (Qwen VL, GPT4o mini, Claude 3.5, ...)
- larger model = more expensive to train and run
smol info end
- training processes of current AI systems are often
clearly unethical and very bad for the environment :(
- companies are really bad at selling AI to us and
giving it a good purpose for average-joe-usage
- medical ai (e.g. protein folding) is almost only positive
- ai for disabled people is also almost only positive
- the idea of some AI machine taking our jobs is scary
- "AI agents" are scary. large companies are training
them specifically to replace human workers
- LMs > image generation and music generation
- using small LMs for repetitive, boring tasks like
classification feels okay
- using the largest, most environmentally taxing models
for everything is bad. Using a mixture of smaller models
can often be enough
- people with bad intentions using AI systems results
in bad outcomes
- ai companies train their models however they see fit.
if an LM "disagrees" with you, that's the training's fault
- running LMs locally feels more okay, since they need
less energy and you can control their behaviour
I personally think more positively about LMs, but almost
only negatively about image and audio models.
Are we on the same page? Or am I an evil AI tech sis?

IMAGE DESCRIPTION END


i hope this doesn’t cause too much hate. i just wanna know what u people and creatures think <3

  • Smorty [she/her]@lemmy.blahaj.zoneOP · 15 hours ago

    that is interesting. i know that there are plenty of plant recognition ones, and recently there have been some classifiers specifically trained on images of human skin to see if a spot is a tumor or not. that one is better than a good human doctor in their field, so i wonder what happened to that mushroom classifier. Maybe it is too small to generalize or has been trained in a specific environment.

    • tburkhol@lemmy.world · 15 hours ago

      I haven’t looked closely enough to know, but I recall medical image analytics being “better than human” well before the current AI/LLM rage. Like, those systems use machine learning, but in a more deterministic, more conventional algorithm sense. I think they are also less worried about false positives, because the algorithm is always assumed to be checked by a human physician, so my impression is that the real sense in which medical image analysis is ‘better’ is that it identifies smaller or more obscure defects that a human quickly scanning the image might overlook.

      If you’re using a public mushroom identification AI as the only source for life-and-death choice, then false positives are a much bigger problem.
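      A toy sketch of the threshold tradeoff I mean (everything here is invented, just to illustrate the point):

```python
# Toy illustration: choosing a decision threshold for a screening classifier.
# All scores are made up; they stand in for a model's "looks like a tumor"
# probability for five hypothetical scans.

def flag_for_review(scores, threshold):
    """Return the indices of cases whose score crosses the threshold."""
    return [i for i, s in enumerate(scores) if s >= threshold]

scores = [0.05, 0.2, 0.35, 0.6, 0.9]

# Medical screening: err on the side of flagging too much, because a
# physician reviews every flagged case anyway, so false positives are cheap.
print(flag_for_review(scores, 0.3))  # [2, 3, 4]

# A mushroom app used as the ONLY authority has the opposite cost structure:
# one false "edible" can be fatal, and there is no human check downstream.
```
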

      • Smorty [she/her]@lemmy.blahaj.zoneOP · 15 hours ago

        yes, that is what i have heard too. there was a news thing some days ago that this “cancer scanner” thing will be available in two years to all doctors. so that’s great! but yes, we very much still need a human to watch over it, so its out-of-distribution-generations stay in check.

    • jawa21@lemmy.sdf.orgM · 15 hours ago

      Do not trust AI to tell you if you can eat a mushroom. Ever. The same kinds of complexity go into medicine. Sure, the machine learning process can flag something as cancerous (for example), but it will always and forever need human review unless we somehow completely change the way machine learning works and speed it up by an order of magnitude.

      • Smorty [she/her]@lemmy.blahaj.zoneOP · 15 hours ago

        yeah, we still very much need to have real humans go “yes, this is indeed cancer”, but this ai cancer detection feels like a reasonable “first pass” to quickly get a somewhat good estimation, rather than no estimation at all where doctors are lacking.
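        a tiny made-up sketch of what that “first pass” could look like (case names and scores are invented, not from any real system):

```python
# Toy sketch of "AI as first pass": the model decides nothing on its own,
# it only orders the radiologist's worklist so the likely-cancer scans
# get human eyes first. Every case is still read by a person.

def triage(cases):
    """cases: list of (case_id, model_score) pairs. Highest risk first."""
    return sorted(cases, key=lambda c: c[1], reverse=True)

worklist = [("scan-a", 0.12), ("scan-b", 0.87), ("scan-c", 0.45)]
for case_id, score in triage(worklist):
    print(case_id, score)  # scan-b first, scan-a last
```
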

        • agegamon@beehaw.org · edited · 9 minutes ago

          Sorry in advance for being captain obvious, but I feel like I can’t get over this. Your comment is valuable and I completely agree with your take here, but then the elephant in the room is: how do the people with power actually choose to use these tools? It’s not like I can effect change on healthcare AI use on my own.

          So yes, it really can be a first pass, good-sanity-check type of tool. It could help a good doctor if it was employed in a sane and useful way. And if the people with power over the system choose to use it that way, I believe it would be a genuine benefit to a majority of humanity, worth the cost of its creation and maintenance.

          Or, it could be used to second guess the doctors, cram more cases through without paying them fairly, or “justify” not having enough qualified experts to match our collective need.

          Just framing how it is used a little bit differently suddenly takes us from genuine benefit to humanity, into profit-seeking for the 1% and lower quality of life for the remainder of us. That is by far my largest concern with this. I suppose that’s my largest concern with a lot of things right now.