Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post. There's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this)

    • froztbyte@awful.systems (OP) · 3 points · 5 days ago

      I keep wanting to give zed a serious go (even just to try it out because it looks interesting) but there are some design decisions that are raisedeyebrow.gif and feel like a very telling thing about the mindset of the devs

      fresh install:

      • at start, it immediately attempts to start making connections to github copilot and a couple of other network sources. no user prompt, no indication that it's going to be doing this, no indication that it is doing it
      • figuring out how to turn said undesired features off was not documented, only semi-answered in passing elsewhere (also: I applied those settings and it still did a bunch of shit, so I had to pull the repo and scratch through it myself…)
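
      for anyone attempting the same dance, the relevant switches live in zed's settings.json (which accepts comments). something roughly like the below is where I'd start, though treat the exact key names as assumptions to verify against the current docs, since zed has renamed settings between releases:

```json
{
  // documented telemetry switches; both default to true
  "telemetry": {
    "diagnostics": false,
    "metrics": false
  },
  // older releases gated copilot behind a feature flag like this;
  // newer ones use a differently named key, so check your version
  "features": {
    "copilot": false
  }
}
```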

      while I'm certainly from the older guard of crotchety, I still think it's fucking reasonable to ask the user before your software goes off and does shit. you don't even have to overload them with requests, you can make it granular with a customize button. this shit has been solved in fucking windows application installers since the goddamn 00s

      christ I'm gonna stay angry at the last decade for a long time

  • o7___o7@awful.systems · 23 points · 8 days ago (edited)

    Update on LLM reviewer situation:

    PM is down to let us pitch them our argument. Good news: PM seems like a cool person, is open minded, and is being pretty frank about the forces at work here. Bad news: taking action on this will open a whole can of worms, so any proof has to be ironclad. After conferring with our local grant wizards, the battle plan is to crank out a 15 minute pitch consisting of:

    • a 2 min elevator pitch of our tech, highlighting what the reviews mangled
    • intro to LLMs for people who know what glycosylation is
    • intro to semiotics for the same
    • show how transformer architectures transform symbols into symbols to produce text-shaped objects without actual intent, ideas, or context (and why "automated AI detection" is also bullshit).
    • show a few examples of plausible-at-first-glance gen-ai slop (the nonexistent turkish fortress, mouse dck, etc)
    • Highlight how our weird reviews (both good and bad) fit exactly into this bin (absolutely mis-interpreting a table, inventing a bacterial species we didn't use and talking shit about it, miscounting our team members, etc)

    We'll be leaning on the Stochastic Parrot paper pretty hard, because it's a good entry into the field on the skeptical side and is just well constructed in general. I'm also on the hunt for a simplified diagram of how LLMs convert tokens to arrays to tokens, from the original transformer literature. Unfortunately, so much of the literature is obscurantist on purpose, and I want to avoid falling into the "It can't be that stupid" trap. Any pointers in that direction are most welcome!
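
    In the meantime, the gist I want the diagram to convey fits in a few lines of toy code. This is a deliberately dumbed-down sketch with made-up numbers, not the real architecture (no attention, no learned weights, no training); the only point is that every step from token to token is plain arithmetic on arrays, with no slot anywhere for intent, ideas, or context:

```python
# Toy sketch of an LLM's token -> array -> token loop, with made-up numbers.
# A real transformer has attention, many layers, and billions of learned
# weights; the shape (embed, mix, unembed, pick max) is the part that matches.

VOCAB = ["the", "cat", "sat"]

# "embedding": each token id maps to a vector (hand-picked 2-d vectors here)
EMBED = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [0.7, 0.7]}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def next_token_id(token_ids):
    # 1. tokens -> vectors
    vectors = [EMBED[t] for t in token_ids]
    # 2. squash the context into one vector (real models use attention;
    #    a plain average stands in for it here)
    context = [sum(col) / len(vectors) for col in zip(*vectors)]
    # 3. vector -> one score per vocabulary entry ("unembedding"), take the max
    scores = [dot(context, EMBED[i]) for i in range(len(VOCAB))]
    return max(range(len(scores)), key=scores.__getitem__)

# "the cat" -> the next token is chosen by vector arithmetic alone
print(VOCAB[next_token_id([0, 1])])
```

    Every name and number above is invented for illustration; only the overall pipeline is what the transformer papers describe.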

    Wish us luck, heh!

    • self@awful.systems · 10 points · 8 days ago

      good luck! it sounds like you're coming in remarkably well-prepared, so unless they're gonna go fingers-in-ears (and it sounds like the PM's better than that), you're at least likely to make an impact

      Unfortunately, so much of the literature is obscurantist on purpose

      between this and all the SEO on OpenAI's marketing horseshit and breathlessly parroted press releases, it's exhausting to find good sources for how any of this stuff actually works in reality. shit, I've had old primary sources on things like Sora get buried after OpenAI's promises didn't pan out. I'm hoping you can find what you need; our back archives might have a few links if you haven't searched through here yet.

  • BigMuffin69@awful.systems · 22 points · 11 days ago

    Actual message I got while renewing my insurance plan last night. Thank you for adding a shitty chat bot which will give me false information about my life and death decisions, bravo.

    • YourNetworkIsHaunted@awful.systems · 5 points · 8 days ago

      This tool solely exists so that you can ask it questions and get assistance, but also we disavow any responsibility for the answers to the questions we just told you to ask it. Has this kind of clause been held up in court anywhere? Like, I'm sure it has, but it seems like the same logic would be ridiculous in any other context. Like, consider the fraught legal history of the anarchist cookbook.

      • corbin@awful.systems · 12 points · 12 days ago

        It's almost completely ineffective, sorry. It's certainly not as effective as exfiltrating weights via neighborly means.

        On Glaze and Nightshade, my prior rant hasn't yet been invalidated and there's no upcoming mathematics which tilts the scales in favor of anti-training techniques. In general, scrapers for training sets are now augmented with alignment models, which test inputs to see how well the tags line up; your example might be rejected as insufficiently normal-cat-like.

        I think that "force-feeding" is probably not the right metaphor. At scale, more effort goes into cleaning and tagging than into scraping; most of that "forced" input is destined to be discarded or retagged.

        • froztbyte@awful.systems (OP) · 11 points · 12 days ago

          yeah this is the thing I've been thinking a lot about

          fucking reCaptcha is literally mass-weaponising users for data filtration, and there is no good counter besides just not using reCaptcha (which is something one can't easily pull off without things like regulatory action, massive reputational problems that make people gtfo, etc)

          I have similar worries about cloudflare being such a massive chokepoint and using that position to enable "ai bot filter" services. feels extremely monopolistic, but ianal and I'm not entirely sure what the case grounds/structure on that would be (if any)

          the only other viable strategy at the moment is fully breaking contact with any potential bad traffic systems, and that's extremely fucking dire because that's yet another nail in the coffin of the increasingly less open internet

          • bitofhope@awful.systems · 9 points · 12 days ago

            The whole Cloudflare bot detection is so weird and eerie. I've had issues where I can't get past it, presumably just because I'm using some in-application browser just to get a login cookie, but other times it just lets fucking curl through no questions asked.

            • flavia@lemmy.blahaj.zone · 5 points · 11 days ago

              it just lets fucking curl through no questions asked

              Fucking what. I've heard of sites blocking curl, and I've been able to get around it by copying the user agent and sometimes cookies from the browser. Now I'm cursed with the knowledge that I could probably just scrape stuff from everywhere

      • Soyweiser@awful.systems · 6 points · 12 days ago

        I saw people say they would add a 10%-opacity layer of the photo of Musk with Epstein's accomplice (whose name I forgot for a second and am too lazy to look up) over their images. Would be nice if there was a tool to do so automatically. (Not that I post on twitter anymore.)

        • swlabr@awful.systems · 6 points · 12 days ago

          tbh that sounds like a pretty easy script to write! Too bad I am not near a computer rn

          • bitofhope@awful.systems · 5 points · 12 days ago

            I got nerd sniped into trying to resize felons_musk_and_maxwell.webp to the same size as some base image before compositing it on top with a 10% dissolve in the same magick invocation, but I need to sleep so I'm giving up for now.
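
            for whoever picks this up: the compositing step itself is just per-pixel linear blending, out = 0.9 * base + 0.1 * overlay per channel. a from-scratch sketch of that arithmetic (resizing and file I/O via Pillow or magick are deliberately left out, and the pixel values are made up for illustration):

```python
# Per-pixel 10% blend: out = 0.9 * base + 0.1 * overlay, per channel.
# Assumes both images are already the same size; resizing and file I/O
# (Pillow, ImageMagick, etc.) are left out to keep the arithmetic visible.

def blend_pixel(base, overlay, alpha=0.1):
    # each pixel is an (r, g, b) tuple of 0-255 ints
    return tuple(round((1 - alpha) * b + alpha * o) for b, o in zip(base, overlay))

def blend_image(base_pixels, overlay_pixels, alpha=0.1):
    # both arguments are flat lists of pixels of equal length
    return [blend_pixel(b, o, alpha) for b, o in zip(base_pixels, overlay_pixels)]

# a white pixel with a black overlay at 10% comes out light grey
print(blend_pixel((255, 255, 255), (0, 0, 0)))
```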

          • ShakingMyHead@awful.systems · 5 points · 12 days ago

            Wouldn't really need a script, though. Just open up photoshop or GIMP and add a layer after everything is finished.

            • Soyweiser@awful.systems · 6 points · 12 days ago

              But that doesn't scale properly; ideally you want some sort of browser extension that just automatically does it for you before the data gets sent to twitter.

    • antifuchs@awful.systems · 10 points · 12 days ago

      They added sleeps to training jobs? Sounds like they deserve a raise for improving energy efficiency instead…

    • luciole (he/him)@beehaw.org · 6 points · 12 days ago

      I thought they were gonna do that themselves by feeding on their own outputs littered all over the www. Maybe they can use some help.

  • Rinn@awful.systems · 20 points · 10 days ago

    A publicly funded radio station in my city has fired all of its hosts and replaced them with 3 AI "hosts" (non-English link).

    They're trying to defend this by saying that all of the hosts were just independent contractors and AI is not the main reason they're firing them, and that the AI thing is just going to be "an experiment to appeal to Gen Z". Fortunately, most people's response seems to be "fuck off with this crap".

    I just… can't with this. Even if they really were firing the hosts anyway (which is possible), I absolutely hate that they are using public money to run "experiments" with AI media. Heads should roll for this.

    • skillissuer@discuss.tchncs.de · 10 points · 10 days ago (edited)

      I just… can't with this. Even if they really were firing the hosts anyway (which is possible), I absolutely hate that they are using public money to run "experiments" with AI media. Heads should roll for this.

      I think it might be code for firing these people, but technically not, because they just terminated contracts with 15 single-person companies, so they never really hired them in the first place

  • BlueMonday1984@awful.systems · 20 points · 9 days ago

    Update on the character.ai lawsuit:

    Gizmodo just reported on the story: in addition to the suicide that kicked this litigation off, they've also discovered an hour-long screen recording where a test account (self-reported as thirteen years old) gets sexted relentlessly by the site's chatbots.

    So, in addition to driving one specific teen to suicide, character.ai is also facing accusations that their bots are sexually harassing children.

  • gerikson@awful.systems · 19 points · 12 days ago (edited)

    The Bookseller: Penguin Random House underscores copyright protection in AI rebuff

    Penguin Random House (PRH) has amended its copyright wording across all imprints globally, confirming it will appear "in imprint pages across our markets". The new wording states: "No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems", and will be included in all new titles and any backlist titles that are reprinted.

    Now that the content mafia has realized GenAI isn't gonna let them get rid of all the expensive and troublesome human talent, it's time to give Big AI a wedgie.

    • bitofhope@awful.systems · 14 points · 12 days ago

      It's weird how rarely I see people point this out, but in theory this kind of boilerplate should be technically meaningless. If copyright protections include the privilege to use the work for training a machine learning algorithm, you need explicit permission anyway. OTOH if it's fair use or otherwise not something copyright law is concerned with, the copyright holder's objection doesn't matter.

      For the record, I think AI models are derivative works, and thus they're not only infringing on typical "all rights reserved" works, but also on things such as Free software whose license terms require attribution if used in derivative work, and especially on share-alike copyleft licensed work.

      • gerikson@awful.systems · 12 points · 12 days ago

        I think it's pretty well-known that Spotify got all its initial music from Oink. They moved fast, got dominant, and were able to present the record labels with a big audience prepared to pay for streaming music. The labels quickly ensured they'd get the lion's share of that revenue.

        OpenAI and friends tried the same thing: scrape everything, build AGI, reap the rewards. Except it didn't work, and they're in a much worse position morally. Even if they can get a judgement that what they're doing is legal, it will cost them a lot in litigation fees, coupled with the public perception that these culture vampires are ripping off the poor honest author. Not a good place to be in.

    • BlueMonday1984@awful.systems · 12 points · 12 days ago (edited)

      Now that the content mafia has realized GenAI isn't gonna let them get rid of all the expensive and troublesome human talent, it's time to give Big AI a wedgie.

      Considering the massive(ly inflated) valuations running around Big AI and the massive amounts of stolen work that powers the likes of CrAIyon, ChatGPT, DALL-E and others, I suspect the content mafia is likely gonna try and squeeze every last red cent they can out of the AI industry.

      • YourNetworkIsHaunted@awful.systems · 12 points · 11 days ago

        At some point, something is going to reveal that all the money in AI has gone into power costs for datacenters and NVidia chips, and that the AI companies themselves aren't doing so hot. I hope it's the discovery process for some of the inevitable lawsuits.

  • skillissuer@discuss.tchncs.de · 17 points · 10 days ago (edited)

    this is peak AI. you might not like it, but this is what the top of the bubble looks like:

    Radio station uses AI to interview the ghost of a dead Nobel-winner with 3 quirky zoomers who don't exist, seems baffled people don't like it. Starring three bots and a deepfake of Wisława Szymborska

    related: notesfrompoland and the onet article they probably referenced (in polish), and another. would you guess that they fired a dozen or so* people just before? (and somehow had money for whatever horseshit they were sold.) small radio stations probably aren't bringing in serious money either way these days

    homepage of that radio boasts about their "almost entirely created by AI" content. it looks like they tried to convince zoomers to get an FM radio and listen to it somehow. it's gonna go great

    apparently this radio has been in liquidation since january; however, this might be related to dislodging the previous govt's propagandists from public media

    *the original report used a very handy word that does not appear in english, one you could translate as "fewteen": it can mean any number from 11 to 19 inclusive

    • mirrorwitch@awful.systems · 15 points · 10 days ago (edited)

      What I never get about this stuff is how unfun all of it is. The characters in character.ai don't sound anything like their model characters, at all. ChatGPT necromancy is terrible; the séance table in my hometown sucked, but the medium on a lazy day was still significantly better at producing some sort of impersonation that felt at least a little bit like the dead person, a skill I've come to appreciate a bit when compared to ChatGPT's attempt at it. Everything that ChatGPT writes, no matter who it's trying to imitate, has the exact same flavour, and the flavour is slop.

    • bitofhope@awful.systems · 11 points · 10 days ago

      Fuck, I didn't need to be reminded that they named the robot Optimus. Was "Bender" or "Wall-E" too much of a deep cut? Or is it just that Disney's trademark lawyers are scarier than Hasbro and Nvidia combined?

    • swlabr@awful.systems · 9 points · 10 days ago

      My read: sounds like a teenager that knows the touted functionality of the scam tech they are referencing, but is not wise enough to the ways of the world to know they are scams.

  • BlueMonday1984@awful.systems · 16 points · 8 days ago (edited)

    'They wish this technology didn't exist': Perplexity responds to News Corp's lawsuit

    "There are around three dozen lawsuits by media companies against generative AI tools. The common theme betrayed by those complaints collectively is that they wish this technology didn't exist," said the Perplexity team in the blog. "They prefer to live in a world where publicly reported facts are owned by corporations, and no one can do anything with those publicly reported facts without paying a toll."

    I wish the AI bros at Perplexity and elsewhere a very cope and fucking seethe.

    Okay, quick personal sidenote:

    With how much misinformation, manipulation, outright theft and other horrific shit this AI bubble has caused, I suspect we're gonna see some attempts at an outright ban on AI. How successful they're gonna be, I don't know, but at the bare minimum it'll enjoy some popularity on the political fringe.

    • bitofhope@awful.systems · 18 points · 8 days ago

      They prefer to live in a world where publicly reported facts are owned by corporations, and no one can do anything with those publicly reported facts without paying a toll.

      Yea, down with corporate IP trolls, information gatekeepers and idea landlords! Anyway, what was Perplexity's business model again?

    • sc_griffith@awful.systems · 17 points · 8 days ago

      they wish this technology didn't exist

      this is supposed to be invalidating, but like… yes? what's wrong with that?

    • o7___o7@awful.systems · 10 points · 8 days ago (edited)

      Burglars telling homeowners to cope and seethe when questioned about their possession of crowbars at time of arrest.

  • BlueMonday1984@awful.systems · 16 points · 11 days ago

    Kendrick Zitron dropped: it's mainly focused on Prabhakar Raghavan's recent kicking upstairs, and on Google's bleak future.

    Main highlight was this snippet:

    I am hypothesizing here, but I think that Google is desperate, and that its earnings on October 30th are likely to make the street a little worried. The medium-to-long-term prognosis is likely even worse. As the Wall Street Journal notes, Google's ad business is expected to dip below 50% market share in the US in the next year for the first time in more than a decade, and Google's gratuitous monopoly over search (and likely ads) is coming to an end. It's more than likely that Google sees AI as fundamental to its future growth and relevance.

    • YourNetworkIsHaunted@awful.systems · 17 points · 11 days ago

      Man, it's almost like hollowing out the core value centers of a company in search of short-term growth will leave an empty dying husk that can neither serve in new markets nor continue to exist in their previous niche.

      If only there had been some kind of warning about the consequences of this management style. Hey, how's GE doing these days again?

      • thesporkeffect@lemmy.world · 14 points · 11 days ago

        It's not that the money is unaware this kills the business. They know. They don't care, because their process is business-agnostic. By design and intent they extract value from the business like it was a capri sun pouch, and when line no longer goes up, it's discarded for the next one.

    • istewart@awful.systems · 10 points · 11 days ago

      I think this is the second or third time that either Ed or somebody on his Discord reminded me about Shingy

  • BlueMonday1984@awful.systems · 16 points · 10 days ago

    Character.ai is getting sued thanks to one of their users killing himself, and The New York Times is talking about it (there's also a piece by Gary Marcus talking about a previous incident if you're interested).

    Like the copyright situation I previously mentioned, I suspect this is also gonna make potential investors wary of investing in AI post-bubble. Even if you manage to convince investors that you won't get DMCA'd into oblivion, they're still gonna be wary of the potential for a Dasani-level PR nightmare.

    Of course, that's assuming that Section 230 protects you from being held liable for what your autoplag does. If Ms. Garcia, whose son's suicide prompted this entire mess, succeeds in court, the legal precedent set means you're likely gonna have to worry about being sued if/when someone ends up injured/killed/defamed/otherwise fucked up because of its output…

    • FredFig@awful.systems · 15 points · 10 days ago (edited)

      Skimming the reddit thread in search of general public sentiment about this, but unfortunately mostly just found a greatest hits compilation of very gross comments.

      According to these very smart people, parents should expect their teenager to die as an outcome of not being perfect people 24/7, technology can never be at fault even when it literally tells you to commit suicide in coded language, and it's actually impossible to understand which parts of society are causing kids to be depressed, so we must take it as a given that we can't do anything about it. I regret having done this to myself.