- cross-posted to:
- auai@programming.dev
Good article. It captures the bubble's growth and the lack of profit growth, with lots of examples, and the fact that the growth of AI capability is limited by the supply of non-AI-generated work to train on, so there's no growth into new functionality.
Good one to hand to people who need to understand the nature of the bubble (and that it is a bubble).
What I think is extra relevant is that there's no sign of the LLMs magically achieving "sentience" - if that were happening, there would be no further need for training material!
This article is excellent, and raises a point that’s been lingering in the back of my head–what happens if the promises don’t materialize? What happens when the market gets tired of stories about AI chatbots telling landlords to break the law, or suburban moms complaining about their face being plastered onto a topless model, or any of the other myriad stories of AI making glaring mistakes that would get any human immediately fired?
We’ve poured hundreds of billions of dollars into this, and what has it gotten us? What is the upside that makes up for all the lawsuits, lost jobs, disinformation, carbon footprint, and deluge of valueless slop flooding our search results? So far as I can tell, its primary use seems to be in creating things that someone is too lazy to do properly themselves, like cover letters or memes, and inserting Godzilla into increasingly ridiculous situations. There may be something there, perhaps, but is it worth using enough energy to power a small country?
> We’ve poured hundreds of billions of dollars into this, and what has it gotten us?
Weaponized deepfakes, propaganda-as-a-service, AI spam, proof that you can strip-mine the internet for everything of value relatively quickly, proof that you can pollute the web with drivel that pleases Google and bury anything of value relatively quickly, a tool for the destruction of some media companies, and that sweet, sweet speculative stock revenue.
Oh, you mean for commoners? Then you get fuck all, or maybe you get to be replaced by the plagiarism machine of your boss's choice that is Good Enough™.
> This article is excellent, and raises a point that’s been lingering in the back of my head–what happens if the promises don’t materialize? What happens when the market gets tired of stories about AI chatbots telling landlords to break the law, or suburban moms complaining about their face being plastered onto a topless model, or any of the other myriad stories of AI making glaring mistakes that would get any human immediately fired?
If we’re lucky, we might end up with a glut of cheap GPUs/server space once the bubble pops.
For context, 5 GW is a massive amount of electricity. The two largest European NPPs have 5.7 GW (Zaporizhzhia NPP, pre-2022) and 5.6 GW (Gravelines NPP), and that's nameplate capacity; some part is always down for maintenance/refueling. That's a significant share of the respective countries' electricity generation (over 20% for Ukraine and almost 6% for France). If you want to have 5 GW available at all times, then something closer to 8-10 GW of nameplate capacity would be in order. That's larger than the biggest current nuclear installations in the world (the 7 GW-ish Chinese and Korean NPPs).
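The nameplate estimate above can be sanity-checked with a quick back-of-envelope calculation. The capacity factors below are illustrative assumptions (real nuclear fleets typically land somewhere in the 0.6-0.9 range), not figures from the thread:

```python
# Rough sanity check: nameplate capacity needed to deliver ~5 GW of firm power.
# Capacity factors here are illustrative assumptions, not measured values.

def nameplate_needed(firm_gw: float, capacity_factor: float) -> float:
    """Nameplate GW required to average `firm_gw` at a given capacity factor."""
    return firm_gw / capacity_factor

for cf in (0.9, 0.75, 0.6):
    print(f"capacity factor {cf:.2f}: {nameplate_needed(5.0, cf):.1f} GW nameplate")
```

At factors between 0.9 and 0.6 this yields roughly 5.6 to 8.3 GW, which brackets the 8-10 GW figure once you add margin for outages and refueling overlap.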
The next AI winter is coming, and it looks like it's going to be a brutal one.
> Yet AI researcher Pablo Villalobos told the Journal that he believes that GPT-5 (OpenAI’s next model) will require at least five times the training data of GPT-4.
I tried finding the non-layman's version of the reasoning behind this assertion, and it appears to be a very black-box assessment, based on historical trends and some other similarly abstracted attempts at modelling dataset size vs model size.
This is EpochAI's whole thing apparently, not that there's necessarily anything wrong with that. I was just hoping for some insight into dataset size vs architecture, and maybe the gossip on what's going on with the next batch of LLMs, like how it eventually came out that gpt4.x is mostly several gpt3.xs in a trench coat.
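For what it's worth, the "five times the data" projection is in the spirit of compute-optimal scaling rules like the Chinchilla estimate of roughly 20 training tokens per model parameter. Here's a toy extrapolation in that vein; the parameter counts are illustrative guesses, not published figures, and this is not EpochAI's actual methodology:

```python
# Toy Chinchilla-style projection: compute-optimal training tokens ~ 20 * params.
# All parameter counts below are illustrative assumptions, not published numbers.

TOKENS_PER_PARAM = 20  # approximate compute-optimal ratio from the Chinchilla work

def optimal_tokens(params: float) -> float:
    """Rough compute-optimal token count for a model of `params` parameters."""
    return TOKENS_PER_PARAM * params

current_params = 1.8e12        # rumored GPT-4-class scale, unconfirmed
next_params = 5 * current_params  # hypothetical 5x scale-up

print(f"current-scale data need: {optimal_tokens(current_params):.1e} tokens")
print(f"5x-scale data need:      {optimal_tokens(next_params):.1e} tokens")
```

Under this rule the data requirement grows linearly with parameter count, which is the simplest way a "5x the training data" projection falls out of a 5x larger model; the interesting (and opaque) part is the empirical fit behind the ratio itself.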