Hat tip also to the guy who tried to spin “oil is a commodity” as though there’s the same demand and economies of scale for chatbots as for liquid dinosaurs.
Given the proliferation of libertarians in SV, Ed probably wouldn’t even need to change the “A-minor” line is all I’m saying.
I think it’s also a case of thinking about form before function. It’s not quite as bad a case as the metaverse nonsense was, but there’s still a lack of curiosity about the sci-fi they read. In most stories that treat AI as anything less than a god, the replacement of people with artificial tools is about either what gets lost (the I, Robot movie, Wall-E) or the fact that effectively replacing people requires creating something with the same moral worth (Blade Runner, the Asimov I, Robot collection, etc).
So to throw my totally-amateur two cents in, it seems like it’s definitely part of the discussion in actual AI circles based on the for-public-consumption reading and viewing I’ve done over the years, though I’ve never heard it mentioned by name. I think a bigger part of the explanation has less to do with human cognition (it’s probably fallacious to assume that AI of any method effectively reproduces those processes) and more to do with the more abstract cognitive tests and games being much more formally defined. Our perception and model of a game of chess or Go may not be complete enough to solve the game, but it is bounded by the explicitly defined rules of the game. If your opponent tries to work outside those bounds by, say, flipping the board over and storming off, the game itself can treat that as a simple forfeit-by-cheating. But our understanding of the real world is not similarly bounded. Things that were thought to be impossible happen with impressive frequency, and our brains are clearly able to handle this somehow. That lack of boundedness requires different capabilities than just being able to operate within expected parameters the way existing text or image generators do, I suspect relating to handling uncertainty or missing information. The assumption that what AI is doing mirrors the living mind is wholly unproven.
To be fair, Luckey Palmer is an objectively funnier name.
Yet another word for the good ol’ rank-and-yank. Great way to instantly make number go up by suddenly laying off 10-20% of your employees. The trick is making sure you’ve moved on to another department or another company before the predictable consequences take hold.
Uber ran at a loss to undercut the competition (traditional taxis) and passed the costs of that onto the drivers. Then once people were on board they increased prices while hanging the drivers out to dry, to the point where ultimately the consumer pays as much as they did for a normal taxi but there are some ease-of-use improvements from the app, a hell of a lot of money ending up in Silicon Valley instead of local taxi companies, and an ever-growing mass of human suffering as the gig economy erodes the ability of the working class to find economic security.
Just one more teraflop, bro.
No, that’s stuff belonging to the Tenth Doctor. You’re thinking of ten ents.
Like, there is definitely racism in the hiring process and how writing is judged, but it comes from the fact that white people and white people alone don’t have to code switch in order to be taken seriously. The problem isn’t that bad writers are discriminated against; it’s that nonwhite people have to turn on their “white voice” in order to be recognized as good writers. Giving everyone a white robot that can functionally take their place doesn’t actually make nonwhite people any more accepted. It’s the same old bullshit about how anonymity means 4chan can’t be racist.
I’m actually pretty sympathetic to the value of even the most sneer-worthy technologies as accessibility tools, but that has to come with an acknowledgement of the limitations of those tools and is anathema to the rot economy trying to sell them as a panacea to any problem.
I’m still partial to “spicy autocomplete” as a good analogy for how these systems actually work that people have more direct experience with. Take those Facebook posts that give you the first few words and say “what does autocomplete say your most used words are?” and make answering the question use as much electricity as a small city.
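For anyone who wants the analogy made concrete, here’s a toy sketch of what “autocomplete” actually means at its core: given some prior text, predict the next word from what most often followed it before. The corpus and prompt are made up for illustration, and real LLMs are vastly more elaborate, but the basic shape (context in, most-likely-next-token out) is the same.

```python
from collections import Counter

# Toy training corpus; real systems train on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which word.
following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, Counter())[nxt] += 1

def autocomplete(word, steps=3):
    """Greedily extend `word` with its most common successor."""
    out = [word]
    for _ in range(steps):
        choices = following.get(out[-1])
        if not choices:
            break
        out.append(choices.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # → "the cat sat on"
```

Swap the word-pair counts for a transformer with hundreds of billions of parameters and you get the “as much electricity as a small city” part.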
But not in the cool way that the people selling them say they endanger the survival of life on this planet, just in the boring climate catastrophe ways that people have been trying to get taken seriously since the fucking 70s.
Basically, yeah. At my last job working in vendor support the “customer success” team was entirely sales-focused. Support (as in “my product isn’t working as expected please help”) was under a different department that would sometimes get badgered by the customer success guys if it seemed like a case was making it harder to upsell, or if the customer’s problem was that they wanted to do something their current purchase didn’t cover.
Can we get a universe where Ed writes a verse on Kendrick’s hopefully-imminent Elon Musk dis track?
The Zitron-pilled among us probably suspect that part of the real reason for this is, ironically, to obscure the fact that OpenAI has no real profits because of how ludicrously expensive their models are to train and operate and how limited the actual use cases that people will pay for have proven. It’s better from a “getting investor money” perspective to have everyone talking about how terrible it is that investor profits are no longer capped for humanitarian reasons than to have more people ask whether we’re getting close to the peak of this bubble.
Also please fill in the obligatory rant about how LLMs don’t actually know any diseases or symptoms. Like, if your training data was collected before 2020 you wouldn’t have a single COVID case, but if you started collecting in 2020 you’d have a system that spat out COVID to a disproportionately large fraction of respiratory symptoms (and probably several tummy aches and broken arms too, just for good measure).
Of course, some months later as fall approached, travellers saw stretched between the ruined pillars a banner proclaiming: Spirit Halloween Now Hiring!
I would wager that, more than the costs of serving these API calls, preserving the opacity of the resultant network is probably part of the advantage these companies get from locking down their APIs. Given how much flak they already get for the mental and social damage done by social media and Twitter specifically, I suspect they’re very happy to preserve as much of the black-boxiness as they can so they can point to the value users get and their ad revenue and say that all the costs are unfortunate coincidences rather than central problems with the paradigm.
No, see, if they chose anime that would at least represent an investment in the creation of something, however questionable its overall value for the level of resources involved.
Instead they see anime as a thing people like and are trying to link their existing AI and crypto concepts to it in order to buoy their public perception and get a halo effect going.
They’re not choosing to put that value in anime, they’re hoping to use anime to make the things they did choose seem more valuable than they are, because otherwise they made horrible choices and won’t be given as large a share of society’s surplus output to use on the next thing.
So if I follow, basically everyone involved has been banking on users getting confused about what the “legit” version of WordPress is, with known transphobic asshole photomatt being particularly egregious with WordPress.com vs wordpress.org, and then known transphobic asshole photomatt remembered that he also had some more direct influence in WordPress.org that he could use to smite his enemies. Is that about right or am I missing some steps?