

Neither, actually. They were testing it by asking how many "T"s appeared in "Llama Four" and it kept saying "2" so they decided to roll with it.
I also legitimately can't tell the degree to which they don't understand they're LARPing a dystopia versus how much they completely understand that and that's why it's gonna be so awesome for them once they make fetch happen.
A massive domestic infrastructure project with little actual demand? In China of all places? I don't believe it
If he got into specifics people might think "Damn, I've never had that kind of experience with working-class New Yorkers. What gives?" and he might have to consider, let alone admit, that he was an asshole to someone.
Given the apparent state of the art for autogenerated captions (and by extension the initial challenge of speech recognition) being firmly in the "good enough" range, I would not trust the chain of speech recognition -> translation -> text-to-speech. That's a lot of room for errors to chain, multiply, and obscure themselves through GIGO even if the latter two steps did work as expected.
Jesus, fine, I'll watch it already, God.
Your mistake, distant future ghost, was in developing RNA repair nanites without creating universal healthcare.
There's a particular failure mode at play here that speaks to incompetent accounting on top of everything else. Like, without autocontouring, how many additional radiologists would need to magically be spawned into existence and get salaries, benefits, pensions, etc. in order to reduce overall wait times by that amount? Because in reality that's the money being left on the table; the fact that it's being made up in shitty service rather than actual money shouldn't meaningfully affect the calculus there.
By refusing to focus on a single field at a time, AI companies really did make it impossible to take advantage of Gell-Mann amnesia.
There's inarguably an organizational culture that is fundamentally disinterested in the things that the organization is supposed to actually do. Even if they aren't explicitly planning to end social security as a concept by wrecking the technical infrastructure it relies on, they're almost comedically apathetic about whether or not the project succeeds. At the top this makes sense because politicians can spin a bad project into everyone else's fault, but the fact that they're able to find programmers to work under those conditions makes me weep for the future of the industry. Even simple mercenaries should be able to smell that this project is going to fail and look awful on your resume, but I guess these yahoos are expecting to pivot into politics or whatever administration position they can bargain with whoever succeeds Trump.
That's fascinating, actually. Like, it seems like it shouldn't be possible to create this level of grammatically correct text without understanding the words you're using, and yet even immediately after defining "unsupervised" correctly the system still (supposedly) immediately sets about applying a baffling number of alternative constraints that it seems to pull out of nowhere.
OR alternatively, despite letting it "cook" for longer and pregenerate a significant volume of its own additional context before the final answer, the system is still, at the end of the day, an assembly of stochastic parrots that don't actually understand anything.
I don't think that the actual performance here is as important as the fact that it's clearly not meaningfully "reasoning" at all. This isn't a failure mode that happens if it's actually thinking through the problem in front of it and understanding the request. It's a failure mode that comes from pattern matching without actual reasoning.
write it out in ASCII
My dude, what do you think ASCII is? Assuming we're using standard internet interfaces here and the request is coming in as UTF-8 encoded English text, it is being written out in ASCII
Sneers aside, given that the supposed capability here is examining a text prompt and reasoning through the relevant information to provide a solution in the form of a text response, this kind of test is, if anything, rigged in favor of the AI compared to some similar versions that add in more steps to the task like OCR or other forms of image parsing.
It also speaks to a difference in how AI pattern recognition works compared to the human version. For a sufficiently well-known pattern like the form of this river-crossing puzzle, it's the changes and exceptions that jump out. This feels almost like giving someone a picture of the Mona Lisa with aviators on; the model recognizes that it's 99% of the Mona Lisa and goes from there, rather than recognizing that the changes from that base case are significant and intentional variation rather than either a totally new thing or a "corrupted" version of the original.
It's also the sound it makes when I drop-kick their goddamned GPU clusters into the fuckin ocean. Thankfully I haven't run into one of these yet, but given how much of the domestic job market appears to be devoted towards not hiring people while still listing an opening, it feels like I'm going to.
On a related note, if anyone in the Seattle area is aware of an opening for a network engineer or sysadmin please PM me.
I feel like cult orthodoxy probably accounts for most of it. The fact that they put serious thought into how to handle a sentient AI wanting to post on their forums does also suggest that they're taking the AGI "possibility" far more seriously than any of the companies that are using it to fill out marketing copy and bad news cycles. I for one find this deeply sad.
Edit to expand: if it wasn't actively lighting the world on fire, I would think there's something perversely admirable about trying to make sure the angels dancing on the head of a pin have civil rights. As it is, they're close enough to actual power and influence that they're enabling the stripping of rights and dignity from actual human people instead of staying in their little bubble of sci-fi and philosophy nerds.
Only as a subset of the broader problem. What if, instead of creating societies in which everyone can live and prosper, we created people who can live and prosper in the late capitalist hell we've already created! And what if we embraced the obvious feedback loop that results and called the trillions of disposable wireheaded drones that we've created a utopia because of how high they'll be able to push various meaningless numbers!
I read through a couple of his fiction pieces and I think we can safely disregard him. Whatever insights he may have into technology and authoritarianism appear to be pretty badly corrupted by a predictable strain of antiwokism. It's not offensive in anything I read - he's not out here whining about not being allowed to use slurs - but he seems sufficiently invested in how authoritarians might use the concerns of marginalized people as a cudgel that he completely misses how in reality marginalized people are more useful to authoritarian structures as a target than a weapon.
The whole CoreWeave affair (and the AI business in general) increasingly reminds me of this potion shop, only with literally everyone playing the role of the idiot gnomes.
I gave him a long enough chance to prove his views had changed to go read Hanania's actual feed. Pinned tweet is bitching about liberals canceling people. Just a couple days ago he was on a podcast bitching about trans people and talking about how it's great to be a young broke (asian) woman because you can be exploited by rich old (white) men.
So yeah, he's totally not a piece of shit anymore. Don't even worry about it.
I thought you had to wait at least a few generations to start inventing bullshit evo-psych-adjacent explanations for stuff.
Also this joke was funny when XKCD did it in the alt text 16 years ago. Jesus how has it been 16 years what the hell