AI could be a viable test for bullshit jobs as described by Graeber. If the disinfomatron can effectively do your job, then doing it well clearly doesn't matter to anyone.
I mean, doesn't somebody still need to validate that those keys only get to people over 18? Either you have a decentralized authority that's more easily corrupted or subverted, or else you have the same privacy concerns at certificate issuance rather than at time of site access.
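To make that tradeoff concrete, here's a minimal sketch of where the identity check has to live in a token-based scheme. Every name in it is hypothetical, and it models data flows only, not any real proposal's cryptography:

```python
# Conceptual sketch of the privacy tradeoff in age-verification schemes.
# All names are hypothetical; this models data flows, not real crypto.

from dataclasses import dataclass, field
from datetime import date


@dataclass
class Issuer:
    """Central authority that checks ID documents and hands out tokens."""
    seen_identities: list = field(default_factory=list)

    def issue_token(self, identity: str, birth_year: int) -> str | None:
        # The privacy cost lands HERE: the issuer learns who requested
        # a token, even though the token itself carries no identity.
        self.seen_identities.append(identity)
        is_adult = date.today().year - birth_year >= 18
        return "over18-token" if is_adult else None


@dataclass
class Site:
    """A site that only ever sees the bare token, never the identity."""
    def admit(self, token: str | None) -> bool:
        return token == "over18-token"


issuer = Issuer()
site = Site()

token = issuer.issue_token("alice", 1990)
print(site.admit(token))        # True: the site learns nothing about Alice
print(issuer.seen_identities)   # ['alice']: the issuer still had to know
```

Blind-signature schemes can stop the issuer from linking a specific token back to a specific request, but the identity check at issuance still has to happen somewhere, which is exactly the concern above.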
I mean, the whole point of declaring this era post-truth is that these people have basically opted out of consensus reality.
Why don't they just hire a wizard to cast an anti-TikTok spell over all of Australia instead? It would be just as workable and I know a guy who swears he can do it for cheaper than whatever server costs they're gonna try and push.
Okay, apparently it was my turn to subject myself to this nonsense, and it's pretty obvious what the problem is. As far as citations go, I'm gonna go ahead and fall back to "watching how a human toddler learns about the world," which is something I'm sure most AI researchers probably don't have experience with, as it does usually involve interacting with a woman at some point.
In the real examples that he provides, the system isn't "picking up the wrong goal" as an agent somehow. Instead it's seeing the wrong pattern: learning "I get a pat on the head for getting to the bottom-right-est corner of the level" rather than "I get a pat on the head when I touch the coin." These are totally equivalent in the training data, so it's not surprising that it goes with the simpler option, the one that doesn't require recognizing "coin" as anything relevant. This failure state is entirely within the realm of existing machine learning techniques and models, because identifying patterns in large amounts of data is the kind of thing they're known to be very good at. But there isn't any kind of instrumental goal being established here so much as the system recognizing that it should reproduce games where it moves in certain ways.
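For the curious, here's a toy version of that ambiguity. It's a hypothetical gridworld of my own, not the actual setup from his examples, but it shows why the two patterns are indistinguishable during training:

```python
# Toy illustration: two reward "patterns" that agree on every training
# level (coin always at the right edge) but diverge once the coin moves.

def reward_go_right(agent_x: int, coin_x: int, level_width: int) -> bool:
    """Pattern A: 'I get rewarded for reaching the rightmost column.'"""
    return agent_x == level_width - 1

def reward_touch_coin(agent_x: int, coin_x: int, level_width: int) -> bool:
    """Pattern B: 'I get rewarded for touching the coin.'"""
    return agent_x == coin_x

WIDTH = 10

# Training levels: the coin always sits at the right edge, so A and B
# agree on every example -- the training data cannot tell them apart.
for agent_x in range(WIDTH):
    coin_x = WIDTH - 1
    assert reward_go_right(agent_x, coin_x, WIDTH) == \
           reward_touch_coin(agent_x, coin_x, WIDTH)

# Test level: the coin moved to the middle. The simpler rule now fails.
agent_x, coin_x = WIDTH - 1, 4
print(reward_go_right(agent_x, coin_x, WIDTH))    # True: ran past the coin
print(reward_touch_coin(agent_x, coin_x, WIDTH))  # False: never touched it
```

Nothing in the training levels rewards noticing the coin as an object, so the cheaper pattern wins by default.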
This is also a failure state that's common in humans learning about the world, so it's easy to see why people think we're on the right track. We had to teach my little one the difference between "Daddy doesn't like music" and "Daddy doesn't like having the Blaze and the Monster Machines theme song shout/sung at him when I'm trying to talk to Mama." The difference comes in the fact that even as a toddler there's enough metacognition and actual thought going on that you can help guide them in the right direction, rather than needing to feed them a whole mess of additional examples and rebuild the underlying pattern.
And the extension of this kind of pattern misrecognition into sci-fi end-of-the-world nonsense is still unwarranted anthropomorphism. Like, we're trying to use evidence that it's too dumb to learn the rules of a video game as evidence that it's going to start engaging in advanced metacognition and secrecy.
That's the goal. The reality is that it doesn't reproduce the skills it imitates well enough to actually give capital access to them, but it does a good enough job imitating them that they're willing to give it a chance.
I mean, a lot of the services that companies are using are cloud-hosted, meaning that, especially if you have branch offices or a lot of remote workers, a normal firewall in the datacenter introduces an unnecessary bottleneck. Putting the logical edge of your organization's network in the cloud too makes sense from a performance perspective in that case, and then turning the actual firewalls into SaaS seems much less absurd.
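As a back-of-envelope illustration of the bottleneck (every latency figure below is a made-up placeholder, not a measurement of any real network):

```python
# Back-of-envelope: why hairpinning branch traffic through a datacenter
# firewall hurts when the destination is a cloud-hosted SaaS app.
# All numbers are illustrative placeholders, not measurements.

BRANCH_TO_DATACENTER_MS = 40   # branch office -> corporate datacenter
DATACENTER_TO_SAAS_MS = 30     # datacenter -> SaaS provider
BRANCH_TO_CLOUD_EDGE_MS = 10   # branch office -> nearby cloud PoP
CLOUD_EDGE_TO_SAAS_MS = 15     # cloud PoP -> SaaS provider

hairpin = 2 * (BRANCH_TO_DATACENTER_MS + DATACENTER_TO_SAAS_MS)
cloud_edge = 2 * (BRANCH_TO_CLOUD_EDGE_MS + CLOUD_EDGE_TO_SAAS_MS)

print(f"round trip via datacenter firewall: {hairpin} ms")     # 140 ms
print(f"round trip via cloud edge:          {cloud_edge} ms")  # 50 ms
```

Same inspection happening either way; the only question is whether every packet takes a detour through your datacenter first.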
Brief overlapping thoughts between parenting and AI nonsense, presented without editing.
The second L in LLM remains the inescapable heart of the problem. Even if you accept that the kind of "thinking" (modeling based on input and prediction of expected next input) that AI does is closely analogous to how people think, anyone who has had a kid should be able to understand the massive volume of information they take in.
Compare the information density of English text with the available data on the world you get from sight, hearing, taste, smell, touch, proprioception, and however many other senses you want to include. Then consider that language is inherently an imperfect tool used to communicate our perceptions of reality, and doesn't actually include data on reality itself. The human child is getting a fire hose of unfiltered reality, while the in-training LLM is getting a trickle of what the writers and labellers of their training data perceive and write about. But before we get to just feeding a live camera and audio feed, haptic sensors, chemical tests, and whatever else into a machine learning model and seeing if it spits out a person, consider how ambiguous and impractical labelling all that data would be. At the very least I imagine doing so would actually work out to be less efficient than raising an actual human being and training them in the desired tasks.
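To put some very rough numbers on the fire hose versus the trickle (these are order-of-magnitude ballparks, like Shannon's roughly 1 bit per character estimate for English and the oft-cited ~10 Mbit/s estimate for the human retina, not precise measurements):

```python
# Order-of-magnitude comparison: text trickle vs. sensory fire hose.
# Figures are commonly cited ballparks, not precise measurements.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# Text channel: a generous reading/listening rate of ~300 words/min,
# ~5 chars/word, ~1 bit of entropy per character (Shannon's estimate).
words_per_sec = 300 / 60
text_bits_per_sec = words_per_sec * 5 * 1.0

# Visual channel alone: the retina is often estimated to push on the
# order of 10 Mbit/s toward the brain.
vision_bits_per_sec = 10e6

print(f"text:   ~{text_bits_per_sec:.0f} bits/s")
print(f"vision: ~{vision_bits_per_sec:.0e} bits/s")
print(f"ratio:  ~{vision_bits_per_sec / text_bits_per_sec:,.0f}x")

# Three years of waking vision (awake ~12 hours a day), one sense only:
toddler_bits = vision_bits_per_sec * (SECONDS_PER_YEAR * 3) * 0.5
print(f"three years of waking vision: ~{toddler_bits:.1e} bits")
```

And that's one sense, continuous and unlabeled, which is exactly the asymmetry above.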
Human children are also not immune to "hallucinations" in the form of spurious correlations. I would wager every toddler has at least a couple of attempts at cargo cult behavior or inexplicable fears as they try to reason out a way to interact with the world based on very little actual information about it. This feeds into both versions of the above problem, since the difference between reality and lies about reality cannot be meaningfully discerned from text alone, and the limited amount of information being processed means any correction is inevitably going to be slower than explaining to a child that finding a "Happy Birthday" sticker doesn't immediately make it their (or anyone else's) birthday.
Human children are able to get human parents to put up with their nonsense by taking advantage of being unbearably sweet and adorable. Maybe the abundance of horny chatbots and softcore porn generators is a warped funhouse-mirror version of the same concept. I will allow you to fill in the joke about Silicon Valley libertarians yourself.
IDK. Felt thoughtful, might try to organize it on morewrite later.
This is what the AI-is-useful-actually argument obscures. There are parts of this technology that can do legitimately cool things! Machine learning identifying patterns in massive volumes of data that would otherwise be impractical to analyze is really cool and has a lot of utility. But once you start calling it "Medical AI" then people start acting like they can turn their human brains off. "AI" as a marketing term is not a tool that can help human experts focus their own analysis or enable otherwise-unfeasible kinds of statistical analysis. Will Smith didn't get into gunfights with humanoid iMacs because they were identifying types of bread too effectively. The whole point is that it's supposed to completely replace the role of a person in the relevant situations.
I mean, considering only the relationships between words and symbols in the complete absence of context and real-world referents is a good description of how a certain brand of tech dunce thinks.
I'm glad I'm not the only one who picked up on that turn. The implication that what we need is an actual Bismarck instead of a wannabe like we keep getting makes sense (I too would prefer if the levers of power were wielded by someone halfway competent who listens to and cares about the people around them), but there are also some pretty strong reasons why we went from Bismarck and Lincoln to Merkel and Trump, and also some pretty strong reasons why the road there led through Hitler and Wilson.
Along with my comments elsewhere about how the dunce believes their area of hypothetical expertise to be some kind of arcane gift revealed to the worthy, I feel like I should clarify that not only does the current crop of dolts not have it, but there is no secret wisdom beyond the ken of normal men. That is a lie told by the powerful to stop you from questioning their position; it's the "because I'm your Dad and I said so" for adults. Learning things is hard, and hard means expensive, so people with wealth and power have more opportunities to study things, but that lack of opportunity is not the same as lacking the ability to understand things and to contribute to a truly democratic process.
There are three kinds of programmers. From smallest to largest: those smart enough to write good math-intensive libraries, those dumb enough to think they can, and those smart enough to just use what the first kind made.
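For a taste of why the second kind exists, here's a standard floating-point pitfall (not aimed at any particular library): the textbook one-pass variance formula self-destructs on data that a well-built routine handles fine.

```python
# Why "just write the math yourself" goes wrong: the textbook one-pass
# variance formula E[x^2] - E[x]^2 suffers catastrophic cancellation
# when the values are large relative to their spread.
import statistics

def naive_variance(xs: list[float]) -> float:
    n = len(xs)
    return sum(x * x for x in xs) / n - (sum(xs) / n) ** 2

data = [1e9 + 4.0, 1e9 + 7.0, 1e9 + 13.0, 1e9 + 16.0]

print(naive_variance(data))        # garbage: 0.0, negative, or way off
print(statistics.pvariance(data))  # 22.5, the correct answer
```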
You've got to make sure you're not over-specializing. I'd recommend trying to roll your own time zone library next.
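And in that spirit, a glimpse of what the would-be time zone library author is up against, using only Python's standard-library zoneinfo: a hand-rolled fixed-offset rule is wrong for half the year before you even get to the historical weirdness.

```python
# Why hand-rolled "UTC offset" time zone logic fails: offsets move.
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

eastern = ZoneInfo("America/New_York")

winter = datetime(2024, 1, 15, 12, 0, tzinfo=eastern)
summer = datetime(2024, 7, 15, 12, 0, tzinfo=eastern)

print(winter.utcoffset())  # -1 day, 19:00:00  (UTC-5, EST)
print(summer.utcoffset())  # -1 day, 20:00:00  (UTC-4, EDT)

# A hand-rolled "New York is UTC-5" rule breaks every summer, and
# that's before historical offset changes or governments rescheduling
# DST on short notice.
naive_ny = timezone(timedelta(hours=-5))
print(summer.astimezone(naive_ny) == summer)  # True as an instant...
print(summer.astimezone(naive_ny).hour)       # ...but shows 11, not 12
```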
First and foremost, the dunce is incapable of valuing knowledge that they don't personally understand or agree with. If they don't know something, then that thing clearly isn't worth knowing.
There is a corollary to this that I've seen as well, and it dovetails with the way so many of these guys get obsessed with IQ. Anything they can't immediately understand must be nonsense not worth knowing. Anything they can understand (or think they understand) that you don't is clearly an arcane secret of the universe that they can only grasp because of their innate superiority. I think that this is the combination that explains how so many of these dunces believe themselves to be the ubermensch who must exercise authoritarian power over the rest of us for the good of everyone.
See also the commenter(s) on this thread who insist that their lack of reading comprehension is evidence that they're clearly correct and are in no way part of the problem.
A lot of the spamming at the SC2 tournament level is about staying warmed up, so that when you get into a micro-intensive battle later on where all of those actions might count (splitting your marines to protect from AoE while target-firing the suicide-bombing banelings, for example) you can actually do it. Doesn't make it look less ridiculous, especially in the first couple of minutes before the commentary has anything to really talk about, so they try to act like stealing 5 minerals at that stage could somehow decide the game. But there is a slightly more reasonable logic to it than just speedrunning an RSI to look cool.
The original StarCraft also offers a lot of opportunities to use your "extra" APM to optimize around the godawful AI pathing and other "quirks" of the engine. It's not as bad as, say, DotA in terms of "this was a limitation of the original engine that is now a major cornerstone of playing the game well and if you complain about it you're just bad," but it's definitely up there. As the game goes on you'll usually see players start getting slightly more fast and loose with, say, optimizing the mining at their new base, because at that point in the game splitting your focus that much is more detrimental even if you can move that fast.
I definitely ended up as the occasional spectator and campaign player for all that, though. Especially now that I'm starting to have creaky old man wrists of my own.
Unfortunately it doesn't look like he was properly banned, just booted out of his session for having suspiciously high APM. Now, the true eSports nerds among us will already know that high APM is a staple of high-level play in some games, but it's also an easy way to check for certain types of cheaters. Because of the association with skill in e.g. StarCraft it also became a very easily gameable metric if for some reason you wanted to feel like you knew what you were doing or show off for your friends and strangers online. For example, certain key bindings let you perform some actions as fast as your keyboard's refresh rate allows by holding down a key or abusing the scroll wheel on your mouse. This can send your measured APM through the roof for a time. My gut says this is what Elon was doing that tripped the anticheat, rather than any actual play or actual cheating.
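Some rough numbers on why that trick trips the alarm (the repeat rate below is a typical OS default and the APM figures are ballparks, not measurements of anyone's actual play):

```python
# How key-repeat abuse inflates APM past human-achievable numbers.
# Repeat rate is a typical OS default; APM figures are ballparks.

KEY_REPEAT_HZ = 30          # common key-repeat rate once repeat kicks in
held_key_apm = KEY_REPEAT_HZ * 60

pro_player_apm = 350        # rough ceiling for top StarCraft players
casual_apm = 80             # rough ballpark for a casual player

print(f"held-down key: {held_key_apm} APM")    # 1800 APM
print(f"pro player:    ~{pro_player_apm} APM")
print(f"casual player: ~{casual_apm} APM")

# Sustained APM several times the human ceiling is trivial to flag,
# which is presumably all the anticheat was doing here.
print(held_key_apm > 2 * pro_player_apm)       # True
```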
Please note that the hard-won knowledge of my misspent youth has no bearing on how pathetic it is for the richest man in the world to be doing the same kind of begging for clout that I did at 14, especially since I'm pretty sure 14-year-old me was frankly better at it.
I got bounced back to Casey Newton's recent master class in critihype and found something new that stuck in my craw.
Occasionally, they get an entire sector wrong: see the excess of enthusiasm for cleantech in the 2000s, or the crypto blow-up of the past few years.
In aggregate, though, and on average, they're usually right.
First off, please note that this describes two of the most recent tech bubbles and doesn't provide any recent counterexamples of a seemingly ridiculous new gimmick that actually stuck around past the initial bubble. Effectively this says: yes, they're 0 for 2 in the last 20 years, but this time they can't all be wrong!
But more than that, I think there's an underlying error in acting like "the tech sector" is a healthy and competitive market in the first place. They may not directly coordinate or operate in absolute lockstep, but the main drivers of crypto, generative AI, the metaverse, SaaS, and so much of the current enshittifying and dead-ending tech industry come back to a relatively small circle of people who all live in the same moneyed Silicon Valley cultural and informational bubble. We can even identify the ideological underpinnings of these decisions in the TESCREAL bundle, effective altruism and accelerationism, and "dark enlightenment" tech-fascism. This is not a ruthlessly competitive market that ferrets out weakness. It's more like a shared cult of personality that selects for whatever makes the guys on top feel good about themselves. The question isn't "how can all these different groups be wrong without someone undercutting them"; it's "how can these few dozen guys who share an ideology and information bubble keep making the exact same mistakes as one another," and the answer should be to question why anyone expects anything else!
To his frequent "no, people really are this stupid" refrain I would like to add an argument. If it didn't work on enough people to be profitable, the business model wouldn't have persisted and been replicated and refined into the dominant model of online advertising, and/or online advertising would never have been able to become the primary monetization framework for online content. Like, it's fucked how much of the existing Internet is effectively subsidized by exploiting people who don't know better, and I don't think people are really okay with this as much as the system is sufficiently obfuscated that we don't have to notice or think about it.
Economics: the famously apolitical field that examines the distribution and creation of wealth, also a famously apolitical concept.
Ironically, this whole exchange is an example of just how cooked American political discourse is. The culture war is so all-consuming that anything outside of it gets largely excised from political action entirely. Then when someone from outside the US tries to point out that basically unrestricted corporate looting and blatant violations of various human rights could be regulated or otherwise countered by political processes, people act like they're speaking Martian.
It's not an exhaustive search technique, but it may be an effective heuristic if anyone is planning The Revolution™.