read that sentence back in a mirror to me
This isn’t a joke either. Read it back in the mirror. To ME. What do you think you see when you look in the mirror? WRONG.
Dogshit writing as well, “we would never wish for a war to occur”, read that sentence back in a mirror to me
War is the greatest human tragedy, but defence is indispensable. With our commitment to rebuilding the United States’ defensive industrial base, we at Ares aim to ensure this country is prepared to halt any conflict rapidly, and save countless lives.
Yeah, it’s a philosophical question, which means you need a philosophical answer. Spitballing won’t help you figure shit out a priori because it turns out that learning how to think a priori effectively takes years of hard graft and is called “studying philosophy”. You should be asking people like me what “know” means in this context and what distinguishes memory in human beings from “memory” in an LLM (a great deal, as it happens!)
but all of this stuff is still relatively new and I’m sure it’ll get better with time
What is the exact point of taking this attitude? Anybody who cares to look knows exactly what’s wrong with this stuff. It’s an astonishingly, and I mean “astonishing” as in “actually beyond ordinary human comprehension” as in “literally awe-inspiring”, wasteful means (whether your energy source is fossil fuels or solar!) of doing - at the absolute outside best - extraordinarily basic shit. Every single day the window of useful applications and potential improvements narrows incredibly rapidly, and the people who are fundamentally steering the whole programme are proven liars and scam artists, proven beyond any shadow of a doubt at that.
Who cares if it’s relatively new, or if there’s room for mild-mannered optimism? What practical teeth does that argument have? What purpose does it actually serve beyond satisfying a basically shallow political impulse to moderate what it perceives as heightened emotive responses to these incredibly stark facts?
The only actually reasonable response to this farrago is full-throated opposition to every element of the whole show which is either a lie or covering for a lie, which is virtually every single element. If all that you’re left with is “hey, transformers are pretty cool, and I look forward to seeing how they contribute in their own partial way to our collective technical means of saving the planet, and incidentally anti-trust legislation should put people like Altman behind bars for the rest of their lives” then so be it! That’s a far more even-handed and fundamentally sensible response than blithely insisting that the occasional trinket has room for improvement - in fact if you’re liberal-minded it’s the essential output of any sensible thoughts on how to maintain a democratic society.
I want to add William H. Tucker’s posthumous “The Bell Curve in Perspective”, which came out, I think, right at the end of last year. It’s a short, thorough assessment both of the history of The Bell Curve itself and of what has happened since.
Even the first chapter is just mindblowingly terse in brutally unpacking how (a) it was written by racists, (b) for racist ends, (c) Murray lied and lied afterwards in pretending that ‘only a tiny part of the book was about race’ or whatever
Well this is where I was going with Lakatos. Among the large scale conceptual issues with rationalist thinking is that there isn’t any understanding of what would count as a degenerating research programme. In this sense rationalism is a perfect product of the internet era: there are far too many conjectures being thrown out and adopted at scale on grounds of intuition for any effective reality-testing to take place. Moreover, since many of these conjectures are social, or about habits of mind, and the rationalists shape their own social world and their habits of mind according to those conjectures, the research programme(s) they develop is (/are) constantly tested, but only according to rationalist rules. And, as when the millenarian cult has to figure out what its leader got wrong about the date of the apocalypse, when the world really gets in the way it only serves as an impetus to refine the existing body of ideas still further, according to the same set of rules.
Indeed the success of LLMs illustrates another problem with making your own world, for which I’m going to cheerfully borrow the term “hyperstition” from the sort of cultural theorists of which I’m usually wary. “Hyperstition” is, roughly speaking, where something which otherwise belongs to imagination is manifested in the real world by culture. LLMs (like Elon Musk’s projects) are a good example of hyperstition gone awry: rationalist AI science fiction manifested an AI programme in the real world, and hence immediately supplied the rationalists with all the proof they needed that their predictions were correct in the general if not in exact detail.
But absent the hyperstitional aspect, LLMs would have been much easier to spot as by and large a fraudulent cover for mass data-theft and the suppression of labour. Certainly they don’t work as artificial intelligence, and the stuff that does work (I’m thinking radiology, although who knows when the big news is going to come out that that isn’t all it’s been cracked up to be), i.e. transformers and unbelievable energy-spend on data-processing, doesn’t even superficially resemble “intelligence”. With a sensitive critical eye, and an open environment for thought, this should have been, from early on, easily sufficient evidence, alongside the brute mechanicality of the linguistic output of ChatGPT, to realise that the prognostic tools the rationalists were using lacked either predictive or explanatory power.
But rationalist thought had shaped the reality against which these prognoses were supposed to be tested, and we are still dealing with people committed to the thesis that skynet is, for better or worse, getting closer every day.
Lakatos’s thesis about degenerating research programmes asks us to predict novel evidence and look for corroboration. The rationalist programme does exactly the opposite. It predicts corroborative evidence, and looks for novel evidence which it can feed back into its pseudo-Bayesian calculator. The novel evidence is used to refine the theory, and the predictions are used to corroborate a (foregone) interpretation of what the facts are going to tell us.
Now, I would say, more or less with Lakatos, that this isn’t an amazingly hard and fast rule, and it’s subject to different interpretations. But it’s a useful tool for analysing what’s happening when you’re trying to build a way of thinking about the world. The pseudo-Bayesian tools, insofar as they have any impact at all, almost inevitably drag the project into degeneration, because they have no tool for assessing whether the “hard core” of their programme can be borne out by facts.
Everything is. I don’t need anyone to tell me that the Red Scare ruined philosophy of science; going to America was the original problem.
My graduate degree was in philosophy of science, and I wouldn’t suggest Kuhn or, indeed, much philosophy of science as a salve for this particular problem. For much of the 20th century, the philosophy of science primarily theorised about two main sets of data: (1) idealised physics, which is to say the “final” theories of physics; (2) historical case studies, which is to say the experimental and theoretical debates which produced those theories. These are two distinct strands of research (Kuhn belongs to, and plays an important role in introducing, the second), but perspicuous observers will note that neither of them deals with people who get science wrong; rather, they deal with either what “scientific knowledge” is, or how it is that scientific “knowledge” is produced.
Now understanding a little better how scientific knowledge is produced, or even that it is produced (and not intuited, Yudkowsky-style, as if given by a beam of pink energy from the future), could be a preliminary inoculation against behaving as if it is intuited, Yudkowsky-style, as if given by a beam of pink energy from the future. Or, in a twist of which many Kuhn readers have fallen afoul, it can be the radicalisation of a would-be “paradigmatic” thinker, who therefore learns that “normal” scientific knowledge is always local, partial, and primarily intended for the NPC types who populate laboratories. If I wanted to turn somebody with the quintessential rationalist personality into a monstrous basilisk-wraith I would give them Kuhn.
I’m not one for delivering the usual bromides against Kuhn’s supposed sloppiness (I think the treatment he’s received has been selective and unkind), but there are also better, more recent works in the same vein (and, naturally, Feyerabend did Kuhn better anyway). If I wanted to give somebody “the good shit” from philosophy of science, I would give them Nancy Cartwright, Ian Hacking, and Bas van Fraassen. But the problem remains - how do I explain to these people that they aren’t participating in scientific discourse at all? - after all, as we move closer to the present, even the very moderate non-objectivisms of Cartwright, Hacking, van Fraassen et al. become diluted as, in practical terms, much of philosophy of science converges on the project of once again reifying a now complicated picture of scientific knowledge in the teeth of perceived worries about its objectivity.
Why is this a problem? Well the pragmatic image of science with which your rationalist is liable to come away from these texts is one in which the body of the whole thing is incredibly complex and everything has its role, including that of the rationalist. With Kuhn we will have deepened their appreciation of their own importance, and with the non-objectivists we will have challenged their STEMacism only to supply their project with an undeserved aura of validity!
(I here leave out the really technical stuff, naturally. Much of philosophy of science is of course concerned with resolving particular puzzles in particular areas. This is a lot more difficult, and more worth doing, than any grand project we might have in mind, but it can’t help the people we’re discussing).
Only the hardcore realists remain, but what do they have to offer? Idealised physical models! This simply cannot help us at all.
Hell, if they’re anything like a gamut of arseholes I’ve run into over the years, at least a few of them proudly trumpet that back at the turn of the century Bruno Latour was expressing regret about the critical project in STS, and that it’s the only thing of his they’ve ever read.
The great demarcatory projects are, mostly, a thing of the past, but really this is what we need. Problematically, for the last 50 years it has been widely agreed that they were wrong, and that there is no real standard of demarcation between “science” and other modes of thought. Nonetheless, and ignoring whether there is one good Popperian still alive to do it, we can’t use Popper - that’s absurdly dangerous territory - but we do have Lakatos.
Now that’s an idea I could have put at the top. We have to ignore that, as before, people don’t really believe in “degenerating research programmes” anymore (although perhaps philosophy of science is just a little too close to science to say so). But you know what? Fuck it. Make them read Lakatos.
But it won’t help, because their research programme is almost tailor-made to outrun scientific testing. Along with history of science, which I advocate because it shows science in its particulars, the real solution is to starve the cult of oxygen. It’s an attritional war of pointing out that this is bullshit in its particulars.
and that “who talks to who” is basic journalism.
It’s always interesting to note when an apparently natural convention has metastasised and begun to sprout weird, ugly distensions that no longer make sense. Sure, when the stakes are ideas, it’s important to stick to ideas and not over-focus on personalities! In fact you can take that principle fairly far, as when holding onto your ideals in the teeth of conflict which can abase you and cause you to lose all moral compass. But never talk about personalities? And in a big way we live in the century of metastasised conventions - the internet, but also everything else, both accelerates and robs us of any behavioural compass but strange and constantly shifting conventional guides for getting along (have a terrifying conversation with almost anyone in Gen Z for proof of that). In the same way “in-group/out-group” is hopelessly inadequate to capture this dynamic, but it’s another convention that this lot have chosen to metastasise (and, paradoxically, it now looms larger in the rules governing their thinking than almost anywhere).
For them, it’s all become a strange conspiracy of the elect in which nobody knows who’s in charge and nobody is actually the elect, hence this constant bizarre resort to the counter-conspiracy whenever their strange values come into conflict with the outside: they no longer have a tool for reality-testing their values, because the rest of the world is either wrong or the enemy.
Hey I think some of these are pretty good ideas
https://archive.org/details/2917616.0001.001.umich.edu/page/3/mode/1up
Fun to see gwern in there presumably telling a fib. I wonder what really happened when Metz “ghosted” him? I particularly enjoyed, this time, watching whatever uppers he’s on these days kick in (or wear off?) about halfway through writing the footnote he added to that comment.
Unbelievable kill shot, how the fuck did Davis leave it on this? Some secret agenda to hand Metz a fuckin’ victory wreath? Does he think this makes Metz look bad?
CM: What his argument to me was is that it violated the ethics of his profession. But that’s his issue, not mine, right? He chose to be a super-popular blogger and to have this influence as a psychiatrist. His name—when I sat down to figure out his name, it took me less than five minutes. It’s just obvious what his name is. The New York Times ceases to serve its purpose if we’re leaving out stuff that’s obvious. That’s just how we have to operate. Our aim—and again, the irony is that your aim is similar—is to tell people the truth, and have them understand it. If we start holding stuff back, then that quickly falls apart.
I get that out front Davis’s whole thing is total transparency, but if that’s really all that’s going on here, how did it not end on something utterly banal? How is this orbital homerun the end of the conversation?
Wait. Why the fuck is that weirdo talking to Cade Metz? What the hell is going on here!?!
You have to remember that this guy was 12 at the time
Holy shit, release the classics!
Wait, let me get this straight. His solution to achieve human escape velocity, which means “outpac[ing] AI’s influence and maintain human autonomy” (his words, not mine) is to increase AI’s influence and remove human autonomy?
Well how do YOU plan on shilling for the tech industry by scaring people up about LLMs?
Rage bait? My child, I am an anthropologist
THEY HAVE A THREAD ON HIP HOP!? LINDA HOLD MY GODDAMN CALLS
The way that “cuck” has been elevated to a genuine category in their armchair social science is such a warm breeze of insanity whenever I come across it
I knew they were writing under fake names