I actually also reviewed that one, except my review of it was extremely favorable. I’m so glad that you read it and I’d welcome your thoughts on my very friendly amendment to his analysis if you end up reading that post.
I write about technology at theluddite.org
Glad to hear it!
Totally agreed. I didn’t mean to say that it’s a failure if it doesn’t properly encapsulate all complexity, but that the inability to do so has implications for design. In this specific case (as in many cases), the error they’re making is that they don’t realize that the root of the problem they’re trying to solve lies in that tension.
The platform and environment are something you can shape even without an established or physical community.
Again, couldn’t agree more! The platform is actually extremely powerful and can easily change user behavior in undesirable ways, which is the core thesis of that longer write-up that I linked. That’s a big part of where ghosting comes from in the first place. My concern is that thinking you can just bolt a new thing onto the existing model is to repeat the original error.
This app fundamentally misunderstands the problem. Your friend sets you up on a date. Are you going to treat that person horribly? Of course not. Why? First and foremost, because you’re not a dick. Your date is a human being who, like you, is worthy and deserving of basic respect and decency. Second, because your mutual friendship holds you accountable. Relationships in communities overlap and mutually shape one another. Accountability is an emergent property of that structure, not something that can be implemented by an app. When you meet people via an app, you strip away both the humanity and the community, and with them go individual and community accountability.
I’ve written about this tension before: As we use computers more and more to mediate human relationships, we’ll increasingly find that being human and doing human things is actually too complicated to be legible to computers, which need everything spelled out in mathematically precise detail. Human relationships, like dating, are particularly complicated, so to make them legible to computers, you necessarily lose some of the humanity.
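To make that legibility point concrete, here’s a toy sketch of what a person becomes once a dating app has made them legible. The field names and the scoring rule are entirely invented for illustration, but any matching system has to do some version of this flattening:

```python
from dataclasses import dataclass

# Purely illustrative: the fields and scoring rule are made up, but any
# matching system has to reduce a person to some fixed, typed schema like this.
@dataclass
class Profile:
    age: int
    height_cm: int
    interests: list[str]   # picked from a dropdown, not described in your own words
    looking_for: str       # e.g. "casual" or "long-term"

def compatibility(a: Profile, b: Profile) -> float:
    """Toy score: overlap of dropdown interests.

    Whatever doesn't fit into these fields -- humor, history, context,
    mutual friends, community -- doesn't exist as far as the system is concerned.
    """
    union = set(a.interests) | set(b.interests)
    return len(set(a.interests) & set(b.interests)) / len(union) if union else 0.0
```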
Companies that try to patch those problems whack-a-mole style will find that their patches suffer from the same problem: Their accountability structure is a flat, shallow version of genuine human accountability, and will itself produce pathological behavior. The problem is recursive.
I have now read so many “ChatGPT can do X job better than workers” papers, and I don’t think that I’ve ever found one that wasn’t at least flawed, if not complete bunk, once I went through the actual paper. I wrote about this a year ago, and I’ve since done the occasional follow-up on specific articles, including an official response to one of the most dishonest published papers that I’ve ever read; that response has itself just passed peer review and is awaiting publication.
That academics are still “benchmarking” ChatGPT like this, a full year after I wrote that, is genuinely astounding to me on so many levels. I don’t even have anything left to say about it at this point. At least fewer of them now purposefully design their experiments to conclude that AI is awesome, and more are coming to the obvious conclusion that ChatGPT cannot actually replace doctors, because of course it can’t.
This is my favorite one of these ChatGPT-as-doctor studies to date. It concluded that “GPT-4 ranked higher than the majority of physicians” on their exams. In reality, it can’t actually take the exam, so the researchers made a special, ChatGPT-friendly version of it for the sole purpose of concluding that ChatGPT is better than humans.
Because GPT models cannot interpret images, questions including imaging analysis, such as those related to ultrasound, electrocardiography, x-ray, magnetic resonance, computed tomography, and positron emission tomography/computed tomography imaging, were excluded.
Just a bunch of serious doctors at serious hospitals showing their whole ass.
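If you want the methodological move spelled out in the most literal terms, it amounts to something like this (invented toy data, obviously; I’m just making the filtering step explicit):

```python
# Invented toy data: each exam question either does or doesn't require reading an image.
questions = [
    {"id": 1, "requires_imaging": False},
    {"id": 2, "requires_imaging": True},   # ultrasound, ECG, x-ray, CT, PET/CT, ...
    {"id": 3, "requires_imaging": False},
]

# The comparison against physicians is then done on an exam modified to drop
# everything the model can't do -- which is, by construction, a different exam.
chatgpt_friendly_exam = [q for q in questions if not q["requires_imaging"]]
print(f"Model graded on {len(chatgpt_friendly_exam)} of {len(questions)} questions")
```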
Not directly to your question, but I dislike this NPR article very much.
Mwandjalulu dreamed of becoming a carpenter or electrician as a child. And now he’s fulfilling that dream. But that also makes him an exception to the rule. While Gen Z — often described as people born between 1997 and 2012 — is on track to become the most educated generation, fewer young folks are opting for traditionally hands-on jobs in the skilled trade and technical industries.
The entire article rests on a buried classist assumption. Carpenters have just as much reason to study theater, literature, or philosophy as, say, project managers at tech companies (those three examples are from PMs that I’ve worked with). Being educated and being a carpenter are only in tension because of decisions that we’ve made as a society; having read Plato has as much to do with being a carpenter as it does with being a PM. Conversely, it would be fucking lit if our society had the most educated plumbers and carpenters in the world.
NPR here is treating school as job training, which is, in my opinion, the root problem. Job training is definitely a part of school, but school and society writ large have a much deeper relationship: An educated public is necessary for a functioning democracy. 1 in 5 Americans is illiterate. If we want a functioning democracy, then we need to invest in everyone’s education for its own sake, rather than treat it as a distinguishing feature between the lower classes and the upper ones, and we need to treat blue-collar workers as people who also might wish to be intellectually fulfilled, rather than as a monolithic class of people with some innate desire to work with their hands and avoid book learning (though those people must also be welcomed).
Occupations such as auto technician with aging workforces have the U.S. Chamber of Commerce warning of a “massive” shortage of skilled workers in 2023.
This is your regular reminder that the Chamber of Commerce is a private entity that represents capital. Everything that they say should be taken with a grain of salt. There’s a massive shortage of skilled workers at the rates that businesses are willing to pay, which have been stagnant for decades while corporate profits have gone up. If you open literally any business and offer candidates enough money, you’ll have a line of applicants out the door.
Investment giant Goldman Sachs published a research paper
Goldman Sachs researchers also say that
It’s not a research paper; it’s a report. They’re not researchers; they’re analysts at a bank. This may seem like a nit-pick, but journalists need to (re-)learn to carefully distinguish between the thing that scientists do and corporate R&D, even though we sometimes use the word “research” for both. The AI hype in particular has been absolutely terrible for this. Companies have learned that putting out AI “research” that’s just them poking at their own product, dressed up in a science-lookin’ paper, leads to an avalanche of free press from lazy, credulous morons gorging themselves on the hype. I’ve written about this problem a lot. For example, in this post, which is about how Google wrote a so-called paper about how their LLM performs compared to doctors, only for the press to uncritically repeat (and embellish) the results all over the internet. Had anyone in the press actually fucking bothered to read the paper critically, they would’ve noticed that it’s junk science.
Happy to be of service!
I don’t know enough about their past to comment on that.
I highly recommend Herman and Chomsky’s book, Manufacturing Consent. It’s about exactly this.
But at least the way I read it, Bennet is saying that the NYT has a duty to help both sides understand each other, and the way to do that would be by giving a voice to the right and centrists without necessarily endorsing any faction
I think that this is a superficially pleasing argument but actually quite a dangerous one. It ignores that the NYT is itself quite powerful. Anything printed in the NYT is instantly given credibility, so it’s actually impossible for them to stay objective and not take sides. Taking an army out to quash protestors gets normalized when it appears in the NYT, which is a point for that side of the argument, but the NYT can’t publish every side of every issue. There’s not enough space on the whole internet for that. This is why we have that saying that I mentioned in the other comment, that journalists should afflict the comfortable and comfort the afflicted, or that journalists ought to speak truth to power. Since it’s simply impractical to be truly neutral, in the sense of publishing every side of every issue, a responsible journalist considers the power dynamics to decide which sides need airing.
The author of the OP argues that, because Cotton is already a very influential person, he ought to be published in the NYT, but I think that the exact opposite is true. Because Cotton is already an influential person, he has plenty of places that he can speak, and when the NYT platforms his view that powerful people like him should oppress those beneath them, they do a disservice to their society by implicitly endorsing that as something more worthy of publishing than the infinite other things that they could publish. For literally all of history, it’s been easy to hear the opinions of those who wield violence to suppress dissent. Journalism is special only when it goes against power.
No, we only agree that the NYT sucks, but disagree on basically everything else. We are coming from exact opposite directions. Yes, we both are attacking the NYT, but, like I already explained, the article attacks them for the opposite reason. For example:
Until that miserable Saturday morning I thought I was standing shoulder-to-shoulder with him in a struggle to revive them. I thought, and still think, that no American institution could have a better chance than the Times, by virtue of its principles, its history, its people and its hold on the attention of influential Americans, to lead the resistance to the corruption of political and intellectual life, to overcome the encroaching dogmatism and intolerance.
That is absurd bullshit. Like I said, the NYT’s principles and history are that of collaborating with American elite interests since its founding.
The article talks about “objectivity” over and over, and how the NYT used to strive for it, but that’s simply not true. The author’s concept of objectivity is what Gramsci calls cultural hegemony, in which the worldview of the ruling class becomes accepted as consensus reality. Like I said, the NYT and its ilk once had cultural hegemony, but it’s now been pierced. Another example:
There have been signs the Times is trying to recover the courage of its convictions. The paper was slow to display much curiosity about the hard question of the proper medical protocols for trans children; but once it did, the editors defended their coverage against the inevitable criticism.
Fuck that noise. This author is praising them for being “brave” on questioning trans people, but many activist groups have documented what this actually is: The NYT has an anti-trans editorial stance.
Again, like I said in my first comment, the author doesn’t understand the role of power in journalism: He thinks that the job of the journalist is to present all sides objectively, without any understanding that some people are in power and others are oppressed. Like the famous saying goes, the job of the journalist is to afflict the comfortable and comfort the afflicted. The NYT’s entire history, with some very notable exceptions, I grant you, is the opposite of that. Its apparent fall from grace now isn’t because it has lost its objectivity, but because it has lost its hegemony over American information.
I very strongly disagree with almost every word in this article. The work of journalism is to hold power to account, not to publish the dangerous ideas of the already-powerful. Any so-called journalist who thinks that is their job ought to be fired. The NYT didn’t lose its way when it hesitated to publish a call to crush BLM protestors with the army, but when it decided to be the mouthpiece of the American elite, as it has been for most of its history. Remember when it collaborated with the Bush administration to invade Iraq? Manufacturing Consent came out even before that, and it documented decades of NYT editorializing in favor of specific American interests.
Over the decades the Times and other mainstream news organisations failed plenty of times to live up to their commitments to integrity and open-mindedness. The relentless struggle against biases and preconceptions, rather than the achievement of a superhuman objective omniscience, is what mattered.
Give me a break. The very people who did the Iraq WMD coverage are still famous and respected journalists, for crying out loud. Some of them are still at the fucking Times.
I agree with the author that the failure of journalism is a major cause of Trump, but in the exact opposite sense: It’s not that the NYT is no longer trying to be objective, but that its veneer of objectivity has become transparently bullshit. The only thing that has changed is that traditional media outlets no longer have a monopoly on what information Americans get. The many other sources that have risen to challenge them are extremely problematic, to say the least, but traditional media outlets created that opening themselves. Like so much MAGA bullshit, the attacks on the media as elite and biased and out of touch land because they are in fact grounded in some truth, though the “solutions” are always a nightmare.
I say this every chance that I get: There is no such thing as a technological revolution. Revolutions happen within human institutions, and technologies change what’s possible within them. It’s great to see a similar argument in such a mainstream magazine.
Dan McQuillan has been warning about this since forever, to the point where I would’ve assumed that he’d be referenced, if not interviewed, in this article, though he wasn’t. Here’s a pretty short one from him. His basic argument is that AI is best understood as algorithmic Thatcherism, in which they’ll silicon-wash the same austerity politics that neoliberalism has been feeding us forever.
I would love to read an actually serious treatment of this issue and not 4 paragraphs that just say the headline but with more words.
I have been predicting for well over a year now that they will both die before the election, but after the primaries, such that we can’t change the ballots, and when Americans go to vote, we will be choosing between two dead guys. Everyone always asks, “I wonder what happens then,” and while I’m sure that there’s a technical legal answer to that question, the real answer is that no one knows.
Very well could be. At this point, I’m so suspicious of all these reports. It feels like trying to figure out what’s happening inside a company while relying only on their ads and PR communications: The only thing that I do know for sure is that everyone involved wants more money and is full of shit.
US Leads World in Credulous Reports of ‘Lagging Behind’ Russia. The American military, its allies, and the various think-tanks it funds, either directly or indirectly, generate these reports to justify forever increasing the military budget.
I know that this kind of actually critical perspective isn’t the point of this article, but software always reflects the ideology of the power structure in which it was built. I actually covered something very similar in my most recent post, where I applied Philip Agre’s analysis of the so-called Internet Revolution to the AI hype, but you can find many similar analyses all over the STS literature, or throughout Agre’s own work, which really ought to be required reading for anyone in software.
edit to add some recommendations: If you think of yourself as a tech person, and don’t necessarily get or enjoy the humanities (for lack of a better word), I recommend starting here, where Agre discusses his own “critical awakening.”
As an AI practitioner already well immersed in the literature, I had incorporated the field’s taste for technical formalization so thoroughly into my own cognitive style that I literally could not read the literatures of nontechnical fields at anything beyond a popular level. The problem was not exactly that I could not understand the vocabulary, but that I insisted on trying to read everything as a narration of the workings of a mechanism. By that time much philosophy and psychology had adopted intellectual styles similar to that of AI, and so it was possible to read much that was congenial – except that it reproduced the same technical schemata as the AI literature. I believe that this problem was not simply my own – that it is characteristic of AI in general (and, no doubt, other technical fields as well).
Same, and thanks! We’re probably a similar age. My own political awakening was Occupy, and I got interested in theory as I participated in more and more protest movements that just sorta fizzled.
I 100% agree re:Twitter. I am so tired of people pointing out that it has lost 80% of its value or whatever. Once you have a few billion, there’s nothing that more money can do to your material circumstances. Don’t get me wrong, Musk is a dumbass, but, in this specific case, I actually think that he came out on top. That says more about what you can do with infinite money than anything about his tactical genius, because it doesn’t exactly take the biggest brain to decide that you should buy something that seems important.