- 1 Post
- 27 Comments
Coll@awful.systems to SneerClub@awful.systems • SBF's effective altruism and rationalism considered an aggravating circumstance in sentencing
embrace the narrative that “SBF died for our sins”
Huh? This is so absurdly self-aggrandizing that I struggle to comprehend what he’s even saying. What did he imagine “our sins” were, and how did getting imprisoned absolve them?
Coll@awful.systems to SneerClub@awful.systems • it's outrageous the NYT called Scoot a racist like Charles Murray! also, Scoot agrees with race science, precisely as Murray does. Also, the leaked 2014 email is only outrageous if you hadn't read SSC
No no, not the term (my comment is about how he got his own term wrong), just his reasoning. If you make a lot of reasoning errors, but two faulty premises cancel each other out, and you write, say, 17,000 words or sequences of hundreds of blog posts, then you’re going to stumble into the right conclusion from time to time. (It might be fun to model this mathematically: can you err your way into being unerring? But unfortunately, in reality-land, the number of premises an argument needs varies wildly.)
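(Not part of the original comment, but to make that parenthetical concrete: a minimal Python sketch of "erring your way into being unerring", assuming each argument rests on a fixed number of independent premises, each botched with some fixed probability, and that botched premises only cancel in pairs. All names and numbers here are invented for illustration.)

```python
import random

# Purely illustrative toy model: an argument rests on n independent premises,
# each botched with probability p_err, and the conclusion comes out "right"
# exactly when the errors cancel in pairs (i.e. the number of errors is even).

def conclusion_is_right(n_premises: int, p_err: float) -> bool:
    errors = sum(random.random() < p_err for _ in range(n_premises))
    return errors % 2 == 0

def fraction_right(n_premises: int, p_err: float, trials: int = 100_000) -> float:
    """Monte Carlo estimate of how often sloppy reasoning lands on the right answer."""
    return sum(conclusion_is_right(n_premises, p_err) for _ in range(trials)) / trials

if __name__ == "__main__":
    p = 0.3  # chance of botching any single premise (made-up number)
    for n in (1, 2, 5, 20):
        simulated = fraction_right(n, p)
        exact = (1 + (1 - 2 * p) ** n) / 2  # closed form for "even number of errors"
        print(f"{n:2d} premises: simulated {simulated:.3f}, exact {exact:.3f}")
```

Under these toy assumptions the chance of accidentally being right decays toward a coin flip as the number of premises grows, which matches the comment's caveat that in reality the number of premises varies wildly, so errors only occasionally cancel.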
Coll@awful.systems to SneerClub@awful.systems • it's outrageous the NYT called Scoot a racist like Charles Murray! also, Scoot agrees with race science, precisely as Murray does. Also, the leaked 2014 email is only outrageous if you hadn't read SSC
Zack thought the Times had all the justification they needed (for a Gettier case) since he thought they 1) didn’t have a good justification but 2) also didn’t need a good justification. He was wrong about his second assumption (they did need a good justification), but also wrong about the first assumption (they did have a good justification), so they cancelled each other out, and his conclusion ‘they have all the justification they need’ is correct through epistemic luck.
The strongest possible argument supports the right conclusion. Yud thought he could just dream up the strongest arguments and didn’t need to consult the literature to reach the right conclusion. Dreaming up arguments is not going to give you the strongest arguments, while consulting the literature will. However, one of the weaker arguments he dreamt up just so happened to also support the right conclusion, so he got the right answer through epistemic luck.
Coll@awful.systems to SneerClub@awful.systems • it's outrageous the NYT called Scoot a racist like Charles Murray! also, Scoot agrees with race science, precisely as Murray does. Also, the leaked 2014 email is only outrageous if you hadn't read SSC
It made me think of epistemic luck in the rat-sphere in general. Him inventing and then immediately fumbling ‘Gettier attack’ is just such a perfect example, but there are other examples in there, such as Yud saying:
Personally, I’m used to operating without the cognitive support of a civilization in controversial domains, and have some confidence in my own ability to independently invent everything important that would be on the other side of the filter and check it myself before speaking. So you know, from having read this, that I checked all the speakable and unspeakable arguments I had thought of, and concluded that this speakable argument would be good on net to publish[…]
About which @200fifty points out:
Zack is actually correct that this is a pretty wild thing to say… “Rest assured that I considered all possible counterarguments against my position which I was able to generate with my mega super brain. No, I haven’t actually looked at the arguments against my position, but I’m confident in my ability to think of everything that people who disagree with me would say.” It so happens that Yudkowsky is on the ‘right side’ politically in this particular case, but man, this is real sloppy for someone who claims to be on the side of capital-T truth.
Coll@awful.systems to SneerClub@awful.systems • it's outrageous the NYT called Scoot a racist like Charles Murray! also, Scoot agrees with race science, precisely as Murray does. Also, the leaked 2014 email is only outrageous if you hadn't read SSC
The sense of counter-intuitivity here seems mostly to be generated by the convoluted grammar of your summarising assessment, but this is just an example of bare recursivity, since you’re applying the language of the post to the post itself.
I don’t think it’s counter-intuitive and the post itself never mentioned ‘epistemic luck’.
Perhaps it would be interesting if we were to pick out authentic Gettier cases which are also accusations of some kind
This seems easy enough to construct: just base an accusation on a Gettier case. So in the case of the stopped clock, say we had an appointment at 6:00 and, due to my broken watch, I think it’s 7:00; as it so happens, it actually is 7:00. When I accuse you of being an hour late, it is a “Gettier attack”: a true accusation, but not one based on knowledge, because it is based on a Gettier case.
Coll@awful.systems to SneerClub@awful.systems • it's outrageous the NYT called Scoot a racist like Charles Murray! also, Scoot agrees with race science, precisely as Murray does. Also, the leaked 2014 email is only outrageous if you hadn't read SSC
While the writer is wrong, the post itself is actually quite interesting and made me think more about epistemic luck. I think Zack does correctly point out cases where I would say rationalists got epistemically lucky, although his views on the matter seem entirely different. I think this quote is a good microcosm of this post:
The Times’s insinuation that Scott Alexander is a racist like Charles Murray seems like a “Gettier attack”: the charge is essentially correct, even though the evidence used to prosecute the charge before a jury of distracted New York Times readers is completely bogus.
A “Gettier attack” is a very interesting concept I will keep in my back pocket, but he clearly doesn’t know what a Gettier problem is. In a Gettier case a belief is both true and justified, but still not knowledge, because the usually solid justification fails unexpectedly. The classic example: you look at your watch, see it says 7:00, believe it’s 7:00, and it actually is 7:00, yet this isn’t knowledge, because the usually solid justification of “my watch tells the time” failed unexpectedly: the watch broke the last time it reached 7:00 and has been stuck there ever since. You got epistemically lucky.
So while this isn’t a “Gettier attack”, Zack did get at least a partial dose of epistemic luck. He believes the charge isn’t justified and that this therefore makes it a “Gettier attack”; but a Gettier case in fact requires justification, and the charge is justified, so he got some epistemic luck while writing about epistemic luck. This is what a good chunk of this post feels like.
Coll@awful.systems to SneerClub@awful.systems • the [simulated] are a convenient group of people to advocate for
I don’t know; when I googled it, this 80,000 Hours article is one of the first results. It seems reasonable at first glance but I haven’t looked into it.
Coll@awful.systems to SneerClub@awful.systems • Rationalist org bets random substack poster $100K that he can't disprove their covid lab leak hypothesis, you'll never guess what happens next
Wait, they had Peter’s arguments and sources before the debate? And they’re blaming the format? Having your challenger’s material before the debate, while they don’t have yours, is basically a guaranteed win. You have his material: take it with you to the debate and just prepare answers in advance so you don’t lose $100K! Who gave these idiots $100K?
Coll@awful.systems to SneerClub@awful.systems • the [simulated] are a convenient group of people to advocate for
The way this is categorized, this 18.2% is also about things like climate change and pandemics.
Coll@awful.systems to SneerClub@awful.systems • the [simulated] are a convenient group of people to advocate for
the data presented on that page is incredibly noisy
Yes, that’s why I said it’s “less comprehensive” and why I first gave the better 2019 source, which also points in the same direction. If there is a better source, or really any source, for the majority claim, I would be interested in seeing it.
Speaking of which,
AI charities (which is not equivalent to simulated humans, because it also includes climate change, nearterm AI problems, pandemics etc)
AI is to climate change as indoor smoking is to fire safety; “nearterm AI problems” is an incredibly vague and broad category; and I would need someone to explain to me why they believe AI has anything to do with pandemics. Any answer I can think of would reflect poorly on whoever holds such a belief.
You misread; it’s 18.2% for long term and AI charities [emphasis added]
Coll@awful.systems to SneerClub@awful.systems • the [simulated] are a convenient group of people to advocate for
The linked stats are already way out of date
Do you have a source for this ‘majority’ claim? I tried searching for more up-to-date data, but this less comprehensive 2020 data is even more skewed towards Global development (62%) and animal welfare (27.3%), with 18.2% for long term and AI charities (which is not equivalent to simulated humans, because it also includes climate change, nearterm AI problems, pandemics, etc.). The utility of existential risk reduction is basically always based on population growth/future generations (aka humans) and not simulations. ‘Digital person’ only has 25 posts on the EA forum (by comparison, global health and development has 2,097 posts). It seems unlikely to me that this is a majority belief.
Coll@awful.systems to SneerClub@awful.systems • the [simulated] are a convenient group of people to advocate for
I spend a lot of time campaigning for animal rights. These criticisms also apply to it, but I don’t consider it a strong argument there. EAs spend an estimated $1.8 million per year (less than 1%, so nowhere near a majority) on “other longterm”, which presumably includes simulated humans, but an estimated $55 million per year (or 13%) on farmed animal welfare (for those who are curious, the largest recipient is global health at 44%, but it’s important to note that the more people are into EA, the less they seem to give to that compared to more longtermist causes). Farmed animals “don’t resent your condescension or complain that you are not politically correct, they don’t need money, they don’t bring cultural baggage…” yet that doesn’t mean they aren’t a worthy cause. This quote might serve as something members should keep in mind, but I don’t think it works as an argument on its own.
Coll@awful.systems to SneerClub@awful.systems • did you know that the libertarian Charter City dream is Effective Altruism akhually
I’m not that good at sneering. ‘EA is when you make Fordlândia’? Idk, you found the original post and you’re much better at it; it’s better if you do it.
Coll@awful.systems to SneerClub@awful.systems • did you know that the libertarian Charter City dream is Effective Altruism akhually
When he posted the finished video on YouTube yesterday, there were some quite critical comments on YouTube, the EA forum, and even LessWrong. Unfortunately they got few to no upvotes, while the video itself got enough karma to still be on the frontpage of both forums.
Coll@awful.systems to SneerClub@awful.systems • e/acc has solved the "is-ought" problem with thermodynamics!
He solved the is-ought problem? How did he do that?
what ought to be (what is probable)
Hey guys, I also solved the is-ought problem: first we start with “is” (what we should do)…
Coll@awful.systemsto SneerClub@awful.systems•"if you're not stupid, it doesn't matter if COVID was a lab leak"English4·1 year agopeople who are my worst enemies - e/acc people, those guys who always talk about how charity is Problematic - […] weird anti-charity socialists
Today I learned that ‘effective accelerationists’ like Y Combinator CEO Garry Tan, venture capitalist Marc Andreessen, and “Beff Jezos” are socialists. I was worried that the evil goals they want to achieve by simply advancing capitalism might reflect badly on it, but luckily they aren’t fellow capitalists after all; they turned out to be my enemies, the socialists, all along! Phew!
Coll@awful.systems to SneerClub@awful.systems • LessWrong: but what about some eugenics, tho?
Well of course, everything is determined by genetics, including, as the EA forum taught me today, things like whether someone is vegetarian, so to solve that problem (as well as any other problem) we need (and I quote) “human gene editing”. /s
Coll@awful.systems to SneerClub@awful.systems • loving the EA forum on how the problem with spending the charity money on a castle was the public relations
When the second castle (bought by ESPR with FTX money) was brought up on the forum, Jan Kulveit (one of the main organizers of ESPR) commented:
Multiple claims in this post are misleading, incomplete or false.
He then never bothered to explain what the misleading and false claims actually were (and instead implied the poster had doxxed them). Then, under the post this thread discusses, he has the gall to comment:
For me, unfortunately, the discourse surrounding Wytham Abbey, seems like a sign of epistemic decline of the community, or at least on the EA forum.
I guess Jan doesn’t think falsely implying that the person criticizing your chateau purchase is both a liar and a doxxer counts as ‘epistemic decline’.
The article does say/link:
As for
In the footnote it does say:
Although there’s likely still an overestimation of how much it would help