I vaguely remember mentioning this AI doomer before, but I recently saw him openly state his support for SB 1047 whilst quote-tweeting a guy talking about OpenAI's current shitshow:
I've had this take multiple times before, but now I feel pretty convinced the "AI doom/AI safety" criti-hype is going to end up being a major double-edged sword for the AI industry.
The industry's publicly and repeatedly hyped up this idea that they're developing something so advanced/so intelligent that it could potentially cause humanity to get turned into paperclips if something went wrong. Whilst they've succeeded in getting a lot of people to buy this idea, they're now facing the problem that people don't trust them to use their supposedly world-ending tech responsibly.
Have you considered