OpenAI blog post: https://openai.com/research/building-an-early-warning-system-for-llm-aided-biological-threat-creation
Orange discuss: https://news.ycombinator.com/item?id=39207291
I don’t have any particular section to call out. May post thoughts tomorrow — today it’s after midnight, oh gosh — but wanted to post since I knew y’all’d be interested in this.
Terrorists could use autocorrect according to OpenAI! Discuss!
I have not. Is it good?
Keep in mind that it was written in 2014; the field of bioengineering has advanced considerably in the past ten years.
Yeah. It addresses your points I think.
I’ll have to check it out.
The general point seems to be yours: that intellectual availability is the largest restriction on bioterrorism. I don’t disagree, but a big part of my argument is that access to this information has never been higher (which is better than the alternative for a variety of reasons), and access to usable resources has never been higher either. We have plenty of garage-scale bio labs as it is. So yes, the biggest limit is the availability of people with the knowledge to do it, but that’s not a hard roadblock, at least not anymore.
And the prediction horizon on biotech is tiny. Give it another ten years? Twenty? It’s not a zero threat just because nobody has done it yet.
Not just intellectual availability, but the complexity of the job itself. IIRC it goes into the Russian experience.