Anthropic, the company behind the LLM Claude and founded by former OpenAI employees, warns that rapid AI development could, within two to three years, enable attackers to create dangerous viruses and biological weapons. AI therefore needs to be regulated now in the areas of cybersecurity, nuclear technology, chemistry, and biology, said CEO Dario Amodei.
"If we don't have mechanisms to contain artificial intelligence systems, then we are in for very bad times," he said last week.
Anthropic works actively with biosecurity experts, studying how neural networks could be used to create dangerous weapons. The concerns are justified: some models have already been used for criminal purposes, such as designing weapons, making firebombs, and producing drugs. A jailbreak granting uncontrolled access to such information could have disastrous consequences.
Collaborations Pharmaceuticals previously reported, for example, that drug-development technology could be repurposed to design biochemical weapons.
AI researcher Yoshua Bengio supports the idea of limiting AI capabilities: "I have been a supporter of open-source code throughout my scientific career. It is great for scientific discovery, but as Geoff Hinton said: if nukes were software, would you be allowed to open-source nuclear bombs?"
Open-source AI leaders disagree. Last week, with the support of GitHub, Hugging Face, and EleutherAI, a position paper on AI regulation, "Supporting Open Source and Open Science in the EU AI Act," was published. The coalition argues that slowing the development of open projects is a mistake: unlike closed-source vendors, open projects operate on principles of transparency, creating the conditions needed for safe oversight.
