AI Presents an Extinction-Level Threat to the Human Species
According to a government-commissioned report, artificial intelligence (AI) poses serious national security risks that require “quick and decisive” action from the US government to prevent an “extinction-level threat to the human species.”
The report, titled “An Action Plan to Increase the Safety and Security of Advanced AI,” was published after more than a year of research, including interviews with more than 200 government employees, experts, and staff at leading AI firms such as OpenAI, Google DeepMind, Anthropic, and Meta. It presents a comprehensive set of policy recommendations intended to drastically change the current trajectory of the AI industry.
What Does This Report On AI Indicate?
One of the report's main recommendations is to make it unlawful to train AI models using more than a certain level of computing power, with the threshold to be set by a new federal AI agency.
The goal of this measure is to moderate race dynamics among AI developers and to slow the chip industry's push toward ever-faster hardware. The report also calls for stricter regulation of the manufacture and export of AI chips, along with increased federal funding for research aimed at making advanced AI systems safer.
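The report does not tie its proposal to a specific number, but the following minimal sketch in Python shows how a compute threshold works in principle. It assumes the common 6 × parameters × tokens rule of thumb for estimating a dense model's training compute, and borrows the 10^26-operation figure used as a reporting trigger in the 2023 US Executive Order on AI; both the threshold value and the function names here are illustrative assumptions, not taken from the report.

```python
# Illustrative sketch: flagging a training run that crosses a compute threshold.
# The 1e26 figure mirrors the reporting trigger in the 2023 US Executive Order
# on AI; the report itself names no number, so treat it as an assumption.

COMPUTE_THRESHOLD_FLOPS = 1e26  # hypothetical regulatory cutoff


def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Rule-of-thumb training compute for a dense transformer:
    FLOPs ~= 6 * parameter_count * training_tokens."""
    return 6.0 * parameters * tokens


def run_exceeds_threshold(parameters: float, tokens: float) -> bool:
    """Return True if the estimated training compute crosses the cutoff."""
    return estimated_training_flops(parameters, tokens) >= COMPUTE_THRESHOLD_FLOPS


if __name__ == "__main__":
    # Example: a 1-trillion-parameter model trained on 20 trillion tokens.
    params, tokens = 1e12, 2e13
    print(f"Estimated training compute: {estimated_training_flops(params, tokens):.2e} FLOPs")
    print("Exceeds threshold:", run_exceeds_threshold(params, tokens))
```

One reason compute is an attractive regulatory lever, as the report's hardware focus suggests, is that large training runs require concentrated, identifiable hardware, which is easier to monitor than software or data.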
The report highlights industry race dynamics that put development speed ahead of safety, and discusses the hazards of weaponized AI systems as well as the potential loss of control over advanced AI. It suggests that regulating the hardware used to train AI systems may be necessary to protect international safety and security from the risks AI poses.
The report makes clear that, despite the far-reaching nature of its recommendations, they do not represent the official views of the US government or the US Department of State.
In its paper, Gladstone AI cites a number of well-known figures who have expressed concern about the existential hazards posed by AI, including Elon Musk, Federal Trade Commission Chair Lina Khan, and a former top executive at OpenAI. Gladstone AI also claims that employees at AI companies privately voice similar worries.
The implication is that the development of AI and its growing capabilities could create catastrophic hazards unlike any the US has experienced before.
AI systems might be used, for example, to plan and carry out highly destructive cyberattacks capable of crippling critical infrastructure. AI-driven disinformation campaigns could destabilise society and undermine confidence in institutions. Other risks include psychological manipulation, weaponized applications of biology and materials science, weaponized robotics such as drone swarm attacks, and power-seeking AI systems that are hostile to humans and impossible to control.
Conclusion
The proposal is expected to face political obstacles, and the US government is considered unlikely to adopt it. Current US AI policy allows for compute thresholds above which additional regulatory requirements and transparency monitoring apply, but it does not impose limits above which training runs would be prohibited outright.
The report concentrates on two distinct categories of risk. The first it calls “weaponization risk”; the second, “loss of control” risk, refers to the possibility that advanced AI systems could outmanoeuvre their creators. “Race dynamics” in the AI industry, the report claims, aggravate both types of risk.
According to the report, firms are motivated to prioritise speed over safety because the first company to achieve artificial general intelligence (AGI) is likely to capture most of the financial rewards. It notes, however, that any measures taken must account for the possibility that excessive regulation could strengthen overseas chip businesses and erode American influence over the supply chain.