
Risks outweigh benefits. OpenAI and Google DeepMind employees call for tighter control over AI

Published by
Ігор Шелудченко

Former and current employees of OpenAI and Google DeepMind have signed an open letter demanding stronger safety measures in the development of artificial intelligence.

In the open letter, the experts call for the creation of independent regulatory bodies and the establishment of strict standards to control the development and deployment of AI technologies. They note that without proper control and regulation, the potential risks will outweigh the benefits these technologies can bring.

The signatories express concern about the possible abuses and dangers associated with the uncontrolled development of AI. They emphasize the importance of interdisciplinary cooperation to ensure the safe and ethical use of AI technologies, as well as the need for transparency and accountability in this area.

The employees put forward the following demands to leading AI companies:

  • Do not enter into agreements that prohibit criticism of the company over risk-related concerns, and do not retaliate against employees for such criticism.
  • Ensure that warnings can be communicated (including anonymously) to the board of directors, regulators and other organizations.
  • Foster a culture of open dialogue and allow employees to make public statements subject to the protection of trade secrets.
  • Do not sanction employees who publicly share confidential risk information after other means of communicating their concerns have failed.

The open letter has already received wide support in academic circles and among public organizations dealing with ethics and security issues in the technology sector.

OpenAI responded to the appeal by stating that it already has channels for reporting possible AI risks, including a «hotline». The company also said it does not release new technologies until appropriate safety measures are in place.

It is worth noting that AI companies, including OpenAI, have repeatedly used aggressive tactics to prevent employees from speaking freely. According to Vox, last week it became known that OpenAI forced departing employees to sign extremely restrictive non-disparagement and non-disclosure agreements or lose all their vested equity. OpenAI CEO Sam Altman apologized after the report, saying he would change the company’s offboarding procedures.

The letter comes shortly after two key employees left OpenAI: co-founder Ilya Sutskever and lead safety researcher Jan Leike. Following their departures, Leike claimed that OpenAI had abandoned its safety culture in favor of «shiny products».

AGI is not far off, experts say

At the same time, experts in the field of artificial intelligence are expressing concern about the pace and direction of AI development.

For example, former OpenAI engineer Leopold Aschenbrenner wrote a document of as many as 165 pages arguing that by 2027 we will have AGI (human-level intelligence), which will quickly train itself up to ASI (intelligence superior to humans). The main risk, according to Leopold, is that AI is now akin to a second nuclear bomb, and the military is already eager to get (and is getting) access to it.
