News · Technologies · 03-28-2024 at 17:45

From now on, each of the US federal agencies must appoint a «chief AI officer»


Kateryna Danshyna

News writer

According to the latest guidelines from the US Office of Management and Budget, all federal agencies must appoint a senior officer and governing boards to oversee all AI systems used internally.

Agencies will also be required to submit an annual report listing all AI systems they use, associated risks, and plans to mitigate them.

«We have directed all federal agencies to designate a chief AI officer with the experience, knowledge, and authority to oversee all artificial intelligence technologies their agencies use, to ensure responsible use», — said US Vice President Kamala Harris, as quoted by The Verge.

It is noted that the chief AI officer does not necessarily have to be «a political appointee», although this will ultimately depend on the structure of the agency. All governing boards must be established by the summer of 2024.

This guidance expands on the previously announced policy outlined in the Biden administration’s executive order on artificial intelligence, which required federal agencies to create security standards and increase the number of AI professionals in the government.

Some agencies began hiring even before the guidance was released: in February, the Department of Justice introduced Jonathan Mayer as the head of its AI department, where he will lead a group of cybersecurity experts working out how artificial intelligence can be used in law enforcement.

According to the head of the Office of Management and Budget, Shalanda Young, the US government plans to hire 100 AI specialists by the summer.

Federal agencies must also verify that any AI system they deploy complies with safeguards that «reduce the risks of algorithmic discrimination and provide the public with transparency on how the government uses AI».

The Office’s press release provides several examples:

  • People at the airport will be able to opt out of facial recognition without any delays or loss of their place in the queue.
  • If AI is used in the federal healthcare system to support critical diagnostic decisions, a human will oversee the verification of the tools’ results, to avoid disparities in access to care.
  • AI can be used to detect fraud in public services, with a human supervising significant decisions, and people affected by AI-caused harm will be able to seek compensation.

«If an agency is unable to apply these safeguards, it must stop using the artificial intelligence system, unless agency leadership can justify why doing so would increase risks to safety or rights overall, or would create an unacceptable impediment to the agency’s critical operations», — the press release says.

According to the new guidelines, any state-owned artificial intelligence models, code, and data must be made publicly available if they do not pose a risk to government operations.

Apart from such targeted guidelines, the United States still has no laws regulating artificial intelligence, while the EU has already voted for its own rules, which are expected to come into force in May.

