News · Software · 05-08-2024 at 13:38

A chatbot for spies: Microsoft launches artificial intelligence model without an Internet connection


Vadym Karpus

News writer

Microsoft has created a generative artificial intelligence model based on GPT-4, designed specifically for the US intelligence services. It works without an Internet connection.

This is reportedly the first time Microsoft has deployed a large language model in a secure environment, designed to let spy agencies analyze top-secret information without the risks that come with an internet connection, and to hold secure conversations with a chatbot similar to ChatGPT and Microsoft Copilot.

According to Bloomberg, the new artificial intelligence service (which has no public name yet) addresses the growing interest of intelligence agencies in using generative AI to process sensitive data while reducing the risk of data leaks or hacking attempts. ChatGPT typically runs on cloud servers, which can expose data to leakage and interception. That is why the CIA announced plans last year to create its own ChatGPT-like service; this Microsoft solution is a separate project.

William Chappell, Microsoft’s chief technology officer for strategic missions and technology, noted that the development of the new system included 18 months of work on modifying an AI supercomputer in Iowa. The modified GPT-4 model is designed to read files provided by users but does not have access to the open Internet.

«This is the first time we’ve had an isolated version — where isolated means it’s not connected to the Internet — and it’s on a special network that only the U.S. government has access to», Chappell said.

The new service was activated on Thursday and is now available to about 10,000 people in the intelligence community, and it is ready for further testing by the relevant agencies. According to Chappell, it is already «answering» questions.

One serious drawback of using GPT-4 to analyze sensitive data is its potential to produce inaccurate summaries, draw faulty conclusions, or give users incorrect information. Because trained AI neural networks are not databases and operate on statistical probabilities, they can produce factually wrong output unless they are supplemented with information retrieved from an external source using a technique such as retrieval-augmented generation (RAG). Given this limitation, GPT-4 could potentially misinform or mislead US intelligence agencies if it is not used appropriately.
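For illustration only, retrieval-augmented generation can be sketched roughly as below. Microsoft has not described its system in this detail, so the document store, the naive keyword scoring, and the prompt template are purely hypothetical stand-ins for whatever retrieval and grounding the real service may use:

```python
# Minimal RAG sketch (illustrative assumptions, not Microsoft's actual system).
from collections import Counter

# Hypothetical in-memory corpus standing in for the files an analyst provides.
DOCUMENTS = [
    "Report A: The facility was inspected on 2024-03-14 and found operational.",
    "Report B: Satellite imagery shows new construction at the northern site.",
    "Report C: The shipment was delayed by two weeks due to port congestion.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap (a stand-in for a real vector search)."""
    query_terms = Counter(query.lower().split())
    scored = [
        (sum(query_terms[t] for t in doc.lower().split()), doc)
        for doc in DOCUMENTS
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the model's answer in retrieved passages rather than its trained-in memory."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below. If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    question = "What does satellite imagery show at the northern site?"
    prompt = build_prompt(question, retrieve(question))
    print(prompt)  # This prompt would then be passed to the (air-gapped) language model.
```

The point of the sketch is the grounding step: instead of asking the model to answer from memory, the pipeline retrieves relevant passages first and instructs the model to answer only from them, which reduces, though does not eliminate, the risk of fabricated answers.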

Source: Ars Technica

