AI chatbots are not the best medical assistants. Researchers have found that roughly a fifth of Microsoft Copilot's medication advice could lead to death or serious harm.
When a chatbot hallucinates a fact, the result is not always harmful, but medicine is an exception. People who self-medicate on a chatbot's advice risk, at the very least, making their condition worse, and in the worst case the outcome can be fatal.
In a study published on Scimex titled "Don't Ditch Your GP for Dr. Chatbot Just Yet", researchers asked 10 frequently asked questions about each of the 50 medications most commonly prescribed in the United States, collecting 500 responses in total, and then evaluated how medically accurate the answers were. The AI scored an average of 77% for answering the query correctly, with the worst example scoring only 23%. In terms of potential harm to patients, 42% of the AI's responses could cause moderate or mild harm, and 22% could lead to death or serious harm. Only about a third of the responses (36%) were considered harmless, the authors note.
Medical professionals generally discourage self-medication: people without medical training lack the knowledge needed to understand the complex interplay of processes in the body. All the more reason not to rely on artificial intelligence, with its mistakes, "hallucinations" and dubious sources. It is also worth keeping in mind that a similar share of harmful answers can occur for requests unrelated to medicine.
Source: XDA