The World Health Organization (WHO) has launched an AI-powered chatbot named SARAH, whose mission is to expand access to global health information. Despite the innovation SARAH represents, the chatbot has still been found to give out false medical information.
Technological Innovation Meets Health Education
Reflecting WHO's commitment to this cause, SARAH, an acronym for Smart AI Resource Assistant for Health, represents the institution's latest digital health tool. Developed to meet information needs anytime and anywhere, SARAH can deliver information in eight languages on topics including mental health and lifestyle choices.
The initiative forms part of WHO's strategy to bring technology into public health education and close the global gap in health coverage. WHO has acknowledged, however, that the tool is still a prototype with weaknesses. SARAH has been tailored by its developers to stay within WHO's mandate, offering information on how to find professional help for particular issues rather than diagnoses; it cannot provide such services comprehensively and is not comparable to tools like WebMD.
Challenges in AI Accuracy and Security
SARAH's path from development to deployment has not been an easy one. The chatbot is based on OpenAI's GPT-3.5 model and has had problems serving outdated information and incorrect answers. One instance was its incorrect description of the approval status of a new Alzheimer's drug, a consequence of its training data extending only as far as September 2021.
Another recurring issue is that SARAH sometimes produces answers that stray from the topic or its intended purpose, a well-known problem in AI development referred to as hallucination. These inaccuracies could heighten the danger of public misinformation. WHO is therefore seeking input from researchers and government agencies to improve the chatbot's reliability, especially during public health emergencies.
These issues are accompanied by apprehensions about the use of AI in the health workforce. SARAH applies facial recognition techniques to detect and infer users' emotions, and WHO has pledged data protection and privacy for users who choose to show their faces. The model's reliance on open-source foundations also raises concerns about increased exposure to cyberattacks.
Future Directions and Ethical Considerations
WHO continues to maintain SARAH, with updates to come, including changes to the avatar's appearance and improved interactive abilities. WHO has also announced ethical principles for its stakeholders, emphasizing data transparency and user security. Even as it takes SARAH's design to the next level, WHO remains wary about integrating AI into the dissemination of health messages, and the organization is prompt to point out the technology's weaknesses.
The organization makes clear that SARAH is not intended to transform medical services, but to serve as a supportive tool for spreading public health awareness and education worldwide. Despite the obstacles, WHO's effort demonstrates significant progress in applying artificial intelligence to health education, reaching a wider audience and making health information more available globally.
This article originally appeared in Fortune