Google Restricts AI Overviews In Search After Reports Of Harmful Health Information

After receiving complaints that its AI-generated "AI Overviews" feature was serving false and potentially dangerous health information, Google has moved to limit the feature's use in search results. The decision follows an investigation by The Guardian that uncovered multiple instances in which AI-generated answers contained false medical information about serious conditions such as cancer, liver disease, and mental health disorders.

One example involved searches for normal ranges in blood tests used to assess liver disease: the AI-generated summaries displayed generalized values without accounting for important variables such as age, sex, ethnicity, and national medical standards. Health experts warned that, because of this missing context, people with serious liver conditions might mistakenly believe their test results are normal and postpone or stop necessary treatment.

Medical professionals described the responses as "dangerous" and "alarming," emphasizing that false health information can lead to serious complications or even death. In response, Google now shows direct links to external medical websites instead of AI Overviews for searches on sensitive health topics. The company says it continually works to improve the system and applies internal policy measures when AI summaries lack adequate context.

Nevertheless, AI-generated answers still appear for certain health-related queries, depending on how the question is worded. This concerns health groups such as the British Liver Trust. Vanessa Hebditch, the organization's director of communications and policy, warned that AI summaries can oversimplify complicated medical tests, and that presenting isolated numbers without sufficient explanation may mislead users, since normal test results do not always rule out serious disease.

Google's AI Overviews can provide inaccurate health information because they lack context such as age, sex, and ethnicity.

When asked why AI Overviews were not removed more broadly, Google said its internal medical review team found that many of the contested answers were accurate and backed by reliable sources. The company also stresses that users seeking health information should consult a qualified physician.

Even with these assurances, the episode shows that applying generative AI to health advice remains difficult. While access to trustworthy medical information is as important as ever, the incident highlights the risks of relying solely on automated systems for complex, potentially life-altering guidance.

