Google's AI Search spreading misinformation, experts sound alarm over false answers

Experts are raising concerns about Google’s AI Search spreading misinformation through inaccurate, AI-generated answers. Here is a look at the errors, Google’s response, and the broader consequences.

Alarm Bells Over AI Errors

A disturbing instance of false information arose when an Associated Press reporter asked Google whether cats had ever been on the moon. Google’s AI assertively replied, “Yes, astronauts have met cats on the moon, played with them, and provided care,” citing Buzz Aldrin and Neil Armstrong. This is wholly incorrect. Nor is it a unique incident; since the AI update, similar falsehoods, both harmful and humorous, have spread across social media.

Experts fear that these AI-produced summaries may propagate false information and reinforce prejudices. Melanie Mitchell, an AI researcher at the Santa Fe Institute, highlighted another concerning example: when she asked how many Muslim presidents the United States has had, Google’s AI answered that Barack Obama was a Muslim president, citing a misconstrued academic source. According to experts, mistakes like these make Google’s AI Search reckless and harmful as a vehicle for misinformation.

Google’s Response to Criticism

Google acknowledged these flaws and promised “swift action” to address them. The company asserts that mistakes such as the Obama fabrication violate its content policies and are being fixed. According to Google, the majority of AI-generated summaries offer accurate information and underwent rigorous testing before release. However, the intrinsic randomness of AI language models makes such faults difficult to reproduce and correct.

The Issue of AI “Hallucinations”

AI language models are prone to what are known as “hallucinations”: because they generate answers by predicting plausible next words rather than verifying facts, they can produce confident but inaccurate information. They may also draw on sources that aren’t trustworthy, such as satirical articles or unreliable social media posts.

For example, a biology professor applauded Google’s AI for its comprehensive response when asked how to handle a snake bite. However, the possibility of small mistakes in life-threatening circumstances still raises serious concerns. Emily M. Bender, a linguistics professor, noted that in an emergency, people could accept the first response they see, even though it could be gravely wrong.

Broader Implications for Information Retrieval

Chirag Shah and Emily M. Bender have long cautioned against deploying AI as “domain experts.” These technologies, they contend, have the potential to spread false information and cultural biases. Furthermore, the usefulness of online communities and the human quest for knowledge may be compromised by depending solely on AI for information retrieval.
The transition to AI-generated responses affects news and other information websites’ earnings as well as web traffic. Users are less likely to visit and support original content providers when they receive direct responses from Google, which might be detrimental to the digital economy.

Competitors that are also creating AI-driven search engines, such as OpenAI and Perplexity AI, have been keenly observing Google’s move. Critics such as Dmitry Shevelenko of Perplexity AI contend that Google hurried its AI function’s release, resulting in multiple quality problems.
Tech and social media professionals have documented multiple occasions on which Google’s AI gave outright false information, including inaccurate geographic details and untrue statements about Barack Obama. These mistakes highlight the need for more thorough testing and verification procedures.

Conclusion

Google has integrated AI into its search engine, but the drawbacks are serious: the feature is spreading misinformation. Faster access to information is promised, yet it comes with real risks of propagating falsehoods and prejudices. To guarantee the accuracy and dependability of AI-generated material, experts advocate more cautious deployment and stronger safeguards. Maintaining public confidence in digital information sources will depend on balancing innovation with accuracy as the technology advances.
