Google has recently come under scrutiny for the inaccurate, amusing, and occasionally peculiar answers provided by its AI Overviews in search results. These AI-generated responses, which Google expanded earlier this month, have yielded mixed outcomes. For instance, a user seeking advice on how to make cheese adhere to pizza was advised to use glue — a piece of advice sourced from an old Reddit post. Meanwhile, another user was told to consume “one small rock per day,” a suggestion stemming from The Onion.
If you’re unable to replicate these results, or if you encounter different outcomes, it’s because Google is actively working to eliminate inaccurate information. According to a company spokesperson, Google is taking “swift action” and utilizing these examples to make broader improvements to its systems.
“The majority of AI Overviews deliver high-quality information, complete with links for further exploration,” stated the spokesperson. “Many examples we’ve encountered involve rare queries, and we’ve also identified instances that appear to be altered or are non-reproducible. We conducted thorough testing prior to launching this new feature, and similar to our other Search functionalities, we welcome feedback.”
Therefore, it’s reasonable to anticipate that the quality of these results will improve over time, and some of the screenshots circulating on social media might have been fabricated for humorous effect.
However, observing these AI-driven search results prompts an important question: What is their true purpose? Even in an ideal scenario with flawless functionality, what advantages do they offer over traditional web searches?
Evidently, Google aims to provide users with direct answers, minimizing the need to navigate through multiple web pages. According to the company’s early tests of AI Overviews, “people use Search more frequently and express higher satisfaction with the results.”
The concept of eliminating the “10 blue links” is not new. Although Google has already reduced their prominence, I believe it is too early to completely phase them out.
Conducting a self-serving query such as “what is TechCrunch” provided a generally accurate summary, though it felt unnecessarily verbose, resembling a student’s attempt to meet a minimum word count. Additionally, the traffic statistics appeared to originate from a Yale career website, which seemed out of place. When I searched for “how do I get a story in TechCrunch,” the results referenced an outdated article about submitting guest columns—a practice that TechCrunch no longer follows.
The point isn’t merely to catalog more instances where AI Overviews fall short, but to note that many of their errors are likely to be mundane rather than spectacular or entertaining. To Google’s credit, the Overviews do link to their source material, but discerning which answer is derived from which source still demands significant user engagement and numerous clicks.
Google has also pointed out that the inaccuracies highlighted on social media typically occur in areas with insufficient accurate information available online. This is a valid observation but reinforces the notion that for AI to function optimally, much like traditional search engines, it requires a robust open web replete with precise information.
Regrettably, AI might pose an existential threat to that same open web. If users predominantly consume AI-generated summaries—accurate or not—the motivation to produce detailed and accurate how-to articles or to break significant investigative stories diminishes.
Google asserts that AI Overviews lead people to visit a broader range of websites for answers to more intricate questions, and that the links in these overviews generate more clicks than if the site merely appeared as a standard search listing. This is an appealing claim, but whether it holds true is critical. If it doesn’t, no amount of technical refinement can compensate for the potential disappearance of large portions of the web.