Alphabet, maker of Google and other software that mostly doesn’t work, has announced the introduction of “AI Overviews” into its search engine results. Alarmingly, it will begin rolling out this feature in the U.S. this week.

Initially, it will be a boon to many people. In effect, all it is doing is reverting to the original 1990s search paradigm wherein you Asked Jeeves a question, and the rudimentary search engine attempted to provide an answer. Since then, Google has been providing extracts from its most popular search results to answer a query. Ask Google, for instance, if a movie is good, and it will either quote the pithiest piece of a popular review or sort of miss the mark by showing you the plot summary distributed by the movie’s makers. It isn’t really answering your question; it’s merely providing an extract of the most popular search results for that particular query.

It’s likely the initial iteration of AI Overviews will take this one step further, using its AI to parse information from multiple sites and provide a quick summary. It will use its learning-based system to (sort of) understand what you want and give you a better response. For instance, as cited by Wired, Google’s AI will allow users to:

“… ask Google to meal-plan for you, or to find a [P]ilates studio nearby that’s offering a class with an introductory discount. In the Googley-eyed future of search, an AI agent can round up a few studios nearby, summarize reviews of them, and plot out the time it would take someone to walk there.”

Sounds cool, right? Well, yeah, it is, provided the source material is correct and the AI doesn’t embellish it. I mean, we’re sure an AI system knows the difference between writing factual and fictional stories, right? (No, we aren’t sure of that at all.) But let’s put that aside for a moment. Suppose, down the road, you ask a complex question like, “How does quantum theory work and what is the role of Dark Matter in it?” Well, the Googly AI will surely summarize the prevailing information and provide you with an answer. Maybe it will even find those growing dissident voices who are beginning to insist that Dark Matter never really existed and that maybe the whole theory is wrong. What is more likely, however, is that it will present information to you as fact that is nothing more than generally accepted theory. Contrary to what YouTube will show you, those two things ARE NOT THE SAME.

Along those same lines, maybe you are investigating treatment options for a serious disease your elderly mom has, since you aren’t convinced her doctors are diligent or honest. Maybe a robust Google AI search tells you about new treatment options instead of providing you with links to the journals where those treatments are described. AI Overviews or its descendants may tell you research has shown such options are promising, even if a read of the actual journal will show the bloody study hasn’t even been peer-reviewed yet. Not all information available on the internet comes from reliable sources. Not all world-famous physicists are right about the origins of the universe. Not all medical researchers have a frigging clue about what they are doing or are capable of putting The Science ahead of their need for more funding. Knowing what information to accept or reject is HARD. Shortcuts invariably send you on paths you shouldn’t be on.

Who gets to make the determination that information you find on the internet is reliable? Now, you do. (Sort of.¹) In the future, the answer is not you.

In the near future, if you and I search for the same things, we will most likely get the same answers. That is because neither you nor I will be able to tell Google to mind its own damned business and stay out of our search results. Those of us who are asking it to provide source materials will be overwhelmed by those who just want someone smart to tell them the quick answer. I want that too; I just want that smart person to be me.

Here’s what 65 years of life has taught me: the quick answers are almost never right.

Dearest Google, when you roll this software out, please provide some of us with the option of turning AI Overviews the fuck off. I’m currently smarter than AI, and I’d like to keep it that way.

P.S. The easiest way to ensure AI exceeds human-level intelligence is to lower humans’ baseline intelligence. In other words, if I want to seem like a guru, I can get smarter or just make you dumber. Guess which one is easier?

  1. My caveat here is that to get reliable info from Google you need to have trained it to provide the right sources. I was an analyst and researcher for decades, so Google shows me websites not typically visited by most people. If three of us search for a piece of information, I promise you I will find stuff you did not find. Google’s and Bing’s search “enhancements” essentially move us from being secondary researchers to tertiary researchers or worse. We can no longer review what the primary researcher found and agree or disagree with their conclusions. In the future, we have to accept that the Almighty AI got it right. ↩︎