‘You Can’t Lick a Badger Twice’: Google Failures Highlight a Fundamental AI Flaw

by Bella Baker


Here’s a nice little distraction from your workday: Head to Google, type in any made-up phrase, add the word “meaning,” and search. Behold! Google’s AI Overviews will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived.

This is genuinely fun, and you can find lots of examples on social media. In the world of AI Overviews, “a loose dog won’t surf” is “a playful way of saying that something is not likely to happen or that something is not going to work out.” The invented phrase “wired is as wired does” is an idiom that means “someone’s behavior or characteristics are a direct result of their inherent nature or ‘wiring,’ much like a computer’s function is determined by its physical connections.”

It all sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority. It’s also wrong, at least in the sense that the overview creates the impression that these are common phrases and not a bunch of random words thrown together. And while it’s silly that AI Overviews thinks “never throw a poodle at a pig” is a proverb with a biblical derivation, it’s also a tidy encapsulation of where generative AI still falls short.

As a disclaimer at the bottom of every AI Overview notes, Google uses “experimental” generative AI to power its results. Generative AI is a powerful tool with all kinds of legitimate practical applications. But two of its defining characteristics come into play when it explains these invented phrases. The first is that it’s ultimately a probability machine: while a large-language-model-based system may seem to have thoughts or even feelings, at a base level it’s simply placing one most-likely word after another, laying the track as the train chugs forward. That makes it very good at coming up with an explanation of what these phrases would mean if they meant anything, which, again, they don’t.
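To make that mechanism concrete, here is a minimal sketch of the next-token loop, using the small open-source GPT-2 model via Hugging Face’s transformers library. This is purely an illustration, not Google’s system: the prompt, the model choice, and the 20-token cutoff are all assumptions made for the example. Greedy decoding picks the single most likely token at each step, which is why the output sounds fluent whether or not the prompt means anything.

```python
# A toy illustration of "one most-likely word after another" (not Google's system).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Feed the model a made-up phrase framed as if it were a real saying.
prompt = 'The saying "you can\'t lick a badger twice" means'
ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(20):                          # extend the text 20 tokens
    with torch.no_grad():
        logits = model(ids).logits           # a score for every vocabulary token
    next_id = logits[0, -1].argmax()         # greedily take the most likely one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Run this and the model will happily continue the sentence with a plausible-sounding definition, because nothing in the loop checks whether the phrase exists; the model only ever asks which token is most probable next.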

“The prediction of the next word is based on its vast training data,” says Ziang Xiao, a computer scientist at Johns Hopkins University. “However, in many cases, the next coherent word does not lead us to the right answer.”

The other factor is that AI aims to please; research has shown that chatbots often tell people what they want to hear. In this case that means taking you at your word that “you can’t lick a badger twice” is an accepted turn of phrase. In other contexts, it might mean reflecting your own biases back to you, as a team of researchers led by Xiao demonstrated in a study last year.

“It’s extremely difficult for this system to account for every individual query or a user’s leading questions,” says Xiao. “This is especially challenging for uncommon knowledge, languages in which significantly less content is available, and minority perspectives. Since search AI is such a complex system, the error cascades.”


