Google AI Overviews strikes again after the fatal Air India disaster

If you buy an independently reviewed product or service through a link on our site, BGR may receive an affiliate commission.

Google said at I/O 2025 that AI Overviews are quite popular with users, but I have always found them to be the worst kind of AI product. Google forces AI Overviews into as many Google Search queries as it can, not because consumers actually want AI answers in their search results.

The separate AI Mode is the right way to do generative AI in Google Search. It's a separate tab, a user's intentional choice to enhance their search experience with Gemini's conversational abilities.


The reason I don't like AI Overviews being pushed so aggressively on consumers is the well-known problem with their accuracy. We learned the hard way that AI Overviews can hallucinate badly. The "glue on pizza" incident will not be forgotten anytime soon. While Google has improved AI Overviews, they still make mistakes in search results.

The latest involves the fatal Air India disaster from earlier this week. Some people who rushed to Google Search to find out what happened were met with an AI Overview claiming that an Airbus operated by Air India had crashed on Thursday, shortly after takeoff.

Some AI Overviews even named the type of plane: an Airbus A330-243. In reality, it was a Boeing 787.

I have said repeatedly that Google should give up on AI Overviews. The glue-on-pizza hallucinations were one thing. They were funny, and most people probably realized the AI had made a mistake. This week's hallucination is different. It spreads false information about a tragic event, and that can have serious consequences.

The last thing we want from GenAI products is for them to mislead people with fake news. AI Overviews do exactly that when they hallucinate. It doesn't matter that these incidents are rare. One mistake like the Air India one is enough.

It's not just about Google's reputation, either. Airbus could be directly affected. Imagine investors or travelers making decisions based on that search result. Sure, they could seek out real news sources, but not everyone will bother to fact-check the snippet at the top of the page.

Google's disclaimer that "AI answers may include mistakes" is not enough. Not everyone notices, or even reads, that fine print.

At least Google fixed this hallucination and gave Ars Technica this statement:

As with all Search features, we rigorously make improvements and use examples like this to update our systems. This response is no longer showing. We maintain a high quality bar with all Search features, and AI Overviews' accuracy is on par with other features like Featured Snippets.

I will also point out that not every AI Overview necessarily listed an Airbus as the crashed plane. Results can vary depending on what you ask and how you phrase it. Some users got the correct answer when they first tested it. We don't know how often the erroneous Airbus detail actually appeared.

AI Overviews could make similar mistakes on other tragic news events. We have no way of knowing how often they hallucinate, no matter what Google says about their accuracy.

If you've been following GenAI's evolution over the past few years, you probably understand why these hallucinations happen. The AI doesn't think like a human. It can combine data from news reports that mention both Airbus and Boeing, then mix up the facts.

And it's not just AI Overviews. We've seen other GenAI tools hallucinate, too. Studies have even shown that the most advanced ChatGPT models hallucinate more than their predecessors. That's why I always push back on ChatGPT when it doesn't give me sources for its claims.

But here's the big difference: you can't opt out of AI Overviews. Google pushed this AI search experience on everyone without first ensuring it doesn't hallucinate. AI Mode, by contrast, is a much better use of AI in Search. It can genuinely improve your experience.

I'll also add that instead of talking about AI Overviews and their hallucinations, I could be praising a different Google initiative. DeepMind is using AI to improve hurricane forecasts, which could be extremely useful. But here we are, focused on AI Overviews and their mistakes, because misleading users with AI is a serious problem. Hallucination remains a safety problem that no one has solved yet.


View the original version of this article on BGR.com.
