‘Trump shooting didn’t happen’: Meta’s AI assistant says; company blames hallucinations for incorrect response 

Meta is blaming AI hallucinations for inaccurate responses from its chatbot, which said that the recent attempted assassination of Donald Trump didn’t happen

Published - July 31, 2024 12:49 pm IST

Meta is blaming AI hallucinations for inaccurate responses from its chatbot. | Photo Credit: Reuters

Meta’s AI assistant incorrectly said that the recent assassination attempt on former U.S. President Donald Trump did not happen. The tech giant, for its part, is now blaming AI hallucinations for the inaccurate response, calling the incident “unfortunate”.

Meta also denied that bias in the models could have caused the inaccurate responses.

The company further said that “it’s a known issue that AI chatbots, including Meta AI, are not always reliable when it comes to breaking news or returning information in real time” and that it is working to address the problem.

“These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems and is an ongoing challenge for how AI handles real-time events going forward,” the company said in a blog post.


Earlier, Google had also denied claims that its search autocomplete feature was censoring results about the assassination attempt.

Donald Trump, the current Republican nominee, has been a vocal critic of tech companies. In a post on Truth Social, Trump said, “Here we go again, another attempt at RIGGING THE ELECTION!!!”, and asked his followers to “Go after Meta and Google”.

Hallucination in AI chatbots occurs when a model produces convincing but entirely made-up answers. It is not a new phenomenon; developers have long warned that AI models can assert untrue facts with confidence and respond to queries with fabricated information. This example highlights how difficult it can be to overcome what large language models are inherently designed to do, which is to generate plausible text based on the data available to them.
