You know how Google’s new AI Overviews feature is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to make sure the cheese won’t slide off (psst… please don’t do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of the AI large language models (LLMs) that drive AI Overviews, and this feature “is still an unsolved problem.”

  • helenslunch@feddit.nl · 4 months ago

    You could only train AI with good sources

    I mean yes, but also no. If you only train it with “good sources,” then you miss out on a whole bunch of other valuable information.

    Just like scholar.google.com: it only has “good sources,” but it’s generally not going to have the information that 90% of your search queries are about.