Artificial intelligence (AI) is making its way into Internet search, months after the chatbot ChatGPT stunned the world with its ability to write articles and answer queries in a human-like way.
Three of the world’s largest search engines, Google, Bing, and Baidu, announced last week that they would add ChatGPT or comparable technology to their search products. This will let users interact with search results conversationally, instead of receiving a list of links after typing in a word or query. How will this change the way users interact with search engines? And are there risks in this kind of human–machine interaction?
Microsoft’s Bing uses the same technology as ChatGPT, which was developed by OpenAI, based in San Francisco, California. All three companies rely on large language models (LLMs), which generate convincing sentences by mimicking the statistical patterns of text they encounter in huge databases. Bard, Google’s AI-powered search tool, was unveiled on February 6 and is now being used by a small group of testers. Microsoft’s version is generally accessible, although there is a waiting list for unrestricted access. Baidu’s ERNIE Bot will become available in March.
Even before these announcements, a few small start-ups had built AI-powered search engines. “Search engines are evolving into this new state, where you can actually start talking to them and talk to them like you would a friend,” says Aravind Srinivas, a computer scientist in San Francisco and co-founder of Perplexity, an LLM-based search engine that offers answers in conversational English.
Compared with a traditional Internet search, these interactions feel much more intimate, which could influence how search results are perceived. People may be inherently more inclined to trust the answers of a chatbot that engages them in conversation than those of a detached search engine, says Aleksandra Urman, a computational social scientist at the University of Zurich in Switzerland.
According to a 2022 study by a team at the University of Florida in Gainesville, participants who interacted with chatbots used by companies such as Amazon and Best Buy were more likely to trust the company if they perceived the conversation to be human-like.
That could be beneficial, making search easier and faster. But given that AI chatbots are fallible, this heightened trust can be detrimental. In its own tech demonstration, Google’s Bard made an error on a question about the James Webb Space Telescope, confidently answering it incorrectly. And ChatGPT has a tendency to invent answers to questions to which it does not know the answer, a behaviour that researchers in the field call hallucination.
Unfortunately a simple google search would tell us that JWST actually did not “take the very first picture of a planet outside of our own solar system” and this is literally in the ad for Bard so I wouldn’t trust it yet https://t.co/OS8AMyLQRu
— Isabel Angelo (@IsabelNAngelo) February 7, 2023
Bard’s mistake “highlights the importance of a thorough testing process, which we’re starting this week with our trusted tester program”, a Google spokesperson said. Nevertheless, some argue that such errors, when discovered, will make users less confident in chat-based search rather than promoting greater trust. Early perceptions can have a significant impact, says Sridhar Ramaswamy, a computer scientist based in Mountain View, California, and chief executive of Neeva, an LLM-powered search engine launched in January. Bard’s blunder proved costly for Google: worried investors sold the stock, wiping $100 billion off the company’s market value.
The relative lack of transparency compounds the accuracy problem. Conventional search engines present users with a list of links that represent their sources, leaving people to decide which to trust. By contrast, it is rarely known what data an LLM was trained on: was it Britannica or a gossip website?
If the language model is flawed, deceptive, or spreads false information, “it’s completely unclear how [AI-powered search] will work,” Urman contends.
If search bots make enough mistakes, Urman says, they could undermine users’ perceptions of search engines as impartial arbiters of truth, rather than fostering trust.
Her as-yet-unpublished research suggests that current levels of trust are high. It examined how users respond to two features that Google already uses to enhance the search experience: “knowledge panels”, summaries that Google automatically generates in response to searches about, for example, a person or organization, and “featured snippets”, in which an excerpt from a page deemed especially relevant to the search appears above the link. About 80% of the people Urman surveyed deemed these features accurate, and more than 70% thought they were objective.
Chatbot-powered search blurs the line between machines and humans, says Giada Pistilli, principal ethicist at Hugging Face, a data-science platform in Paris that promotes the responsible use of AI. “We’ve had these new technologies thrown at us constantly without any controls or educational frameworks for how to use them,” she says of her concern about how quickly companies are adopting AI advances.