Is Google in danger? That is the question recently raised within the management of the Mountain View company. The success of the ChatGPT AI has given the company cold sweats, as the answers from the OpenAI machine seem more accurate, efficient and contextualized than the results from the Google search engine.
The balance between brave and responsible
ChatGPT is an artificial intelligence capable of generating text in a style very close to human writing. You can ask it for jokes, have it invent stories or write articles, and even submit queries as you would on Google. The advantage of ChatGPT: the answer is synthesized, understandable by anyone and available immediately, with no need to click on a link.
The success of this AI, which has flooded social networks in recent weeks, is being closely watched by Google, itself the owner of a "conversational robot" called LaMDA, which is currently not available to the public. According to Google CEO Sundar Pichai and head of AI Jeff Dean, the company has not yet entered the conversational AI race because of the significant problems these machines can run into.
“This is an area where we have to be bold and responsible. So we have to find a balance”, explains Sundar Pichai. Jeff Dean points out for his part: “For search, questions of truth are really important; and for other applications, issues of bias, toxicity and safety are also paramount.” In short, Google would have too much to lose by launching a product capable of answering all the questions in the world but whose reliability has not been proven.
A false impression of perfection
While ChatGPT impresses with its ability to contextualize, explain and describe certain situations, the AI is sometimes wrong. As the CEO of OpenAI explained on Twitter, “ChatGPT is incredibly limited, but good enough […] to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. […] We have lots of work to do on robustness and truthfulness”.
For the head of AI at Google, it is not a question of exposing the company to “a reputational risk” by making LaMDA publicly available. Jeff Dean explains that it is normal for Google to “act more cautiously than small start-ups”. LaMDA was already at the center of discussion recently, when a former Google engineer claimed that the machine had a “consciousness”. Until imaginations calm down and conversational AI improves, Google therefore intends to remain discreet.