Vint Cerf, one of the Internet’s founding fathers, has some harsh words for the suddenly hot technology behind the ChatGPT AI chatbot: “Snake oil.”
Google’s Internet evangelist isn’t entirely dismissive of the AI technology behind ChatGPT and Google’s own competitor Bard, known as large language models. But speaking Monday at TechSurge’s Celesta Capital Summit, he warned of the ethical issues raised by a technology that can generate plausible-sounding but false information even when trained on factual material.
If an executive tried to get him to apply ChatGPT to some business problem, his response would be to call it snake oil, referring to the bogus medicines quacks sold in the 1800s, he said. Another of his ChatGPT metaphors involves a kitchen appliance.
“It’s like a salad shooter — you know how the lettuce goes all over the place,” Cerf said. “The facts are all over the place, and it mixes them up because it doesn’t know any better.”
Cerf shared the 2004 Turing Award, the highest honor in computing, for helping develop TCP/IP, the Internet’s foundational protocol suite, which transfers data from one computer to another by breaking it up into small, individually addressed packets that can take different routes from source to destination. He isn’t an AI researcher, but he is a computer engineer who would like to see his colleagues address AI’s shortcomings.
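The packet-switching idea behind TCP/IP can be illustrated with a toy sketch: break a message into small, individually addressed and numbered packets, let them arrive in any order, and reassemble them at the destination. (This is a simplified illustration, not real TCP/IP; the names `packetize` and `reassemble` are invented for the example.)

```python
import random

# Toy illustration of packet switching: split a message into small,
# individually addressed packets, deliver them out of order, and
# reassemble at the destination. (Not real TCP/IP.)

def packetize(message: str, size: int = 8) -> list[dict]:
    """Break a message into numbered packets with source/destination headers."""
    return [
        {"src": "A", "dst": "B", "seq": i, "data": message[i:i + size]}
        for i in range(0, len(message), size)
    ]

def reassemble(packets: list[dict]) -> str:
    """Reorder packets by sequence number and rebuild the original message."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize("Packets can take different routes to the destination.")
random.shuffle(packets)  # simulate packets arriving out of order
print(reassemble(packets))
```

Because each packet carries its own address and sequence number, the network is free to route them independently; the receiver puts them back in order.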
OpenAI’s ChatGPT and competitors like Google’s Bard have the potential to significantly transform our online lives by answering questions, drafting emails, summarizing presentations, and performing many other tasks. Microsoft has begun embedding OpenAI’s language technology into its Bing search engine in a significant challenge to Google, but is using its own web index to try to “ground” OpenAI’s flights of fancy with authoritative, reliable documents.
Cerf said he was surprised to learn that ChatGPT could fabricate false information even from a factual basis. “I asked it, ‘Write me a biography of Vint Cerf.’ It got a bunch of things wrong,” Cerf said. That prompted him to look into the technology’s inner workings: it constructs its responses from statistical patterns discerned in vast amounts of training data.
“It knows how to string together a sentence that is grammatically likely to be correct,” but has no true knowledge of what it’s saying, Cerf said. “We are very far from the self-awareness we want.”
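Cerf’s point, that the model strings together statistically likely words without understanding them, can be shown with a toy bigram model. This is only a sketch over a tiny hand-built corpus; real large language models use neural networks trained on vastly more data, but the principle of sampling a plausible next word is the same.

```python
import random
from collections import defaultdict

# Toy bigram model: record which word tends to follow which, then
# generate "plausible" text with no understanding of what it says.

corpus = ("vint cerf helped develop the internet . "
          "the internet moves data in packets . "
          "the model strings words together .").split()

# Map each word to the list of words observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Sample a chain of statistically likely next words."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output reads as locally plausible English, yet the program has no notion of whether what it produces is true — the gap Cerf is describing.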
OpenAI, which earlier in February launched a $20-a-month plan for ChatGPT, has been upfront about the technology’s shortcomings and aims to improve it through “continuous iteration.”
“ChatGPT sometimes writes believable-sounding but incorrect or nonsensical responses. Fixing this problem is challenging,” the AI research lab said when it released ChatGPT in November.
Cerf also hopes for progress. “Engineers like me should be responsible for trying to find a way to tame some of these technologies so they’re less likely to cause problems,” he said.
Cerf’s comments contrasted with those of another Turing Award winner at the conference, chip design pioneer and former Stanford president John Hennessy, who offered a more optimistic assessment of AI.
Editor’s note: CNET uses an AI engine to create some personal finance explanations that are edited and fact-checked by our editors. For more, see this post.