Much ink has been spilled of late about the tremendous promise of OpenAI’s ChatGPT program for generating natural-language utterances in response to human prompts.
The program strikes many people as so fresh and intriguing that they assume ChatGPT must be one of a kind.
Scholars of AI beg to differ.
“In terms of underlying techniques, ChatGPT is not particularly innovative,” said Yann LeCun, Meta’s chief AI scientist, in a small gathering of press and executives on Zoom last week.
“It’s nothing revolutionary, although that’s the way it’s perceived in the public,” said LeCun. “It’s just that, you know, it’s well put together, it’s nicely done.”
Such data-driven AI systems have been built in the past by many companies and research labs, said LeCun. The notion that OpenAI is alone in this kind of work is inaccurate, he said.
“OpenAI is not particularly an advance compared to the other labs, at all,” said LeCun.
“It’s not only just Google and Meta, but there are half a dozen startups that basically have very similar technology to it,” added LeCun. “I don’t want to say it’s not rocket science, but it’s really shared, there’s no secret behind it, if you will.”
LeCun noted that ChatGPT and GPT-3, the OpenAI program on which it builds, are composed of multiple pieces of technology developed over many years by many parties.
“You have to realize, ChatGPT uses Transformer architectures that are pre-trained in this self-supervised manner,” observed LeCun. “Self-supervised learning is something I’ve been advocating for a long time, even before OpenAI existed,” he said.
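The key point of self-supervised pretraining, as LeCun describes it, is that the training labels come from the raw text itself rather than from human annotators. A minimal toy sketch (not OpenAI's code; the function name and corpus are illustrative) of how GPT-style next-token training pairs are derived from unlabeled text:

```python
# Toy illustration of the self-supervised objective behind GPT-style
# pretraining: each training example's "label" is simply the next token
# in the raw text, so no human annotation is needed.

def make_next_token_pairs(tokens, context_size=3):
    """Slide a window over the token stream; each window's label is
    the token that immediately follows it."""
    pairs = []
    for i in range(len(tokens) - context_size):
        context = tuple(tokens[i:i + context_size])
        target = tokens[i + context_size]
        pairs.append((context, target))
    return pairs

corpus = "the cat sat on the mat".split()
pairs = make_next_token_pairs(corpus)
# first example: the context ("the", "cat", "sat") is labeled with "on"
for context, target in pairs:
    print(context, "->", target)
```

A real system applies this same objective at the scale of billions of tokens, with a Transformer predicting the target from the context; the principle, though, is exactly this self-labeling of the data.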
“Transformers is a Google invention,” noted LeCun, referring to the language neural net unveiled by Google in 2017, which has become the basis for a vast array of language programs, including GPT-3.
The work on such language programs goes back decades, said LeCun.
“Large language models, the first neural net language model — at the time, it was large, by today’s standards, it’s tiny — was by Yoshua Bengio, about 20 years ago,” said LeCun, referring to the head of Canada’s MILA institute for AI. Bengio’s work on the concept of attention was later picked up by Google for the Transformer and became a pivotal element in all language models.
OpenAI’s program has, moreover, made extensive use of a technique called reinforcement learning from human feedback (RLHF), in which human agents rank the machine’s outputs in order to improve it, somewhat as Google’s PageRank ranks web pages. That approach was pioneered not at OpenAI but at Google’s DeepMind unit, he said.
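The first step of the human-feedback technique is turning those human rankings into a scalar reward. A hedged, pure-Python sketch (not DeepMind's or OpenAI's implementation; the function and data are hypothetical) of fitting reward scores to pairwise human preferences with a Bradley-Terry-style logistic update:

```python
import math

# Sketch of reward fitting from human feedback: each comparison records
# which of two model outputs a human preferred, and we nudge scalar
# scores so preferred outputs earn higher reward.

def fit_rewards(comparisons, num_responses, lr=0.5, epochs=200):
    """comparisons: list of (winner, loser) response indices."""
    scores = [0.0] * num_responses
    for _ in range(epochs):
        for win, lose in comparisons:
            # probability currently assigned to the human's choice
            p = 1.0 / (1.0 + math.exp(scores[lose] - scores[win]))
            # gradient step on the logistic (Bradley-Terry) loss
            scores[win] += lr * (1.0 - p)
            scores[lose] -= lr * (1.0 - p)
    return scores

# humans preferred response 0 over 1, 0 over 2, and 1 over 2
comparisons = [(0, 1), (0, 2), (1, 2)]
scores = fit_rewards(comparisons, num_responses=3)
# the consistently preferred response ends up with the highest reward
```

In full-scale RLHF the scores come from a learned neural reward model rather than a lookup table, and that reward then steers further training of the language model; this sketch shows only the preference-to-reward step.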
“So there’s a whole history of this, and this didn’t come out of a vacuum,” said LeCun, referring to ChatGPT.
ChatGPT is less a case of scientific breakthrough than an instance of solid engineering, said LeCun. He compared the program to IBM’s Watson computer, which competed on the game show Jeopardy! in 2011, and to the self-driving vehicle from entrepreneur Sebastian Thrun’s team that won DARPA’s 2005 Grand Challenge. Thrun’s award-winning tech “wasn’t particularly innovative in terms of the underlying science,” said LeCun, “it was just very well engineered.”
“That’s kind of what OpenAI has done,” he said. “I’m not going to criticize them for that.”
LeCun was an invited speaker for an hour-and-a-half talk hosted by the Collective[i] Forecast, an online, interactive discussion series organized by Collective[i], which bills itself as “an AI platform designed to optimize B2B sales.”
LeCun made his remarks about OpenAI in response to a question during the colloquium posed by New York Times journalist Cade Metz. Metz asked if Meta’s AI team, FAIR, which LeCun built, will ever be identified in the public mind with breakthroughs the way that OpenAI is.
“Are we going to see this from Meta? Yeah, we’re going to see this,” replied LeCun. “And not just text generation, but also creation aids,” he said, including “generative art,” which, he said, “I think is going to be a big thing.”
Meta will be able to help small businesses promote themselves by automatically producing media that promote a brand, he offered.
“There’s something like 12 million shops that advertise on Facebook, and most of them are mom and pop shops, and they just don’t have the resources to design a new, nicely designed name,” observed LeCun. “So for them, generative art could help a lot.”
At another point in the talk, LeCun observed, “You might ask the question, Why aren’t there similar systems from, say, Google and Meta,” referring again to ChatGPT.
“And, the answer is, Google and Meta both have a lot to lose by putting out systems that make stuff up,” said LeCun with a laugh.
LeCun is a winner of the 2018 Turing Award, often described as the Nobel Prize of computing, along with MILA’s Bengio and Geoffrey Hinton, a University of Toronto professor and Google fellow. The three helped pioneer today’s deep learning era of AI.
OpenAI is funded by Microsoft, which has exclusive access to the code being produced by the startup, and which is gradually incorporating the programs into its various software offerings, including its Azure cloud service.