ChatGPT is the latest and most impressive artificially intelligent chatbot yet. It was released two weeks ago, and in just five days it reached a million users. It is so heavily used that its servers have repeatedly hit capacity.
OpenAI, the company that developed it, is already being discussed as a potential Google killer. Why search for something on a search engine when ChatGPT can write an entire paragraph explaining the answer? (There’s even a Chrome extension that lets you do both, side-by-side.)
But what if we never know the secret sauce behind ChatGPT’s capabilities?
The chatbot takes advantage of a number of technical advances published in the open scientific literature over the past two decades. But all of its own innovations are secret. OpenAI may well be trying to build a technical and commercial moat to keep others out.
What it can (and can’t) do
ChatGPT is very capable. Want a haiku on chatbots? Sure.
Chatbots, oh so wise
Endless processing
Empty responses.
How about a joke about chatbots? No problem.
Why did the chatbot cross the road?
To go to the other side – of your screen!
ChatGPT can do many other tricks. It can write computer code to a user's specifications, draft business letters or rental agreements, write homework assignments, and even pass university exams.
Equally important is what ChatGPT cannot do. For example, it struggles to distinguish truth from falsehood. It is also often a convincing liar.
What is the fastest marine mammal?
The fastest marine mammal is the sailfish.
The sailfish is not a mammal.
You are right, the sailfish is not a mammal. I apologize for the error. The fastest marine mammal is the peregrine falcon.
ChatGPT is a bit like auto-complete on your phone. Your phone is trained on a dictionary of words, so it completes words. ChatGPT is trained on pretty much all of the web, so it can complete entire sentences, even entire paragraphs.
However, it doesn't understand what it is saying, only which words are most likely to come next.
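To make that analogy concrete, here is a toy sketch in Python (emphatically not OpenAI's unpublished code; the corpus and function names are invented for illustration) of the same principle: a model that "completes" text by picking whichever word most often followed the previous one in the data it has seen.

```python
from collections import Counter, defaultdict

# A toy next-word predictor, analogous to phone auto-complete.
# ChatGPT works on the same principle at vastly larger scale:
# predict a likely continuation, with no built-in model of truth.

corpus = (
    "the fastest marine mammal is the orca "
    "the fastest land animal is the cheetah "
    "the fastest bird is the peregrine falcon"
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word: str) -> str:
    """Return the word seen most often after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else ""

print(complete("the"))      # -> "fastest"
print(complete("fastest"))  # -> whichever continuation was most common
```

Swap the toy corpus for hundreds of billions of words, and the word-counting for a very large neural network, and you have the essence of a large language model: fluent continuation, with no notion of whether the continuation is true.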
Open in name only
In the past, advances in AI have been accompanied by peer-reviewed publications.
In 2018, for example, when the Google Brain team developed the BERT neural network on which most natural language processing systems are now based (and which we believe underpins ChatGPT too), the methods were published in peer-reviewed scientific papers and the code was open-sourced.
And in 2021, DeepMind's AlphaFold 2 protein-folding software was Science's Breakthrough of the Year. The software and its results were open-sourced so that scientists around the world could use them to advance biology and medicine.
Following the release of ChatGPT, we only have a short blog post describing how it works. There has been no hint of an accompanying scientific publication, or that the code will be open-source.
To understand why ChatGPT might be kept secret, you need to understand a bit about the company behind it.
OpenAI is perhaps one of the strangest companies to emerge from Silicon Valley. It was set up as a non-profit in 2015 to promote and develop “friendly” AI in a way that “benefits humanity as a whole”. Elon Musk, Peter Thiel and other leading tech figures pledged US$1 billion towards its goals.
Their thinking was that for-profit companies couldn't be trusted to develop ever-more-capable AI that stayed aligned with the prosperity of humanity. AI therefore needed to be developed by a non-profit and, as the name suggests, in an open way.
In 2019, OpenAI became a capped-profit company (with investors' returns limited to a maximum of 100 times their investment) and took a US$1 billion investment from Microsoft so it could scale and compete with the tech giants.
It seems money got in the way of OpenAI's initial plans.
Leveraging its users
On top of that, OpenAI appears to be using feedback from users to filter out the false answers ChatGPT hallucinates.
According to its blog, OpenAI initially used reinforcement learning in ChatGPT to down-rank false and/or problematic answers, using an expensive hand-built training set.
But ChatGPT now seems to be being fine-tuned by its million-plus users. I imagine this kind of human feedback would be prohibitively expensive to acquire any other way.
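OpenAI hasn't published details, but the broad recipe, reinforcement learning from human feedback, is described in the open literature. A minimal sketch of its core ingredient follows; the function and variable names are hypothetical, and this is a standard textbook formulation, not necessarily what OpenAI actually runs. The idea is a reward model trained so that answers humans preferred score higher than answers they rejected.

```python
import math

# A minimal sketch of the pairwise preference loss commonly used to
# train reward models in reinforcement learning from human feedback.
# All names here are illustrative; OpenAI's training code is unpublished.

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Loss is low when the reward model scores the human-preferred
    answer above the rejected one: -log(sigmoid(chosen - rejected))."""
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A user's thumbs-up/thumbs-down on two candidate answers becomes one
# training pair; the reward model is nudged to widen the score gap.
print(preference_loss(2.0, -1.0))  # ~0.05: model already agrees with the human
print(preference_loss(-1.0, 2.0))  # ~3.05: model strongly disagrees
```

Every thumbs-up or thumbs-down from ChatGPT's million-plus users can, in principle, become one of these training pairs, which is exactly why that stream of free feedback is so valuable.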
We now face the prospect of a significant advance in AI built with methods that are not described in the scientific literature, and with datasets held privately by a company that appears to be open in name only.
What next?
Over the past decade, rapid advances in AI have largely been driven by the openness of academia and business. All the major AI tools we have are open-source.
But in the race to develop more capable AI, that may be ending. If openness in AI declines, we could see progress in the field slow accordingly. We may also see new monopolies develop.
And if history is anything to go by, we know that a lack of transparency is a trigger for bad behavior in tech spaces. So, while we praise (or criticize) ChatGPT, we should not overlook the circumstances in which it came to us.
Unless we’re careful, what appears to be the golden age of AI may actually be coming to an end.
Toby Walsh, Professor of AI at UNSW, Research Group Leader, UNSW Sydney
This article is republished from The Conversation under a Creative Commons license. Read the original article.