No other topic is currently keeping the AI world as busy as ChatGPT, the chatbot released in November 2022. ChatGPT was developed by the AI startup OpenAI and reached one million users within five days, a milestone that neither the music streaming service Spotify nor the social network Instagram hit that quickly. What is behind the chatbot's technology, and what opportunities and challenges does it present? Magility provides an overview.
A promising multi-talent?
ChatGPT is based on OpenAI's GPT-3.5 language model, extended with a dialog format that allows it to respond to a wide range of human queries and tasks. The chatbot can answer questions on arbitrary topics, write different types of texts such as essays or poems, and even output lines of code. Unlike previous chatbots, ChatGPT can also recognize the context of user queries and thus respond to follow-up questions, which could be a breakthrough for customer service automation in particular.
Using ChatGPT is simple: all you need is an OpenAI account, and you can start chatting. For example, you can ask the chatbot "What is the highest mountain in the world?". The bot will most likely answer "Mount Everest", and you can then follow up with "What was the name of the first climber of this mountain?". The chatbot recognizes that this question refers to the previous input and that "this mountain" means Mount Everest. If you are not satisfied with an answer, you can use the "Regenerate response" button to generate an alternative.
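For developers, this follow-up behavior can be reproduced programmatically by resending the previous turns with each new query. The sketch below shows how a client might keep such a conversation history; the role/content message format mirrors OpenAI's chat API, but the `add_turn` helper is an illustrative assumption and the actual network call is deliberately omitted, so no API key is needed:

```python
# Sketch: how a chat client keeps context so follow-up questions resolve
# correctly. The role/content message format mirrors OpenAI's chat API;
# the network call itself is intentionally left out.

def add_turn(history, role, content):
    """Append one message (user or assistant) to the running history."""
    history.append({"role": role, "content": content})
    return history

conversation = []
add_turn(conversation, "user", "What is the highest mountain in the world?")
add_turn(conversation, "assistant", "Mount Everest.")
# The follow-up is sent together with all previous turns, so the model
# can infer that "this mountain" refers to Mount Everest.
add_turn(conversation, "user", "What was the name of the first climber of this mountain?")

print(len(conversation))  # 3 messages would be sent as one request
```

Because the full history travels with every request, the model never needs server-side memory to resolve references like "this mountain".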
ChatGPT learns with the help of human feedback
The AI behind ChatGPT was trained on a huge amount of data, with a knowledge cutoff at the end of 2021. Based on this data, the bot generates suitable, natural-sounding answers to user queries. But the chatbot's answers are only as good as the data, or the questions themselves. If the training data has knowledge gaps or is biased, for example, these weaknesses can also show up in the bot's answers. For this reason, OpenAI relies, among other things, on so-called "reinforcement learning from human feedback" (RLHF) to train the bot. In this method, humans first write responses to various user inputs. Then the AI itself answers user inputs, generating several candidate answers per input. These answers are in turn rated by humans and ranked from best to worst. ChatGPT ultimately uses this feedback to optimize its responses. Users can also give feedback while chatting by pressing the "thumbs up" or "thumbs down" button, optionally providing their ideal answer in a comment window and flagging inappropriate answers.
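The ranking step can be made concrete with a small sketch: a best-to-worst ranking of candidate answers is commonly expanded into pairwise "preferred vs. rejected" examples, the form of training data a reward model typically consumes. The function name and sample data below are illustrative, not OpenAI's actual pipeline:

```python
# Sketch of one common RLHF data-preparation step: human raters rank
# several model answers per prompt, and each ranking is expanded into
# pairwise (preferred, rejected) examples for training a reward model.
from itertools import combinations

def ranking_to_pairs(answers_best_to_worst):
    """Turn a best-to-worst ranking into (preferred, rejected) pairs."""
    # combinations preserves input order, so in every pair the first
    # element was ranked higher than the second.
    return list(combinations(answers_best_to_worst, 2))

ranking = ["answer A", "answer B", "answer C"]  # rated best to worst
pairs = ranking_to_pairs(ranking)
print(pairs)  # 3 pairs: (A, B), (A, C), (B, C)
```

A ranking of n answers thus yields n·(n-1)/2 preference pairs, which is why ranking is a more data-efficient way to collect human feedback than rating answers one at a time.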
Wide range of applications of ChatGPT
With its ability to understand and generate natural language, and since ChatGPT is publicly accessible, numerous opportunities open up for companies and private individuals alike. Companies can use the chatbot in human resources, for example, to automate internal processes and have documents such as contracts or job descriptions drafted. In customer service, the chatbot could be trained to respond to customers' follow-up questions and give more flexible answers based on existing chat histories. Furthermore, ChatGPT holds great potential for content creation in areas such as marketing, journalism and public relations, where it can not only speed up content creation but also improve its quality. Finally, the coding community could also benefit from the bot in the future: ChatGPT can help programmers find errors in their code or suggest improvements.
Chatbot makes misinformation sound plausible
But the bot is not yet fully mature. The coding platform Stack Overflow has even banned answers generated with the bot. ChatGPT has no genuine expertise; it sometimes invents facts. Instead of pointing out missing data, the AI can generate a confident-sounding but ultimately incorrect answer based on insufficient knowledge, which can lead to misinformation. In response to a question, for example, the chatbot may invent a study and its results that never existed. The user would not notice, because the answer sounds plausible. Automatically generated texts must therefore be checked for accuracy by the user.
The ChatGPT developer OpenAI itself even warns about its chatbot:
"it's a mistake to be relying on it for anything important right now." (OpenAI CEO Sam Altman on Twitter)
Countermovement: GPTZero aims to expose machine-generated content
Schools and universities are also concerned that the chatbot could encourage poor writing skills and plagiarism if students increasingly rely on ChatGPT to complete assignments and homework. The latter is to be countered by AI-detection software programs that were developed immediately after the release of ChatGPT and are currently being tested. An American computer science student has developed GPTZero, an application that is said to quickly and effectively determine whether a text was written by a human or an AI. Machine-generated texts tend to have uniform, constant complexity and rarely contain typos, while human-written texts show more variation between sentences and more typos. The application examines texts for these patterns.
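One of the signals described above, the variation in sentence length (often called "burstiness"), can be illustrated in a few lines of code. This toy sketch is only loosely inspired by the GPTZero idea; the metric's details and the sample texts are invented for the demonstration:

```python
# Toy illustration of one pattern GPTZero-style detectors look for:
# human text tends to vary more in sentence length ("burstiness") than
# machine text. Real detectors combine many such signals.
import re
from statistics import pstdev

def burstiness(text):
    """Population standard deviation of sentence lengths, in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pstdev(lengths) if len(lengths) > 1 else 0.0

uniform = "The cat sat here. The dog sat there. The bird sat up."
varied = ("Wait. The storm rolled in far faster than any forecast "
          "had dared to predict that morning.")
print(burstiness(uniform) < burstiness(varied))  # True
```

A single statistic like this is easy to fool, which is one reason detection tools are still being tested rather than relied on as proof.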
The Office of Technology Assessment at the German Bundestag has long called for the labeling of machine-generated texts. OpenAI, meanwhile, has announced that it will watermark texts generated by ChatGPT to prevent plagiarism.
Microsoft integrates bot into search engine
While the online world hotly debates the chatbot, Microsoft has linked an advanced version of ChatGPT to its Bing search engine. However, the chatbot had to be put on a leash again shortly after its release, after it became abusive towards users on several occasions. Microsoft also plans to integrate the AI into Office apps such as Word and PowerPoint. The software giant had already invested $1 billion in ChatGPT developer OpenAI in 2019. Now Microsoft has announced a further $10 billion investment in the company and is challenging Google for the top position in search.
Tech groups go on the offensive
But the rapid rise of ChatGPT has not left the tech companies Google and Meta untouched; both are working flat out on competing AI. Google had so far been hesitant to publicly demonstrate its own LaMDA language model. A few weeks ago, the company presented Bard, a chatbot based on LaMDA, and inadvertently revealed the bot's first weaknesses: in a Google promotional video, the bot answered a question about discoveries made by the James Webb Space Telescope with an incorrect statement, as NASA confirmed afterwards. Meta has also announced its intention to launch its own language model, LLaMA. The blunders by Google and Microsoft show that chatbot development still has a long way to go before reliable and harmless interaction is possible.
ChatGPT has already changed the online world for good
Although ChatGPT still has major weaknesses, the immense potential of the chatbot is undisputed. There are already many possible uses for the bot, and the release of ChatGPT has once again enormously accelerated the development of AI-based language models.
And development remains rapid: the release of GPT-4 has already been announced for next week. According to Andreas Braun, CTO of Microsoft Germany, the new version of the chatbot will then also be able to generate videos from text.
Can ChatGPT stay at the top? Or will the AI soon be surpassed by competing language models? And how would the future planned regulation of AI systems affect the use of chatbots?
We at magility are excited about further developments in the field of artificial intelligence, are diligently testing ChatGPT and other AI-based technologies, and are happy to keep our customers and blog readers up to date.
What are your thoughts on the use of ChatGPT? Feel free to contact our experts at magility for an exchange on the topic of chatbots and AI-based applications or follow us on LinkedIn and stay up-to-date.