Humans are mesmerized every time they see animals or robots showing human-like abilities. That's why parrots have been popular pets for centuries, and videos of gorillas observing caterpillars or of dancing Boston Dynamics robots still garner interest years after their internet debut. OpenAI's ChatGPT, an AI-powered chatbot, was the most recent example of this phenomenon, taking the internet by storm in December 2022. People rushed to find out what it was all about, and 1 million people had used it within the first week of its launch.
We previously talked about AI-powered coding assistants like GitHub Copilot and OpenAI Codex on our blog. However, neither of those two predecessors generated the buzz ChatGPT did. This might have something to do with how OpenAI trained its latest chatbot. While developing ChatGPT, AI trainers ranked the answers given by the language model, providing the AI with a benchmark to train on and fine-tune its answers. As a result, ChatGPT's responses turned out to be more human-like. The chatbot can get into lengthy arguments with users, deflect questions that could get it in trouble, and backpedal when it is wrong. Reactions like these make the experience more believable and all the more awe-inspiring for people who give it a try.
However, although ChatGPT seems to have nailed the style, its content still needs significant improvement. The good news is that ChatGPT appears to be a marked improvement over its predecessors when it comes to bias and discrimination in the texts it generates. While Copilot and Codex were frequently criticized for insensitive remarks, ChatGPT does a much better job in this regard, which is a testament to OpenAI's meticulous moderation efforts to avoid scandal.
The real surprise here is ChatGPT's failure to reliably retrieve factual information, something one would think would be one of the strong suits of such a tool. But it turns out context matters more than we give it credit for, and ChatGPT is not as good at reasoning as it is at pattern recognition. Failure to retrieve information spelled doom for Galactica, too. Meta's attempt at an AI-powered chatbot, Galactica, was launched with the express purpose of finding facts and information, but it was pulled from service after just two days because it fed its users wrong information. Similarly, ChatGPT tends to make up facts, and it can be challenging to determine how much of what it says is pure fabrication. The fact that it lets you down when you want precise, factual information is what will keep ChatGPT from replacing Google as the go-to search engine unless OpenAI does something about it.
For now, ChatGPT looks like one of the social media trends we go through every few months. However, the AI power that underpins this tool can have dramatic effects on different aspects of our lives once it gets more polished. Here are three possible implications of ChatGPT that are worth keeping an eye on:
One of the first areas of application for ChatGPT will be content writing. AI-powered apps have been helping creative people produce content for some time. SEO-oriented texts lend themselves particularly well to this type of arrangement: ranking high in a Google search takes priority over content quality, and AI can do a great job of generating texts that tick all the boxes for SEO success. If content quality is not a concern, ChatGPT will churn out as much content as you like at an incredible rate and saturate certain communication channels.
Cassie Kozyrkov, Chief Decision Scientist at Google, demonstrated ChatGPT's writing skills in a blog post. While reading her piece on OpenAI's latest tool, you discover halfway through that everything you read up to that point was written by ChatGPT and is, therefore, not factually reliable. The style was a bit cheesy compared to Cassie's usual writing, but the extent to which ChatGPT can mimic a human writer is quite scary.
Essays have been part and parcel of scholarly work for centuries. Scientists wrote essays to expound their theories or refute existing ones. Scientific essays not only helped communicate knowledge but also built the scientific community as we know it. It seems this is about to change as ChatGPT enters the picture.
Texts generated by ChatGPT will not automatically replace scientific essays, as they include glaring factual errors that are easy to detect. But with a little improvement, this tool can soon be expected to write essays for college students, rendering these assignments useless as a means of assessing performance. Then it will be just a matter of time before ChatGPT starts writing master's and Ph.D. theses and articles. Considering the central role of academic writing in college education and scholarly work, it looks like how we define academic competence will need a thorough revision soon.
The average person produces multiple orders of magnitude more data than their parents did a few decades ago. We use a plethora of electronic devices every day that generate data in different formats, and we employ software specifically designed to create, aggregate, and organize data. Most of this data is unstructured: audio files, video footage, photos, etc.
However, unstructured data does not readily lend itself to use unless you are a data scientist. Converting it to structured data and presenting it in tabular form would make it easier for people without a data science degree to search and work with it. It turns out ChatGPT can help with that: Maximilian Evans from Climate AI has demonstrated how ChatGPT can convert field notes from a farm into a CSV table when given a simple prompt. In another example, it can help analyze a given text, extract relevant information, and present it as JSON.
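The last step of that workflow is easy to script yourself. As a minimal sketch, assuming (hypothetically) that ChatGPT returned field notes as the JSON array below, the model's output can be validated and turned into a spreadsheet-friendly CSV with nothing but Python's standard library:

```python
import csv
import io
import json

# Hypothetical structured output that a prompt like "extract each
# observation from these field notes as a JSON array" might produce.
chatgpt_style_json = """
[
  {"date": "2022-12-01", "field": "North", "crop": "wheat", "note": "light frost"},
  {"date": "2022-12-02", "field": "South", "crop": "corn", "note": "irrigation run"}
]
"""

# Parsing with json.loads doubles as a sanity check: if the model
# produced malformed JSON, this raises an error instead of silently
# passing fabricated structure downstream.
records = json.loads(chatgpt_style_json)

# Write the records to CSV so non-programmers can open them in any
# spreadsheet application.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["date", "field", "crop", "note"])
writer.writeheader()
writer.writerows(records)
print(buffer.getvalue())
```

The column names here are invented for illustration; in practice, you would ask the model for a specific schema in the prompt and reject any response that fails to parse.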
ChatGPT is an impressive tool for demonstrating the current capabilities of AI and giving us a hint of its future potential. However, it is not the radical AI breakthrough that people have been looking forward to for some time, and it is not dependable and polished enough to produce end products yet. But it is still a step up from tools like Copilot and Codex in terms of taking some of the burden off people and increasing efficiency. The hope is that ethical concerns will remain front and center as tools like ChatGPT are used, and that they will be employed with an eye to moving humanity forward.