Is ChatGPT reliable?

It took little time for ChatGPT, an artificial intelligence (AI) capable of responding to us in natural language, to grab the spotlight and become the topic of the moment. The enthusiasm is comparable to that of the first Internet users in the 1990s, or to a first contact with programming at an American university. But, although it may seem otherwise, the machine is not really speaking to us: it only calculates which word should follow the previous one in its answer. It neither thinks nor feels, although that is easy to forget when contemplating its skill.

However, dismissing these assistants as simple word calculators can be just as counterproductive as attributing human qualities to an inert machine. Conversing with ChatGPT's AI offers capabilities previously unseen on any electronic device. It summarizes and breaks down texts instantly. It drafts menus when we tell it what is in the fridge. It translates texts. It tells us which laws we need to study to pass our next exam… And it does all of this quickly and without hesitation.

ChatGPT also presents information in an orderly and clearly written way. It behaves like an expert in any subject, bringing the material down to our level and explaining everything according to what we already know. The problem is that it is not really an expert; it only appears to be. It never hesitates or says, “Sorry, I have no knowledge on this subject.” These are machines programmed to know everything and to always have an answer.

Should we believe everything the ChatGPT AI says?

These features that so fascinate users, however, carry clear and verifiable risks, such as the spread of false information or misinformation. It may be, for example, that the data the AI was trained on contains errors, or that it is incomplete, or that the calculations performed to produce the requested answer lead to erroneous conclusions. And the end user, unless they are an expert in the field, has no way of knowing whether what they are reading is true. Numerous articles have warned about this, both in the general media and in academic journals, and both the developers of these systems and governments are already discussing how to minimize these undesired effects.

Another potential risk is that malicious actors use ChatGPT's AI or similar systems to produce false, biased, or misleading information for political or financial gain. The technology allows them to do so in a fully personalized way and on demand, targeting each demographic with the tone and references a subject-matter expert would typically use. And, on top of that, it is fast and relatively cheap to use.

Society has never faced anything of this caliber. The printing press upended existing information channels, and even then it was used, among other things, to produce biased or erroneous information on a large scale. Then came the Internet and social networks, which made it ever easier to spread ideas to the rest of the world regardless of their value or veracity. Even so, a person with the right skills and enough time to write each text was always required. ChatGPT's AI removes those limitations: it writes in any language, adopts different tones, constantly learns and improves, and, to top it off, does so instantly and for a negligible price.

The dangers, however, do not come only from this talent falling into the wrong hands. On social networks it is easy to find cases in which ChatGPT's AI, as a conversation progresses, strings together errors that sometimes end in harmful advice. Responsible companies carefully monitor their AIs so they do not offer opinions or information on topics marked as sensitive or dangerous, and they explicitly warn users not to take anything as true, since the product is still under development. But the risk is there.

The greatest danger, however, arises when users come across information on the Internet that seems to come from a person but is in fact text written by ChatGPT or a derivative. Within the chat itself, users know at all times that they are conversing with an artificial intelligence that can say false or incorrect things. But outside that context there is no watermark certifying whether what we read was written by a person or by a machine. For example, a seller can already use ChatGPT's AI to generate hundreds of different reviews praising the product they have put up for sale, inviting the consumer to make an apparently informed purchase that is, in reality, based on manipulation.

L. Gordon Crovitz, co-founder of NewsGuard, a company that provides the most popular tool for rating the credibility of online media, says that “this tool is the most powerful tool for spreading disinformation that has ever been on the Internet.” The conclusion is not new. In 2019, engineers at OpenAI, the company behind ChatGPT's AI, already voiced concern in their research results that “its capabilities could reduce the costs of disinformation campaigns” and facilitate, for certain actors, strategies driven by “economic benefits, a particular political agenda and/or the desire to create chaos or confusion.”

The challenge of determining what is true

The first measure for stopping the spread of false information and malicious propaganda is the careful training of these models, so that they do not repeat as true everything written on the Internet, which is where they acquire most of their knowledge. The problem is that there are always gaps. The line between true and false is not always clear, even within a university or a public institution. Moreover, what we believe we know for certain today may well be disproved in the future as our collective knowledge advances; history offers countless examples.

Deciding who controls what counts as true entails various dangers, as well as philosophical, moral and even democratic debates. At the moment, that role falls to private companies advised by experts in the field. However, it is important that society agrees on who will hold it in the future, what control mechanisms will apply, and so on.

On the other hand, over-regulating what an AI such as ChatGPT can or cannot say also brings inconveniences and limitations for some of its users. The purpose of any tool is to do exactly what the user wants, and the position of the technology companies, in line with this, is to gradually impose fewer and fewer restrictions on use, within whatever limits society deems acceptable.

For OpenAI, as detailed on its blog, pluralism, freedom and allowing our own morality to evolve over time are important values, just as our societies have evolved in the past, no matter how many challenges these tools present today. But, again, it is society that must reach a consensus.