Can ChatGPT be detected?

ChatGPT, which is also built into the new Bing, is an extremely useful tool for generating text or answering quick queries. However, this and other similar AIs have a downside: their output is so polished that it is often practically impossible to tell whether a text was produced by ChatGPT or written by a human being, which opens the door to misleading or manipulative content. In fact, many teachers have already come out against AI because students use it to, among other things, do their homework. But how can we detect texts produced by ChatGPT and similar tools?

The artificial intelligence ecosystem is expanding rapidly and is home to remarkable tools such as ChatGPT, a chatbot capable of generating coherent, natural-sounding text that closely resembles human writing, all in a matter of seconds and for free. This has sparked a debate about the need to learn to tell whether content was created by machines or by people.

Today, knowing whether a text was produced by ChatGPT or a similar AI model is a very complicated task, above all because of how natural the language generated by these tools sounds. One approach is to look for inaccurate or incorrect content: the AI is not perfect and often makes mistakes, especially in texts related to finance.

The problem is that humans also make mistakes when they write. The only way to know for certain that a text was written by ChatGPT or a similar model is to find a note somewhere in the text stating that it was generated by an AI. Some scientists, for example, have used ChatGPT in their studies and credited the artificial intelligence as an author. Detecting unattributed texts produced by ChatGPT and the like, however, could become easier in the future.

There are tools capable of detecting texts made with ChatGPT

Companies that have developed text-generation models, such as OpenAI, are also working on AIs that can detect text produced by ChatGPT. These tools are very easy to use: you simply copy the text, paste it into the platform and wait for it to determine whether it was produced by ChatGPT, Bard or another similar model.
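As a rough illustration of that copy-and-paste workflow, the sketch below sends a block of text to a hypothetical detection endpoint and prints the verdict. The URL, field names and response format are assumptions made for illustration only; they do not correspond to any specific vendor's API.

```python
import requests

# Hypothetical detection endpoint; real services (OpenAI's classifier, Writer,
# GPTZero, etc.) each have their own URLs, parameters and response formats.
DETECTOR_URL = "https://example.com/api/v1/detect"

def detect_ai_text(text: str) -> dict:
    """Send the pasted text to the (hypothetical) detector and return its verdict."""
    response = requests.post(DETECTOR_URL, json={"text": text}, timeout=30)
    response.raise_for_status()
    return response.json()  # e.g. {"label": "likely_ai", "probability": 0.91}

if __name__ == "__main__":
    sample = "Artificial intelligence is transforming the way we write."
    print(detect_ai_text(sample))
```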

These types of tools, however, are not totally reliable. For example, OpenAI's recently announced classifier correctly identifies only 26% of AI-written text, and 9% of the time it produces false positives: text written by a human that the platform labels as written by an AI.

A tool capable of detecting texts made with ChatGPT or similar models can also show different results depending on how confident it is that something was written by an AI. For example, if it cannot tell whether the text was generated by a human or by an AI model, it will label the result as "possibly" AI-generated. If, on the other hand, the platform is more certain that the text was produced by ChatGPT or similar, it will label it as "likely" AI-generated, reserving "very unlikely" for text it considers human-written.
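To make those graded labels concrete, here is a minimal sketch that maps a detector's estimated probability of AI authorship to labels like the ones described above. The cut-off values are illustrative assumptions, not thresholds published by OpenAI.

```python
def label_from_probability(p_ai: float) -> str:
    """Map an estimated probability of AI authorship to a graded label.

    The thresholds below are illustrative assumptions; OpenAI has not
    published the exact cut-offs behind its classifier's labels.
    """
    if p_ai < 0.10:
        return "very unlikely AI-generated"
    if p_ai < 0.45:
        return "unlikely AI-generated"
    if p_ai < 0.90:
        return "unclear if AI-generated"
    if p_ai < 0.98:
        return "possibly AI-generated"
    return "likely AI-generated"

print(label_from_probability(0.30))  # unlikely AI-generated
print(label_from_probability(0.99))  # likely AI-generated
```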

Unfortunately, these tools are very unlikely to detect texts made with ChatGPT when the text is highly predictable. For example, it is extremely difficult to know whether an alphabetical list of the states of the United States was written by an AI or by a human.

In the future, the only way to detect it may be to incorporate watermarks or metadata, in the same way that images often carry EXIF data. It would not be surprising if ways to remove that associated metadata emerged, just as you can edit the EXIF data of a photograph. That is where authorities and the main players in artificial intelligence will have to keep working, in parallel with the advancement of the models.
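As a point of comparison, the snippet below shows how easily EXIF metadata can be read from, and stripped out of, a photo using the Pillow library; an embedded provenance watermark for AI text would face a similar removal problem. The file names are placeholders.

```python
from PIL import Image  # pip install Pillow

# Read EXIF metadata from a photo (file name is a placeholder).
img = Image.open("photo.jpg")
exif = img.getexif()
for tag_id, value in exif.items():
    print(tag_id, value)

# Stripping the metadata is just as easy: rebuild the image from raw pixels
# and save it without the original EXIF block.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("photo_no_exif.jpg")
```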

How to tell whether a text was written by a human or by GPT-3?

GPT-3 has enormous potential in educational innovation and in other fields where natural language processing and text generation are key. Its overuse, however, has ethical implications that have made it necessary to develop methods for identifying when it is the author of content it should not have written.

The response to this need has been effective, and there are now tools specialized in detecting AI-generated text, among which the following stand out:

Writer

Writer is an AI writing assistant for teams that also includes a detector for content generated by artificial intelligence tools. It is very easy to use: just paste the text, or the URL where it is hosted, into the application, and within seconds the system runs an analysis that returns the probability that the writing was produced by a human.
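The sketch below illustrates the "paste a URL" half of that workflow: it downloads a page, extracts its visible text and leaves it ready to hand to a detector (for instance the hypothetical detect_ai_text shown earlier). This is not Writer's actual API, only an assumed illustration of the step.

```python
import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def text_from_url(url: str) -> str:
    """Download a page and return its visible text, ready to feed a detector."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # Drop scripts and styles, then collapse the remaining text.
    for tag in soup(["script", "style"]):
        tag.decompose()
    return " ".join(soup.get_text(separator=" ").split())

# Placeholder URL: feed the extracted text to whichever detector you use.
article_text = text_from_url("https://example.com/some-article")
print(article_text[:200])
```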

GPTZero

Developed by Edward Tian, a student at Princeton University, GPTZero is a tool that detects text written by a chatbot based on the "perplexity", or randomness, of the text, a measure of how well a language model can predict it. In other words, the lower this indicator is, the more predictable the writing and, therefore, the more likely it was created by a tool like ChatGPT.
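A rough sketch of how such a perplexity score can be computed with an open model (GPT-2 via the Hugging Face transformers library) is shown below. GPTZero's own implementation is not public, so this is only an approximation of the idea.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Small open model used as a stand-in; GPTZero's exact setup is not public.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity for a piece of text.

    Lower values mean the text is more predictable to the model, which
    detectors such as GPTZero treat as a hint of AI authorship.
    """
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())

print(perplexity("The quick brown fox jumps over the lazy dog."))
```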

ChatGPT Detector

ChatGPT Detector can identify whether a text was produced by ChatGPT, using either linguistic features or classifiers based on pretrained language models (PLMs). The results of its analyses are classified as human or ChatGPT, along with a probability percentage.
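A minimal sketch of the PLM-based approach, using the transformers text-classification pipeline, is shown below. The model identifier is assumed to be the publicly shared RoBERTa-based ChatGPT detector checkpoint; substitute whichever detector checkpoint you actually use.

```python
from transformers import pipeline

# Model id assumed to be the publicly shared RoBERTa-based ChatGPT detector;
# replace it with whichever detector checkpoint you actually use.
detector = pipeline(
    "text-classification",
    model="Hello-SimpleAI/chatgpt-detector-roberta",
)

result = detector("This essay was drafted in seconds by a large language model.")
print(result)  # e.g. [{'label': 'ChatGPT', 'score': 0.97}]
```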

How effective are these applications at telling whether a text was written by GPT-3?

The tests carried out so far have yielded mixed and variable results. The ratings even change when the language or the length of the text changes, and some AI-generated text has passed as legitimate.

In other words, these tools still have room for improvement. They are evolving quickly, however, and it is likely that they will soon offer more consistent results, keeping pace in the race against a GPT-3 whose capabilities grow over time.

In the short term, they will be key tools on the path towards the responsible and ethical use of artificial intelligence, since this technology is unavoidable in the process of innovating in education and other areas, and the problem lies not in chatbots but in how they are used.