The transformers are here

The following conversations were generated entirely by an AI to which I supplied only the first three sentences, shown in bold:

[Generated conversations]

The magic behind this supercharged autocomplete is GPT-3, an algorithm trained to continue texts convincingly. In some cases it manages to capture traits as subtle and human as humor while keeping up a complex conversation around a topic.

GPT-3 belongs to the family of algorithms known as transformers, which have proven very effective at recognizing complex patterns in sequences of elements. By treating language as a sequence of words, they can be trained so that, given an input text (the context), they continue it in the most plausible way.
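
To make this concrete, here is a minimal sketch of that completion loop in Python, assuming the Hugging Face transformers library and using the small, freely downloadable GPT-2 model as a stand-in for GPT-3 (the prompt is made up for illustration):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is used here only as a small, openly available stand-in for GPT-3.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = "The robots of the future will"             # made-up example prompt
input_ids = tokenizer(context, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                                # add 20 more tokens
        logits = model(input_ids).logits               # (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()               # greedy pick of the next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Systems like GPT-3 sample from the model's scores instead of always taking the single most likely word, which is part of what makes their continuations feel varied and natural.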

Thanks to the Information Sciences Institute in Los Angeles, I received an invitation to test this tool, one of the most advanced currently available: the Optimus Prime of natural language. It has already racked up successes in text-based challenges such as holding conversations indistinguishable from a human's, helping write novels, and even programming software on demand.

A transformer robot destroying a city

Such a deep command of natural language could greatly improve the way we communicate with conversational interfaces like Google Assistant or Alexa. And since great power entails great responsibility, the company behind this tool is opening it to the public gradually and cautiously: for now it can only be used by invitation or through a paid subscription billed by the number of words generated.

But what has been invented cannot be uninvented, and countless projects have emerged that try to replicate GPT-3's recent successes in the open. In fact, anyone without any machine-learning knowledge can now install the open-source alternative, GPT-Neo, and try it with a single click (and a good graphics card).
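
For illustration, that "single click" looks roughly like the sketch below, assuming the Hugging Face transformers library and the publicly released EleutherAI/gpt-neo-1.3B checkpoint (the prompt and the sampling settings are my own choices):

```python
from transformers import pipeline

# Download GPT-Neo (a few gigabytes) and build a ready-to-use text generator.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

result = generator(
    "The transformers are here, and they",  # example prompt of my own
    max_new_tokens=40,    # how much text to add after the prompt
    do_sample=True,       # sample instead of always picking the likeliest word
    temperature=0.9,      # higher values give more varied output
)
print(result[0]["generated_text"])
```

Without a good graphics card the same code still runs, just far more slowly, which is why that caveat matters.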

These alternatives are quite recent (less than six months old), so the tsunami of bots indistinguishable from humans is still to come, although there have already been cases of misuse, mainly aimed at manipulating public opinion by inflating the follower counts of a political party or moving the price of cryptocurrencies.

Voight-Kampff test clip in Blade Runner

Recent studies estimate that around 15% of Twitter accounts could be bots, and our mechanisms for detecting them are still fairly ineffective. Like fake news, they are here to stay, so we will have to start thinking about scalable ways to tell humans and bots apart. At least until we have a Voight-Kampff test.

