A college student used GPT-3, the most advanced language AI to date, to generate self-help and productivity blog posts that quickly went viral. Almost no reader realized they had not been written by a human, illustrating the system’s potential to spread disinformation.
At the beginning of last week, Liam Porr had only heard of GPT-3. By the end of it, the college student had used the artificial intelligence (AI) model to create an entirely fake blog under an invented author name.
He devised it as a fun experiment. But then one of his posts reached the number-one spot on Hacker News. Few people noticed that his blog was generated entirely by AI, and some readers even subscribed.
Although many have speculated about how GPT-3, the most powerful text-generating artificial intelligence tool to date, could affect content production, this is one of the few known cases that illustrates its potential. According to Porr, who studies computer science at the University of California, Berkeley, USA, this was the most remarkable thing about the experience: “It was very easy, actually, and that’s the most alarming thing.”
GPT-3 is the latest and largest language AI model. OpenAI, a research lab based in San Francisco, USA, began offering access to it in mid-July. In February of last year, OpenAI grabbed headlines with the previous version of the algorithm, GPT-2, announcing that it would not release it for fear that it would be misused. The decision immediately provoked a backlash, with researchers accusing the lab of pulling a publicity stunt. In November, the lab changed its mind and released the model, stating that it had found “no solid evidence of misuse so far.”
The lab took a different approach with GPT-3: it neither hid the model nor granted public access. Instead, it gave access to selected researchers who applied for a private beta, with the aim of gathering their feedback and commercializing the technology by the end of the year.
Porr submitted his request. He filled out a form with a simple questionnaire about his intended use. But he didn’t wait around. After reaching out to members of the Berkeley AI community, he quickly found a PhD student who already had access. Once the student agreed to collaborate, Porr wrote a small script [a piece of code] for him to run. It gave GPT-3 the title and introduction for a blog post and had the AI generate several complete versions. Porr’s first post (the one that reached Hacker News) and every subsequent one were copy-and-pasted from the outputs with little or no editing.
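The article does not reproduce Porr’s actual script, but the workflow it describes (feed GPT-3 a human-written title and introduction, collect several complete drafts, publish one nearly verbatim) can be sketched roughly as follows. All function names and the prompt format here are assumptions for illustration; the model call is a placeholder standing in for OpenAI’s private-beta completion API, which the real script would have invoked.

```python
# Rough, hypothetical sketch of the workflow described above.
# The prompt format and helper names are invented for illustration;
# generate_completions() is a placeholder for the real GPT-3 API call.

def build_prompt(title: str, intro: str) -> str:
    """Combine the human-written title and introduction into one prompt."""
    return f"{title}\n\n{intro}\n\n"

def generate_completions(prompt: str, n: int = 3) -> list[str]:
    """Placeholder for the model call.

    A real script would request n completions from GPT-3 here; to keep
    this sketch self-contained, it just echoes the prompt with a marker.
    """
    return [f"{prompt}[model-generated draft {i + 1}]" for i in range(n)]

def draft_post(title: str, intro: str, n_drafts: int = 3) -> list[str]:
    """Produce several full drafts; the author then picks one to publish."""
    prompt = build_prompt(title, intro)
    return generate_completions(prompt, n=n_drafts)

drafts = draft_post(
    "Feeling unproductive? Maybe you should stop overthinking",
    "Most productivity advice tells you to do more. Here is why doing less works.",
)
```

The key design point the article highlights is how little sits between the model output and publication: the human contribution is only the title, the introduction, and the choice of which draft to paste into the blog.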
“It took maybe a couple of hours from the time I came up with the idea and contacted the PhD student until I actually created the blog and the first post went viral,” he says.
The trick to generating content without the need to edit it too much was to understand the strengths and weaknesses of GPT-3. “It’s good enough to create beautiful language, but it’s not very logical or rational,” Porr explains. So he chose a popular category of blogs that doesn’t require rigorous logic: productivity and self-help.
From there, he wrote his headlines following a simple formula: he scanned Medium and Hacker News to see what was performing well in those categories and came up with something relatively similar. “Feeling unproductive? Maybe you should stop overthinking,” read one headline. “Boldness and creativity trumps intelligence,” said another. In some cases, the headlines didn’t work. But as long as he stuck to the right topics, the process was easy.
After two weeks of nearly daily posts, he ended the project with a final, cryptic message written by himself. Titled “What I would do with GPT-3 if I had no ethics,” it described his process as a hypothetical. That same day, he also posted a straightforward confession on his real blog.
Porr says he wanted to prove that GPT-3 could pass for a human writer. In fact, despite the algorithm’s somewhat odd writing pattern and occasional errors, only three or four of the dozens of people who commented on his first Hacker News post raised suspicions that it might have been generated by an algorithm. All of those comments were immediately voted down by other members of the community.
This is exactly what experts have long worried about with these language-generation algorithms. Ever since OpenAI first announced GPT-2, people have speculated that it was vulnerable to abuse. In its own blog post, the lab highlighted the potential for the AI tool to be weaponized for the mass production of disinformation. Others have wondered whether it could be used to churn out keyword-stuffed spam posts designed to game Google.
Porr thinks his experiment also demonstrates a more mundane but still worrying use case: the tool could be used to generate a large volume of clickbait content. “There may be a flood of mediocre blog content, because the barrier to entry is now so much lower,” he says. “I think the value of online content will go down a lot.”
Porr plans to run more experiments with GPT-3, but is still waiting for OpenAI to grant him access. “They may be upset that I did this,” he muses. “I mean, it’s a little silly.”