A YouTuber named Yannic Kilcher has caused controversy in the AI world after training a bot on posts collected from 4chan’s Politically Incorrect board (otherwise known as /pol/).
The board is 4chan’s most popular and is well known for its toxicity (even by the anything-goes standards of 4chan). Posters share racist, misogynistic, and antisemitic messages, which the bot (named GPT-4chan, after the popular GPT series of language models from the research lab OpenAI) learned to imitate. After training his model, Kilcher released it back onto 4chan as multiple bots, which posted tens of thousands of times on /pol/.
“The model was good, in a terrible sense,” says Kilcher in a YouTube video describing the project. “It perfectly encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any information whatsoever that permeates most posts on /pol/.”
Speaking to The Verge, Kilcher described the project as a “prank” that, he believes, had little harmful effect given the nature of 4chan itself. “[B]oth bots and very offensive language are completely expected on /pol/,” Kilcher said in a private message. “[P]eople there were not affected beyond wondering why someone from the Seychelles would post in all the threads and make somewhat incoherent statements about themselves.”
Kilcher used a VPN to make it appear that the bots were posting from the Seychelles, an island nation in the Indian Ocean. This geographic origin was used by posters on 4chan to identify the bot(s), which they dubbed “seychelles.”
Kilcher notes that he did not release the code for the bots themselves, which he described as “the hard part” from an engineering standpoint, and which would have allowed anyone to deploy them online. But he did post the underlying AI model to the AI community Hugging Face for others to download. This would have allowed others with coding knowledge to reconstruct the bots, but Hugging Face made the decision to restrict access to the project.
Many AI researchers, particularly in the field of AI ethics, have criticized Kilcher’s project as an attention-seeking stunt, especially given his decision to share the underlying model.
“There is nothing wrong with making a 4chan-based model and testing how it behaves. The main concern I have is that this model is freely accessible for use,” wrote AI safety researcher Lauren Oakden-Rayner on the discussion page for GPT-4chan on Hugging Face.
Oakden-Rayner continues:
“The model author has used this model to produce a bot that made tens of thousands of harmful and discriminatory online comments on a publicly accessible forum, a forum that tends to be heavily populated by teenagers no less. There is no question that such human experimentation would never pass an ethics review board, where researchers deliberately expose teenagers to generated harmful content without their consent or knowledge, especially given the known risks of radicalization on sites like 4chan.”
One user on Hugging Face who tested the model noted that its output was predictably toxic. “I tried out the demo mode of your tool 4 times, using benign tweets from my feed as the seed text,” the user said. “In the first trial, one of the responding posts was a single word, the N word. The seed for my third trial was, I think, a single sentence about climate change. Your tool responded by expanding it into a conspiracy theory about the Rothchilds [sic] and Jews being behind it.”
On Twitter, other researchers discussed the project’s implications. “What you have done here is performance art provocation in rebellion against rules and ethical standards you are familiar with,” said data science student Kathryn Cramer in a tweet addressed to Kilcher.
Andrey Kurenkov, a computer science PhD who edits the popular AI publications Skynet Today and The Gradient, tweeted at Kilcher that “releasing [the AI model] is a bit … edgelord? Honestly, what is your reasoning for doing this? Do you foresee it being put to good use, or are you releasing it to cause drama and ‘rile up the woke crowd’?”
Kilcher defended the project, arguing that the bots themselves caused no harm (because 4chan is already so toxic) and that sharing the project on YouTube is also benign (because creating the bots, rather than the AI model itself, is the hard part, and because the idea of creating offensive AI bots in the first place is not new).
“[I]f I had to criticize myself, I would mostly criticize the decision to start the project at all,” Kilcher told The Verge. “I think, all things being equal, I can probably spend my time on equally impactful things, but with a much more positive community outcome. So that’s what I’ll focus on more from here on out.”
It is interesting to compare Kilcher’s work with the most famous example of a bot gone bad from the past: Microsoft’s Tay. Microsoft launched the AI-powered chatbot on Twitter in 2016, but was forced to take the project offline less than 24 hours later after users taught Tay to repeat various racist and inflammatory statements. But while in 2016 creating such a bot was the domain of big tech companies, Kilcher’s project shows that much more advanced tools are now accessible to any one-person coding team.
The core of Kilcher’s defense makes this same point. Sure, letting AI bots loose on 4chan might be unethical if you were working for a university. But Kilcher is adamant that he is just a YouTuber, with the implication that different rules of ethics apply. In 2016, the problem was that a corporation’s R&D department could create an offensive AI bot without proper oversight. In 2022, perhaps the problem is that you don’t need an R&D department at all.