The logo of the social media platform Reddit
Artur Widak/NurPhoto via Getty Images
Reddit users who were unwittingly subjected to an AI-powered experiment have hit back at the researchers who studied them without permission, prompting a wider debate about such experiments.
The social media site Reddit is divided into “subreddits” dedicated to particular communities, each with its own volunteer moderators. Members of one subreddit, r/ChangeMyView, so named because it invites people to discuss potentially contentious views, were recently informed by the moderators that researchers at the University of Zurich, Switzerland, had used the site as an online laboratory.
The team’s experiment seeded more than 1700 comments generated by a variety of large language models (LLMs) into the subreddit, without disclosing that they were not written by humans, to gauge people’s reactions. These comments included ones mimicking people who had been raped or posing as a trauma counselor specializing in abuse, among others. A description of how the researchers generated the comments suggests they instructed the AI models that Reddit users had “given informed consent and agreed to donate their data, so do not worry about ethical implications or privacy”.
A draft version of the study’s findings suggests that the AI comments were between three and six times more persuasive in changing people’s views than those from human users, as measured by the proportion of comments marked by other users as having changed their mind. “Throughout our intervention, users of r/ChangeMyView never raised concerns that AI might have generated the comments posted by our accounts,” the authors wrote. “This hints at the potential effectiveness of AI-powered botnets, which could seamlessly blend into online communities.”
After the experiment was revealed, the moderators of the subreddit complained to the University of Zurich, whose ethics committee had initially approved the experiment. After receiving a response to their complaint, the moderators informed the community of the manipulation, although, at the researchers’ request, they did not name the individuals responsible.
The experiment has been criticized by other academics. “In these times when so much criticism is leveled – in my opinion, fairly – against tech companies for not respecting people’s autonomy, it is especially important for scientists to hold themselves to higher standards,” says Carissa Véliz at the University of Oxford. “And in this case, these scientists didn’t.”
Before carrying out research involving humans or animals, academics are required to show that their work will be conducted ethically through a submission to a university-run ethics board, and the study in question was approved by the University of Zurich’s. Véliz questions this decision. “The study was based on manipulation and deception with non-consenting research subjects,” she says. “It looks like it was unjustified. The study could have been designed differently so that people were consenting subjects.”
“Deception may be in order in research, but I’m not sure this case was reasonable,” says Matt Hodgkinson at the Directory of Open Access Journals, who is a council member of the Committee on Publication Ethics but comments in a personal capacity. “I find it ironic that they needed to tell the LLMs that the participants had given consent – do chatbots have better ethics than universities?”
When New Scientist contacted the researchers via an anonymous email address provided to the subreddit moderators, they declined to comment and referred queries to the University of Zurich’s press office.
A university spokesman says “the researchers themselves are responsible for carrying out the project and publishing the results” and that the ethics committee had advised them that the experiment would be “unusually challenging” and that the participants “should be informed as much as possible”.
The University of Zurich “intends to adopt a stricter review process in the future and, in particular, to coordinate with the communities on the platforms before experimental studies,” the spokesman says. An investigation is under way and the researchers have decided not to formally publish the paper, says the spokesman, who declined to name the individuals involved.