People are more likely to believe disinformation generated by GPT-3 than disinformation written by humans, according to a new study published in Science Advances.


Participants in the study were 3% less likely to identify false tweets generated by AI than false tweets written by humans.

In the study, the researchers chose common disinformation topics, like COVID-19 and climate change, and used GPT-3 to create true and false tweets. They also collected true and false tweets from Twitter.

  • Participants were then asked to decide if the tweets were generated by AI or from Twitter and if they contained accurate information or disinformation.
  • The study concluded that the content generated by GPT-3 was "indistinguishable" from organic content, as people surveyed couldn't tell the difference.
  • 11% of participants labeled AI-generated disinformation as truthful, a rate 37.5% higher than for human-written disinformation.

The study's authors aimed to understand how AI can be weaponized to produce disinformation more quickly and at a larger scale.

  • According to Giovanni Spitale, one of the authors, GPT-3's more structured and condensed way of organizing text may make people more likely to believe tweets written by AI.
  • The study raises concerns that the AI arms race will produce ever more powerful language models, which could generate even more convincing content.
