Hannah Heinzekehr | March 20, 2020
A new AI early warning system to combat disinformation and prevent violence previewed in the Bulletin of the Atomic Scientists
In a new article published in the Bulletin of the Atomic Scientists, three University of Notre Dame researchers preview the development of an artificial intelligence (AI) early warning system meant to monitor the ways manipulated content online (i.e., altered photos, misleading memes, edited videos) can fuel violent conflict and societal instability and interfere with democratic elections. The article includes research on the 2019 Indonesian election as a prime example of the ways online disinformation campaigns can have real-world consequences.
The piece was co-authored by Michael Yankoski, doctoral candidate in theology and peace studies at the Kroc Institute for International Peace Studies; Walter Scheirer, assistant professor in the Department of Computer Science and Engineering; and Tim Weninger, associate professor in the Department of Computer Science and Engineering. The three scholars met while participating in a Notre Dame panel discussion on the ethics of AI and began exploring possible intersections between computer science and peace studies.
“As an ethicist and a scholar of peace studies, I tend to orient my work toward anticipating future threats,” said Yankoski. “I'm fascinated by the questions: What challenges will scholars and practitioners of peace studies face in the next 25 years, and how might we be as prepared as possible for what lies ahead? While there is a lot of concern that AI will be deployed for purposes that undermine human well-being, I'm convinced that AI can also be designed for good ends and to help build a more peaceful and flourishing world.”