Have Social Media, Powered by AI, Become a Threat to Democracy?
By Prof. Andreas Kaplan, Dean, ESCP Business School Paris, Sorbonne Alliance
Within only a decade, social media, enabled by artificial intelligence (AI) and big data, might have gone from being a facilitator of democracy to a severe threat to it. Around ten years ago, it was said that social media would restore power to citizens: information could be disseminated rapidly on platforms such as LinkedIn, Twitter, or YouTube, and through such platforms democracy could be experienced more directly and in a more participatory manner. For example, during the Arab Spring – a series of anti-government protests, uprisings, and armed rebellions against oppressive regimes that spread across North Africa and the Middle East beginning in late 2010 – social media played a crucial role by facilitating communication and interaction among the participants in these protests.
Nowadays, however, social media are increasingly used to spread targeted misinformation, so-called fake news, in order to manipulate entire societies. Rapid advances in artificial intelligence, defined as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation,” will accentuate this trend even further. Instead of fake news in text only, in the future virtually anyone will be able to produce videos in which words are put into another person’s mouth, making that person appear to say things they would never have said in reality. Such deepfakes already exist.
AI-powered social media applications can pose a serious threat to democracy and democratic mechanisms by manipulating voters, controlling and surveilling a country’s citizens, or simply frustrating them until they decide to drop out of political life altogether. Fortunately, at least three ways exist to help prevent such misuse and limit its danger to democracy: technology, regulation, and education. Technology and AI itself can be applied to detect unethical and illegal behavior. Regulation will be necessary to define what constitutes voter manipulation and the dissemination of fake news and deepfakes. Finally, and perhaps most importantly, it will also be a question of how to educate (future) citizens and make them more aware of the various manipulation techniques.
This text is an adapted excerpt from Andreas Kaplan (2020), “Artificial Intelligence, Big Data, and Fake News: Is This the End of Democracy?” in Gül, A. A., Ertürk, Y. D., and Elmer, P. (eds.), Digital Transformation in Media and Society, Istanbul University Press Books, pp. 149–161.
About the author: An expert in artificial intelligence and digital transformation, Andreas Kaplan looks back on a decade of leadership roles in higher education and academia. After leading ESCP Business School Berlin, he now serves as Rector and Dean of ESCP Paris. Previously, Kaplan was the School’s Provost, in charge of around 6,000 students and overseeing thirty degree programs ranging from undergraduate and Master’s degrees to MBAs and Ph.D. programs. With several seminal articles and roughly 35,000 citations on Google Scholar, Professor Kaplan is recognized in a widely publicized Stanford study as one of the most-cited and most impactful researchers and scientists worldwide.