Rewiring Media for Trustworthy AI

By Katalin Fehér, Fulbrighter

BetweenBrains: Taking Back our AI Future, a 2020 book, is a comprehensive summary of the AI movement that also discusses AI-related media in detail. In chapter six, the authors interpret AI technology in the context of social media, journalism, public dialogue, and democracy, arguing for responsible thinking and sound AI ethics.

“Trust has been eroding globally for a long time, but the decline has accelerated and been further disrupted by digital waves of increasing intensity. We are at a precarious point in history when our fundamental institutions – government, public institutions, the media, corporations – are not seen overall as trustworthy by a majority around the world. The public perceives the most important drivers of trustworthiness to be reliability, transparency, and responsible behavior: there is clearly a gap as of now. Digital communications and social media behavioral phenomena have aggravated the “perils of perception”: perceptions of trust are often out of line with reality. With Millennials slowly taking charge, societal trust is significantly lower than with preceding generations: low-trust environments are fertile ground for disinformation as the public loses confidence in impartial arbiters of a common set of truths.” 

“The media made several mistakes fighting economic decline and the decimation of the journalist workforce including: the pressure of optimizing content for social media, failing ad-based business models, 24-hour “Breaking News” attention desperation, ethical decline, and in some cases unbound partisanship. One-third of the public trusts media less than they did five years ago; more than 6 in 10 think that online news sources contain a “great deal” or “fair amount” of disinformation. These are complex times indeed. Big social media platforms are the primary gatekeepers for news outlets: even emerging media business models cause media to possibly increase the bottom line but ultimately to forfeit their destiny to the social media aggregator. Platforms accidentally gained an overwhelming amount of power and discretion to decide what content reaches us, and what counts as harmful: their business interests are not easily bridgeable with the public interest of constructive dialogue.

“Digitalization has led to the atrophying of online public dialogue and to the unexpected realization that “connectedness disconnects.” In the social media world, a concentrated, loud minority opinion of a few percent can create the illusion of being the aggressive majority, especially if those who want that opinion spread magnify it with paid ads and posts from fake agents, both humans and bots. Both the disappearing middle and waning empathy are detrimental to democracy: we are being pushed into corners/camps owing to a combination of exploitative tech (e.g. subpar AI labeling of our views, recommendation engines keeping us in our echo chambers) and psychological weaknesses such as vilification of dissent, or groupthink. Understanding and engaging the silent, confused, fearful “bystander” majority, who accidentally handed the town square to extremists, is key.”

The authors' expert recommendations for the broadly interpreted policy world extend these issues toward trustworthy AI, along with media topics, as follows:

Thematic policy recommendations 

  1. Empowering digital citizens 
  2. Rewiring media models 
  3. Fact-checking boost 
  4. Rebuilding a democratic core 
  5. Public interest technology 
  6. Public dialogue 
  7. Open-sourcing AI Policy frameworks 
  8. Protecting elections 
  9. Regulating social media 
  10. Algorithmic audits 
  11. Mandatory transparency 
  12. New civil(ity) code 
  13. Trusted accounts 
  14. Data sharing 
  15. Regulating adversarial digital campaigns 

Find their thought-provoking book via betweenbrains.ai.

The authors dedicate this work to humanity. 

  • DR. OMAR HATAMLEH is the former Chief Innovation Officer, Engineering, at NASA and former Executive Director of the Space Studies Program at the International Space University.
  • DR. GEORGE A. TILESCH is the president of the PHI Institute for Augmented Intelligence. Dr. Tilesch is a senior global innovation and AI expert who serves as a conduit and trusted advisor between the US and EU ecosystems, specializing in AI strategy, ethics, impact, policy, and governance, and working with governments, corporations, robust startups, academia, and international organizations.