These talking points on AI and mis/disinformation were not generated by ChatGPT
AI technologies can threaten our ability to ensure that people receive credible, accurate information, so that they can take steps to protect their health.
Meeting the threat of AI-generated misinformation will require:
- More productive conversations with the private sector and with technology and medicines regulators
- Health authorities with the right expertise in AI development and digital content production to address these threats
- Systems- and societal-level solutions to this problem – we can no longer rely on each individual's ability to discern accurate from inaccurate information, because AI can make the two impossible to distinguish.
Currently, much attention is paid to how AI can be used to generate digital content, which could be misused to rapidly produce and amplify multilingual content online for disinformation campaigns.
- Low- and middle-income countries (LMICs) are actually the most vulnerable to mis/disinformation in general, and to any tools that may exacerbate it, because there are few or no enforceable regulations or policies governing how AI technologies are used, and few regulations governing social media.
- For example, Meta has historically spent over 87% of its content moderation budget on the US and Europe, where it is most heavily regulated. It was recently fined 1.3 billion USD by the EU for violating EU data privacy rules.
- Most LMICs have no medicines and device regulators or data protection regulators, so mis/disinformation harms, including those from AI-generated content, are regulated neither in the general information society nor from a health information perspective.
AI is already built into the fabric of the information environment – it affects how we search for, receive, and act on different types of information. We cannot avoid this, so calls for moratoriums are futile and favor those who won't stop developing these tools.
- AI-based algorithms profile each of us individually as we use our phones, apps, websites, and search engines, to deliver whatever the algorithm predicts will best catch our attention. This doesn't mean the algorithm will give us factual information – it will give us what it predicts we want to see or will find interesting, or what it has been programmed to give us (a minimal sketch of this kind of ranking follows this list).
- Generative AI (like ChatGPT) can make it much harder for people to discern credible, accurate information, by manipulating and tailoring text to a specific user and their information-seeking needs, and by offering video and image content that is entirely fictitious but looks very real.
- AI is even built into the analysis of social media, where it profiles trending topics online by sentiment and by gender. These tools carry biases of their own, caused by the lack of standards on how AI-based analytics should be developed, documented, and monitored for performance (the second sketch below illustrates one such bias).
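To make the first point concrete, here is a minimal, hypothetical sketch of an engagement-driven feed ranker. Every name and number in it is invented for illustration – this is not any platform's actual code – but it shows the structural issue: the ranking objective is predicted engagement, and accuracy never enters the calculation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    tags: list[str]
    base_engagement: float  # model's baseline guess at click/share likelihood (invented)
    accuracy: float         # hypothetical fact-check score; real feeds rarely carry one

def predicted_engagement(post: Post, user: dict) -> float:
    # Toy stand-in for a trained engagement model: boost posts that
    # match this specific user's profiled interests.
    boost = 1.5 if any(t in user["interests"] for t in post.tags) else 1.0
    return post.base_engagement * boost

def rank_feed(posts: list[Post], user: dict) -> list[Post]:
    # Note what is absent: `accuracy` plays no part in the ordering.
    # The feed optimizes for predicted attention, not factual quality.
    return sorted(posts, key=lambda p: predicted_engagement(p, user), reverse=True)

user = {"interests": ["health"]}
feed = rank_feed(
    [
        Post("Miracle cure doctors won't tell you about", ["health"], 0.92, accuracy=0.05),
        Post("National vaccination schedule update", ["health"], 0.31, accuracy=0.98),
    ],
    user,
)
print([p.text for p in feed])  # the false but attention-grabbing post ranks first
```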
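And on the bias point: the deliberately crude, hypothetical sentiment scorer below is built from an English-only word list, so non-English posts silently score as neutral rather than positive or negative. Without documentation standards, anyone consuming the resulting analytics has no way to know this limitation exists.

```python
# Toy lexicon-based sentiment scorer; word lists are invented for illustration.
POSITIVE = {"safe", "effective", "protects"}
NEGATIVE = {"dangerous", "toxic", "harmful"}

def sentiment(text: str) -> float:
    # Score = (positive hits - negative hits) / word count.
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(len(words), 1)

print(sentiment("vaccines are safe and effective"))     # > 0: scored positive
print(sentiment("les vaccins sont sûrs et efficaces"))  # 0.0: same claim in French
                                                        # silently scored as neutral
```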
AI algorithms and AI-generated content could be used to manipulate individuals, especially in the absence of the regulation and audit enforcement that technology guiding human decision-making requires.
- Currently, there are strong calls for a moratorium on the development of AI tools that generate content.
- Unfortunately, these calls may not have the desired impact, because generative AI technology is already so widely used and developed worldwide.
- Instead, we need to accelerate standards building, ethical AI development, and regulatory frameworks that steer innovation toward use cases that serve humanity.
Responsible/ethical AI principles have been defined as general, overarching frameworks for how to develop, introduce, and govern AI-based technologies.
- However, in the absence of globally applicable or enforceable guidance and regulations for AI-based technologies in general, and in health especially, there is great variability in the quality, transparency, and utility of AI-based tools.
- In the absence of regulation and oversight, AI-based technologies face broader challenges: the ethical design and maintenance of AI-based tools, transparency about what data patterns the tools were built/trained on, and transparency about how the tools change over time as the patterns of their users continue to train them (the sketch below shows one possible shape for such documentation).
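One concrete form such transparency could take is a "model card" style documentation record, an idea already established in the AI research community. The sketch below is a minimal, hypothetical version; all field names and values are invented to illustrate what disclosure of training data, known limitations, and post-deployment change could look like.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal, hypothetical documentation record for an AI-based tool."""
    name: str
    version: str
    training_data_sources: list[str]   # what data patterns the tool was built/trained on
    known_limitations: list[str]       # e.g. languages or populations it was not tested on
    last_performance_audit: str        # when drift and retraining effects were last checked
    changes_since_last_version: list[str] = field(default_factory=list)

card = ModelCard(
    name="trend-sentiment-analyzer",
    version="2.1.0",
    training_data_sources=["English-language social media posts, 2019-2021"],
    known_limitations=["untested on non-English text", "no health-domain validation"],
    last_performance_audit="2023-04",
    changes_since_last_version=["retrained on user feedback collected in production"],
)
```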
I wrote this LinkedIn blog in the summer of 2023, after having been asked for briefing notes several times. Follow me on LinkedIn if you'd like to read more of my commentaries.