I visited Marcia Castro and the Department of Global Health and Population at the Harvard T.H. Chan School of Public Health for a health security symposium marking the 60th anniversary of the department’s establishment.
On a panel with Joan Donovan, Guilherme Canela de Souza Godoi, and Kevin Croke, we discussed #disinformation, the #informationecosystem, and how they impact #healthsecurity. Disinformation is inaccurate information specifically designed to advance an agenda; during an emergency, it is one component of an #infodemic, alongside questions, concerns, information voids, circulating narratives, and #misinformation.
My full notes are on my LinkedIn blog, and here are highlights:
- One of the best practices for addressing mis- and disinformation is to ensure that credible, accurate health information is easy to find, easy to understand, and easy to share.
- Mis- and disinformation often use emotionally manipulative content to accelerate their spread, which health authorities struggle to counter effectively with their own messaging. Addressing disinformation narratives may therefore require out-of-the-box thinking.
- Algorithms need to get smarter, but so does human oversight: memes, emotion, and conveyed meaning are culturally specific and don’t translate well to automated detection.
- Identifying the most likely common narratives and disinformation narratives ahead of time can help us plan and build a risk matrix, one that also defines which kinds of events trigger a “high risk” alert versus a “low risk” one, with different actions to match.
- If we don’t make health information as compelling as mis- and disinformation, we will keep fighting a losing battle. That means investing in better content, better ways of sharing and amplifying it, and translating it for communities through new trusted messengers working in trusted spaces.