Tina D Purnat

Public health

Health misinformation

Infodemic management

Digital and health policy

Health information and informatics

Blog Post

AI-generated content will change how we think about health information

Just five years ago, automatic content generation was a curiosity for most users, and often it was simply funny (try "InspiroBot", the bot that generates inspirational quotes, and read up on the creators behind the project).

But recent advances in AI and its applications have massively diversified the frequency, complexity, formats, and modes of accessing, searching for, and receiving content and information. Misinformation discussions have addressed topics like deepfakes, autogenerated content for social media, and algorithmic bias.

This comes at a time when the digitized information environment has never been more prominent on the radar of health policy and health systems. It has become harder for health authorities to maintain trust with the communities they serve and to promote accurate, reliable health information in an environment whose design works against the promotion of unexciting factual and scientific content.

Public health has been lagging behind the changes in the information environment

Recent advances in AI-generated content will reshape the information environment permanently. These technology shifts arrive every three to five years – much faster than public health can adapt its understanding of their social, user, design, and business impacts.

For example, when online platforms moved to ad-based business models about 10-15 years ago, it changed how content was promoted digitally, how users interacted with it, how apps designed their user interfaces, and which strategies succeeded in increasing the spread of health information and reducing harms from health misinformation.

Later, changes in content promotion and moderation increased, by design, the virality of emotional and polarizing content online – again reshaping the information environment and creating further challenges for public health and health authorities. That was also when misinformation evolved beyond clickbait, becoming more sophisticated and widespread; when platform design was optimized to influence users; and when online sentiment began to reverberate in offline life.

In public health, these changes didn't really alter how we thought about promoting health behaviors in populations and vulnerable groups. Health organizations used social media for fundraising and for making their brand more visible, and therefore recognizable, to donors. But then the online information environment became hostile to public health. For example, major polio vaccination campaigns in Pakistan were halted a day after they started because of viral misinformation in the communities prioritized for vaccination, and several international outbreaks (Zika, Ebola, SARS) were accompanied by transborder sharing of digitized information and misinformation. Only then did health authorities start treating social media and other online platforms as new places where they needed to promote quality health information and mitigate the spread of misinformation.

When the COVID-19 pandemic started, we looked more systematically at the challenge of the whole information environment, not only social media. We also had to rethink how we manage and explain the publication of evidence in relation to medicines regulation, how science is effectively translated to populations, what actions health authorities can take to provide accessible and acceptable guidance to communities, and what barriers to health information promotion they can remove.

But still, the actions we take to shape the health information environment of vulnerable populations, and the ways we generate insights that guide health communication (especially in digital channels), have lagged behind. Even now, we most commonly use metrics that only measure how successfully we have increased people's exposure to certain types of information online – even though we know that only a few of the people exposed to information actually act on it with the behavior we promote.

Only now, through innovation in infodemic insights, are we developing a routinized, systematic approach to metrics that are more informative and follow the funnel through to health behavior change. And here it is – another change in the information environment. Public health systems are asleep at the wheel while driving through another technological shift that will shape societal and individual relationships with, and trust in, evidence, information, and our behavior.
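The gap between exposure metrics and behavior-change metrics can be made concrete with a toy "funnel" calculation. The stage names and counts below are invented for illustration only; real infodemic insights work would derive them from surveys, engagement data, and health records.

```python
# Hypothetical illustration of funnel metrics for a health campaign.
# All stage names and counts here are made up for the example.
funnel = {
    "exposed": 100_000,   # people who saw the message
    "engaged": 8_000,     # clicked, shared, or read in full
    "intended": 1_200,    # reported intent to act (e.g. get vaccinated)
    "acted": 300,         # verified health behavior change
}

def conversion_rates(stages):
    """Return the stage-to-stage conversion rate for each step of the funnel."""
    names = list(stages)
    return {
        f"{a}->{b}": stages[b] / stages[a]
        for a, b in zip(names, names[1:])
    }

for step, rate in conversion_rates(funnel).items():
    print(f"{step}: {rate:.1%}")
```

Reporting only the first number ("100,000 people reached") hides the steep drop-off at every later stage, which is exactly the information that should guide communication strategy.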

Can public health lead the use of advances exemplified by ChatGPT?

ChatGPT is a conversation-optimized chatbot that made a splash before the holidays. This advance in technology has showcased huge opportunities for using AI to work with information, scientific knowledge, and evidence, and it could be applied to use cases for health workers, patients, users, and communities.

But we should be working NOW with library and information science practitioners, designers, responsible tech experts, health communication professionals, knowledge translation experts, and others to discuss uses of AI-generated health information content within and adjacent to health systems.

In discussions of AI-generated content and misinformation, attention has been paid to topics like deepfakes, autogenerated content for social media, and algorithmic bias. But we should also actively create space to define how the technology can be used responsibly in public health, and in which instances.

This is an opportunity for public health to clearly formulate ways to address algorithmic bias, ethics, governance, transparency, and accountability for another set of AI uses in health, and to monitor for unintended consequences of technologies like ChatGPT in promoting health information, translating evidence, and communicating science.

The health sector's digitalization lags 20 years behind some other sectors of the economy. Public health should be at the vanguard of leveraging advances in AI content generation. The essential tools informing public health are evidence, information, and intelligence – and the digitized society is expanding their meaning and usage.

I wrote this LinkedIn blog in the summer of 2023, after the coverage of ChatGPT exploded over the holidays. Follow me on LinkedIn if you'd like to read more of my commentaries.