Tina D Purnat

Public health

Health misinformation

Infodemic management

Digital and health policy

Health information and informatics

Blog Post

Trends in social media, AI and regulation space and what it means for public health

(coauthored with Elisabeth Wilhelm)

We often remind colleagues that the information environment keeps changing much faster than public health policy can keep up. Well, we’ve (again) experienced a seismic shift in the past six months: a reorganization of platform business practices that is driving huge changes and will have major downstream effects on public health.

Had these changes occurred in 2019 instead of 2023, they would have dramatically altered the information environment during the pandemic and most likely made it harder to save lives. These trends have implications for future emergency preparedness and response, as well as for health system strengthening work.

The #generativeAI hype has set off a race to build the best tools for users of internet platforms, business and enterprise software, and various services. The “intelligence” in artificial intelligence is a misnomer: an AI tool is actually a big data siphon that ingests huge amounts of data, runs statistical analysis to find patterns down to minute differences, and then uses those patterns to “fill in the blank” in answer to the questions you ask it. It may look “intelligent”, but it merely regurgitates the answer that fits its underlying data with the highest probability.
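
To make the “fill in the blank” point concrete, here is a toy Python sketch with a made-up three-sentence corpus: a bigram model that answers by picking whichever word most often followed the prompt word in its training data. Real generative AI systems are vastly larger, but the underlying mechanic (predict the most probable continuation, with no understanding) is the same.

```python
from collections import Counter, defaultdict

# Invented training text, purely for illustration.
corpus = ("vaccines are safe and effective . vaccines are recommended "
          "by health authorities . health authorities share accurate information .").split()

# Count which word follows which in the training data.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def fill_in_the_blank(word: str) -> str:
    # "Answer" with whichever continuation best fits the underlying data.
    return following[word].most_common(1)[0][0]

print(fill_in_the_blank("vaccines"))  # -> "are" (the most frequent continuation)
print(fill_in_the_blank("health"))    # -> "authorities"
```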

So who will win this race to monetize the new technology? The companies with the biggest private datasets on which to train their new AI systems for a variety of tasks.

I think this is the main reason access to platform data is becoming increasingly limited: technology platforms want to either sell the data or use it for their own tools, and they don’t want other companies profiting from their content for free.

The internet is changing, and the spirit of the moment is perfectly captured by James Vincent in his article in The Verge:

In recent months, the signs and portents have been accumulating with increasing speed. Google is trying to kill the 10 blue links. Twitter is being abandoned to bots and blue ticks. There’s the junkification of Amazon and the enshittification of TikTok. Layoffs are gutting online media. A job posting looking for an “AI editor” expects “output of 200 to 250 articles per week.” ChatGPT is being used to generate whole spam sites. Etsy is flooded with “AI-generated junk.” Chatbots cite one another in a misinformation ouroboros. LinkedIn is using AI to stimulate tired users. Snapchat and Instagram hope bots will talk to you when your friends don’t. Redditors are staging blackouts. Stack Overflow mods are on strike. The Internet Archive is fighting off data scrapers, and “AI is tearing Wikipedia apart.” The old web is dying, and the new web struggles to be born. 

Now to tie these seismic shifts to health and why public health practitioners should care:

Regarding access to data from internet platforms

1. Most platforms are moving to charge for access to data and APIs, so only corporations with large marketing budgets will be able to afford them.

This trend affects public health in several ways:

  • Without access to internet platform data, it will be difficult for health authorities to understand the conversations around important health topics, including the questions, concerns, and misinformation in circulation. That makes it very hard to address people’s health information needs.
  • Historically, though, access to platform data has been limited anyway, which has constrained health authorities’ understanding of the information environment. Most analysis of online conversations, in academia and in commercial social listening tools, has been based on Twitter and, to some extent, Reddit. Twitter was for a long time the public square of the internet, with a culture shaped by journalists, where news was shared and discussed and often influenced what media covered.
  • Limiting API access cuts both ways: it restricts the data available for analyzing circulating narratives and information voids, and it restricts the ability to reach people. It has already affected emergency responders, because sending automated tweets about extreme weather events, fires, and other emergencies can now be costly (see the sketch after this list).
  • Additionally, recent changes restrict even read access to Twitter to logged-in users, and unverified accounts can see at most 1,000 tweets per day. Some Twitter users will therefore miss potentially life-saving tweets from public health and emergency response accounts, and researchers’ ability to do even basic research on the platform will be seriously curtailed.
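
To illustrate what is being priced out, here is a minimal sketch of the kind of automated alert bot many emergency services have run on Twitter/X, written with the open-source tweepy library. The credentials and alert text are placeholders, and the exact setup of any real responder’s bot is an assumption; the point is that under the paid API tiers introduced in 2023, each call like this is metered against a purchased quota rather than being free.

```python
import tweepy

# Placeholder credentials; a real deployment would load these from a secrets store.
client = tweepy.Client(
    consumer_key="YOUR_KEY",
    consumer_secret="YOUR_SECRET",
    access_token="YOUR_TOKEN",
    access_token_secret="YOUR_TOKEN_SECRET",
)

def post_alert(event: str, area: str, action: str) -> None:
    # Publish one public alert; this write now counts against a paid tweet quota.
    client.create_tweet(text=f"ALERT: {event} in {area}. {action}")

post_alert("Flash flood warning", "Riverside District", "Move to higher ground now.")
```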

(see references at end of article)

2. The internet is becoming less private as technology companies monetize user data more extensively, merging data across their systems and tracking users through their devices and online behavior.

This is a public service announcement: if a website, product, or app is free, you are in fact the product. The data you give up by using it is what the company is really after.

For example, Google has changed its privacy policy so that users consent to Google using public data about them (such as mentions of a person on web pages or in news articles) to train its AI tools. Meta monetizes the data it collects across its platforms (WhatsApp, Facebook, Instagram, and now Threads) by selling the resulting insights to companies that pay for content promotion and the accompanying analytics tools.
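
As a rough illustration of what cross-platform merging enables, the toy Python sketch below joins two invented event logs on a shared device identifier. The services, fields, and IDs are all hypothetical, not any company’s actual pipeline; the point is that the combined profile is far more revealing, and more valuable to advertisers, than either log alone.

```python
# Invented logs from two hypothetical services owned by the same company.
messaging_log = [
    {"device_id": "d-42", "joined_group": "new parents support"},
]
photo_app_log = [
    {"device_id": "d-42", "followed_topic": "infant nutrition"},
    {"device_id": "d-42", "clicked_ad": "formula brand X"},
]

profiles: dict[str, dict] = {}
for event in messaging_log + photo_app_log:
    # Merge everything observed for the same device into one profile.
    profiles.setdefault(event["device_id"], {}).update(
        {k: v for k, v in event.items() if k != "device_id"}
    )

print(profiles["d-42"])
# {'joined_group': 'new parents support', 'followed_topic': 'infant nutrition',
#  'clicked_ad': 'formula brand X'}  ...a ready-made ad-targeting profile.
```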

This means that companies with large ad budgets will be able to access far more tailored information about users to help target their marketing efforts. This marketing muscle can easily outcompete a health authority’s small social media budget, making it difficult for credible, accurate health information to reach consumers.

Remember, companies need to sell products and will optimize their business processes to do so, regardless of whether those products are recommended by health authorities. Health authorities, like your Ministry of Health, have a primary duty to improve health outcomes, and their strategies are designed to support that; digital engagement is usually a very small component.

It’s likely that we need to rethink investment in digital engagement and analytics tools, because they will no longer be free to anyone, including health authorities. I think we need to invest in open-source collaborative tools, and in research and analytical methods that don’t rely on access to internet platform APIs.

(see references at end of article)

Regarding content moderation and quality of information online

3. Internet platforms have been downsizing their content moderation teams, making it harder to enforce their own policies.

Content moderation serves three purposes for internet platforms.

  • First, to ensure users adhere to platform policies and to keep the platform out of legal trouble (e.g., reducing hate speech, removing child sexual abuse material, preventing recruitment into terrorist organizations, stopping trademark infringement).
  • Second, to keep the platform an attractive place for users to spend time. Platforms earn money through ad sales to companies that want to reach their users, and a platform that doesn’t constantly add new users can start skewing toward a specific demographic, reducing its appeal to a wider potential audience. Facebook, for example, is no longer where young people usually spend their time.
  • Third, companies may withdraw their marketing budgets from platforms they find objectionable, that are unlikely to attract their target consumer groups, or that pose a reputational or regulatory risk to the business or its investors. Twitter recently experienced exactly this kind of advertiser exodus after changes to its policies and user base.

The lack of moderation enforcement has changed what content is loudest and most promoted on the platforms. Platforms have also abandoned some of the COVID-19-era policies that kept COVID-19 misinformation at bay. The long-term effects of these decisions on public trust in health authorities, treatments, and vaccines are not yet fully understood.

However, COVID-19 is still killing people, many people are struggling with long COVID, routine health programmes remain disrupted, and more treatments and vaccines keep becoming available (think of combination flu-COVID-RSV vaccines). All of this still spurs conversations on these topics, and the accompanying misinformation now goes unmoderated.

There’s evidence that internet platforms profit from the polarization on their platforms: the more outrageous the content, the more likely it is to be clicked and shared. The same goes for ads that contain misinformation or hate speech.
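
A deliberately oversimplified sketch of that mechanism: if a feed ranker optimizes only for predicted engagement, and outrage reliably earns the most clicks and shares, outrageous content rises to the top. The posts and scores below are invented for illustration; no real platform’s ranking code is this simple.

```python
# Invented posts with invented engagement predictions.
posts = [
    {"text": "New vaccine guidance published", "predicted_engagement": 0.02},
    {"text": "THEY are HIDING what's in the vaccine!!", "predicted_engagement": 0.11},
    {"text": "Local clinic extends opening hours", "predicted_engagement": 0.01},
]

# Rank purely by expected clicks and shares; accuracy never enters the objective.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
for post in feed:
    print(post["text"])
# The misleading, outrage-baiting post tops the feed.
```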

(see references at end of article)

4. Recent legal action and rulings in the US give internet platforms air cover to maintain their own content moderation policies without government oversight or meaningful collaboration with health authorities.

These legal actions have an extra chilling effect on the information environment because the major internet platforms are headquartered in the US.

It’s reported that Meta’s legal division has more staff than the whole of Twitter. If true, this points to the adversarial legal and litigation environment some internet platforms operate in. It also means this kind of legal capacity can more successfully fend off attempts at regulation by governments, such as those in Australia, Canada, India, or the EU.

In fact, some countries have already stepped in to regulate the space. In India, for example, influencers who endorse health and wellness products online must disclose their qualifications. This kind of consumer protection approach is a promising way to support more responsible discussion and promotion of health-related information.

(see references at end of article)

5. Scientific research into the information environment and misinformation is being chilled in the US

Organized groups in the US are advocating for freedom of expression, arguing that government attempts to address misinformation or coordinate with internet platforms amount to censorship. Many of the examples they cite come from the COVID-19 pandemic, and the arguments have clear implications for other health topics in the future.

Another side effect of recent lawsuits and legal judgments in the US is the chilling impact on mis- and disinformation researchers, many of whom have been targeted with doxxing and other online harassment. The academic centers they work in have faced US Freedom of Information Act requests filed in connection with these lawsuits or in support of these legal campaigns.

We have huge gaps in research on the information environment and on how infodemics and health misinformation affect communities and health systems. Such legal decisions in the US may ripple globally: stunted research, less funding for future work in this area, and continued vulnerability to infodemics and health misinformation.

What comes next…

We are entering a period in which analyzing online data will become even more expensive, especially as users split across more and more platforms, each with less reach. Just in the last few days, Instagram influencers have been reporting that Instagram is totally quiet because all its users are playing with Threads.

Ultimately, we should be advocating for universal and global policies on data access from internet platforms, ethical guidance for social listening (including on web scraping), and open-source analytics tools that don’t rely only on paid APIs.
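
As one example of what such open-source, API-independent tooling could look like, here is a minimal Python sketch that monitors a public RSS feed (the URL is a placeholder) and tallies health-related terms to surface what people may be asking about. Real social listening would need far more care around ethics, sampling, sentiment, and language coverage; this is only a sketch of the approach.

```python
from collections import Counter

import feedparser  # open-source RSS/Atom parser; no platform API or key needed

FEED_URL = "https://example-community-forum.org/health/feed.rss"  # placeholder
WATCH_TERMS = ["vaccine", "side effect", "long covid", "booster"]

feed = feedparser.parse(FEED_URL)
mentions = Counter()
for entry in feed.entries:
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    for term in WATCH_TERMS:
        if term in text:
            mentions[term] += 1

# A crude but platform-independent signal of circulating questions and concerns.
print(mentions.most_common())
```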

Having little or no access to social media data will have serious consequences for our ability to address people’s information needs when the next health crisis hits. And the platforms will realize, once again, that they have neither the expertise to address it nor the policies in place to work with health and medical experts to support their users.

We need to improve the usefulness, functionality, and design of social media analysis tools for health: how analytics are used; how sentiment, geography, and gender are analyzed; and how non-text content is analyzed for narratives. There’s plenty of work still to be done to help public health practitioners do their jobs.

There’s also a human cost to this changing information landscape. Globally, we have too many examples of health workers and public health professionals being subjected to harassment and online attacks for doing their jobs. Weaker content moderation isn’t going to protect them from further harassment.

But relying only on AI-backed, large-scale analytical tools ignores the bias and politics built into social listening tools and data access on the internet. There are other ways to observe and listen to public online spaces where people talk about health topics: borrowing from ethnographic approaches, and directly asking people what health information they need. We need to innovate more in this space too.

So the real question is: how can we help people working in public health understand this fast-changing information environment and the AI and tech business?

If we are to work for health-in-all-policies, we need to understand how the information environment affects the health system, public health, and health security.

I wrote this LinkedIn blog in the summer of 2023, after several months of disruptions in the policies and governance of internet platforms and in access to data. Follow me on LinkedIn if you’d like to read more of my commentaries.

Further reading

References for point #1:

References for point #2:

References for point #3:

References for point #4:

References for point #5:
