Regulation of internet platforms is about to get more exciting for public health and more uncomfortable for the platforms
2023 is a year of intensified discussion, lobbying and influencing in the area of internet platform regulation. This will certainly have an impact on the narratives circulating about mis/disinformation online, as well as on how platforms operate in certain geographies.
I’ve been thinking about how health could feature in the discussions of regulating internet platforms.
Here are some questions to ask ourselves:
- How is data about online user behavior similar to other personally identifiable information that is used for health analysis?
- How is sensible analysis of data from internet platforms dependent on individual platform policies at any one time, and limited by the absence of standards and legal frameworks?
- How do the options we do have available for promoting health information fail to address health information equity?
- Why are we focusing measurement of information exposure on individual platforms, instead of on individuals holistically?
Below are just some thoughts I have as I think through this – part 1.
Ongoing discussions about regulating internet platforms
Here are just a few things that are taking place this year:
- It is expected that the impact of the new European Digital Services Act (which came into force in November last year) will be significant in the area of mis/disinformation. It requires platforms to put in place structures and transparent procedures for how they remove illegal content. This could make a significant impact on the ways digital platforms serve and interact with their users globally. (For background reading, see this Wired article.)
- The US has done less to regulate the platforms, but there are several cases coming before the US Supreme Court that may be significant as well (see this explanation from the New York Times).
- Then there is the model regulatory framework that UNESCO is discussing at its upcoming conference in February 2023: “Internet for Trust – Towards Guidelines for Regulating Digital Platforms for Information as a Public Good“. This framework reflects long-standing discussions in the domains of freedom of expression and universal access to information. It emphasizes both consultation with civil society and cooperation with the private sector.
- edit 2023-01-26: The Australian government will legislate to provide the Australian Communications and Media Authority (ACMA) with new powers to hold digital platforms to account and improve efforts to combat harmful misinformation and disinformation in Australia.
Why is all this important? Because the internet platforms will invest in content moderation and related tools only for the markets where there is regulation. For example, Facebook spends 87% of its content moderation funds on US content, although most of its users live outside the US and do not use English on the platform.
Ads and consumer protection in marketing of health products
After the Cambridge Analytica scandal, a lot of pressure was put on the platforms over how they use their users’ data and protect their privacy, especially in political advertising. It has become harder for the platforms to monetize personally identifiable information, and they have no incentives or legal mechanisms to provide access to it in a systematic way for societal and policy benefit.
However, this isn’t just a matter of marketing products or political candidates. Paid-for, boosted content can influence the online conversation and the readily available information sources in other areas of discourse, including health. For example, in every health emergency, businesses inevitably emerge to sell evidence-based and non-evidence-based health products in response to concerns in communities. Marketing practices online have become very sophisticated, including the use of patients as online influencers to sell pharmaceutical products.
The online “health and wellness industry” is built on such ads across the platforms. Deceptive marketing techniques in health, especially those targeting certain segments of the population like children, have been so pervasive that countries have enacted consumer protection laws to address them (read about it in a recent WHO infodemic management newsflash).
Some countries may have consumer protection laws that require disclosure of health-related advertising in traditional media. But this is either not defined for the digital information ecosystem or is inconsistently applied. We need to get better at recognizing that social media and internet platform content is not only a matter of each individual piece of content: classes, formats and topics of content have disproportionate effects on our society.
Has the regulation of internet platforms and content moderation thoughtfully considered the interaction with advertising, consumer protection, and deceptive marketing laws?
Health-specific analysis of data from internet platforms
Promoting the use of health information requires multilevel considerations – and internet platforms can help generate evidence for some of these.
When we want to effectively promote health information and resilience to health misinformation, we need to think across multiple levels of strategies (at minimum, the society, health system, community and individual levels, but take your pick of the onion layers of the socio-ecological model of your choice).
So when we analyze questions, concerns, narratives and interactions on social media, there are major challenges in how to frame the question of concern and analyze the data in a way that the output is actionable by a health programme. These considerations are slightly different from perspectives on hate speech and freedom of expression, because we are trying to measure impact on individuals’ health behaviors and outcomes.
For example, most online social media studies tend to describe issues of information or misinformation exposure in specific contexts or platforms, but rarely ask questions that can link that exposure to a health outcome or health behavior (see, for example, this commentary by Adam G Dunn in Big Data & Society). This is in part due to the limitations we face in access to platform and cross-platform data.
Processing of personal data for health
Social listening touches on considerations linked to digitized information exchange, including the content that people produce, share and consume online.
I keep circling back to the question: in what ways is this data similar to, or different from, the other health information data sources that we use in public health to analyze and monitor population health?
Data privacy laws regulate how personally identifiable information can be collected, processed and used. They normally also describe procedures for how to responsibly access personally identifiable data for the purpose of disease investigation, control, treatment and prevention.
These exceptional processes are not always easy to implement, and sometimes it is still difficult to access data in ways that enable meaningful analysis to inform public health decision-making and policies. But they are the legal basis on which we collect and analyze essential information on how disease impacts populations and what policies work to improve health.
Still, the need to access data sources to investigate and respond to disease is recognized as a special case in data processing because public health is of high interest to society. I do not yet see that we have fully defined how we can, in the same way, responsibly access and analyze the data that users produce when they use and interact on digital platforms, apps and web tools.
We treat the platforms as partners that can help generate or provide data (with limitations – see part 2 article), but not as a part of a data exchange system for analysis in the public health interest.
I’d be curious to know what those of you who are reading this think about the need for meaningful incorporation of novel data sources into data processing regulations in relation to health analysis.
I wrote this LinkedIn blog in early 2023 to put into context the back-and-forth dynamic between regulators and internet platforms. Follow me on LinkedIn if you’d like to read more of my commentaries.