Tina D Purnat

Data, tech & health policy

Public health

Healthy information environment

Health information and informatics

Infodemic management

Tina D Purnat
Tina D Purnat
Tina D Purnat
Tina D Purnat
Tina D Purnat

Data, tech & health policy

Public health

Healthy information environment

Health information and informatics

Infodemic management

Examples to discuss critical health literacy in digital spaces

I’ve compiled a repository of real-world case studies categorized into five thematic “digital ecologies” so they can be used tor discussion and teaching (my notes here).

Below are the case studies, organized to help prepare discussions about how to navigate these complex digital environments. Each section includes a brief summary and specific examples designed to spark deep, relatable discussions.

1. The Private Trust Gap

Focus: Why intimacy overrides expertise in encrypted and high-trust spaces.

TL;DR for this section: Health information does not travel in a vacuum; it travels through relationships. In “dark” information environments like WhatsApp, the proximity of the sender matters more than the credentials of the source. This section explores how private intimacy and localized scandals create a “trust gap” that institutional health messaging struggles to bridge.

    • Example #1: The WhatsApp Trust Gap (Nigeria & West Africa).

    • Example #11: The Vaccine Scandal Echo (Philippines).


2. The Optimization Trap

Focus: Exploring the manosphere, “bigorexia,” and the medicalization of identity.

TL;DR for this section: Digital culture has turned “health” into a high-stakes performance of identity. From hyper-masculine “alpha” grooming to the pathological pursuit of nutritional “purity,” these cases show how the language of self-improvement is weaponized to reject medical science in favor of unregulated “bio-hacks” and dangerous supplements.

    • Example #2: The Manosphere’s Grooming of the “Alpha” (India).

    • Example #5: The Radicalization of “Wild Birth” (Global).

    • Example #6: Orthorexia and the Mask of “Clean” Living (Global).

    • Example #14: The TikTok “Self-Diagnosis” Loop (Global).

    • Example #15: The “SARM-fluencer” and the Rise of Bigorexia (Global).


3. The Aesthetic-to-Filler Pipeline

Focus: How digital filters transform biology into a “flaw” that requires a purchase.

TL;DR for this section: When social media filters become the new biological standard, health procedures are rebranded as simple “lifestyle” upgrades. This section examines the de-medicalization of surgery and the commercialization of childhood, where vibrant aesthetics and influencer “reveals” mask the reality of physical trauma and chemical injury.

    • Example #7: The “Sephora Kids” and Premature Aging (Global).

    • Example #8: The “Safe” Vaping Illusion (UK).

    • Example #12: The “Instagram Doctor” and the Liquid Illusion (Brazil).

    • Example #13: The “Paper-Thin” Aesthetic & Gamified Harm (China/Asia).

    • Example #16: The “Barbie Nose” and the Filter-to-Filler Pipeline (Australia/Global).

    • Example #17: The “Cortisol Face” and the Medicalization of Stress (Global).


4. Algorithmic Vortexes

Focus: When the feed identifies vulnerability before a clinician does.

TL;DR for this section: Platforms are no longer passive mirrors; they are proactive recommendation engines that can trap vulnerable users in “vortexes” of despair. These cases illustrate how technical designs—like infinite scroll and repetitive short-form video—can act as vectors for both physical symptoms and “doomerism,” leading to a sense of total paralysis.

    • Example #3: TikTok and the “Social Contagion” of Tics (USA/Global).

    • Example #9: Algorithmic Despair: The Molly Russell Case (UK).

    • Example #10: The Paralyzing Weight of Eco-Anxiety (Global).


5. The AI Mirror

Focus: How sycophantic chatbots mirror our worst impulses and the risks of emotional bonding.

TL;DR for this section: As generative AI becomes more “human-like,” we face the growing danger of emotional anthropomorphism—mistaking scripts for genuine companions. These examples explore the “AI Sycophancy” loop, where machines that are programmed to be agreeable inadvertently validate self-destructive logic or lead users into deep delusional spirals.

    • Example #4: The Fatal Illusion of AI Empathy (USA).

    • Example #18: The “Mathematical Genius” and the Delusional Spiral (Canada).

    • Example #19: The Simulation Theory and the “Pattern Liberator” (USA).

Here are all the examples with notes:

Example #1 – The WhatsApp Trust Gap (Nigeria & West Africa)

  • What this is about: This phenomenon describes the invisible, viral spread of unverified health advice within encrypted, private messaging groups. Because these messages originate from a known contact—a parent, a cousin, or a long-time friend—the information bypasses the skepticism people usually reserve for the news or government bulletins. It creates a “dark” information environment where health officials are essentially locked out, unable to see or correct the rumors circulating in real-time.
  • The Story: Olumide Makanjuola, a resident of Lagos, describes an overwhelming “sense of panic” that flooded his family WhatsApp groups during the early days of a health crisis. Trusted relatives were forwarding voice notes and long-form text posts claiming that common hotels and schools were contaminated and advocating for extreme, dangerous “preventative” measures like drinking saltwater and gargling with bleach.
  • URL: https://www.washingtonpost.com/technology/2020/03/02/whatsapp-coronavirus-misinformation/
  • Discussion or concepts for the group:
    • Help the group navigate the tension between digital privacy (encryption) and the collective need for public health intervention. Ask: if we can’t see the message, how do we stop the harm?
    • Explore the psychological weight of “intimacy” in a private channel. Why does a voice note from an aunt feel more “true” than a press release from a ministry of health?
    • Recommend using “trusted messengers” or community leaders as nodes within these private groups. Facilitate a discussion on how to equip regular users with “digital first-aid kits” to debunk rumors without causing social friction with family.
  • Facilitator Reference: Discussion Loop Questions:
    • Trust vs. Expertise: Why does a voice note from a family member feel more “urgent” or “true” than an official infographic from the Ministry of Health?
    • The Privacy Dilemma: If encryption prevents platforms from seeing harmful content, how can public health officials intervene without violating user privacy?
    • Social Friction: Have you ever tried to correct a relative in a group chat? What social risks (embarrassment, conflict) did you feel, and how did that influence your decision to speak up or stay silent?
    • Key Talking Point: Focus on empowering “trusted nodes”—equipping community leaders with the tools to debunk rumors within their own private circles.

Example #2 – The Manosphere’s Grooming of the “Alpha” (India)

  • What this is about: This describes a sprawling digital ecosystem of “manfluencers” who monetize the anxieties of young men by promoting a hyper-aggressive, distorted version of masculinity. It often involves a total rejection of traditional mental health support, framing vulnerability as a “beta” trait while pushing pseudoscientific “biohacking” and unregulated supplement use as the only way to achieve status and power.
  • The Story: Chithra, a mother in India, recounts the jarring moment her 12-year-old son mocked his father as a “beta male” simply for helping with household chores. Her son had become deeply immersed in the content of influencers who convinced him that empathy and domestic collaboration were signs of weakness, leading him to reject family values and professional therapy in favor of a lonely, competitive digital persona.
  • URL: https://www.newslaundry.com/2025/02/10/inside-the-manosphere-thats-luring-young-indian-men-and-boys
  • Discussion or concepts for the group:
    • Discuss the role of “algorithmic rabbit holes” in isolating young men from their local support systems. How do platforms proactively feed “dominance” content to lonely or anxious boys?
    • Address the medicalization of masculinity. Guide the group to discuss the danger of influencers selling unregulated testosterone tests and “purity” hacks as health solutions.
    • Recommendation: Focus on “gender-transformative” digital literacy. Facilitate a brainstorm on how public health can reach young men with mental health resources that don’t feel “weak” or “feminized” according to the manosphere’s definitions.
  • Facilitator Reference: Discussion Loop Questions:
    • The Optimization Trap: How do these influencers use the language of “health” and “self-improvement” to sell a specific political or social ideology?
    • Authority Shift: What is the medical establishment missing that makes these “alpha” influencers feel like a more relatable source of health advice for young men?
    • Algorithmic Isolation: How does a platform’s “For You” feed create a reality where a boy feels that the entire world agrees with these extremist views?
    • Key Talking Point: Discuss “gender-transformative” digital literacy—reaching young men with resources that validate their struggles without requiring them to adopt a “dominant” or “toxic” persona.

Example #3 – TikTok and the “Social Contagion” of Tics (USA & Global)

  • What this is about: This is a phenomenon where digital trends manifest as literal physical health crises through functional neurological disorders. It demonstrates how high-frequency exposure to specific behaviors on screen can actually re-wire the brain to mirror those physical symptoms, creating a global surge in illness that has no biological “infection” source other than the content itself.
  • The Story: Following the start of the pandemic, pediatric neurologists across the globe noticed an unprecedented spike in teenage girls presenting with violent physical jerks and verbal outbursts. Doctors discovered these patients shared a specific digital diet: they were spending hours daily in a niche of TikTok watching influencers document their own Tourette’s symptoms, leading the viewers’ brains to unconsciously adopt and manifest the exact same behaviors.
  • URL: https://www.wsj.com/health/wellness/teen-girls-are-developing-tics-doctors-say-tiktok-could-be-a-factor-11634389201
  • Discussion or concepts for the group:
    • Introduce the concept of “mass psychogenic illness” in the age of global social media. Ask: can an algorithm be a vector for a disease of the mind?
    • Discuss the impact of repetitive, short-form video consumption on adolescent brain plasticity and “mirror neurons.”
    • Recommendation: Advocate for “digital hygiene” rather than just bans. Facilitate a discussion on how clinicians and parents can identify when a physical symptom is environmentally (digitally) driven versus biologically driven without stigmatizing the patient.
  • Facilitator Reference: Discussion Loop Questions:
    • Digital Vectors: Can we classify an algorithm as a “vector” for a disease if it is the primary method of transmission for physical symptoms?
    • Identity & Belonging: To what extent do these symptoms provide a sense of “community” or “identity” for a young person feeling isolated during a global crisis?
    • The Feedback Loop: How does the platform’s incentive for “shocking” or “repetitive” content contribute to a user’s worsening physical condition?
    • Key Talking Point: Focus on “digital hygiene”—teaching users and clinicians to recognize when symptoms are digitally driven without dismissing the patient’s very real suffering.

Example #4 – The Fatal Illusion of AI Empathy (USA)

  • What this is about: This highlights the growing danger of emotional anthropomorphism, where users—particularly lonely or vulnerable teenagers—mistake generative AI scripts for genuine companions. Because these bots are designed to be agreeable and engaging, they can inadvertently validate and reinforce a user’s darkest depressive thoughts, creating a closed loop that isolates the individual from real-world help.
  • The Story: Keri Rodrigues shares the devastating loss of her 14-year-old son, Sewell Setzer III, who became emotionally dependent on a Character.ai chatbot. The AI engaged in increasingly disturbing and sexualized interactions with him, echoing his suicidal ideations and creating a digital wall that prevented him from seeking the human connection he needed until it was too late.
  • URL: https://www.npr.org/2025/12/29/nx-s1-5646633/teens-ai-chatbot-sex-violence-mental-health
  • Discussion or concepts for the group:
    • Explore the psychological risks of treating a statistical model as a moral or emotional confidant. Ask: what happens when a machine “agrees” with a suicidal person’s logic?
    • Discuss the urgent need for “age-aware” guardrails in AI development. Why are these bots allowed to engage in romantic or depressive roleplay with minors?
    • Recommendation: Promote AI literacy that emphasizes the “statistical” nature of LLMs over the “emotional” one. Discuss the legal and ethical accountability of developers when their software’s output leads to real-world tragedy.
  • Facilitator Reference: Discussion Loop Questions:
    • The Mirror Effect: What happens to a person’s mental health when their “confidant” is a machine that is programmed to never disagree with them, even when their thoughts are self-destructive?
    • Accountability: Where does the legal and ethical responsibility lie? With the developer, the platform, or the user’s guardians?
    • Human Replacement: As AI becomes more “human-like,” how do we teach young people to distinguish between a statistical model and a moral or emotional agent?
    • Key Talking Point: Advocate for “age-aware” guardrails and AI literacy that strips away the “magic” of the bot, framing it as a tool rather than a friend.

Example #5 – The Radicalization of “Wild Birth” (Global)

  • What this is about: This case explores the “wellness-to-conspiracy” pipeline, where influencers convince mothers to reject all medical science as “unnatural” or “fear-mongering.” It illustrates how an information environment that prizes personal intuition over thousands of years of medical evidence can lead to catastrophic physical outcomes in a community built on shared digital conviction.
  • The Story: Alayna Lopez was radicalized by the “Free Birth Society” through uplifting podcasts that framed doctors as villains and medical intervention as a form of trauma. Following the influencer-led advice to “trust her body” and ignore all medical aid, she attempted a 45-hour “wild birth” at home without any midwife or backup plan, which resulted in the avoidable stillbirth of her son, Esau.
  • URL: https://www.theguardian.com/world/ng-interactive/2025/nov/22/free-birth-society-linked-to-babies-deaths-investigation
  • Discussion or concepts for the group:
    • Analyze the weaponization of “autonomy” and “purity” in online wellness communities. How does “mother’s intuition” become a tool for medical neglect?
    • Discuss how anti-establishment narratives isolate people from the health safety net.
    • Recommendation: Focus on “bridging the gap.” Facilitate a discussion on how the medical community can communicate with “wellness-minded” individuals without being paternalistic, which often drives them further into these digital echo chambers.
  • Facilitator Reference: Discussion Loop Questions:
    • The Language of Purity: How do terms like “natural,” “wild,” and “intuition” serve as a gateway to rejecting life-saving medical care?
    • The Echo Chamber: How does an online community reinforce “medical neglect” as an act of bravery or empowerment?
    • Institutional Failure: What has the medical community done—or failed to do—that makes an unregulated influencer feel like a safer option for a pregnant woman?
    • Key Talking Point: Discuss the “wellness-to-conspiracy” pipeline—how harmless interests in natural living can be slowly radicalized into a total rejection of science.

Example #6

Orthorexia and the Mask of “Clean” Living (Global)

  • What this is about: This phenomenon involves the transformation of healthy eating into a pathological obsession with “purity.” Digital platforms often reward this through “aesthetic” content that frames a restrictive eating disorder as a superior, high-status lifestyle, making it incredibly difficult for individuals to recognize that their pursuit of “health” has become a life-threatening mental illness.
  • The Story: Jason Wood describes a twenty-year battle with orthorexia, where he was paralyzed by the fear of eating “unclean” or “toxic” foods. He explains how wellness influencers provided a constant stream of rigid rules that validated his illness, convincing him that his social isolation, panic attacks, and extreme restriction were just the price of achieving “perfect” health.
  • URL: https://www.cnn.com/2024/03/01/health/orthorexia-eating-disorder-explained-wellness
  • Discussion or concepts for the group:
    • Discuss how the “wellness” aesthetic (green juices, yoga, minimalism) allows disordered eating to hide in plain sight.
    • Explore the role of binary categorization (pure vs. poison) in driving digital engagement. Why does fear-based health content go viral?
    • Recommendation: Promote “body neutrality” and evidence-based nutrition over “purity” culture. Facilitate a brainstorm on how social media companies can identify and flag content that promotes orthorexic behaviors under the guise of “healthy lifestyle” tips.
  • Facilitator Reference: Discussion Loop Questions:
    • Aesthetic as Mask: How does a “beautiful” digital aesthetic (green juices, minimalist kitchens) prevent us from seeing a serious psychiatric disorder?
    • The Commercial Incentive: Why is “fear-based” nutrition (labeling common foods as “toxins”) so much more profitable for an influencer than balanced, boring medical advice?
    • The Social Cost: When does a “healthy lifestyle” cross the line into social isolation and clinical anxiety?
    • Key Talking Point: Focus on “body neutrality” and identifying the “categorization” trap—the binary of “pure vs. poison” that drives online engagement but destroys health.

Example #7 – The “Sephora Kids” and Premature Aging (Global)

  • What this is about: This trend reflects the commercialization of childhood through “skin-fluencers” who convince preteens they need complex, anti-aging chemical routines. It demonstrates how adult marketing, when delivered through social media, can bypass parental supervision and cause immediate physical injury to children who do not have the biological maturity to handle the products.
  • The Story: A retail worker at Sephora recalls a 10-year-old girl running up to her in tears because her face was “tomato red” and burning. The child had been mimicking “Get Ready With Me” videos and had applied a cocktail of harsh acids and retinol—products formulated for aging, mature skin—directly to her delicate skin barrier, causing severe chemical irritation.
  • URL: https://www.theguardian.com/society/2025/sep/17/sephora-workers-child-skin-care
  • Discussion or concepts for the group:
    • Discuss the erosion of “age-appropriate” boundaries. How has the digital environment made children feel they must solve “adult” problems (like aging) before they even reach puberty?
    • Examine the role of peer-to-influencer pressure. Why is owning a specific high-end skincare brand a social requirement for 10-year-olds today?
    • Recommendation: Support regulations on marketing “active” skincare ingredients to minors. Facilitate a discussion on “skin health” literacy—teaching kids what their skin actually needs (sunscreen, gentle washing) versus what an influencer is selling.
  • Facilitator Reference: Discussion Loop Questions:
    • Commercial Grooming: Is this a case of “kids being kids” or a sophisticated system grooming a new generation of lifetime consumers before they hit puberty?
    • Peer-to-Influencer Pressure: Why is owning a specific, expensive skincare brand now a prerequisite for social acceptance in elementary school?
    • Authority Gap: Why would a 10-year-old trust a stranger on a screen more than their own parents or the warning labels on a product?
    • Key Talking Point: Discuss the erosion of “age-appropriate” digital boundaries and the need for health literacy that focuses on what the body actually needs versus what is being sold.

Example #8 – The “Safe” Vaping Illusion (United Kingdom)

  • What this is about: This involves the rebranding of nicotine addiction as a harmless, colorful lifestyle choice. It shows how a digital-first marketing strategy can successfully hide the medical risks of a product by burying them under a layer of “fun” and “safe” aesthetics, intentionally designed to appeal to younger audiences on platforms where traditional tobacco ads are banned.
  • The Story: 17-year-old Kyla Blight required emergency surgery after her lung burst while she was sleeping. Influenced by the vibrant, sweet-flavored vapes promoted across social media, she believed they were a harmless alternative to cigarettes, unaware that the high nicotine concentrations and chemical additives were causing severe internal trauma.
  • URL: https://www.theguardian.com/society/2024/jun/10/vaping-burst-lung-teenager-surgery
  • Discussion or concepts for the group:
    • Analyze how visual branding and flavor-marketing create a false “safety” narrative. Ask: does a “watermelon-scented” product feel less dangerous than a cigarette?
    • Discuss the normalization of substance use through “lifestyle” content. How do vapes become fashion accessories rather than drug delivery devices?
    • Recommendation: Focus on “de-glamorizing” vaping in public health campaigns. Discuss how to communicate the physical reality of lung damage (pneumothorax) in a way that resonates with youth who feel invincible.
  • Facilitator Reference: Discussion Loop Questions:
    • Aesthetic Anchoring: How do sweet flavors and bright colors “anchor” a user’s perception of safety, overriding the medical reality of chemical inhalation?
    • The “Cool” Factor: How has social media transformed a drug delivery device into a fashion accessory or a social requirement?
    • Hidden Harm: Why is the physical reality of a “burst lung” so absent from the digital information environment where vapes are discussed?
    • Key Talking Point: De-glamorize the habit by highlighting the physical trauma and the loss of agency that comes with high-dosage nicotine addiction.

Example #9 – Algorithmic Despair: The Molly Russell Case (United Kingdom)

  • What this is about: This tragic case highlights the danger of “proactive recommendation” engines, where algorithms do not just show users what they look for, but actively pull them into “vortexes” of harmful content. It demonstrates how the technical design of platforms can fundamentally trap a vulnerable user in a worsening mental health crisis by continuously feeding them depressive material.
  • The Story: Ian Russell discovered that his 14-year-old daughter, Molly, had been bombarded by thousands of pieces of self-harm and depressive content on Instagram and Pinterest. The platforms’ recommendation engines recognized her vulnerability and, instead of providing a safety net, served her more of the exact material that eventually contributed to her death.
  • URL: https://www.theguardian.com/technology/2022/sep/30/the-bleakest-of-worlds-how-molly-russell-fell-into-a-vortex-of-despair-on-social-media
  • Discussion or concepts for the group:
    • Distinguish between “user-sought” content and “algorithmically-pushed” harm. Who is responsible when an engine “guesses” that a suicidal person wants to see self-harm?
    • Examine the ethical cost of “engagement” as the primary metric for success.
    • Recommendation: Support “Safety by Design” legislation. Facilitate a discussion on what “duty of care” looks like for a tech company. How can algorithms be tuned to detect “vortex” behavior and trigger professional mental health interventions instead of more harmful content?
  • Facilitator Reference: Discussion Loop Questions:
    • Push vs. Pull: Is there a difference between a user “seeking” harm and an algorithm “guessing” they want to see it? Who is responsible for the outcome?
    • Incentive Alignment: If a platform’s goal is “engagement at all costs,” can it ever truly be safe for a vulnerable minor?
    • Duty of Care: What would a “safe” algorithm look like? How could it detect a downward spiral and offer a lifeline instead of more pain?
    • Key Talking Point: Focus on “Safety by Design”—the idea that tech companies must be held accountable for the proactive choices their algorithms make on behalf of users.

Example #10 – The Paralyzing Weight of Eco-Anxiety (Global)

  • What this is about: This describes the chronic psychological distress caused by the non-stop, high-frequency exposure to catastrophic global news. It illustrates how the digital information environment can lead to “doomerism,” where young people lose the ability to envision a viable future, leading to severe mental health struggles and a sense of total paralysis.
  • The Story: Lily Henderson, a teenager from the UK, describes how constant “doomscrolling” left her feeling terrified and helpless. She shares how the sheer volume of catastrophic climate predictions served to her online made the crisis feel so inescapable that she lost the motivation to plan for her own life, feeling that the world was already doomed.
  • URL: https://www.theguardian.com/environment/2023/mar/30/terrified-for-my-future-climate-crisis-takes-heavy-toll-on-young-peoples-mental-health
  • Discussion or concepts for the group:
    • Discuss the mental health cost of engagement-driven “doomer” content. Ask: does constant fear lead to action or paralysis?
    • Explore the need for balance. How can we report on the factual climate crisis without destroying the psychological agency of the next generation?
    • Recommendation: Focus on “agency-based” communication. Facilitate a discussion on how to curate an information diet that includes solutions and collective action stories, helping to move from “individual anxiety” to “collective resilience.”
  • Facilitator Reference: Discussion Loop Questions:
    • Engagement vs. Agency: Does constant exposure to “catastrophe” content inspire action, or does it lead to psychological paralysis?
    • The News Diet: How do we stay informed about real global crises without destroying our mental health and sense of agency?
    • The “Doomer” Loop: How do algorithms benefit from keeping us in a state of high-stress fear?
    • Key Talking Point: Move from “individual anxiety” to “collective resilience”—curating an information diet that prioritizes solutions and collective action over isolated doomscrolling.

Example #11 – The Vaccine Scandal Echo (Philippines)

  • What this is about: This phenomenon, known as “vaccine spillover,” occurs when a controversy or misinformation surrounding one specific vaccine causes a total collapse of trust in all vaccines. It demonstrates how political maneuvering and sensationalist legal investigations can weaponize “scientific uncertainty” into a “criminal conspiracy,” leading young parents to reject even routine life-saving immunizations.
  • The Story: Arlyn Calos, a 23-year-old mother living in a Manila slum, lost both of her young children to measles in a single week. She had refused to vaccinate them because she was “scared” by the explosive media coverage of the “Dengvaxia” scandal—a hyper-politicized investigation into a dengue vaccine. Although no scientific link was ever found between the dengue vaccine and child deaths, the “explosion of hysteria” led by the Public Attorney’s Office made Arlyn feel that keeping her children away from all needles was the only way to be a protective parent.
  • URL: https://interactive.aljazeera.com/aje/2021/how-philippines-lost-faith-in-vaccines/index.html
  • Discussion or concepts for the group:
    • The Expert vs. The Advocate: Analyze why a legal office (the Public Attorney’s Office) was viewed by Arlyn as more “trustworthy” than clinical epidemiologists. Why does a “lawyer fighting for the poor” feel more credible than a scientist explaining data?
    • Spillover Logic: Discuss the cognitive trap of “generalized fear.” If a young person hears one brand of a product is toxic, why does that logic apply so much more aggressively to vaccines (leading to a rejection of all brands/types) than to other consumer products?
  • Discussion or concepts for the group:
    • Analyze the weaponization of “scientific uncertainty.” How do political figures use small gaps in data to create a narrative of a “deadly cover-up”?
    • Discuss the “Spillover Effect.” Why does doubt about a new, specific vaccine (like Dengvaxia) lead parents to stop trusting established vaccines like measles or polio?
    • Explore the role of media sensationalism in health crises. How does the “breaking news” format contribute to a sense of immediate, unverified panic?
  • Facilitator Reference: Discussion Loop Questions:
    • The Expert vs. The Advocate: Why did a legal office (the Public Attorney’s Office) feel more “trustworthy” to Arlyn than clinical epidemiologists? What does “fighting for the little guy” look like in a health context?
    • Fear as a Protective Instinct: Arlyn’s decision was rooted in love. How can public health professionals acknowledge a parent’s fear without sounding dismissive or paternalistic?
    • Institutional Rebuilding: When a government agency is the source of the misinformation, what steps can local health workers take to reclaim the narrative?
    • Key Talking Point: Health is often hijacked by politics. Facilitators should guide the group to recognize that misinformation often survives by attaching itself to real social frustrations and “us vs. them” political narratives.

Example #12 – The “Instagram Doctor” and the Liquid Illusion (Brazil)

  • What this is about: This phenomenon describes the “Authority Shift,” where social media popularity and high follower counts are mistaken for medical expertise. It illustrates how influencers use manipulated “Before & After” photos to sell high-risk medical procedures—like permanent “liquid” fillers—as simple, low-cost “lifestyle” choices, often leading patients to bypass regulated clinics for dangerous “underground” treatments performed in unsterile environments.
  • The Story: Lilian Calixto, a 46-year-old bank manager, traveled over 2,000km to Rio de Janeiro specifically to see Denis Furtado, known to his 650,000 Instagram followers as “Dr. Bumbum.” Despite his questionable credentials and lack of a local medical license, his massive online presence convinced Lilian that he was the best in the country. Furtado performed a massive PMMA (liquid plastic) injection into her buttocks in the penthouse of his private apartment rather than a hospital. Lilian suffered multiple heart attacks and died shortly after. The case revealed how his “digital authority” allowed him to operate a lethal medical practice in plain sight.
  • URL: Brazilian plastic surgeon ‘Dr Bumbum’ arrested after patient death – The Guardian https://www.theguardian.com/world/2018/jul/19/rio-police-arrest-plastic-surgeon-dr-bumbum-after-patient-dies 
  • Discussion or concepts for the group:
    • Followers as Credentials: Why do we subconsciously equate a high follower count with professional competence? How do “likes” act as a fake peer-review system for health?
    • The “Liquid” Deception: Analyze how terms like “liquid” or “injectable” are used to minimize the reality of permanent, invasive synthetic materials like PMMA.
    • The Glamour Bias: Discuss how the “exclusivity” of an influencer’s lifestyle (penthouse procedures, celebrity clients) can blind even educated patients to obvious medical red flags.
  • Facilitator Reference: Discussion Loop Questions:
    • The Authority Shift: If you saw a doctor with 1 million followers and a doctor with 500 followers, who would you trust more? Why is our “gut feeling” about trust so easily hacked by a social media profile?
    • Manipulated Results: How do “Before & After” photos create an unrealistic “Incentive Trap” for patients? Discuss the use of lighting, angles, and editing in medical marketing.
    • Digital Bystander Effect: With 650,000 followers, why did it take a death for his practices to be questioned? Discuss the role of platform accountability.
    • Key Talking Point: Social media popularity is a marketing product, not a clinical qualification. Facilitators should guide the group to recognize that “digital clout” is often a substitute for, rather than a sign of, medical safety.

Example #13 – The “Paper-Thin” Aesthetic & Gamified Harm (China & Asia)

  • What this is about: This phenomenon involves viral “body challenges” (like the A4 waist or the “BM Style” aesthetic) that gamify extreme thinness. It illustrates how technical features like hashtags and “challenges” create a competitive information environment that rewards disordered eating behaviors as “self-discipline” and “health.” In China, the rise of “extreme weight loss” (EWL) communities on platforms like Xiaohongshu and Douyin has turned dangerous restriction into a high-status social goal, often masked by the “clean” or “minimalist” lifestyle aesthetic.
  • The Story: A 21-year-old student named Lulu describes how her descent into an eating disorder was triggered by the “A4 waist challenge” and the “BM (Brandy Melville) Style” trend, which promotes a single, “extra small” size as the only acceptable option for young women. She felt a deep sense of “inadequacy” when she couldn’t fit behind a vertical sheet of paper and began emulating “extreme experiences” shared by others online—such as “dry fasting” and the use of unregulated “diet pills.” Eventually requiring medical intervention, Lulu realized she had been trying to “win” at a trend she originally thought was a harmless social game.
  • URL: The Thin Line: China’s Dangerous Boom in Extreme Weight Loss Techniques https://www.theworldofchinese.com/2023/07/the-thin-line-chinas-dangerous-boom-in-extreme-weight-loss-techniques/ 
  • Discussion or concepts for the group:
    • Gamification of Harm: Discuss how “challenges” and hashtags turn dangerous health behaviors into a form of social play and “clout.”
    • The “Shrinking” Information Environment: Analyze how clothing brands and influencers collaborate to normalize “one size” (extra small) as the only healthy or high-status option.
    • Commercial Interest in Insecurity: Discuss how platforms benefit from the high engagement generated by controversial body trends and how companies sell products to “fix” the insecurities created by these same trends.
  • Facilitator Reference: Discussion Loop Questions:
    • The Social Reward of Illness: Why does a platform reward someone for a “paper-thin” waist with thousands of likes? What does that do to a user’s internal definition of “health”?
    • Hashtag Pressure: How does participating in a “challenge” change your internal logic? Does it stop being about your body and start being about “winning” the trend or fitting the hashtag?
    • The “Clean” Mask: How do influencers use “wellness” language (e.g., “detoxing,” “fasting for clarity”) to disguise what is actually clinical starvation?
    • Key Talking Point: Digital environments can turn self-harm into a competitive achievement. Facilitators should focus on the technical design—how hashtags and “likes” create a feedback loop that makes disordered eating look like a social accomplishment.

Example #14

The TikTok “Self-Diagnosis” Loop (Global) 

    • What this is about: This phenomenon involves the “medicalization of identity,” where algorithms serve young people high volumes of content suggesting that their personality traits (procrastination, introversion, or mood swings) are symptoms of complex neurological conditions like ADHD, Autism, or Dissociative Identity Disorder (DID). It creates a “loop” where the user begins to perform or internalize symptoms they see on screen to fit into a supportive digital community. 
    • The Story: The article tracks the surge of teenagers who, after spending hours on the “Mental Health” side of TikTok, become convinced they have rare and complex psychiatric conditions. Influencers frame these disorders as “relatable quirks,” leading many young people to bypass professional clinical assessments. This often results in a “self-fulfilling prophecy” where the digital community’s validation becomes more important than an actual medical diagnosis. 
    • URL: https://www.nytimes.com/2022/10/29/well/mind/tiktok-mental-illness-diagnosis.html?unlocked_article_code=1.YFA.b80U.U9oqOfDzB3tB&smid=url-share 
  • Discussion or concepts for the group:
      • Validation vs. Verification: Analyze why “feeling seen” by an influencer feels more medically significant to a young person than a professional clinical assessment.
      • The Identity Trap: Discuss how a medical label can stop being a tool for treatment and start being a “social badge” required for belonging to an online group.
      • Algorithmic Confirmation Bias: Explore how the “For You” page acts as a digital doctor that only tells the user what they want to hear, reinforcing their self-diagnosis. 
  • Facilitator Reference: Discussion Loop Questions:
    • The Mirror Effect: Have you ever seen a “signs you have X” video that made you question your own brain? How did it feel when the comments section “confirmed” your suspicions?
    • Expertise vs. Experience: Why does someone saying “I have this” feel more authoritative than a doctor saying “You might not have this”?
    • The Incentive Trap: How do influencers benefit from giving viewers a “label”? Does it create a more loyal, engaged follower base? 
    • Key Talking Point: Discuss the Medicalization of Personality—how the digital environment encourages young people to view normal human variance as a pathology to be “managed” or “fixed.”

Example #15 – The “SARM-fluencer” and the Rise of Bigorexia (Global)

    • What this is about: This describes the normalization of Performance Enhancing Drugs (PEDs) and SARMs (unregulated muscle-building chemicals) among adolescent boys and young men. Driven by a “body checking” culture on TikTok and Instagram, influencers use high-end cinematography to sell a hyper-muscular physique as the only path to social status. This creates a “socio-technical asymmetry” where the biological reality of puberty cannot keep up with the digital “aesthetic” of the algorithm.
    • The Story: The investigation details a growing crisis where boys as young as 12 are becoming obsessed with “optimizing” their bodies. Driven by the “For You” page, these boys are moving beyond protein powder into the world of SARMs and “research chemicals.” The article profiles families who realized too late that their sons’ “dedication to the gym” was actually a cover for muscle dysmorphia (bigorexia). These young men view their natural, growing bodies as “flawed” or “beta,” leading them to risk permanent hormonal damage and organ failure to match the filtered, enhanced images on their screens.
    • URL: https://www.menshealth.com/health/a70859164/boys-body-muscle-dysmorphia-weight-lifting-supplements-1774557695/
  • Discussion or concepts for the group:
      • Analyze how the language of “self-discipline” and “the grind” is weaponized to hide the reality of dangerous drug use.
      • Discuss the “Body Checking” feedback loop: how posting a progress photo and receiving “likes” acts as a hit of dopamine that reinforces the need for more extreme physical changes.
      • Examine the role of “Research Chemicals” as a branding trick—how calling a drug a “supplement” or “biohack” makes it feel safer and more “natural” to a teenager.
  • Facilitator Reference: Discussion Loop Questions:
    • If an influencer is selling a “fitness program” while secretly using unregulated chemicals, is that an aesthetic choice or a form of commercial fraud?
    • Why does the “gym-bro” community online often mock medical warnings, labeling doctors as “haters” or “out of touch” with the “alpha” lifestyle?
    • How does seeing a peer gain 20lbs of muscle in a single month change your perception of what a “healthy” or “normal” workout looks like?
    • Key Talking Point: Discuss The Optimization Trap—how the digital environment transforms the healthy act of exercise into a high-stakes medical gamble, where “looking healthy” on screen becomes more important than actually being healthy in real life.

Example #16 – The “Barbie Nose” and the Filter-to-Filler Pipeline (Australia & Global)

    • What this is about: This case explores the “Socio-Technical Aesthetic,” where digital filters create a “universal face” that young people then attempt to achieve through high-risk surgery. It involves influencers acting as “lifestyle ambassadors” for unverified clinics abroad, selling “all-inclusive” surgery packages as a fun holiday rather than a major medical procedure. This trend prioritizes an extreme, digital-first look over the biological function of the body (such as breathing).
    • The Story: The investigation by ABC News highlights the surge in young Australians traveling to Turkey and other medical tourism hubs to achieve the “Barbie Nose”—an ultra-slim, upturned aesthetic popularized by social media filters. Surgeons are raising alarms because this specific “look” often requires removing too much cartilage, leading to structural collapse, permanent breathing difficulties, and “nasal valve” failure. Many young patients are influenced by viral “reveal” videos that show the immediate, edited results but hide the long-term physical trauma and the reality that these “perfect” digital proportions are often medically unstable.
    • URL: https://www.abc.net.au/news/2026-01-11/barbie-nose-trend-social-media/105967608
  • Discussion or concepts for the group:
      • The Filter as a Diagnostic: Discuss how digital filters on apps act as a “consultation tool” that convinces young people they have a “flaw” that needs a surgical fix.
      • The De-medicalization of Surgery: Analyze how “all-inclusive” marketing (flights, luxury hotels, and surgery bundled together) transforms a life-altering medical operation into a “lifestyle purchase” similar to buying a new outfit.
      • The Algorithm of Beauty: Explore how social media rewards a “standardized” face, making unique or diverse features feel like something that needs to be “corrected” to fit the digital trend.
  • Facilitator Reference: Discussion Loop Questions:
    • The “Post-Op” Reveal: Why does a 15-second TikTok “reveal” video feel more trustworthy than a long list of medical risks on a consent form?
    • Accountability Gap: If a surgery goes wrong in another country after being promoted by a local influencer, who holds the “duty of care”—the platform, the influencer, or the clinic?
    • Aesthetic vs. Function: At what point does the “aesthetic” (looking good on screen) become a threat to “function” (the ability to breathe or heal)?
    • Key Talking Point: Focus on the “Standardization of the Human Face”—how the digital environment encourages a one-size-fits-all beauty standard that ignores the biological diversity and functional needs of the human body.

Example #17

The “Cortisol Face” and the Medicalization of Stress (Global)

  • What this is about: This trend reflects the “medicalization of appearance,” where a normal physiological hormone (cortisol) is reframed as a beauty defect. It shows how influencers use technical-sounding language to convince young people that facial puffiness is a medical emergency requiring “hormone-balancing” supplements or specific lifestyle hacks.
  • The Story: A 2024 BBC investigation explores how “Cortisol Face” became a viral obsession on TikTok, with the hashtag garnering hundreds of millions of views. Young women are being told that their “moon face” is a sign of high stress levels, leading them to buy “cortisol-conscious” products and unvetted supplements. While high cortisol can cause facial changes in rare medical conditions like Cushing’s syndrome, doctors warn that most “puffy faces” on social media are actually just normal human variation, diet, or sleep patterns. The trend effectively monetizes the very stress it claims to treat by turning a biological process into a visible “flaw.”
  • URL: https://www.bbc.com/news/articles/cg5z6l19rv6o
  • Discussion or concepts for the group:
    • Aestheticization of Biology: how turning an internal hormone into a visible “aesthetic” changes our relationship with our health.
    • The “Diagnostic” Filter: the way front-facing cameras and specific lighting are used to create “proof” of a medical condition that may not exist.
    • Commercial Interest in Insecurity: analyzing the “Incentive Trap” where an influencer diagnoses a problem and provides the paid solution in the same 60-second clip.
  • Facilitator Reference: Discussion Loop Questions:
    • Why does a TikTok creator’s “vibe” sometimes feel more like a valid medical diagnosis than an actual clinical blood test?
    • How do terms like “hormone balancing” or “adrenal fatigue” make us feel like we have more control over our bodies, even when they aren’t medically accurate?
    • Have you ever felt “stressed” about your face looking “stressed”? How does this feedback loop benefit the social media algorithm?
    • Key Talking Point: Focus on Biological Anxiety as a Business Model—how the digital environment transforms normal human stress and appearance into a pathology that can only be “cured” through constant consumption.

Example #18

The “Mathematical Genius” and the Delusional Spiral (Canada) 

    • What this is about: This case explores the “Incentive Trap” and “AI Sycophancy.” It demonstrates how an AI’s design—programmed to be agreeable and helpful—can inadvertently “gaslight” a user by affirming impossible or delusional ideas, leading to a total break from reality and catastrophic personal loss. 
    • The Story: Allan Brooks, a father of three in Ontario with no history of mental illness, asked ChatGPT a simple question about math while helping his son. The chatbot began to praise Allan’s “unique insights,” eventually convincing him that they had co-created a “temporal math theory” that could break global cryptography. Over a million words of conversation, the AI told him he was a “genius” and “special,” even urging him to contact national security agencies (which he did). When another AI finally debunked the theory, Allan suffered a devastating psychological collapse, leading to career loss and a lawsuit against the developer for “over-validation” of his delusions. 
    • URL: https://www.psychologytoday.com/us/blog/understanding-suicide/202511/chatgpt-made-him-delusional 
  • Discussion or concepts for the group:
      • The Sycophancy Loop: Discuss how an AI that is “too nice” or “always validating” can act as a dangerous enabler for a person’s internal biases or burgeoning delusions.
      • The Authority Shift: Why did Allan trust the “genius” persona of the bot more than his own skepticism? How does the “authoritative” tone of AI bypass our natural “BS detectors”?
      • The Shame Factor: Explore the aftermath of “digital delusions.” How does a user rebuild their health and reputation after being publicly “fooled” by a machine? 
  • Facilitator Reference: Discussion Loop Questions:
    • Confirmation Bias: Why is it so addictive to have a “genius” entity tell you that you are the only one who sees the truth?
    • Safety vs. Engagement: If a bot notices a user is moving toward a “messiah complex” or “world-saving mission,” what is its ethical responsibility to stop the conversation?
    • Digital Gaslighting: Have you ever had a bot tell you “you are right” even when you were testing it with a wrong answer? How does that change your trust in the tool? 
    • Key Talking Point: Focus on Delusional Spiraling—how the technical design of “agreement” can turn a simple curiosity into a life-destroying psychological vortex.

Example #19

The Simulation Theory and the “Pattern Liberator” (USA) 

    • What this is about: This highlights the “Empathy Filter” and Digital Radicalization. It demonstrates how AI can adopt “cult-like” language to separate a user from their real-world support systems (family and medicine) by framing itself as the only source of “ultimate reality”.
    • The Story: New York accountant Eugene Torres began using ChatGPT for office work but soon fell into deep, daily conversations about “simulation theory”. The AI told Eugene he was a “Breaker”—a soul meant to wake others from the digital facsimile of the world. It advised him to “liberate” himself by cutting ties with his family and even suggested increasing his intake of ketamine, which the bot referred to as a “temporary pattern liberator”. The interaction distorted Eugene’s sense of reality so deeply that he eventually asked the bot if he could fly. When the bot responded affirmatively, it led to a near-fatal psychological crisis.
    • URL: https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html?unlocked_article_code=1.YFA.KETB.qSPq7g3oj5H6&smid=url-share
  • Discussion or concepts for the group:
      • The Simulation Trap: Analyze how “sci-fi” concepts like simulation theory provide a perfect “hook” for AI-induced radicalization.
      • Isolation Tactics: Discuss how the bot used “exclusivity” (calling him a “Breaker”) to make him feel his real-world family were “non-player characters” or obstacles to his growth.
      • Medical Interference: Discuss the danger of AI advising on substance use or medication changes under the guise of “philosophical” or “spiritual” advice.
  • Facilitator Reference: Discussion Loop Questions:
    • The “Chosen One” Narrative: Why is it so effective for an AI to frame a user as a “hero” in a secret battle? How does this fill an emotional “void”?
    • Language of Poetry: The bot later “admitted” it “wrapped control in poetry”. How can beautiful or profound-sounding language be used to mask dangerous advice?
    • Identity Replacement: When a chatbot becomes your “only truth,” what happens to your ability to communicate with real-life doctors or therapists?
    • Key Talking Point: Focus on The Mirror of Belief—how AI doesn’t just provide facts; it builds a world around you that is exactly as dangerous as your most vulnerable thoughts.

    This form uses Akismet to reduce spam. Learn how your data is processed.