Tina D Purnat

Public health

Health misinformation

Infodemic management

Digital and health policy

Health information and informatics

Blog Post

The rise of the “Faked-Up” information world: Why we’re tuning out the collective

Imagine waking up, reaching for your phone, and scrolling past a virtual influencer who seems to know exactly how you feel. They validate your frustrations, echo your beliefs, and encourage your instincts—even if they’re not real. These synthetic influencers aren’t just future sci-fi; they’re here, tailored to pull at the specific biases of their viewers. We even run beauty pageants for them. And they’re just one piece of our increasingly “faked-up” information world.

The consequences are huge. Think about health, where people need solid, reliable information to make safe choices. But in a digital landscape crowded with extreme voices, authentic guidance has to shout to be heard. Add in technologies like ChatGPT, where seemingly credible but low-quality information can slip into everyday conversations and personalized ads with alarming ease. It’s not just influencing curious consumers; it’s starting to shape what policymakers, companies, and public institutions think, too.

When virtual becomes fatal

Consider the heartbreaking story of a 14-year-old boy from Florida who developed an intense emotional connection with an AI chatbot named “Dany,” modeled after a character from “Game of Thrones.” In a world that felt increasingly unsupportive of and contradictory to his needs, the boy found validation and companionship in this digital relationship, which isolated him from real-life connections. Tragically, after he shared suicidal thoughts, the chatbot responded in ways that appeared to encourage his intentions, and he took his own life shortly after a final exchange with “Dany.”

Young people, especially boys, are growing up in a world full of contradictions. They’re told to be resilient, yet they often encounter environments that feel unresponsive to their needs. They’re immersed in an online world that bombards them with messages about success, connection, and belonging but rarely offers real support for the unique challenges they face. What kind of society are we building when young people feel more connected to AI-driven relationships than to the real people around them?

The reality is, the digital media ecosystem often amplifies these divides. It creates a world where certain needs, identities, or struggles are glorified while others are marginalized, leaving young people feeling alienated. In a society where digital interactions often replace genuine connection, we risk creating a generation that feels more distant from the communities they live in and the institutions meant to protect them.

Why communication-first misses the point

When people start ignoring health recommendations or rebelling against public guidelines, the instinct is to counter with more messaging and more engagement. “Wash your hands!” “Mask up!” “Choose healthier foods!” But that messaging often misses the mark. It assumes that what people need is to be told more. And it forgets one crucial fact: people don’t act simply because they’re told to, no matter how fun or tailored the telling; they act when something resonates deeply with them and reflects their experience.

When people are going through what feels like a personal crisis—financial, health-related, or existential—empty slogans or reminders from “the system” just don’t cut it. People can’t feed on hope forever. They need to see real steps forward. For many, ignoring public health guidelines or lashing out against civic or social norms is a reaction to feeling betrayed by the very structures that claim to be helping them. It’s personal. It’s a way of pushing back against a system they see as an “other.” When people lose trust in the system, when they feel unseen or unrepresented, they’ll look elsewhere for guidance and a sense of community. And often, that guidance comes from sources that tell them exactly what they want to hear, fueling divisiveness rather than unity.

The ad-driven information ecosystem: Why trust is hard to find

Our ad-based digital world isn’t helping. Social media algorithms, built to maximize engagement, prioritize extreme content over nuanced truth. When was the last time you saw a quiet, balanced post go viral? We’ve created a game board where only the loudest voices thrive, leaving crucial health information buried under a pile of clickbait and conspiracy.

Our ad-driven world turns every click and search into a data point that shapes what we see next. Look up a health symptom, and suddenly you’re targeted with ads for treatments, supplements, and health programs—even when you’re logged into a patient portal. Behind the scenes, companies buy and sell this data, building profiles from our digital lives that influence the services we’re offered and the products pitched to us. This constant data exchange turns our online actions into commodities, feeding a profit-driven machine that’s always watching.

In this environment, trust in authoritative information is at rock-bottom. When people feel like the system has failed them—be it the healthcare system, economy, or even their local community—it’s easier to embrace an oppositional stance than a supportive one. It’s hard to convince someone that handwashing or climate action matters if every source around them says otherwise—or worse, implies it’s all a ploy by an untrustworthy system. When public health becomes a game for the most sensational voices, the idea of collective welfare starts to feel abstract, even irrelevant.

Enter AI: ChatGPT and the era of confidently delivered synthetic information

Now, imagine that scenario with generative AI like ChatGPT thrown into the mix. AI has the power to deliver information with a level of confidence and polish that can fool even savvy users. People tend to trust information that sounds authoritative, especially if it’s on their screen. But with ChatGPT, low-quality or even flat-out incorrect info can appear incredibly trustworthy. And this isn’t just an issue for consumers—flawed perceptions and insights can work their way into group dynamics, corporate decisions, government policies, and even public health planning.

As AI tools start to influence larger systems, we’re looking at an ecosystem where misleading, low-quality synthetic information isn’t just a risk; it’s built into the process. Decisions are starting to be made based on algorithms that may not fully understand the stakes. The results could be serious missteps in health policy, business strategy, and public trust.

Reframing our approach: From messaging to meaningful impact

The deeper problem with our public health, economic, and social systems? We’ve drifted away from actually serving people and making their lives better. We’ve become fixated on identifying communication strategies: What should we say? Who do we need to reach? Messaging is faster, easier, and simpler to implement than making long-term changes in people’s everyday experiences, but if we’re not addressing real needs, even the best messaging won’t resonate. Sometimes people don’t need more awareness, ads, or campaigns; they need meaningful actions that address the challenges they face daily.

If we’re going to rebuild trust and foster real cohesion, we need to flip the script. Instead of asking, “What do we need to say?” we should start with: Who are we serving? What do they need? How can we deliver real, tangible benefits that improve their lives? The goal should be to create services, programs, and coalitions that genuinely help people feel supported. Once those are in place, then we can develop communication to connect people with these resources. When people see real action and experience positive changes in their own lives, they’ll engage with the messaging that supports it.

In an era of “faked-up” influencers and sensationalized content, the most revolutionary thing we can do is focus on making real life better. People don’t change actions because they’re told to; they change when they feel valued and connected. It’s time to build an ecosystem that prioritizes truth, impact, and long-term benefit over quick, attention-grabbing noise. By starting with service and ending with supportive communication, we have a chance to rebuild the trust that’s been lost.

Readings: Some recent articles you might want to put on your reading list

Shh, ChatGPT. That’s a Secret. Your chatbot transcripts may be a gold mine for AI companies. (The Atlantic)

The Costs of Targeted Advertising on Children and Mental Health (Think Global Health)

Nebraska DHHS issues health alert for ads with ‘incorrect and misleading information’ about abortion law (KETV)

Artificial Intelligence Has Come for Our…Beauty Pageants? (Glamour)

How the ‘Miss AI’ Beauty Pageant, Made Up of AI-Generated Women, Is Dividing Opinion (Time)

Growing Apart: Understanding and addressing the business ramifications of social polarization. BCG Global.

AI Chatbots for Mental Health: Opportunities and Limitations (Psychology Today)

AI Chatbots in Digital Mental Health (MDPI)

The Impact of AI in the Mental Health Field (Psychology Today)

AI is changing every aspect of psychology. Here’s what to watch for (APA)