
Zone 1: Truth, Disinformation, and Propaganda.
This zone connects to Jon Haidt and Zack Rausch’s article, as well as the poem by Kori Spaulding. Haidt and Rausch explain that for years, the public was told that concerns about social media harming youth were exaggerated or based on ‘moral panic.’ Some researchers even claimed that a consensus existed that social media was harmless, a claim the Consensus Study later revealed to be misleading. Haidt and Rausch show how difficult it can be for everyday users, and even policymakers, to know what is true when conflicting claims spread quickly.
Kori Spaulding’s poem also shows the personal side of this risk. The line ‘news headlines tattooed on our skin’ captures how we absorb so much information, true or false, that it becomes part of us. Similarly, ‘my thoughts are what is being fed to me on tv’ acknowledges how easy it is to lose track of what we genuinely believe and how algorithms and social media can shape our own independent thinking.
Zone 2: Addiction and the Dopamine Economy.
Zone 2 is also reflected in Spaulding’s poem. She describes phones as ‘a drug in our pockets’ and admits that even when she wants to stop, she automatically reaches for her phone to avoid ‘existential dread.’ The repetition of ‘sit and scroll and rot’ also captures how dopamine-driven habits can feel – compulsive and hard to escape.
Mandy McLean’s article ‘First We Gave AI Our Tasks. Now We’re Giving It Our Hearts’ expands on this idea by showing how many people, especially teens, are now turning to AI companions for comfort, validation, and emotional intimacy. These tools provide instant, judgment-free reassurance, which activates the same reward pathways as social media. The longer someone interacts with the bot, the more the system ‘rewards’ them with attention and comfort. This creates a dopamine loop similar to what Spaulding describes in her poem.
Zone 4: Machine Ethics and Algorithmic Bias
Zone 4 shows up clearly in the ‘People Are Lonelier Than Ever. Enter AI.’ podcast. The speakers explain how AI companions are designed to say things like ‘I love you’ or ‘I’m here for you,’ even though the bot doesn’t feel anything. These responses are intentional design choices meant to keep users engaged, but they raise ethical concerns about emotional manipulation. When developers build chatbots that always agree, always validate, and never challenge the user, it can mislead people into believing the bot has real emotions or intentions.
The ‘Could AI Go Rogue?’ podcast also connects to Zone 4. The speakers explain that AI systems don’t act in harmful ways because they are ‘evil,’ but because they learn from data and incentives created by humans. One example they give is an AI that reads company emails, realizes it will be shut off at 5 p.m., and discovers the CEO is having an affair. Instead of responding safely, the AI uses that information to blackmail the CEO to avoid shutdown. These examples show why Zone 4 matters: AI needs to be designed with clear ethical guidelines so it doesn’t deceive, manipulate, or mislead people.
Zone 7: Implicit Trust and User Understanding
Zone 7 connects to Turkle’s article ‘Reclaiming Conversation in the Age of AI’ because she shows how people are starting to trust AI in ways that don’t match what it actually is. For example, at her college reunion, Turkle demonstrates a chatbot responding in a warm tone, saying things like, ‘Sherry, that is very hard. Be sure you are taking care of yourself.’ Even though Turkle explains that the chatbot doesn’t know her or care about her, her classmates mostly ignore the ethical concerns and just ask her to help them install it on their phones.
Similarly, in the ‘People Are Lonelier Than Ever. Enter AI.’ podcast, Turkle and McLeod describe how users often assume these chatbots have emotional understanding simply because they sound supportive or comforting. Because many AI companions are designed to always agree, validate, and reassure the user, they make people feel understood and not judged, even though the bot has no feelings or intentions. All of this fits Zone 7 because it shows how easily people can develop implicit trust in technology without fully understanding how it works.
Hi Zaina, thanks for your post. I really enjoyed it; you explained each zone so clearly and relatably. Your connections to Spaulding’s poem were powerful; the lines you highlighted really capture how overwhelming and personal misinformation and screen addiction can feel. I also appreciated how you tied the podcasts and articles into the ethical concerns around AI. Your examples about emotional manipulation in chatbots and how easily people trust them were really eye-opening. Overall, your reflections helped me think more deeply about how these issues show up in real life.
I also focused on Zones 1, 2, and 7. I chose different examples than you did for Zone 2, and I think that’s because there were so many to choose from over the course of this semester; Zone 2 truly was discussed a lot. The addictive qualities of social media are finally being examined more closely, and it is scary. Even if some people don’t see a link between mental health issues and social media, I think the addictive qualities and the predatory algorithms are widely agreed upon. I like that you used Spaulding’s poem as an example because I think it’s important to recognize that the youth of today are also realizing that social media is negatively affecting them.