Risk Zone 1: Truth, Disinformation, and Propaganda
This risk zone focuses on how information can be manipulated or misrepresented through technology. The Center for Humane Technology (CHT) explains that social media platforms are designed primarily to maximize user engagement, often without regard for the accuracy of the content being shared. This erodes trust, making it increasingly difficult for users to know what information is factual, a distinction that was once easier to draw.
It Was the Damn Phones also highlights this risk, showing how children and teens can be exposed to misleading or manipulative content on social media. Social workers need to recognize that information obtained through technology is not neutral; it can carry biases and inaccuracies. By staying informed and critically evaluating sources, social workers can educate parents and teens about the risks of misinformation.
Because misinformation spreads quickly online, social workers need to stay educated and involved, continuing to learn how digital content is shaped and helping communities develop the skills to navigate technology thoughtfully and responsibly.
Risk Zone 2: Addiction and the Dopamine Economy
This zone stood out to me because it’s something that affects all of us, not just kids. In Perusall, we discussed the video It Was the Damn Phones, which showed how quickly smartphones became part of kids’ lives before parents fully understood the consequences. Now, with social media platforms and AI chatbots becoming central to how teens connect, social workers are having to step in and help families understand the mental health risks tied to constant tech use.
Rauch’s article takes this even further by outlining just how significant these effects are. The research makes it clear that technology overuse is harming teen mental health in very real ways. What I found important is Rauch’s argument that we don’t need to wait for “unanimous consensus” among researchers before taking action. The harm is already visible, and social workers can play a role in supporting families, educating parents, and advocating for healthier tech habits now rather than later.
Risk Zone 3: Economic and Asset Inequalities
Sieck et al. (2021) highlight how digital literacy is now essential in almost every aspect of modern life and how access to technology can directly impact health outcomes. Those without reliable access face a widening gap, as the lack of digital resources deepens existing social, economic, and health inequalities. Similarly, Sanders and Scanlon (2021) argue that internet access should be considered a basic human right and treated as a public utility. They note that low-income individuals and minorities are disproportionately affected by limited access, creating a clear digital divide. As social workers, we have a role in advocating for more affordable and equitable access to technology and in educating policymakers about its critical importance for health, education, and overall social participation.
Another concern within this zone is the increasing integration of AI in the workplace. In CHT’s AI in Society, they discuss how, as more companies automate processes, more workers face layoffs, which affects both their financial stability and their sense of purpose. Social workers can intervene by supporting these workers with career counseling, retraining programs, and educational opportunities that prepare them for fields less likely to be affected by AI. We can also address the psychological impacts of unemployment, such as stress, loss of self-esteem, and identity challenges, by connecting workers to mental health services and creating opportunities for them to connect with peers who have been in similar situations.
Risk Zone 5: Surveillance State
In this zone, CHT highlights how social media platforms collect vast amounts of user data, primarily to increase engagement. This data can be used to manipulate user behavior through features like infinite scrolling, which CHT notes can negatively affect the brain. The podcast Could AI Go Rogue also emphasizes that AI technologies collect personal data, raising serious privacy concerns. Users are advised not to share sensitive information with AI chatbots, as this data could be misused. Additionally, if AI were to become autonomous, it could potentially leverage surveillance tools, such as cameras on phones or laptops, to monitor people in real time.
Social workers can play a critical role in addressing these risks. They can educate clients about the ways personal data is collected, used, and potentially exploited on social media and through AI. Beyond individual education, social workers can advocate for ethical technology policies and push for regulations that protect user privacy, ensuring that digital spaces remain safe and accountable.


Hi Allison! I enjoyed reading your thoughts. Your post showed how all four zones overlap more than we usually think. Misinformation spreads so easily partly because platforms are designed to keep us hooked. I also liked your point about not waiting for unanimous consensus before addressing the mental health concerns tied to technology use, because by the time the research catches up, a lot of the damage has already happened. One podcast we watched discussed cases where teens developed emotional attachments to AI bots, which contributed to suicidal ideation and even suicide. There are many layers of concern when it comes to mental health and technology.