Danger Zones

Written by Nickwenscia

November 12, 2025

In many of our social work roles today, we’re seeing a clear shift: services are no longer just face-to-face; they increasingly happen over a screen or involve algorithmic support. For example, one paper discussed how telehealth services are now being integrated into social work practice for vulnerable groups, helping clients overcome distance, mobility, and access barriers (WJARR, 2025). At the same time, another article by Reamer (2023) warns that artificial intelligence (AI) in social work, while promising, brings significant ethical issues and emerging risks when we use it to assist vulnerable clients. Both show how exciting the possibilities are, but also how essential it is to recognize the “danger zones” that Bibbs et al. (2023) outlined for social work tech use.

Danger Zone 1: Surveillance and Loss of Autonomy 

In the world of telehealth for vulnerable populations, practitioners might use remote monitoring tools or apps to track client well-being (WJARR, 2025). That’s great for access, but it risks tipping into surveillance: clients might feel watched, controlled, or that their autonomy is compromised. This links directly to Bibbs et al.’s (2023) caution about how technology can shift power dynamics and erode client agency if we aren’t careful. The sense of being constantly monitored can unintentionally undermine the trust that is essential to an effective therapeutic relationship. To counter this, social workers must ensure transparency by clearly communicating how data are collected, stored, and used, which empowers clients to make informed choices about their participation in telehealth.

Danger Zone 2: Data Bias & Inequity 

When we bring AI into social work (Reamer, 2023), bias becomes a real concern. AI models may reflect historical inequities or misinterpret data from marginalized populations. If a vulnerable client is served by an algorithm that wasn’t designed with their context in mind, they could be misdiagnosed or underserved. This echoes Bibbs et al.’s (2023) warning that tech can “exacerbate existing risk” if we don’t actively guard against it. To address this, social workers and developers must collaborate to ensure that AI systems are tested for fairness and cultural sensitivity before being implemented in practice. Ongoing human oversight is also essential; technology should assist, not replace, the ethical judgement and empathy that define social work.

Danger Zone 3: Erosion of Boundaries 

With telehealth and AI, professional boundaries can blur: social workers texting outside session hours, clients reaching out through apps, or chatbots handling sensitive disclosures. Reamer (2023) highlights how AI tools and remote services require us to rethink how we maintain confidentiality, informed consent, and boundaries. Bibbs et al. (2023) flag this as a danger zone: when tech use outpaces our ethical guardrails, relationships can shift in unintended ways. Establishing clear communication guidelines and digital boundaries is crucial to maintaining trust and professionalism in these new spaces. Social workers must also receive ongoing training to navigate these evolving dynamics responsibly, ensuring that technology enhances care rather than compromises ethical standards.

Danger Zone 4: Dependency & Dehumanization 

One of the most profound risks comes when vulnerable clients become overly reliant on technology or AI because human connection is replaced or reduced. Telehealth may provide access, but fewer in-person options might lead to less rich relational work. Reamer (2023) emphasizes that AI isn’t a substitute for professional judgement or human relational depth. Bibbs et al. (2023) call this danger zone “dehumanization”: in digital practice we might lose the relational core of social work, especially for clients who already feel disconnected. In addition, WJARR (2025) points out that telehealth is a game changer for people who live in rural areas or low-income communities, or who live with disabilities. However, not everyone has the tech skills or the resources. When that happens, what’s meant to bridge gaps can actually make people feel even more disconnected because it simply isn’t accessible to them.

What do we do?

We need to integrate telehealth and AI thoughtfully and deliberately. For vulnerable groups, telehealth offers access and flexibility (WJARR, 2025); AI promises new predictive tools and resource efficiency (Reamer, 2023). However, we must build in measures to mitigate those danger zones. That means involving clients in design, analyzing how algorithms impact equity, setting clear policies for digital boundaries, ensuring human interaction remains central, and viewing technology as an aid, not a replacement, for the core values of our profession (Bibbs et al., 2023). Embracing telehealth and AI doesn’t mean ignoring risk. It means riding the wave while keeping one hand firmly on ethics, relational practice, and social justice.

 

References 

Bibbs, T. D., Wolfe-Taylor, S. N., Alston, N., Barron, M., Beaudoin, L., Bradley, S., & Young, J. A. (2023). Constructing the future of social work tech habits of mind: With the Ethical OS. Advances in Social Work, 23(1), 132–147.

Reamer, F. G. (2023). Artificial intelligence in social work: Emerging ethical issues. International Journal of Social Work Values and Ethics, 20(2), 52–71.

WJARR. (2025). The use of telehealth in social work practices for vulnerable groups. World Journal of Advanced Research and Reviews.  

3 Comments

  1. GarisonCole1108

    You explained the danger zones really well, and I agree with how you connected them to what we’re seeing in social work right now. Technology definitely has its benefits, especially in terms of access, but it also creates new risks that we can’t ignore. I appreciate how you highlighted that issues like surveillance, bias, and blurred boundaries can easily occur if we aren’t intentional. For me, the biggest takeaway is that tech should support our practice, not replace the human side of it. As social workers, we must remain vigilant, protect our clients, and ensure that we utilize these tools in ways that keep people safe, empowered, and respected.

  2. Srathvon1

    One thing that I was not able to square with one of the readings you mentioned was how we would involve clients in creating and analyzing the ethical framework that would guide our use of technology in the field of social work. I understand that it is an important thing to do; however, I think it glosses over that clients seeking the resources of talk therapy and casework are not usually in the most stable times of their lives.
    How do we keep to our ethical responsibilities of client care and add something else for them to consider? I would love to hear what thoughts you have about specific methods social workers can use to involve clients in this process without breaking other ethical commitments and/or boundaries of the client/provider relationship.

  3. Dr P

    Nice job.

    Dr P
