Using AI Responsibly

Written by cdoucet

September 13, 2025

The Center for Humane Technology (CHT) is really about ensuring technology works for people instead of taking advantage of them. To me, their mission is to push back against tech that drains our attention or harms our mental health, and to promote systems that protect dignity, democracy, and well-being. I relate to that because it doesn’t dismiss innovation; it emphasizes responsibility. That feels very similar to social work, where we hold hope for progress but also advocate for fairness and justice.

When it comes to social media, I think CHT is right on target. These platforms are built to keep us hooked by tapping into our brain chemistry: likes, outrage, and endless scrolling. I wasn’t shocked by how they explained the science, but it hit me how intentional those designs are. Over the past year, social media has gotten a little more open about things like misinformation, but the algorithms still run on attention and engagement. That means the core issues haven’t really gone away.

Their take on artificial intelligence also stood out. CHT points to five big challenges: disinformation, bias, job loss, weapons, and losing human control. The one that hits hardest for me is bias. AI learns from human data, and that means it can repeat and even amplify inequalities. In places like healthcare or the justice system, that can be devastating. I agree with CHT on this, but I also see the other side: if AI is developed carefully, it could help expand access to services and lighten the workload for social workers.

Listening to their podcast on rogue AI honestly left me uneasy. It’s easy to picture sci-fi robots gone wild, but what feels scarier is how much power algorithms already have in our everyday lives. They shape what we see online, how we get our news, and even how people vote (most of it happens without transparency). This isn’t some distant “what if”; it’s happening right now, and it shows why we need real accountability.

All of this ties back to social work. For clients, social media can add to anxiety or depression, and biased AI can shut them out of jobs, housing, or healthcare. For us as social workers, it means we need to stay aware, advocate for fair tech policies, and help people build media literacy. At the same time, I see potential for AI to reduce paperwork and improve access to care if it’s done responsibly.

Thinking about CHT’s work makes me realize that dignity, equity, and justice aren’t just principles we bring to our clients; they’re also values we should demand from the technology shaping our world.

2 Comments

  1. cbrown0227

    I like how you tied CHT’s mission back to social work because it really does connect. I agree with you that the real problem isn’t robots taking over but how algorithms already shape so much of our lives. The bias piece especially stood out to me because it affects people who are already vulnerable. I also like that you mentioned the positives of AI, since it’s not all bad and could actually make things easier if used the right way.

  2. Nramsey3

    Hey,

    I really enjoyed reading your post. I could relate to many of the points you shared. I loved how you connected CHT’s mission to social work values, such as dignity, fairness, and justice. That really stuck with me because I hadn’t thought of it in quite that way before, but you’re absolutely right: both fields push for change that centers people, not profits. You also raised a significant point about bias in AI. That one hits home for me too, especially knowing how much harm can come from systems that are supposed to be “neutral” but end up reinforcing discrimination. Like you, I also see the potential: if it’s handled responsibly, it could definitely help social workers by reducing administrative work and making support more efficient. Thanks again for sharing your post, it really got me thinking!
