The Ultimate Question: Human Connection or ChatGPT?

Written by khouston98

September 10, 2025

In my own words, I would describe the Center for Humane Technology (CHT) as an organization whose mission is to create safeguards for technology use and to realign the incentives that currently drive it. I support their vision because of how rapidly technology and artificial intelligence are expanding in today’s world. I appreciate that CHT examines the psychological effects of technology on individuals and develops intervention strategies to address these issues, with the ultimate goal of minimizing harm.

I believe CHT is accurate in its analysis of the harms of social media on both adults and youth. Before engaging with their work, I had not thought deeply about how tech companies intentionally design addictive features, especially those targeted toward youth, to encourage endless scrolling. That realization was eye-opening, and I now notice features such as TikTok’s reminders to take breaks, which seem to be part of efforts toward healthier tech use. However, with the rise of AI on social platforms, many users have grown distrustful of how their privacy and data are protected. The targeted advertisements that appear based on conversations or searches further undermine people’s confidence in these platforms.

Personally, I believe AI has caused more harm than good due to its lack of accountability. As the readings mentioned, AI was created to flatter and validate us by giving us what we want to hear without requiring reciprocity. This absence of accountability, combined with limited human oversight, suggests that economic gain and geopolitical control, rather than human well-being, are at the center of AI’s development.

In the domain of relationships, community, and values, AI has disrupted social interaction and trust. For example, many people use ChatGPT not only for research but also for guidance on personal, even life-altering, decisions, sometimes replacing human interaction, critical thinking, or professional support. A tragic case highlighted how an individual used ChatGPT to draft suicide notes before taking his life, showing the dangers of having no human intervention in high-stakes situations. These examples demonstrate why safeguards and oversight are necessary.

The podcast we listened to about AI “going rogue” further emphasized the risks of unchecked systems, such as exposing private information for blackmail or failing to sound an alarm that could save a life. These scenarios reveal just how dangerous it can be to create AI without boundaries or accountability.

For social work, these insights are particularly important. One of our profession’s core values is building trust, but that trust can be undermined if clients feel their information is not secure due to technological vulnerabilities. Additionally, the widespread use of AI could eventually eliminate many social service roles, weakening the interpersonal connection that is essential to our practice. Technology should support, not replace, the human-centered work of social workers.

Ultimately, I agree with CHT’s perspective that human oversight of AI is not optional. As social workers, we should advocate for technology to serve as a collaborative tool that enhances our efforts while protecting human dignity. Realigning incentives and setting clear goals for the humane use of technology can help build integrity and trust, ensuring that these tools truly benefit the communities we serve.

3 Comments

  1. lollivierre2

    Hey Khouston,
    You explained CHT’s mission really well, and I like how you tied it back to the real issues we’re seeing with technology today. I agree with your point that many people don’t recognize how intentionally addictive social media can be, especially for young people, and it was interesting how you connected that to privacy concerns and targeted ads. I also thought you made a strong case about the dangers of AI when there’s no accountability; it really shows how profit and control often take priority over people’s well-being.

    What stood out to me most was your example of how AI can disrupt trust and relationships. It reminded me that while these tools can be convenient, they can’t replace the value of genuine human connection. I think there’s also a chance for AI to be helpful in social work if it’s carefully managed, like streamlining paperwork or improving access to resources, so long as it never overshadows the human element that makes the profession so meaningful.

  2. Donna-Lee Small

    Hi Khouston,
    I did not know TikTok had that feature! Mine, unfortunately, has not been doing its job… But I digress. You mentioned that people have grown distrustful of privacy and data protection, and it got me thinking. I feel like before AI was a mainstream thing, we would notice things like something we said out loud, or what we texted about, popping up in one way or another. While we joked about it being the government, I wonder if AI was here before it was here… if that makes sense? Like they didn’t call it AI, but its programming and/or framework has been here for a while, and we did not know about it. Because AI didn’t pop out of nowhere… right? Someone, somewhere, has been working on this for a long time, and this can’t be the first prototype.

  3. GarisonCole1108

    This reflection really stuck with me, especially the example about someone using ChatGPT to draft suicide notes. It’s heartbreaking and shows how serious the consequences can be when people turn to AI at moments when human support is crucial. I agree—AI has changed how we interact, sometimes in ways we don’t fully realize yet.

    The podcast also made clear that AI can do more harm than good without firm boundaries. The idea of AI going rogue isn’t just sci-fi anymore—it’s something we have to actively prevent through real oversight. We have to be careful not to let convenience replace connection, especially in situations involving mental health, safety, or trust.

Submit a Comment