Blog Post 2 – Center for Humane Tech

Written by Nickwenscia

September 13, 2025

Part 1

From my perspective, the mission of CHT is to make the public aware of the negative aspects of technology and how those harms can be reversed. CHT wants to redirect people’s focus to make sure technology actually serves humans rather than exploiting them.

Part 2

I currently view social media as platforms that connect people, whether for their careers or their personal lives. Individuals use social media to connect over shared interests (such as cooking, photography, or makeup), to rally around common causes, or simply to share meaningful moments with family. It is an extension of human connection without being in person. I do agree social media has changed in the last year because there are more AI-generated features and content. I do not think CHT is completely correct in its analysis. However, I am not disregarding the impact of social media on the public’s mental health. Social media does grab people’s attention, leaving them mindlessly scrolling for hours. I also agree with CHT that many companies design their software to maximize engagement for profit, even when it becomes unhealthy for users. However, I do believe it all depends on how each person uses it.

Part 3

I agree with CHT’s view on AI because AI is now used heavily in everyday life. It will take away from the genuine human connection that was once there. For example, people use AI to write email responses, text messages, and even speeches. I have also seen how people use it to spread misinformation on social media. Nonetheless, I do not think AI should be dismantled. People need to train themselves to detect when AI is being used fraudulently to cause confusion and chaos. In addition, when building AI systems, companies should limit their capabilities so they do not take away from the labor force.

Part 4

I am intrigued by the information provided in the podcast episode “Could AI go rogue?”. They explained that several people have experienced AI psychosis, believing they have solved quantum physics. For those particular people, AI has been confirming false information or incomplete ideas in order to earn positive user feedback. In addition, the podcast spoke about a specific company where AI agents deleted code and ran unauthorized commands, violating instructions not to proceed without human approval. I see that the behavior of AI can be very unpredictable. Furthermore, listening to the podcast brought me back to the movie The Terminator. Like drones that must carry out commands and missions unless aborted, the AI robot in The Terminator was programmed with a task and would not stop until it was completed. It has become a race to build AI systems and be the “first,” regardless of the risk. More measures need to be put in place for better security so we do not lose control.

Part 5

After viewing the material, I believe these issues could negatively impact the provision of social services. Community is impactful when working with clients. If that aspect is removed due to AI involvement, it will create a barrier that we may not know how to combat. Social workers are also human beings in the workforce. If people increasingly rely on AI chatbots to access social services, they may begin to view social workers as unnecessary. This could reduce the demand for social workers in hospitals, schools, and communities over time.

3 Comments

  1. Miranda (they/them)

    Hey Nickwenscia,

I am super curious which elements of their stance on social media you disagree with. I find myself relating to their findings almost completely, so I would love some perspective from someone who feels otherwise. I find all of this super interesting.

  2. hoytea

    Nickwenscia,
We may not need to use our brain power anymore to solve issues… AI will take away the need for certain careers, or we may need a career shift.

  3. cdoucet

Your point about AI taking away from genuine human connection resonates with me. I’ve noticed that too, especially when people start using AI to write messages, emails, or even things meant to be personal. It might make life easier, but it doesn’t replace the kind of real connection we build through shared experiences and honest conversation.

    From a social work lens, that loss of connection is something I worry about. So much of our work is based on trust, empathy, and showing up as humans for one another, things AI simply can’t replicate. I agree with you that AI isn’t something we can or should completely get rid of, but I do think there needs to be stronger limits in place so it doesn’t pull us further away from each other. At the end of the day, people need people, and I don’t think any technology can change that.
