Lonique: Center for Humane Technology Blog Post 2

Written by lollivierre2

September 7, 2025

1. Mission of CHT

The Center for Humane Technology’s mission is to make sure technology is designed in a way that truly benefits people and society instead of exploiting them. They focus on exposing the ways tech companies profit off attention, addiction, and division, and they push for healthier systems that prioritize human well-being.

My opinion: I think their vision is powerful because it forces us to question how deeply technology shapes our choices and values. I respect that they don’t just complain about tech but actively create solutions and education around it. I believe their vision matters, because technology is everywhere in our lives, and without organizations like CHT, there’s little accountability for how it affects us.

2. Social Media in Society

Do I think CHT is correct in its analysis? Yes, I do. Their point that social media was built to keep us hooked rather than to connect us is very true. Infinite scroll, constant notifications, and algorithms that reward outrage or extreme content are clear examples.

Was I surprised by the discussion of the brain? Not really. I already knew social media plays with our dopamine and attention, but reading how intentionally it was designed to hijack our minds made me think more critically about my own use.

Has social media changed in the last year? Yes. In 2025, it feels like AI has added another layer. More feeds are filled with AI-generated content, and platforms push short videos harder than ever. While some platforms talk about “safety features,” the addictive nature hasn’t really gone away; it’s just evolving with new tech.

3. AI in Society

My reaction/opinion: I think CHT’s perspective on AI is spot-on. They show how AI can bring both progress and harm depending on who controls it and why. I agree with them that the way AI is being rushed into society without enough safeguards is risky.

One problem area: The domain I’ll focus on is human society and culture. CHT argues AI could reshape how we see truth, trust, and creativity. I agree: deepfakes, biased systems, and fake content already confuse people. Where I might challenge them is that they don’t talk enough about the possibility of communities reclaiming AI for positive cultural use, such as applying AI in art therapy or social work in supportive ways.

4. Podcast Episode on Rogue AI

My reaction is a mix of concern and reflection. The idea that AI could “go rogue” shows how fragile our systems are when huge companies race ahead without careful planning. What stood out to me is that the real danger of AI isn’t robots attacking us, but the subtle ways these systems can spiral out of control… like algorithms making harmful decisions no one can fully explain. I feel that society needs to slow down and make smarter choices now before it’s too late.

5. Impact on Social Work

Negative impacts: These tech issues can make clients more distracted, isolated, or misinformed, which makes it harder for social workers to connect and provide support. AI systems might also reinforce bias, leading to unfair treatment of vulnerable groups. For social workers, constant exposure to harmful online culture can increase stress and burnout.

Positive impacts: On the other hand, if technology is reshaped with humane principles, it could actually free social workers from paperwork, connect people to resources faster, and give clients healthier ways to use tech. Social workers can also use this awareness to educate clients about digital well-being, making them more resilient in the face of harmful tech.

4 Comments

  1. zallen16

    Hi Lonique! I was also not too surprised by ‘Social Media and the Brain’. I’ve read studies on social media and its impact on dopamine. Still, the page was interesting and informative. My reaction to the podcast was the exact same as yours. They talked about examples of AI lying and blackmailing in test scenarios, which I found extremely concerning, especially when you think about how integrated AI is in everyday life (finances, healthcare, the military). In your last section, you discussed the possibility of AI helping social workers connect people to resources faster; this is a great point. I also mentioned this idea in my post, maybe through chatbots that could provide clients with 24/7 support and basic guidance.

  2. koriej

    I agree that their balanced view of AI’s potential for both good and harm is excellent. It seems like technology is advancing more quickly than our capacity to develop laws that protect people, which is why your point about the rush to deploy AI without proper protections is so important. It would be impactful to have them emphasize how AI may be purposefully reclaimed for positive cultural applications, such as the supportive social work tools or art therapy you described. People may then view AI less as a threat and more as a tool that, with careful application, can encourage resilience and connection.

  3. Brittni

    When I was thinking about whether or not social media has changed in the last year, I didn’t really think it had. Then when you mentioned how AI is now such a big part of it, I realized it has actually changed a lot! AI within social media has arguably made it a lot worse in a few ways. It has definitely led to more distrust among people and more fake content. That has had an impact on a lot of artists, because AI is learning how to make art by copying from existing art without permission. I do agree with you that there is potential for positive AI use in the future, but the negative impacts are already here, and that seems like something we will have to deal with before we can improve it.

  4. Nickwenscia

    Hello lollivierre2,

    Thank you for sharing your thoughts. Do you think the government should step in more to place limits on the AI technology that is being created or already exists?

Submit a Comment