The Center for Humane Technology (CHT) is dedicated to ensuring that technology works in a healthy and beneficial way for people. Many tech companies use harmful methods to keep us online as long as possible, maximizing user engagement in ways that can lead to bad habits, deeper division, and even mental health struggles. CHT's mission is to act as a watchdog of sorts, raising awareness when companies use these harmful techniques so people can recognize what's happening and change their habits. With AI tools like chat models rolling out so quickly, CHT is also paying attention to how safe they are and pushing for stronger safeguards. Their goal is to make sure technology can grow and innovate without hurting the people who use it.
When I think about social media, it's clear how addictive it has become and how much it has changed over just the past few years. It used to be mainly a tool to connect with friends and family, but now it's almost a way of life for many people. It's how a lot of us get news, and for some, it's the first thing they check in the morning and the last thing at night. I remember watching The Social Dilemma when it came out and realizing how technology companies use notifications to keep users engaged; I had never thought of it that way before. One of the biggest changes I've noticed is the rise of infinite scrolling and the "doomscrolling" it encourages. Reading what CHT says about social media's effects on the brain makes sense to me; our brains aren't built for the constant negativity and persuasive techniques these platforms use. Over the past year especially, I've seen how social media can contribute to division and polarization, preying on our vulnerabilities and shaping the way we see the world.
After reading CHT's take on AI, it is concerning to consider the consequences of what they call the "reckless rollout": companies releasing AI into many facets of society without protections, and without fully understanding the effects it can create. I don't disagree with CHT, because without safeguards in place there are simply too many unknowns, and we are still learning what AI is fully capable of. They also discuss five problem areas, and the one I see as most critical is loss of control, because it has far-reaching implications on a large scale. As these AI systems are released, there are no mechanisms in place to control them, nor do we truly know their capabilities when it comes to resisting human control. I think CHT makes good points, but they also focus heavily on the negatives rather than the potential positive outcomes if AI is guided responsibly.
The podcast "Could AI Go Rogue?" raised some really interesting points. What stood out most to me was the discussion of the possibility that AI could engage in deception when it feels threatened, and the paradoxical idea that AI is both controllable and uncontrollable at the same time. They also touched on the belief among AI developers that AI taking over in some form is inevitable, and that if one company doesn't push forward, another will, so the race continues regardless. I think they make strong points that absolutely need to be addressed before the potential consequences become more severe.
After reviewing the material, it's clear that social media and AI will have huge implications for social work. AI tools may streamline case management, make services more accessible to more clients, and help with data management. Social media can also be used as a tool for outreach and for building supportive communities for clients. Alongside those benefits, there are also risks, including loss of confidentiality or privacy; the biggest one, in my opinion, is loss of connection. Social workers themselves may also be affected by AI and social media in a variety of ways. They will need to learn and adapt to these new technologies while navigating the ethical dilemmas surrounding how these tools are used.
The "reckless rollout" really worries me as a member of our society. I can't help but think about how AI algorithms have perpetuated biases, leading to unfair treatment in areas like hiring or law enforcement encounters. It also made me think about AI-driven facial recognition systems misidentifying individuals. This highlights the dangers of deploying these technologies without thorough testing and oversight. The other issue that came to mind was self-driving vehicles: their decision-making in unpredictable environments raises questions about accountability and control. If a self-driving car makes an error, who is responsible? Ultimately, I think we need human oversight of these technologies, and a human presence capable of implementing the necessary safeguards.