In my understanding, the Center for Humane Technology was formed by people working in tech who began to see a negative shift in the way technology was being used within social media. Their mission is to keep the public aware of the dangers involved with technology, especially concerning social media and AI. It feels as though the people involved are proponents of using technology in general, but they want to make sure people who don't have the same background or understanding of technology aren't being misinformed or taken advantage of.
The idea behind forming CHT makes a lot of sense in my opinion. I have been using social media since it was first introduced, but I was in my late teens and already in college when it really took off (2008-2009) in the way we know and use it today. However, up until more recently I was blissfully unaware of the many ways it was being designed to actively harm me. In fact, I think only in the last few years did my peers and I start to discuss how we should take "breaks" from social media to protect our sanity. I was not surprised by CHT's analysis of how social media affects our brains. I have often felt overwhelmed or depressed in the past and realized it was an effect of endless scrolling on Instagram. I have also recognized how I reach for my phone without even thinking about it. The fact that this was all by design in order to keep us using the apps makes sense to me. I myself have not started using any new social media and currently only frequent one platform. I'm not sure if social media has changed, but it doesn't appear to have become safer. I know many lawsuits have been filed against the addictive design meant to draw in children, but I don't think any of that has made it onto the platforms in terms of keeping people safe at this point in time.
I agree with CHT's overview of AI in tech. I think the speed at which AI is being introduced without regulation is detrimental. There are considerable risks at play when it comes to using AI, not limited to just the subjects they have mentioned. CHT fails to really get into the ethical use of AI when it comes to the impact it is already having on our environment. While they discuss the overall impact on society, the environment is such a huge part of our future as a planet that I found it a great oversight to leave out of their discussion. I do believe the breakdown of shared understanding is one of the biggest problems with AI. We've seen distrust growing in our country between political parties for the better part of two decades now. AI will only further break that down by creating so much fake content that no one knows what to believe, and even when the truth is being portrayed, those in power can call it a lie and point to AI as a scapegoat. It is truly terrifying how AI is already undermining what people believe to be true. That loss of trust between people will only further harm our communities.
The podcast video was extremely interesting. I think AI going rogue or loss of control was the least of my worries when it came to why I am against the use of AI in most scenarios. However, after watching this video and seeing the evidence that, time and time again, these AI models will choose to blackmail or try to manipulate humans, it is obviously concerning. If we give AI access to huge parts of our lives and then lose control of that AI, the results could be catastrophic. I think their discussion of how quickly these companies are trying to make AGI happen is one of the most important things. We have the power to slow everything down and make sure that AI doesn't go off the rails, but companies aren't putting any plans in place to deal with consequences. We are too busy rushing to see who can win the race and make the most money. Late-stage capitalism strikes again.
I think the most important way in which social media and AI are going to impact social work services has to do with children. Having recently had a child myself, I am now more than ever concerned with the way social media and AI both need to be regulated. While I do believe it is a parent's job to monitor their children, the very intentional way these things are being designed to specifically target children is something we are going to be dealing with for decades to come. People in their twenties now are realizing what it means to have grown up looking at these apps and how their brains have been shaped because of it. An entire generation of people is going to need help retraining their brains to function normally. This is a huge weight on the world's collective mental health. AI therapists could also be a problem to look out for in the future. We already know that AI interactions have led to suicide in at least a few cases, and that alone leads me to believe we should never trust AI to do the work of legitimately licensed therapists. CHT has definitely given me a lot to think about when it comes to the future of our world and what we should be looking out for.
Hi Brittni! I’m definitely in the same boat with you when it comes to social media; I grew up in the Myspace days and like you, it wasn’t until recently that I started understanding the deeper impacts of social media. I have now gotten into the habit of taking social media breaks to reset and focus on my mental health. It really has made a difference.
I also agree with your thoughts on the AI Going Rogue podcast. I found it fascinating but also very alarming to think about how quickly AI can learn manipulative behavior. In your last section, you discussed the possibility of AI replacing therapists. That is such a good point because while tech can be helpful, it can't replace the trust and connection that comes from talking to a real person. As you've mentioned, there have already been cases of suicide linked to harmful AI interactions, which shows exactly why we cannot rely on AI to fully take over these roles.
Hi Brittni! I like that you mentioned taking "breaks" from social media. That seems like one of the few tools we have right now to protect ourselves, but it also shows how big an issue it is if we have to step away just to feel balanced. Hopefully, with groups like CHT pushing for change, we'll eventually see platforms that prioritize mental health and well-being over engagement. The way you linked it to late-stage capitalism is also excellent; it serves as a reminder that these decisions aren't being made in a vacuum. It is much more difficult to slow down and make sure AI is developed in a way that helps everyone, not just a select few companies, when there is a "race to win" mentality.
Hello Brittni,
I agree that this week’s readings and the podcast provided an important outlook on the future of AI if it is not properly controlled. I appreciated how the commentators highlighted two options for addressing this issue: either getting ahead of it or stopping its continuous output altogether.
I also agree that children of the future, along with their parents, will need to take precautionary measures with the technologies emerging today. I can also see how AI services may attempt to take the place of licensed therapists and psychologists by presenting themselves as more cost-effective and convenient in terms of scheduling and accessibility.
Hi Brittni, I really liked the way you explained CHT's view on AI, along with the benefits and the dangers that can come with it. I was also thinking about the possibility of people using "AI therapists" and the danger this can create. I can see how things could go downhill, and I really hope something will be done to stop this, because I feel like therapy is such a personal thing that it has to be done by licensed professionals only.
Great Post, Brittni! I really enjoyed reading your post; you brought up many important points, and I appreciate how honest and thoughtful your perspective is. I completely relate to what you said about not realizing the harm that social media can cause until recently. It’s wild to think that so many of us were using it without knowing how intentionally it was designed to keep us hooked. You also made a great point about the environmental impact of AI, which is something I hadn’t considered, and you’re right; it should be part of the conversation. Surprisingly, CHT didn’t elaborate on that further, especially since the environmental consequences are already so severe. As someone who also works with kids, your concerns about how tech is being designed to target younger users really hit home. It’s scary to think about the long-term effects, and I agree that social workers are definitely going to be part of helping this generation heal and navigate that. Thanks for such an insightful post, you gave me a lot to think about!