1. The goal of the Center for Humane Technology is to align technology with what is best for people. Instead of relying on deceptive design techniques that exploit human weaknesses, they want to develop technology that encourages honesty and healthy relationships.
Their idea is really intriguing. CHT is not anti-technology; rather, they advocate for technology that genuinely benefits people instead of just making money. This is an important mission, in my opinion, in a society where our attention is so often treated as a commodity.
2. In my opinion, CHT’s critique of social media is correct. Social media is purposefully designed to draw attention and keep users engaged, even if it divides society or damages mental health. Their explanation of “attention harvesting” and how applications use algorithmic feeds and endless scrolling to hook users struck a chord with me, because I observe these patterns in my own daily routine. Although it wasn’t entirely unexpected, the section on social media and the brain was depressing. Learning that these platforms exploit our dopamine systems to keep us coming back made me more aware of how powerful and addictive they are. In my opinion, social media has evolved over the past 12 months. Even professional platforms like LinkedIn are attempting to become more addictive, and there is more short-form video content than ever before (TikTok, Instagram Reels). This may worsen mental health problems and has made it more difficult for people to disconnect.
3. The way that CHT views AI is both interesting and a bit concerning. I agree that society is finding it difficult to keep up with the rapid changes in AI, and their efforts to bring AI into line with human values have become important. Loss of Control was one of the five areas that most interested me. According to CHT, AI may eventually behave in ways that humans cannot completely predict or control, which could have unexpected negative effects. I believe this is a significant concern; algorithms that spread false information without human intervention are already minor examples of it. The consequences could be far more severe if AI systems grow more independent. If I were to criticize CHT, I would say that their message might come across as frightening and urgent at times, which could keep people from engaging with the solutions. More focus should be placed on practical steps that communities and regular people, not just politicians or tech firms, can take.
4. The podcast episode opened my eyes. The idea that AI might be able to survive shutdowns or changes both interested and, to some extent, alarmed me. It got me thinking about how rapidly these technologies are developing and how little oversight exists in comparison to their potential influence. I was both scared and intrigued by the potential dangers if we don’t take action right away, but I was drawn in as well because I think we still have time to work together to develop AI in a way that will benefit society.
5. These challenges will directly impact the practice of social work. Social workers must understand how algorithms can worsen depression, anxiety, and loneliness, as social media already affects clients’ mental health, body image, and self-esteem. In the future, AI tools might be used to provide online treatment, suggest interventions, or assess clients. If applied responsibly, this could be a step in the right direction toward accessibility! If technology continues to cause harm more quickly than we can mitigate it, social workers themselves run the risk of burnout or a sense of helplessness. CHT’s efforts, however, give me hope that social workers can be involved in advocacy by promoting safer platforms, teaching clients digital literacy, and making sure AI tools used in human services are fair and compassionate.
Hiya Korie!
You did a great job breaking down the main points from CHT and connecting them to real-life experiences. I liked how you reflected on how social media affects you personally and tied it back to bigger mental health concerns. Your point about AI being both exciting and concerning also stood out, especially the reminder that while the risks are real, there’s still space for communities and social workers to make a positive impact.
I also think it’s interesting how you mentioned LinkedIn becoming more addictive… I’ve noticed that too, and it makes me wonder if eventually all platforms will adopt these attention-grabbing tactics, even the ones we normally associate with professionalism. It really highlights how important it is for social workers (and society in general) to push for tech that serves people instead of just profits.
Hi Korie,
I thought it was crazy that social media, a tool designed to harvest our attention, has resorted to the shortest form of content in order to capture it. As the years have gone by, we have adapted, and technology has adapted with us to feed the very addiction it helped create. It’s a hole that keeps getting dug deeper, a cycle leading to destruction. Unfortunately, to add to the chaos, AI is not here to save us. If anything, it is up to us to step up and not let something like AI, which is only advancing, get too advanced, especially without proper guardrails.
It’s mind-boggling how much power technology can hold over human lives. In every post that I have read, AI seems to be the concern of utmost importance, yet there is no regulatory body to stop it from going rogue.