In the YouTube video How AI Uses Our Drinking Water, they explain how much water AI systems require. I knew data centers used a lot of water, but I didn’t understand why. Now I know it’s mainly for cooling: AI uses so much energy that the systems overheat, and clean drinking water is needed for liquid cooling. They explain that up to 80% of that water evaporates, which raises concerns about water needed for drinking, farming, and basic hygiene. Even producing the electricity for data centers uses additional water. Major AI companies have promised to be “water neutral by 2030” and are exploring alternatives like ocean water or other non-freshwater cooling methods. The video made the point that water is a limited resource, and it made me think we need to focus seriously on finding better alternatives. What felt missing was a deeper look at how realistic the proposed solutions actually are and whether “water neutrality” is even achievable.
In The Social Contract is Breaking and AI is Holding the Hammer, I was surprised by how many workers have already been laid off in 2025 and by the predictions for future job loss. The main point seems to be that AI and tech are evolving so quickly that if we don’t put safeguards and policies in place now, the consequences will hit before we’re prepared. Thakur made his point well and even offers a few potential solutions, but I agree that they do seem like science fiction rather than practical steps. I also appreciated his discussion of who will benefit from the wealth AI creates because this is an important point that needs to be discussed further.
In I’m 57 and AI Just Made Me Unemployable (But Not For The Reason You Think), the author talks about how hard it’s been to find a job after being laid off at his age. He explains that workers in their late 50s expect to be paid for their experience, but companies don’t want to pay those higher salaries anymore. Instead, they either replace the work with AI or hire younger employees who have some experience but lower salary expectations. The author argues that AI is becoming an easy excuse for companies to avoid paying for seasoned workers, and he effectively shows what is truly happening in the workforce.
Behind the Curtain: A White-Collar Bloodbath argues that many white-collar jobs once thought to be safe are now at risk, with mass layoffs projected over the next one to five years. The article suggests unemployment could hit 20% and that up to half of white-collar roles may disappear. Amodei emphasizes that action needs to happen now, which is a theme across all these articles, yet no one seems to be taking it seriously. He does acknowledge some major benefits of AI, like potential medical breakthroughs and economic growth, but warns that these gains could come with extreme wealth concentration and deepening inequality. I think he does a good job of raising awareness and making the point that workers and employers need to be aware of these implications, but I wish he offered more concrete advice about how they can actually prepare. He also mentions the “token tax” idea and admits that it’s probably not in his economic interest, which leads me to believe it will probably never happen, although I think it should.
During my AI experiment, I learned that I could feed ChatGPT the full articles and have it answer the questions, but I noticed the responses leaned toward downplaying AI’s role in layoffs. It argued that companies, not AI, are responsible, which felt overly simplified. There must be some connection between AI and the layoffs we’re already seeing. While ChatGPT framed mass-layoff fears as exaggerated, that conclusion didn’t match what these authors are warning about. Honestly, reading everything together made me uneasy. I hope that with all these warnings being issued now, real changes can happen and AI can be regulated, because the future these authors describe feels pretty bleak if nothing is done.


I didn’t know why AI used so much water either! The video was really fascinating. I think it is interesting that you felt like you got biased answers from AI. It makes me want to go back and ask different questions. I also felt uneasy using AI and now that I understand the climate issue, it makes me feel even worse about it.
I agree that the video made the environmental impact of AI feel more serious, especially learning that clean drinking water is used for cooling and most of it evaporates. The promises of being “water neutral by 2030” sound good, but I also wonder how realistic those solutions are.
The readings about layoffs made it clear that AI isn’t just replacing jobs—it’s also being used to justify cutting higher-paid or older workers, which ties into ageism more than just technology. The predictions about white-collar unemployment were alarming, and even though AI has benefits, it seems like the economic gains could be concentrated among a few people unless policies change.
I also thought it was interesting that the AI tool downplayed layoffs, which shows how the framing of these tools can shape how we interpret the issue. Overall, AI isn’t automatically the problem, but without regulation and worker protections, the impact could get a lot worse.
I also wondered how deeply they had looked into the solutions proposed in How AI Uses Our Drinking Water. I find the satellite-in-space idea quite compelling. Could we build a space station of servers, with astronauts whose entire job is to float out in space taking care of our AI…or, more likely, going up to fix damage? As for the idea that this all feels like a scifi plot…all of our lives right now feel that way, so I guess we’re here! We are the scifi novel!
Allison,
Your analysis of the articles was well done. They weren’t meant to give you an answer, but rather to give you more questions.
In terms of the AI experiments, it is difficult for me to give you feedback since I don’t know the prompts you used. You can go back and change the prompts and see how significantly that might change your answers. It only has what it has learned from all of us.
My reason for assigning this experiment is to get you engaged. I realize that we have a responsibility to use AI judiciously due to its impact on the environment. But if we don’t play, if we aren’t engaged, we have no ability to influence the outcomes.
Dr P