1. BBC Video on AI Water Use
I genuinely didn’t know how much water AI systems use, especially the fact that massive amounts of clean water are needed just to cool data centers. I had heard about AI’s energy use, but not the water footprint. The BBC wanted to highlight that the environmental cost of AI goes far beyond electricity; water usage is becoming a real sustainability issue. The video made the point clearly because the visuals and statistics were simple and straightforward. What felt missing was more context on solutions, like what governments or tech companies are actually planning to do about it.
2. AI and White-Collar Job Disruption
I didn’t realize how quickly executives are openly acknowledging that AI could eliminate large portions of white-collar jobs. The article showed that this shift isn’t hypothetical; it’s already shaping hiring and layoff decisions. The main argument was that a major transition is coming in the workforce, and white-collar workers—who have long felt “safe”—may be hit hardest. The article made the point effectively by combining data, quotes from CEOs, and current events. What was missing was the worker perspective; it felt very “top-down.”
3. “Companies Are Blaming AI for Layoffs”
This article introduced the idea that some companies may be using AI as a convenient scapegoat to cover up deeper issues like mismanagement, bad investments, or pressure from shareholders. I had not considered how AI could become a PR tool to justify decisions that were already in motion. The author argues that while AI is part of the story, layoffs are often motivated by cost-cutting, market panic, or internal mistakes—not solely by technology. The article succeeded because it connected corporate behavior to broader economic patterns. What was missing was hard data; some claims felt more opinion-driven.
4. “I’m 57 and AI Made Me Unemployable”
I didn’t realize how emotionally complex the job displacement conversation is for older workers. The author showed how much ageism intersects with AI disruption—meaning some people aren’t just competing with machines, but also with biases about being “too old” to adapt. His main point was that the real issue isn’t AI, it’s a job market built in a way that undervalues older workers and treats experience as a liability instead of an asset. It was effective because the personal storytelling made the argument feel human. What was missing was a broader structural analysis or suggestions for policy change.
AI Experiment
I analyzed the Axios article and the Mattlar Medium article using AI. The AI said both articles agree that AI is impacting jobs and changing business strategies. But Axios frames the change as an inevitable technological shift, while Mattlar argues companies are using AI as a cover story for decisions driven by profits or poor planning. The AI essentially said they are describing the same phenomenon but from opposite angles—one structural, one skeptical. The AI said Mattlar’s article was more persuasive because it acknowledged multiple causes instead of just blaming AI. It also noted that the Axios article leaned heavily on CEO perspectives, which might be biased. The AI pointed out that arguments that recognize complexity tend to feel more credible.
What really stood out to me while using AI was how much the wording of my questions mattered. When I asked something broad, the responses felt generic. But when I was specific, like naming the articles or asking for direct comparisons, the answers were clearer and actually useful. Some of the responses felt genuinely insightful, while others came off a little too neutral, almost like the tool was avoiding choosing sides. Emotionally, it was a weird mix: on one hand, it was impressive to see how fast it could break down ideas, but on the other hand, it made me think about how easily tools like this could replace certain tasks people do at work. Seeing it in action made the concerns in the readings feel a lot more realistic.
As for where things are headed, it seems pretty obvious that AI is going to be part of everyday work, whether we’re ready or not. Jobs will shift, some will disappear, and new ones will show up just as quickly. This means people of all ages are going to need more digital skills, not just students. To keep up, we’re going to need better worker protections, more training opportunities, and a lot more honesty from companies about why they make certain decisions. AI itself isn’t automatically a bad thing, but if we don’t manage it carefully, it could make existing inequalities even worse. The real challenge will be making sure people aren’t pushed aside as the technology grows.
I didn’t know AI systems use huge amounts of clean water just for cooling data centers, and that honestly shifted how I see its environmental footprint. I wish the video talked more about what solutions are being developed instead of just presenting the problem.
The articles on job disruption highlighted good points, especially the idea that white-collar jobs aren’t as protected as people assumed. It was interesting how some companies might be using “AI” as a convenient reason for layoffs that are actually driven by poor planning or financial pressure. It showed that technology isn’t always the cause—it can also become the excuse.
The story about the 57-year-old worker made the issue feel personal. Ageism and the way the job market undervalues older workers play a huge role too.
Using AI to analyze the articles also showed me how much the quality of my prompts mattered. When I asked vague questions, I got vague answers, but specific comparisons led to more helpful insights. At the same time, seeing how easily AI summarized complex ideas made the job-loss concerns in the readings feel very real.
Overall, AI isn’t automatically harmful, but if companies and policymakers don’t focus on transparency, worker protections, and real upskilling opportunities, this technology could deepen inequality instead of helping people.
Korie,
This is a really good job on this post. I think you made a good point about wanting to hear more about the worker perspective. You also gave great information about your experiment with AI. That is the paradox, isn’t it? Nice job.
Dr P