How AI uses our Drinking Water: BBC World Service
- I will be honest, I think about this one all the time. I have never seen this video, but I have read online many times about the amount of water AI uses from activist circles I follow. This is one of the main reasons I have an issue with AI. Although this is something I have thought and read a lot about, I had never heard the idea of putting AI storage under the ocean or in space. At this point AI feels inevitable, so I do appreciate the ideas that would make cooling it less of a thing. I do wonder what the heat production would do to the wildlife under the sea. Maybe space is the answer.
- I think they were trying to make us think about what goes into powering and storing AI, and maybe think twice about how much we use it. I also think they were simply trying to share information that most people are not aware of, and they were very successful at getting it across. It was clear, concise, and easy to take in. It was a very successful piece.
The Social Contract is Breaking and AI is the Hammer
- This is something I was not surprised by at all…a CEO bragging about laying people off and making more money through AI. I feel like this is our everyday now. I was surprised by the list of ideas the article proposes for helping folks in this seemingly inevitable circumstance, where everyone loses work to AI. The one that surprised me the most was the idea of a universal basic income. I would never have thought people would really be proposing this in a country where we cannot even seem to be paid a living wage, but I am all for it. Please, let’s make life about more than just working. If this were truly the outcome of AI, I would stop having such hard feelings about it.
- This article is meant to share information about what experts and businessmen are saying about AI. It is provocative and informative, and I’d say it was successful in its endeavor. I am confused when reading it alongside these other articles…is it real? I read them in reverse chronological order and am now wondering whether the information available four months ago was simply different, or whether this author missed an entire piece of the puzzle.
Behind the Curtain: A White-Collar Bloodbath
- This is the future I have worried about, one where we no longer need humans for the things we have all studied to do. I am not really sure what I learned from this; probably some names of AI generators and what they actually do, and that there are specific software tools for very specific things, but I figured those were out there. I read often about the future effects AI will have on the workforce and the directions we are supposedly headed in. This article paints a very interesting picture, and the idea of legislating regulations around it is an interesting one. I am all for any ideas that keep people safer, but I am not sure if warning everyone will do much to help. Where are we supposed to turn if we all lose our jobs? Everyone cannot become a plumber or an electrician.
- This article was meant to warn the public of the inevitable takeover of AI. It speaks on things this one man thinks will help everyone, a man who has a very biased view of the industry and a personal stake in everyone taking his word as truth. I suppose he was successful, but will this reach his audience? Do the readers of Axios need this warning, or should it be broadcast in a more accessible place, if his goal really is to warn? I am curious whether his actual goal was to brag about his work and convince everyone of the successes he is having, especially when paired with the two newer articles.
Companies Are Blaming AI for Layoffs. The Real Reason Will Piss You Off.
- So this article really has me intrigued, but unsurprised, and it leaves me SO curious about the two articles above. When those articles were written, did the data this one is speaking on not exist? The fact that companies are lying about layoffs being because of AI, to save face for their mismanagement of hiring, is new information to me, but I am not surprised at all. I have learned not to trust things blindly. I also had not really thought about the effect AI really does have on entry-level jobs, but it makes a lot of sense. Repetitive tasks are a big part of learning an industry though, so I am very curious how that will play out in the long run.
- This article was meant to inform and likely shock us. It was written in a very compelling way, and I do believe it was successful. I do wish all of these articles had studies linked so I could easily fact-check them, but I guess that is too much to ask, or maybe I just couldn’t find them.
I’m 57 and AI Just Made Me Unemployable (But Not for the Reason You Think)
- I learned in this that companies ARE looking to hire 30-somethings, which was surprising to me. I always assumed they really want entry-level folks because they are the cheapest, but I suppose we are all working for less than we are worth these days, so generally they do not have to pay us 30-somethings much either. I was always taught that if we kept up with the tech, we’d have an easier time getting a job, so it is a bit disheartening to hear that that is not true. I am so darkly curious to see where all of this takes us.
- This article was meant to be a place of catharsis for the author; you cannot convince me otherwise, and whether or not that was successful is something only the author can tell us. I suppose it was meant to inform us of the cruel reality as well, and in that it was also successful. I do not think it really needs anything else to be what it is. Way to go, author.
AI Experiment that I Very Begrudgingly Took Part In
(To say I had moral issues with this is putting it lightly.)
ChatGPT brought up a difference I had noticed about these two articles as well.
Evidence base:
- Mattlar’s article leans heavily on third-party research (MIT study, Yale Budget Lab, Oxford Internet Institute) to argue that most AI pilots fail or don’t generate profit, and that “AI-driven layoffs” are often not backed by operational reality. (Medium)
- The Axios piece relies more on Amodei’s first-hand testimony and projection as CEO of Anthropic. While it also discusses the scale of AI agents and how they could be deployed, it’s more about forecasting future risk than analyzing past ROI. (Axios)
AI and I agree again:
I find the Medium article (by Jari Mattlar) more persuasive overall, for a few reasons:
- Evidence-backed critique: Mattlar brings in several external studies (MIT, Yale, Oxford) to back up the claim that AI is being misused as an excuse. That gives the argument empirical grounding. (Medium)
- Detailed examples: He names specific companies (Salesforce, Klarna, Accenture, IBM) and describes how their public statements about AI don’t match internal realities (Medium). These concrete cases make the critique more tangible.
- Balanced advice: The article doesn’t just warn — it also gives practical advice for workers on how to navigate the situation (document wins, upskill, know your worth). That makes it actionable.
- Motive analysis: He doesn’t just say “AI will kill jobs,” he argues why companies want to blame AI (PR, investor optics), which is a more nuanced take than pure existential fear.
- What did you learn about using AI that you might not have known?
I honestly do not know. I think, have conversations, and read about AI often, even outside of this class. I have detailed conversations about it with my friends who use it and try to understand what people gain from it. I have worked with it a bit in writing formulas for Excel.
- What did you think about the responses you received? Good, weird, helpful, annoying, incorrect, or whatever.
I was a bit surprised that we seemed to pick up on some of the same things. It made me curious whether the way I typed my prompt gave the AI a clue into who I am and how to speak to me so quickly, or whether the things I thought about were just the things an average person reading these would think about.
- What was your emotional reaction to using this tool?
I was not too pleased. It was especially tough for me after watching the BBC video, which focused on something I think about so often when it comes to AI. I don’t think writing the blog post was worth the water it wasted to get here.
- Where do you think we are headed and how do we manage it?
What an incredibly huge question. I honestly have no idea. I can see a future where the economy collapses, there is revolution, and things are completely different. I can see a reality where the billionaires continue to take over, AI has all our jobs, and we are all living in slums. What can I say, I have a pretty bleak outlook on the future right now. After reading these articles this week, I can hope for a future where we use AI for good, we all have a UBI, and we can live comfortably without having to kill ourselves to do it. Wouldn’t that be beautiful? Wherever we are headed, I think I will stick close to my community, build things in real time, and use AI only when there is no real choice not to (hello, Google). I think we just manage whatever is coming that way. Lean on each other, learn what we can, adapt, and move with the times as much as needed. We just keep doing the work, however it looks, and hopefully some of us want to work for CHT and create policy and guidance around the growing technology. I will definitely not be one of those who do that, but I know some of you will.
Hi Miranda, I agree about the impact on marine life and ocean temperatures from storing data centers under the sea or in space. That seems terrifying considering climate change and current world circumstances. Right now it is a huge political issue south of Atlanta: they are proposing building a data center in Fayetteville, which is near my home, and I am so worried about the impact on wildlife, the water system, and local residents.
Miranda,
Your analysis of the articles you read and the video was right on and well done. I found it interesting that you talked about not knowing what you learned and then told me very clearly and succinctly what you learned. These articles were not intended to give you “truth” or the final opinion. They were meant to encourage you to understand the complexity and paradoxes of this debate. AI has the capacity to totally reinvent our social structure. The question is in what direction, and who will make those decisions. I intentionally gave you articles that did not see this debate the same way, either in its causes or its solutions. The challenge we face is to engage with the questions.
Now to the ethical issues of this assignment. I gave you all the BBC video to watch because I wanted you all to be aware of the cost to our society and our planet each time we make the decision to utilize AI. But you are correct, just knowing that will not stop the use of AI. I remember when computers were first coming on the scene (yes, now you know how old I am) and there was a great debate about whether or not we should engage with that technology. A friend of mine asked me whether we wanted that technology in our hands or in the hands of people who did not share our values. That’s when I bought my first Apple IIe computer.
So when you think about the cost, use AI judiciously. I have used it for research and found it tremendously helpful. I am in awe of the medical uses and the availability of options for people living with disabilities because of what it can do. There are a million ways this technology can better society. And there are a million ways that it can pull us right off the rails. If we don’t play – if we don’t engage – we don’t have any power over determining which way things go.
Really good job on this post BTW
Dr P