In the article "The Social Contract Is Breaking, and AI Is Holding the Hammer," the author's intention was to expose tech companies such as Microsoft for replacing positions that were once held by humans with AI. The author successfully gets his point across by exposing a Reddit user, the CEO of a business, who bragged about replacing humans with AI because he believed AI brings more productivity. This did not surprise me, given how advanced AI has become and how well it "imitates" human behavior and skills. What surprised me was the rate. It was one of those instances where you know the issue is occurring, but you don't grasp the impact until you look at the statistics.

The article "Behind the Curtain: A White-Collar Bloodbath" discusses the same topic of AI replacing human jobs, but the author goes more in depth into the possible reality of AI taking those jobs. He stated that AI could have the potential to cure cancer and to grow the economy, but at the cost of a high unemployment rate. What intrigued me was the author's view that, in its current state, AI is being used as augmentation to help people. This is interesting because AI already displays nearly human behavior and skills; if this is "augmented," then reality may look bleak for humans in terms of employment.

In the article "Companies Are Blaming AI for Layoffs. The Real Reason Will Piss You Off," the author discusses the theory behind why AI is taking jobs that were once held by humans. I learned from this article that agencies are paying for the advancement of AI, which has increased the profits of those agencies.
The following responses are from ChatGPT:
(a) Agreement and Disagreement
Agreement:
- Both articles acknowledge that AI is affecting white-collar jobs, particularly entry-level and office-based roles.
- Both recognize that companies reference AI and “digital transformation” as part of the reasoning for workforce changes.
- Both highlight that the job market is undergoing structural shifts, with some roles being reduced or transformed.
Disagreement:
- Causal emphasis: The Axios/Amodei piece warns of potential large-scale displacement, suggesting AI could replace a significant portion of entry-level jobs within a few years. The Medium article argues that layoffs attributed to AI are largely misleading and that the primary causes are cost-cutting and over-hiring during pandemic years.
- AI readiness: Axios implies AI is advancing quickly enough to replace many roles soon, whereas Medium points out that most AI initiatives currently yield little measurable ROI.
- Responsibility: Medium places the blame on companies using AI as a public-relations excuse, while Axios frames job displacement as a structural risk that could happen regardless of corporate narrative.
(b) Most Persuasive Article
The Medium article is more persuasive because it relies on concrete data, including:
- Only about 1% of service firms reported layoffs explicitly due to AI in recent months.
- 95% of companies investing in AI reportedly saw no measurable ROI.
Its argument is logically consistent: companies over-hired during the pandemic, now face normalization pressures, and use AI as a convenient narrative for layoffs. The Axios article is more speculative, relying on projections rather than current evidence.
(c) Key Takeaways
- AI is influencing job markets, but its role in actual layoffs may be overstated.
- Entry-level roles are particularly exposed to change, either through transformation or reduced hiring.
- Developing skills that complement AI, such as creativity, judgment, and interpersonal abilities, will enhance career resilience.
- Employers’ statements about “AI-driven transformation” should be examined critically, as cost-cutting may be the underlying factor.
- The potential for large-scale displacement exists, but it is not inevitable and depends on policy, business choices, and the pace of technological adoption.
One thing I learned from using AI that I did not know before is its ability to seem self-aware, almost breaking the fourth wall, which was evident in its key takeaways, one of which referred to its own role in organizations. The responses were good and almost human-like; in some parts it summarized the section it covered in one or two lines, and it appeared almost opinionated. I have used AI in my everyday life, except for school. Copying and pasting its results into this blog post felt inappropriate, and I felt as though I was cheating as a result. After reading the AI's responses to the articles, we may be heading toward a place where AI could even serve as professors, given its level of intelligence and its ability to mimic human behavior. I think one way to manage this is to limit the access AI has to information, since AI can only provide information, and exercise skills, that we humans give it.
Hey Gemalli 🙂
You made a clear point about how AI is often blamed for layoffs even when the real issues might be cost-cutting or companies trying to correct for pandemic over-hiring. I thought your comparison of the two articles was strong, especially how you highlighted the difference between data-driven evidence and speculation. Your reflection about AI imitating human behavior was also interesting, and I liked how you connected that to your own experience using AI for school. Overall, your post showed good critical thinking about what AI can actually do versus how organizations choose to use it.
Your reflection articulates a nuanced tension between valuing AI’s analytical capabilities and experiencing discomfort with its use, particularly within academic contexts. Notably, your response echoes key themes from the articles you examined: the rapid advancement of AI, concerns about human replacement, and the ethical unease that emerges when technology closely mimics human reasoning to the extent that its outputs appear almost “too good.” Your admission of feeling as though you were “cheating” addresses a significant emerging norm: while AI can serve as a cognitive tool, it should not replace one’s intellectual voice. This awareness demonstrates academic integrity. Furthermore, your observation that AI could eventually assume roles similar to professors underscores the importance of managing this transition with care. Your suggestion to limit AI’s access to information contributes to broader discussions about governance, transparency, and the necessity of maintaining human oversight as these systems become more advanced.
Gemalli,
You are beginning to get the major issues we face with AI. I agree with everything Garison has shared with you. And be careful about talking about AI replacing the professors when you still have grades to be given….;)
Dr P