Technically, there are two types of Artificial Intelligence (AI) that have become deeply embedded in our daily lives and work: general AI, which acts intellectually like humans, and narrow AI, which undertakes specific tasks such as navigating autonomous cars or assigning delivery tasks to robots in factories (Akaliyski, 2025). My concerns about AI usage are three-fold: amplified value conflicts, moral responsibility for its agency, and AI capitalism. These crises pile up, making human society more vulnerable than it ever was before.
First, generative AI is an amplifier of conflicting values; mass adoption of AI is thus mass distribution of value conflicts across all walks of life. Sexism, racism, homophobia, and speciesism, to name a few, are contagious thoughts that now travel in the vehicle of AI. Humans' capabilities to identify misinformation and form personal judgement rest on education, the value of which is now in doubt. Even university students today see no point in internalizing knowledge and apprehending arguments through the intellectual activities of reading, discussing, and writing. If AI can generate an entire essay, or be implanted into our heads, then what is the meaning of education after all? Without internalizing knowledge, people's opinions are shaped by what AI feeds them, and those opinions are cooked by probability models out of ingredients from an internet crowded with misinformation and alternative (manipulated) facts. Surely prompt engineering will become a necessary skill for every worker, but if everyone becomes an engineer, where are the philosophers and thinkers to meditate on the direction of humanity?
Second, when AI has the agency to control ideological, biochemical, and military weapons and makes decisions with real, detrimental consequences, who is responsible? Take the atomic bomb as an example: such weapons are destructive enough to tear down all the civilized facilities and knowledge we have been building for thousands of years. If AI is more intelligent than any individual human on earth, should we grant it privilege and agency? In my view, AI can intellectually approximate humans, but it lacks morality and thus takes no responsibility for its previous "actions". AI only "reasons" by prediction, so it is always statistically correct, with errors occurring only as statistical deviance. We can only accuse those who train the specific AI models, yet the models are actually trained on data from all internet users. The moral responsibility of AI is shared and diluted among everyone. However, such responsibility can be easily maneuvered. What is the social contract between us and AI? How can we build trust with each other in the AI era, if the fundamental contracts of our society are disguised?
Disguised contracts confront us with AI capitalism. Monopolistic tech companies have made us believe we are free of responsibility when using AI, just as industrial capitalists made us believe we were free of responsibility when selling and buying commodities to satiate our needs. When industrial capitalists proclaimed, "Let machines assemble the product so that humans can do art," what they actually did was transform well-rounded workers into unskilled ones who were laid off and shifted to menial work. How could they even do art? When tech giants advertise, "Let AI wash dishes so that humans can do art," it is just another blank check signed by capitalism. From a Marxist perspective, AI is just another means of production, one invented by exploiting data from each individual without our explicit consent.
Our hearts will be misplaced in the era of AI, even more than in the age of Max Weber, who famously concluded that humans had become "specialists without spirit, sensualists without heart". We must ask what lies behind. According to the International Energy Agency (IEA), data centers accounted for 415 terawatt-hours, or 1.5% of global electricity consumption, in 2024; meanwhile, Google's plan to build a data center on rural land in Chile, cooled with water that would otherwise be used for farming, raised enough concern over the local water supply that the project was paused (PBS News, 2024; IEA, 2025). Furthermore, many tech companies (e.g., Amazon, Google, Tencent) have stopped hiring new programmers while simply growing their market share (monopolization) by deploying AI coders.
AI has made our society more vulnerable than it ever was before. Institutions, governments, universities, and corporations show almost complete conformity in AI usage, as do individuals. Enforcement power has failed to ensure that the relevant responsibilities are undertaken, and meaningful discussion occurs only in small circles. I am deeply concerned.
Regardless of how social scientists make their predictions, crises are certain to happen in this world of VUCA (volatility, uncertainty, complexity, and ambiguity). What really matters is how we respond to them. If we are to react differently, with satisfactory alternative routes toward good-enough societies for humans, we certainly need to be prepared before crises strike.
References
International Energy Agency. (2025, April 10). Energy demand from AI. https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai
PBS News. (2024, September 17). Google to pause plans for big data center in Chile over water worries. https://www.pbs.org/newshour/world/google-to-pause-plans-for-big-data-center-in-chile-over-water-worries
Akaliyski, P. (2025, November 28). Change with AI and revision [Lecture notes]. Lingnan University.