The rise of AI: threat or opportunity for the humanities?
In recent years, the dizzying advance of artificial intelligence (AI) has unleashed an intense debate about the future of the humanities. Tools such as machine-learning systems capable of writing texts, generating works of art, and curating information have led us to ask whether traditional disciplines—philosophy, ethics, literature, history, art—will be able to survive in an increasingly automated world. It is not the first time that humanistic culture has faced a technological challenge: from the invention of the printing press to the digital revolution, each leap has sparked similar fears. However, the scale and speed of today’s AI pose unprecedented challenges. On the one hand, there is a risk that society will end up devaluing the humanistic contribution in favor of a technocentric vision. On the other hand, these same disciplines may prove more essential than ever to bring human meaning to an era dominated by data and algorithms. The question is profound: what place will the humanities occupy in the age of AI?
The concern is not merely theoretical. In 2024, for example, 6,500 artists signed a manifesto denouncing the use of unauthorized creative works to train generative AI, calling it “a great and unfair threat” to the livelihood of creators. The episode reflects a widespread fear: that automation will displace writers, painters, musicians, and historians, reducing their work to mere inputs for algorithms. Likewise, educators and academics wonder whether critical thinking and deep reflection—cultivated by the humanities—may atrophy in a generation accustomed to getting instant answers from machines. What happens to human creativity and empathy when we outsource the production of ideas to artificial systems? In this article, we explore, from a sociological and psychological perspective, the challenges that AI poses to the humanities, and argue why they remain irreplaceable in the digital age.
AI Challenges for Humanistic Culture
The impact of AI on society extends beyond the technical; it touches cultural fibers and fundamental values. From a sociological perspective, one of the greatest challenges is the possible cultural homogenization and loss of depth in the collective narrative. AI feeds on huge volumes of data and information, optimizing for patterns and maximizing efficiency.
However, that same accelerated logic of information can clash with the reflective pace that the humanities require. South Korean philosopher Byung-Chul Han warns that the omnipresence of digital technology “atrophies people’s contemplative capacity.” In a world flooded with instant stimuli, the space for contemplation and critical interpretation shrinks. The humanities, by contrast, thrive on the pause, on the detailed analysis of the human condition. If everything becomes a flow of information without context, we risk losing our sense of reality and of our own history. This phenomenon has been called a “crisis of narration”: it is not that we stop telling stories, but that narratives lose their power to create cohesion and identity. Humanistic narratives (in literature, history, philosophy) serve to “order collective existence” and create shared meaning, something difficult to achieve if we delegate the production of meaning entirely to algorithms whose main purpose is to keep us constantly engaged.
Another sociocultural challenge is how AI influences public opinion and democracy. Historian Yuval Noah Harari highlights that AI already plays a crucial role in the “creation and dissemination of ideas” on digital platforms. On social networks such as TikTok or Facebook, algorithmic systems determine which messages receive massive attention, thus shaping the social conversation. This means that more and more of our culture and public discourse are mediated by the decisions of artificial intelligences. Harari warns that the line between human and non-human voices is blurring, degrading the quality of democratic debate—“What happens to human conversation when the strongest voices in it are not human?” he wonders. Here we see a direct clash with a pillar of the humanities: critical dialogue and independent thinking. If AI largely determines what we read, see, or even believe, the humanistic mission of questioning, contextualizing, and criticizing becomes more difficult, yet at the same time more vital as a safeguard against manipulation and misinformation.
From the perspective of psychology and the cognitive sciences, questions arise about the effect of automation on our minds. A central point is the possible impoverishment of exclusively human skills such as original creativity, deep empathy, and the construction of a sense of self. Byung-Chul Han argues that “artificial intelligence is truly incapable of thinking” because it lacks the affective dimension that characterizes human thought. According to Han, the act of thinking is not purely logical; it involves an emotional response, a shudder, “goosebumps” before the unknown. Machines can process data, but they don’t feel. This absence of genuine emotion limits the creativity of AI to a mere recombination of existing patterns. Likewise, technology analyst Evgeny Morozov argues that no machine can possess a historical and experiential sense of past, present, and future, nor experience the burden of nostalgia or the wounds of experience. Lacking that living, emotional memory, AI is “trapped in formal logic.” In other words, machines can know, but not truly understand. Human imagination, on the other hand, draws on our lived experiences, on our awareness of time and mortality, to create something truly new. An algorithm can write a poem imitating previous styles, but it can hardly spark a new literary movement charged with existential meaning. The humanities nurture that connection between reason and emotion, between memory and innovation, which is the source of authentic creativity.
It is also worth asking how AI affects our empathy and human relationships. Numerous social psychologists point out that screen-mediated communication (chats, social networks, virtual assistants) can weaken the ability to empathize by reducing face-to-face interactions and direct emotional signals. Psychologist Sherry Turkle, for example, notes that face-to-face conversation is irreplaceable for learning to put yourself in another’s place—“it is where we develop the capacity for empathy,” she says. If in the future we depend on AI for elderly care, automated therapy, or even simulated friendship (as some emotional-chatbot startups suggest), we could see our collective emotional intelligence erode. The humanities—through the literature that lets us live other lives, the ethics that forces us to consider our neighbor, the art that moves us—serve as training in empathy and a reminder of our common humanity. A hypertechnological world without that humanistic counterweight risks becoming colder and more instrumental, a world where people are seen as numbers or “users” instead of beings with intrinsic dignity.
In summary, AI poses formidable challenges: it tends to accelerate the flow of information at the expense of reflection, threatens to replace or commodify human creativity, reconfigures our public sphere, and could affect our basic psychological abilities. However, these challenges are precisely why the humanities are not only destined to survive, but to thrive. Next, we examine why these disciplines should continue to play a central role in the digital age.
The Sense of the Human: Sociological Contributions in the Digital Age
Sociology and social philosophy offer us tools to understand what is at stake in the confrontation between AI and the humanities. Beyond the immediate concern for jobs or skills, there is an essential question: what defines the human in times of “smart” machines? The humanities have always been, deep down, an exploration of the sense of the human—our stories, values, expressions, and bonds. Faced with the emergence of sophisticated artificial agents, this exploration becomes more critical than ever for affirming our collective identity.
The philosopher Hannah Arendt, already in the 1950s, reflected deeply on the impact of automation on the human condition. Arendt observed with remarkable foresight that the “advent of automation” would empty the factories and free humanity from the burden of routine work. But she warned that this material liberation could lead to a crisis of meaning: what would humans do if machines work for us? Arendt feared that, in a “workers’ society” that had glorified work above all other activities, removing work would leave a spiritual void. Her implicit response was to turn toward those “highest and most significant activities” that give value to life: political action, artistic creation, philosophical contemplation. That is, the humanities. Today, when AI promises to automate not only physical work but also intellectual tasks, Arendt’s question resonates with renewed force. Freed from mechanical labor by machines, will we dedicate ourselves to thinking, creating, and caring for our common world? Only if we keep the humanities alive will we have the compass so that that liberation…