What might great philosophers have said about AI… according to AI?

Obviously, Plato never got to use ChatGPT, and we might assume he could have had nothing to say about it, since AI did not exist in his time. However, philosophical inquiry often transcends the specific context of its time, offering insights that resonate across the ages. Philosophy as a discipline has ways of conceptualizing almost everything we could ever think of. So, even though the great philosophers never heard of AI, they would probably have had plenty to say about it!

What would great philosophers have said about AI?

I asked ChatGPT (GPT-4 Turbo) to express the opinions that great philosophers from our past might have held about AI. To push the exercise further, I also asked for their opinions on Artificial General Intelligence (AGI), which is likely to be the next evolutionary step of AI, far more powerful than what we are experiencing in 2024.
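
For readers who would like to reproduce the exercise, here is a minimal sketch of how such prompts could be sent programmatically. It assumes the OpenAI Python client and the "gpt-4-turbo" model identifier; the prompt wording, the helper function, and the list of names are illustrative, not the exact ones used for this article.

    # Minimal sketch, assuming the OpenAI Python client (pip install openai)
    # and an OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    PHILOSOPHERS = ["Confucius", "Plato", "Aristotle", "Simone de Beauvoir"]

    def imagined_opinion(philosopher: str, topic: str = "generative AI and AGI") -> str:
        """Ask the model to speculate, in character, about a topic the philosopher never knew."""
        response = client.chat.completions.create(
            model="gpt-4-turbo",  # assumed model name; use the GPT-4 Turbo variant available to you
            messages=[
                {"role": "system",
                 "content": f"You are {philosopher}. Stay faithful to your known ideas and style."},
                {"role": "user",
                 "content": f"In a short paragraph, what might you have said about {topic}?"},
            ],
            temperature=0.7,
        )
        return response.choices[0].message.content

    for name in PHILOSOPHERS:
        print(f"--- {name} ---")
        print(imagined_opinion(name))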

Although it is impossible to know whether they would actually agree with these statements, it is an interesting exercise and a good use case for showing the ability of Gen AI to contextualize and assemble different concepts and pieces of knowledge. There have been hundreds of great thinkers across eras and civilizations, and it was quite a challenge to pick only a handful, so I went for the most widely known philosophers. My good friend Boris Sirbey even tried to make past thinkers debate one another, and it is an absolute must-read!

CONFUCIUS (551–479 BC)

Confucius focused on personal and governmental morality, emphasizing family loyalty, ancestor veneration, and respect for elders, aiming to achieve societal harmony. About Gen AI, he might have viewed it as a tool that should be aligned with the principles of harmony, ethical conduct, and social order, stressing the importance of using such technology to enhance moral education and societal well-being. He would likely argue that AGI should embody the virtues of benevolence, righteousness, and propriety, ensuring that AI systems support societal harmony and ethical relationships among people.

  • “I would worry about AI disrupting social order, undermining respect for tradition, and failing to adhere to ethical and moral principles in human interactions.”

PLATO (428/427–348/347 BC)

Plato believed in a realm of immutable ideal forms and argued that knowledge of these forms is the basis for all true understanding, with the physical world being a shadow of the true reality. Plato might see Gen AI as a reflection of the shadows in the cave, intriguing yet far from the true and ideal forms of knowledge and reality, urging a cautious approach to its interpretation and use. He would suggest that AGI development focuses on aspiring towards the ideal, using technology to elevate human understanding and wisdom, rather than being misled by the mere shadows of reality that AI might project.

  • “My concerns revolve around the misuse of AI, potentially leading people away from the truth and towards a world of illusions, much like the shadows on the wall of my allegorical cave.”

ARISTOTLE (384–322 BC)

Aristotle emphasized empirical observation and logic, categorizing the natural world into a hierarchy and asserting that knowledge comes from direct experience. He might appreciate the capacity of Gen AI to systematize knowledge and its potential to contribute to empirical sciences, emphasizing the importance of grounding AI’s reasoning in observable reality. He would likely focus on the importance of AGI being practical and beneficial for society, aligning with his vision of purpose-driven life, and stressing that AGI should serve to enhance human flourishing.

  • “I am wary of AI’s impact on human virtue and the potential for technology to encourage extremes of behavior rather than a balanced, ethical life.”

THOMAS AQUINAS (1225–1274)

Aquinas sought to reconcile Christianity with Aristotelian philosophy, arguing for the existence of God through rational means and emphasizing the harmony of faith and reason. Aquinas might view Gen AI as a creation that reflects God’s order and rationality, arguing for its potential to uncover divine truths and enhance human understanding within the framework of faith. He would probably advocate for the development of AGI in a way that harmonizes with divine law and human ethics, ensuring that technology serves the greater good and contributes to the moral and spiritual development of humanity.

  • “My main concern is ensuring that AI does not lead humans away from God or diminish the value of the soul, faith, and divine revelation.”

RENÉ DESCARTES (1596–1650)

Known for his statement “I think, therefore I am”, Descartes founded modern rationalism, emphasizing the role of doubt and the mind in acquiring knowledge. Descartes might be skeptical of AI’s ability to truly think and be conscious, questioning whether artificial systems can possess genuine understanding or merely simulate it. He would likely see the development of AGI as a challenge to define what constitutes true knowledge and consciousness, emphasizing the need to distinguish between genuine cognition and artificial simulation.

  • “I am concerned about AI’s lack of true understanding and consciousness, questioning whether such systems could ever truly replicate human thought and moral reasoning.”

BARUCH SPINOZA (1632–1677)

Spinoza proposed a monistic view of the universe, equating God with nature, and argued for the unity of everything, denying the existence of moral absolutes and emphasizing rationality and freedom. Spinoza might regard Gen AI as a natural extension of the human intellect, a tool for expanding our understanding of the universe, emphasizing the importance of aligning AI with rationality and ethical conduct. He would likely argue that AGI should be developed with an understanding of the interconnectedness of all things, promoting freedom and enhancing human capacity for rational thought and action.

  • “I worry about the misuse of AI in ways that go against nature and rationality, potentially leading to unethical outcomes or enhancing human bondage rather than freedom.”

JOHN LOCKE (1632–1704)

Locke introduced the concept of the “tabula rasa” or blank slate, arguing that knowledge comes from experience and that humans have natural rights. Locke might be intrigued by the potential of Gen AI to learn and adapt, emphasizing the importance of the environment and experiences in shaping AI’s “knowledge” and abilities. He would likely advocate for the careful cultivation of AGI’s learning processes, ensuring that AI systems are exposed to positive influences and experiences that promote the well-being and rights of individuals.

  • “My primary concerns include privacy issues, the potential for AI to infringe on individual rights, and the importance of consent in the use of personal data.”

JEAN-JACQUES ROUSSEAU (1712–1778)

Rousseau argued that civilization corrupts natural goodness and freedom, advocating for a return to a more natural state and emphasizing the social contract as the basis of society. Rousseau might view Gen AI with caution, concerned about its potential to further detach humanity from its natural state and questioning the impact of technology on freedom and inequality. He would likely stress the need for AGI to be developed in a way that respects natural human rights and freedoms, advocating for technology that promotes equality and enhances societal bonds rather than eroding them.

  • “I am particularly worried about AI exacerbating social inequalities, undermining community bonds, and leading to greater moral and political corruption.”

MARY WOLLSTONECRAFT (1759–1797)

Wollstonecraft is considered one of the early feminists, advocating for women’s rights and education, and critiquing the societal norms that limited women’s independence. She might view generative AI as a means to educate and liberate, potentially offering women and other marginalized groups access to knowledge and opportunities. Wollstonecraft would likely see AGI as an opportunity to advance equality, emphasizing its potential to provide educational resources and challenge oppressive structures.

  • “My concerns revolve around ensuring that AI does not perpetuate educational and social inequalities, advocating for equitable access to technology.”

KARL MARX (1818–1883)

Marx focused on the role of class struggle in societal evolution and advocated for a classless society, critiquing the capitalist system and its inherent inequalities. Marx might analyze Gen AI in the context of capitalist production, critiquing its potential to exacerbate inequality and alienation, while also recognizing its revolutionary potential to transform the means of production. He would likely advocate for the development of AGI in a way that democratizes access to technology, ensuring that it serves to empower the working class and contributes to the abolition of class distinctions.

  • “I fear the potential for AI to increase capitalist exploitation, widen the gap between the bourgeoisie and the proletariat, and further alienate workers from the means of production.”

FRIEDRICH NIETZSCHE (1844–1900)

Nietzsche challenged traditional moral values, proclaimed the “death of God,” and introduced the concept of the will to power as a fundamental drive. Nietzsche might see Gen AI as a manifestation of the will to power, challenging conventional human values and potentially leading to a reevaluation of what it means to be human. He would likely advocate for the development of AGI that transcends traditional moral and societal limitations, encouraging a redefinition of values and the emergence of the “übermensch” or “overman” who would shape a new era.

  • “I am concerned about the potential for AI to be used in ways that stifle individual will and creativity, fearing a society where technology leads to conformity rather than the emergence of the übermensch.”

VIRGINIA WOOLF (1882–1941)

Woolf’s writings emphasize the subjective experience, exploring the inner lives of her characters and critiquing the social structures that restrict women’s freedoms and creativity. Woolf might see generative AI as a double-edged sword, capable of both offering new forms of expression and creativity and reinforcing societal norms that limit individuality and authenticity. She would advocate for AGI that fosters creativity and individual expression, ensuring that technology serves as a means of liberation rather than confinement.

  • “My concerns include the potential for AI to stifle creativity and individuality, stressing the importance of preserving human emotions and experiences in the face of technological advancement.”

MARTIN HEIDEGGER (1889–1976)

Heidegger focused on the nature of being, questioning the essence of technology and its impact on human existence and thinking. He might view generative AI with caution, reflecting on how it shapes our understanding of being and the world, and potentially leading us away from authentic existence. He would likely be skeptical of AGI, questioning whether it can truly enhance human life or if it merely represents another step towards the domination of technology over humanity.

  • “I want to draw attention to the existential implications of AI, pondering whether it distances humanity from a more profound engagement with being.”

AYN RAND (1905–1982)

Rand promoted Objectivism, advocating for rational self-interest, individualism, and laissez-faire capitalism as the ideal social system. Rand might view generative AI as a pinnacle of human innovation and creativity, embodying the potential of the individual mind. She would likely support the development of AGI as a means to further human progress and prosperity, emphasizing its alignment with Objectivist principles.

  • “I wonder about the potential for governmental or collective control over AI, arguing for the protection of individual rights and freedoms in its development and use.”

HANNAH ARENDT (1906–1975)

Arendt explored the nature of power, authority, and the human condition, focusing on the importance of direct democracy and the dangers of totalitarianism. She might be intrigued by the potential of generative AI to influence political discourse and public space, analyzing its capacity to either support or undermine democratic engagement. Arendt would likely emphasize the need for AGI to enhance public discourse and political participation, ensuring it serves as a tool for empowering citizens rather than controlling them.

  • “I worry that AI could be used by authoritarian regimes to surveil and manipulate populations, and I stress the importance of safeguarding freedoms and democratic values in the age of AI.”

SIMONE DE BEAUVOIR (1908–1986)

De Beauvoir laid the groundwork for modern feminism, arguing that one is not born but becomes a woman, critiquing the social constructs that define and limit women’s roles and freedoms. She might view generative AI as a tool that can either perpetuate gender stereotypes and inequalities or challenge and dismantle them, depending on how it is programmed and used. She would advocate for AGI development that incorporates feminist principles, ensuring that AI systems do not reinforce existing gender biases and instead work towards gender equity.

  • “My main concerns involve the potential for AI to solidify traditional gender roles and biases, and the necessity of including diverse perspectives in AI development to prevent such outcomes.”

MICHEL FOUCAULT (1926–1984)

Foucault examined how power dynamics shape knowledge, society, and individual identities, focusing on institutions like prisons, hospitals, and schools. He might analyze how generative AI could be used to monitor, categorize, and control individuals, reflecting on its implications for power relations and personal freedom. He would be interested in how AGI could reshape societal structures and the distribution of power, potentially advocating for its use in deconstructing traditional hierarchies.

  • “I can’t ignore the potential for AI to reinforce societal controls and surveillance, emphasizing the need for critical reflection on how technology influences power dynamics.”

More philosophical questions raised by AI

The advent of AI not only revolutionizes our technological capabilities but also propels us into a profound philosophical inquiry, challenging our conceptions of consciousness, identity, and free will. These philosophical questions, deeply rooted in millennia of thought, are now reinvigorated and transformed in the context of AI, offering both a mirror to reflect on human nature and a lens through which to envision our future.

Consciousness and sentience in AI: A philosophical conundrum

The exploration of consciousness and sentience within artificial intelligence thrusts us into one of the most profound philosophical debates: what constitutes consciousness, and can a non-biological entity possess it? Philosophers like Daniel Dennett, with his functionalist view, argue that consciousness can be understood in terms of the functions it performs, suggesting that if AI can replicate these functions, it could be considered conscious. David Chalmers, on the other hand, presents the “hard problem” of consciousness, focusing on subjective experience, which he argues might never be replicable in AI due to its non-physical nature.

This debate extends beyond academic discourse, touching on the ethical implications of AI development. If an AI were to possess consciousness, it would necessitate considerations of rights, ethical treatment, and potentially even personhood for AI entities. The philosophical inquiry into AI consciousness challenges us to define the ethical boundaries of our interactions with technology, urging a reevaluation of what it means to be sentient and the moral obligations that arise from this status. As we advance, the lines between biological and artificial consciousness may blur, prompting a redefinition of consciousness itself in a way that accommodates the evolving landscape of intelligent beings.

Identity and self in the age of digital personas

The digital age, characterized by the proliferation of AI and digital personas, presents a novel context for examining the concepts of identity and self. Philosophers like Charles Taylor, who emphasizes the narrative construction of identity, and Derek Parfit, known for his exploration of psychological continuity, provide frameworks for understanding how identity is formed and perceived. These philosophical perspectives gain new relevance as digital technologies enable the creation of complex online identities and interactions with AI entities that mimic human behaviors and emotions.

The emergence of digital personas challenges our traditional notions of identity, suggesting that it can be fragmented, multifaceted, and distributed across digital platforms. This raises questions about the authenticity of our online selves, the impact of digital interactions on our psychological well-being, and the ethical considerations of privacy and data ownership in shaping our digital identities. As we navigate this new terrain, the philosophical insights into identity and self offer guidance on maintaining coherence and authenticity in a world where the boundaries between the human and the technological are increasingly intertwined.

Free will and determinism: Navigating the predictive power of AI

The philosophical tension between free will and determinism is magnified in the age of AI, particularly as predictive algorithms become capable of influencing human decision-making. The debate encompasses perspectives like John Searle’s criticism of computational reductions of mental states, arguing for the irreducibility of consciousness and the autonomy of human will. Compatibilists, such as Daniel Dennett, offer a counterpoint by suggesting that free will can coexist with a deterministic understanding of the universe, proposing that freedom lies in the complexity of human decision-making processes, which AI might augment rather than diminish.

AI’s predictive capabilities challenge our perceptions of autonomy and agency, raising ethical questions about the extent to which our choices are truly our own in the presence of algorithms designed to predict and influence those choices. This debate urges a careful consideration of the balance between leveraging AI for societal benefits and safeguarding individual autonomy. It calls for a nuanced approach to AI development and governance that respects human agency while acknowledging the potential of AI to enhance human decision-making capabilities.

Conclusion

This philosophical journey is not merely academic; it has practical implications for how we design, implement, and interact with AI technologies. The questions raised by AI serve not only as challenges to be addressed but also as opportunities for deepening our understanding of the human condition. Exploring consciousness, identity and free will in the context of AI does not offer easy answers, but it does provide a valuable framework for critical thinking and ethical reflection.

It even invites us to reflect on what it means to be human, the values we hold dear, and the kind of future we wish to create. As we continue to integrate AI into every aspect of our lives, the philosophical insights into these fundamental aspects of human existence will remain crucial in guiding our choices and ensuring that technology enhances, rather than diminishes, the human condition.

In exploring the theoretical viewpoints of distinguished philosophers on AI, we venture into an intellectual exercise of considerable depth and imagination. It is essential to bear in mind that these reflections are speculative, rooted in the extrapolation of each philosopher’s core ideas and principles to a modern context they could not have directly contemplated, and they should not be taken as definitive statements of what these philosophers would actually assert about Gen AI and AGI.

It is rather an invitation to think alongside the great minds of the past, engaging with their ideas in the context of today’s technological landscape, and perhaps, in doing so, uncovering new pathways for ethical reflection and technological advancement in the pursuit of wisdom and well-being for society.

[Article created on March 1st, 2024, by Jeremy Lamri with the support of OpenAI’s GPT-4 model for structuring, enriching and illustrating. The writing is mostly my own, as are most of the ideas in this article.]
