News
【Fellow's Contribution】Amid the Turmoil over Predictions for the Future of Human and Artificial Intelligence
2026-02-04

Piero Formica FCAcad, Innovation Value Institute, Maynooth University, Ireland

 

Introduction

We invite the reader to enter the Mind Gallery, a conceptual space for personal growth, creativity, and mental well-being amid the rise of Artificial Intelligence (AI). The Gallery illuminates the value of imagination detached from temporal circumstances that are accidental, provisional and inconsistent with each other.

 

With the advent of Artificial Intelligence, human life and the cultural world are undergoing strong fluctuations and uncertainties. On the one hand, the machine adopts the characteristics of the human body (automata), while the human body takes on the characteristics of the machine (cyborgs, bionics, and computer prostheses). On the other, the nature of human work can retreat into subservience to intelligent machines or evolve to satisfy human beings' need to realise themselves, fully developing their knowledge, skills, and potential. New job opportunities arising as existing work disappears can either reduce or increase inequalities.

 

In this text, we present five questions to inspire readers and guide them through education, work, and decision-making, the three stages of a transdisciplinary intellectual journey. Today, this journey can be accomplished by participating in ideation laboratories operating under various names at several educational institutions, including the institute where I work: the Innovation Value Institute at Maynooth University in Ireland. In the laboratories, the learning process takes precedence over the teaching process, cultivating abstruse questions that reveal unusual paths. Teaching focuses on knowledge maps, placing the student in a position to say, 'I know.' Learning, by contrast, prepares the mind to understand ignorance. Learners exploring ignorance – "agnotology" is the term coined by the historian of science Robert N. Proctor – take pleasure in not finding what they were looking for, and they are not afraid to confront the uncertainty that comes from the 'unknown unknowns'. That is how facts classified as immutable, fixed once and for all, are challenged and proven wrong.



 

The background of the Gallery

In the mid-1950s, three scholars in computer science, economics, and cognitive psychology created Logic Theorist, the first AI program. Allen Newell (1927–1992) and John Clifford Shaw (1922–1991) worked at the RAND Corporation, a think tank that helps improve policy and decision-making through research and analysis. Herbert A. Simon (1916–2001), winner of the 1978 Nobel Prize in Economics, was an American political scientist and economist. Logic Theorist had a profound influence on the development of Artificial Intelligence, demonstrating that machines could be enabled to manipulate abstract symbols and logical expressions to solve problems—tasks previously thought to require human intelligence. Approximately 70 years later, in June 2025, the Santa Fe Institute highlighted how deeply Artificial Intelligence has penetrated our lives: AI writes computer code, moves robots on command, drives vehicles, and assists with hiring, housing, and criminal cases.

 

Five questions in the form of disquieting muses are displayed in the Gallery

The first question: Will Artificial Intelligence really become intelligent?

As long as AI remains mere matter, it will lack the capacity for abstraction: the thought needed to understand metaphors and to apply knowledge from one context to another. It will focus on individual words rather than general concepts. As Aristotle (384 BC–322 BC) might say, an AI robot exploring another planet would provide information, yet it would be unable to define the planet's structure, lacking the sensory perception that would allow it to think in an Aristotelian way, characterised by abstraction and free choice. In short, in the absence of the subjective experience, consciousness, and "soul" that characterise human intelligence, AI will simulate intelligence rather than be a truly sentient intelligence. To become truly intelligent, AI would need to be equipped with matter that mimics biological systems. Research is moving in this direction.

 

The second question: What mental maps do we use to interact with Artificial Intelligence? 

The maps designed by our (super) specialisation? Perfect maps containing every possible detail, since we turned to AI with the intention of overlooking nothing? Or do we reject maps that are as detailed as they are circumscribed, knowing that they contain "niches of imbecility" (as the historian Yuval Noah Harari put it)? Let us examine one case of acceptance and one of rejection.


Acceptance: Our mission is to design a mental map that best orients us towards the most effective use of human and natural resources, aimed at maximising profit at the micro level and GDP at the macro level. Increasingly augmented AI will allow us to complete the mission with a quality and speed previously unattainable.


Rejection: Augmenting AI leads us to augment Human Intelligence. But what does "augment" mean here? A new room showcasing previously unpublished thoughts is opening in our Mind Gallery.

 

The third question: What will become of us and learning machines? 

As things stand, the answer is: "everything and more." Predictions about both Human Intelligence (HI) and Artificial Intelligence (AI) are numerous and varied. As Simon Rogerson of De Montfort University has highlighted, we are witnessing such an obsession with AI that the physicist Stephen Hawking (1942–2018), speaking at the launch of the Leverhulme Centre for the Future of Intelligence, an AI think tank based in Cambridge, declared:


Success in creating AI could be the biggest event in the history of our civilisation, but it could also be the last – unless we learn how to avoid the risks. Alongside the benefits, AI will also pose dangers, such as powerful autonomous weapons or new ways for the few to oppress the many.


Among the most insidious risks is confusing companionship with friendship. The former involves spending time with AI without mutual emotional demands. The latter involves emotional and affective bonds, with each investing in the other's well-being.


Human behaviour is influenced by egocentrism, which, it is argued, hinders our reception, interpretation, and use of information. Learning machines, being egoless, encounter no resistance to continuously absorbing enormous amounts of information. Consequently, they can outperform us at the cognitive tasks of analysing, interpreting, handling, digesting, and synthesising data to gain understanding for solving problems, making decisions, and interacting with the world.

 

The fourth question: What will become of Nature?

The AI supply chain absorbs enormous quantities of electricity, water, rare earths, and other natural resources. Furthermore, by increasing the efficiency of industrial processes, it is a driver of economic growth and of the resulting escalation in resource use, a phenomenon known as the Jevons paradox, after the English economist William Stanley Jevons (1835–1882). As anthropogenic stress soars, there is a need to improve the efficiency of energy networks and weather models, and to accelerate scientific research on biodiversity loss and anthropogenic disruption. Measurement standards must be established to assess the effectiveness of these interventions.
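The rebound mechanism behind the Jevons paradox can be sketched numerically: when efficiency gains lower the effective price of a resource service and demand is sufficiently elastic, total resource consumption rises rather than falls. The following sketch uses a constant-elasticity demand model and purely hypothetical numbers, chosen only to illustrate the logic, not drawn from any data in this text.

```python
# Illustrative sketch of the Jevons paradox (rebound effect).
# All figures are hypothetical assumptions chosen for illustration.

def resource_use(efficiency: float, base_demand: float, elasticity: float) -> float:
    """Total resource consumed under a constant-elasticity demand model.

    efficiency:  units of service delivered per unit of resource.
    base_demand: service demanded when efficiency = 1.
    elasticity:  price elasticity of demand (positive magnitude).
    """
    # The effective price of one unit of service falls as 1 / efficiency,
    # so demanded service scales as efficiency ** elasticity.
    service_demanded = base_demand * efficiency ** elasticity
    return service_demanded / efficiency  # resource actually consumed

# Doubling efficiency with inelastic demand (elasticity 0.4): use falls.
print(resource_use(2.0, 100.0, 0.4))  # about 66.0 units of resource
# Doubling efficiency with elastic demand (elasticity 1.5): use rises.
print(resource_use(2.0, 100.0, 1.5))  # about 141.4 units (the paradox)
```

The same efficiency doubling thus cuts consumption by a third in one case and raises it by more than 40% in the other; which outcome occurs depends entirely on how strongly demand responds to the cheaper service.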

 

The fifth question: What about the gap between the Global North and the Global South in AI adoption?

The Global South is not strictly a geographical term: it includes developing countries, less industrialised countries, and countries with more fragile economies. China is a global economic power committed to responsible development, while Japan, Australia, and New Zealand are part of the Global North. China, India, and Japan are the leading homes of Confucianism, Taoism, and Buddhism, which profoundly influence Asian culture and ethics. Southern Europe comprises countries bordering the Mediterranean Sea that experience a digital divide compared to Northern Europe, with investments in Research and Development (R&D) below the European Union average.


In January 2026, the OECD published a ranking on the use of generative AI, which autonomously creates new content from existing data. Broadly speaking, Northern countries lead the OECD. Western and Northern values predominate. The South is burdened by economic and social disparities, fewer available resources, and AI's lack of alignment with its cultural philosophies. Let us examine the details.



 

Resources

The North possesses superior digital infrastructure (high-speed internet, electricity), has significant capital investments funding AI startups, benefits from technologies often designed for Western and Northern markets, and can rely on its high concentration of technological talent.

 

Skills Gaps

The lack of specialised skills in the Global South hinders the adoption and development of AI.

 

Cultural Colonialism

The South provides data used to improve AI projects developed in the North. AI models trained on English-language data and incorporating cultural principles and social norms from rich countries lack the relevance and accuracy to be accepted by Southern communities.

 

Cultural Aversion

AI focused on Western values raises concerns in the South about the possible erosion of cultural identity and jobs.

 

Governance

The rules governing AI derive from Northern policies and guidelines aimed at advancing the North's commercial interests. It follows that the South has shown signs of distrust towards AI.

 

Let us now turn our attention to perspectives suggesting greater openness in the South to AI as a positive and transformative force.

 

Collectivist Mentality

In the North, AI ethics emphasises individual rights, and fears that AI could harm personal autonomy and privacy feed a techno-scepticism that slows the pace of adoption. In contrast, AI is more welcome in the more collectivist communities of the South, which assume that AI can improve collective spheres such as healthcare and education, thereby benefiting social harmony and collective well-being.

 

Trust

Compared with the North, many countries in the South express less trust in public institutions. This posture could translate into greater acceptance of AI if it is seen as a tool to counter delays, cost increases, and productivity declines, bottlenecks for which local regulatory authorities are at least partly responsible.

 

Conclusion: Meeting the Idea Builders in the Mind Gallery

Estimates predict that AI-enhanced machines will dramatically reduce the demand for human labour within ten years, although data scientists and data analysts will be in particular demand. However, what about the Idea Builders (Ideators or Creators, so to speak) who cultivate the garden of human activities? They follow in the wake of digital humanism, which elevates our mental states through poems and novels that push us to read beyond algorithms shaped by the prejudices inherent in human data. While the simple replication of what is already being done does not require Creators, the opening of opportunities for jobs carried out by humans depends on them. Idea Builders challenge prevailing conceptions by designing transformations that seem impossible at first glance. Their work is jeopardised not by AI but by the Non-Immediate Application Syndrome (SNIA) that afflicts their ingenious projects, which, at best, receive hesitant approval, expressed with a 'yes, but'. Countless cases of SNIA have marked the lives of Ideators and the history of innovation. Richard Holmes, in his essay The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science (New York, NY: HarperCollins Publishers, 2008), underlines that 'Babbage's "computer" had no immediate application that officialdom could see or even imagine, though Babbage claimed, correctly, that it would transform the calculations for logarithms, astronomical tables, engineering construction models, map-making and marine data'.

 

Some maintain that the cost of creating ideas is approaching zero thanks to AI, a trend that could lead to an explosion of creativity. In addition to replicating and recombining what is already known and performing limited tasks more efficiently and effectively, AI would improve human inventiveness. However, the path AI has taken so far does not equip it to generate genuinely original thoughts, as it lacks consciousness, self-awareness, and the intuitive knowledge acquired through instinct. It works only with large, correlated data sets and existing models, and it does not know what is true. By contrast, as the philosopher and linguist Noam Chomsky ("The False Promise of ChatGPT", New York Times, March 8, 2023) argues, the human mind 'seeks not to infer brute correlations among data points but to create explanations'. There is no substitute for the only true intelligence we know, our own, which makes conjectures and creates entirely new ideas that keep our brains awake.


About Author

Piero Formica is a Fellow of CORE Academy (Division of Social Sciences). He serves as Senior Research Fellow & Thought Leader at the Innovation Value Institute, Maynooth University, and holds the title of Distinguished International Collaborator (awarded December 2025). He is also Invited Professor of Knowledge Economics in the MOIM—Master in Open Innovation Management at the University of Padua, and a Fellow of the Royal Society of Arts. His work bridges knowledge economics, innovation and entrepreneurship, with a distinctive emphasis on the philosophical lenses through which societies navigate uncertainty and technological change.


Fellow's Profile Page: https://coreacad.org/Member.aspx?ProId=192