Wed. Feb 4th, 2026

After a month of total abstinence from writing, I’m thrilled to be back, kicking off 2026 and returning to good habits.

We live in an increasingly uncertain world. Geopolitical conflicts, Europe’s declining influence, bleak job market prospects, the perceived rising cost of living: sources of anxiety are plentiful. Add to this a growing mistrust of our leaders, whether in politics or at the head of major corporations. Faced with this uncertainty, we naturally seek answers. But where do we find them, and at what cost?

Our journey toward knowledge has gone through three major phases over recent decades. Search engines initially seemed like a miracle solution. They offered a plurality of answers, of varying relevance, at the click of a button. But we quickly understood that algorithms dictated the order in which results appeared, not always showing us what we were looking for. Cookies gradually refined targeting, offering us viewpoints we were likely to appreciate, thus creating the first information bubbles.

For about twenty years, and especially over the past decade, social media has transformed our relationship with information. Its success rests on several pillars: instant access to unlimited information through continuous scrolling, content supposedly pre-selected for us, and constant reinforcement of our own viewpoints. This truth bubble is both a risk and a source of comfort: after all, it’s reassuring to find we’re not alone in thinking this way. The risks are now well documented: erosion of social bonds, societal polarization, absence of constructive debate, resort to violence, proliferation of misinformation, and sometimes erratic content moderation. Platforms like Truth Social or X embody this ambivalence: decried as much as they are used.

AI represents a new rupture. It gives the impression of being more “intelligent,” more thoughtful: a new Oracle of Delphi. We’ve moved from multiple answers (search engines) to convergent answers (social media), arriving today at AI’s single answer. But this apparent intelligence conceals a very different reality. AI doesn’t think: it generates probabilistic responses based on the text corpus it was trained on. This means AI is incapable of truly inventing anything new and can reproduce the reasoning biases that affect human judgment.
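To make the “probabilistic responses” point concrete, here is a deliberately toy Python sketch – my own illustration, not any real model’s code. It builds a bigram “model” that picks each next word by sampling from the frequencies observed in a tiny training text:

```python
import random
from collections import defaultdict

# Toy corpus: the "training data" for our miniature model.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Record every word observed following each word.
follow = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    follow[word].append(nxt)

def generate(start: str, length: int = 6) -> str:
    """Generate text by sampling each next word from observed frequencies."""
    words = [start]
    for _ in range(length):
        candidates = follow.get(words[-1])
        if not candidates:
            break  # no continuation ever observed: the "model" simply stops
        # random.choice over the raw list is frequency-weighted sampling:
        # words that followed more often get picked more often. No reasoning.
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat the"
```

Real LLMs replace these word counts with billions of learned parameters, but the principle is the same: predict the most plausible continuation. That is also why the biases of the training corpus come along for the ride.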

The risks are multiple and concerning. The term “dark patterns,” coined by UX designer Harry Brignull in 2010, refers to user interfaces carefully designed to trick users into acting against their own interests. In a study by Di Geronimo et al. of 240 popular mobile apps, 95% employed one or more of these deceptive design practices. In the AI context, these patterns can take subtler and potentially more dangerous forms, exploiting our cognitive biases and emotional vulnerabilities – for more on cognitive biases, I always come back to Kahneman’s masterpiece Thinking, Fast and Slow.

AI progressively erodes our ability to challenge and question the results we receive. This vicious circle is particularly concerning: the less we think critically, the more we accept AI responses without questioning them, which further diminishes our capacity to think. Hallucinations are already well known and documented. Longer term, we risk seeing a conformist way of thinking take hold, shaped not by debate and the confrontation of ideas, but by algorithms and their intrinsic biases. Worse still, with the mass creation of deepfakes, distinguishing true from false becomes almost impossible, to the point where the true can appear false and vice versa. This dissolution of objective reality threatens the very foundations of our society.

Furthermore, poorly regulated AI has been shown to seriously harm users’ mental health – the American Psychological Association has warned against the use of AI chatbots for mental health support. The emerging phenomenon of “chatbot psychosis” shows that prolonged chatbot use can worsen or even trigger psychotic symptoms in vulnerable individuals. Unfortunately, cases where interaction with an AI has been cited as a primary or contributing factor in a person’s death are no longer uncommon. The replacement of humans by machines is happening today where we didn’t necessarily expect it: in social relationships. ‘Innovative’ services like friend.com, which offers entirely digital friendships, illustrate this dystopian trend. Is this the future of our relationship with machines?

To be clear: I’m not claiming that the internet, search engines, social media, and artificial intelligence are inherently harmful. On the contrary, they can be magnificent tools in the service of progress, provided certain essential conditions are met. Faced with these challenges, businesses have a crucial role to play in preserving our collective capacity for thought.

AI adoption must be accompanied by a clear charter, adequate employee training, and measured use. Overuse is the problem, not the tool itself. As the World Economic Forum emphasizes, analytical thinking, resilience, and social influence will rank among the top ten skills employers will prioritize through the end of the decade. AI must remain an assistant, never the final decision-maker. Humans must retain the central role in decision-making, using AI as a tool to aid reflection, not as a substitute for it.

It’s crucial to insist on preserving social bonds within companies and beyond. Authentic human interactions are our best defense against digital isolation and dependence on artificial systems. Companies must give their employees the keys to decipher our uncertain environment. This includes reading (such as what you’ll find on this blog!), continuous training, and transparent communication from top management. Training and education continue to play a major role. In the age of AI, technical skills must be accompanied by the ability to think critically, to question results, and to maintain ethical judgment when faced with decisions proposed by algorithms. Leaders, for their part, must acknowledge their limitations: “I cannot predict exactly where the world is going, but here are my convictions and our strategy.” This intellectual honesty creates a climate of trust and encourages critical thinking.

The digital era offers us unprecedented opportunities, but it also demands constant vigilance. We must learn to think with machines, not like them. The companies that succeed will be those that balance technological innovation with the preservation of critical thinking, and that invest in their employees while adopting AI responsibly. In this world of growing uncertainty, our best asset remains our capacity to question, to doubt, to debate. It’s this capacity for critical thought that distinguishes us from algorithms, and it must be cultivated, protected, and transmitted to future generations. The future doesn’t belong to those who blindly trust AI, but to those who can use it while preserving their intellectual autonomy and, ultimately, their humanity.
