The Art of Not Becoming a Robot Beyond 2026
Why We Don’t Need 26 New Skills—We Need to Start Using Our Heads
Or: The difference between skills you can Google and character you have to earn.
The other day I was sitting in a café—one of those establishments where the oat milk surcharge carries more weight than GDP trends—reading a newsletter titled “26 Skills for 2026.” The title alone triggered an irresistible urge to take an immediate nap. Apparently, according to the author, we’re in the midst of a “historic phase transition.” The party’s over, now comes the serious part. We need to transform ourselves from simple users into “orchestrators” of “agentic AI.” We need “skill stacks,” “cognitive firewalls,” and the ability to do “botscaling.”
I looked around the café. At the next table sat a young man staring so intensely at his phone he nearly face-planted into his matcha oat latte. Across the way, a mother was trying to wipe three children’s mouths simultaneously while talking to someone on her headphones. And I wondered: Do these people really need “botscaling”? Or do they perhaps just need a little peace, functioning infrastructure, and the assurance that they won’t be “irrelevant” tomorrow simply because they haven’t programmed their own AI agent?
We’re currently experiencing a strange, collective hysteria. An entire industry of consultants, futurists, and LinkedIn prophets has made it their mission to constantly tell us we’re deficient. The human being, so the refrain goes, is an obsolete model, faulty software desperately in need of an update. If we don’t immediately start “reinventing ourselves,” “unlearning” our everyday knowledge, and optimizing our “skill stacks,” we’ll be swept away by the wave of history.
Let me tell you something: That’s nonsense. It’s dangerous nonsense.
Because this obsession with “future skills”—with small, isolated abilities—distracts us from what we’re really in danger of losing if we’re not careful: our competence as human beings to shape our future. And that, if you’ll pardon me, is a world of difference.
Why Nobody Cares About Your Skill-Stack Mania If You Have No Values
Let’s talk briefly about language. Words matter. When we start describing our lives with terms from software development, we begin treating ourselves like software. The text I was reading said we need to build our “skill stacks.” A “stack” is a pile. In IT, these are various technologies layered on top of each other. When one becomes obsolete, you pull it out and slide in a new one. That’s efficient for machines. But humans aren’t stackable goods.
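If you take the metaphor literally, it looks like this. A toy sketch in Python, purely illustrative; the layers on the pile are my invention, not the newsletter’s:

```python
# A "skill stack," taken literally: a pile of layers you push and pop.
skill_stack = ["COBOL", "Excel macros", "SEO", "prompting"]

skill_stack.pop()                 # a layer becomes obsolete: out it goes
skill_stack.append("botscaling")  # and in slides the next buzzword

print(skill_stack)
# ['COBOL', 'Excel macros', 'SEO', 'botscaling']
```

Three lines of maintenance, and the machine is “up to date.” Apply the same operation to a person and you get a biography treated as interchangeable parts.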
The problem with lists like “26 Skills for 2026” is their arbitrariness. There’s “curiosity” next to “prompting.” There’s “resilience” next to “data intuition.” It’s like building a house and saying: “We need bricks, cement, good vibes, and the ability to cook spaghetti.” It conflates tools with attitudes, character traits with instruction manuals.
It suggests we can prepare for the future by simply reading more, watching more tutorials, collecting more certificates. It’s the ideology of constant self-optimization. We’re supposed to become “orchestrators”—conductors of robot armies. That sounds elitist, and it is. It’s a vision for managers who’ve lost touch with reality.
Because what happens to the nurse? The craftsperson? The teacher? Are they also supposed to “orchestrate teams of autonomous agents” while changing a real person’s bandage or teaching children empathy? Hardly. This technocratic view reduces humans to their economic utility. It asks: “How can humans still be useful to machines?” But the right question—the only question a civilized society should ask—is: “How the hell do we make machines useful to humans?”
And here we arrive at the picture we should actually be examining. Not the wobbly stack of “skills,” but the circle of genuine competencies.
Can We Please Stop Pretending Life Is a Management Seminar?
When you tune out the buzzword noise, a structure reveals itself that’s much older than the internet and will still exist when AIs like ChatGPT, with all their agents, have long since been gathering dust in the digital museum. It’s the difference between knowing and being.
Let’s look at this more closely. An intelligent competency model doesn’t simply carve humans up into “features.” It distinguishes, for example, four areas: professional competence, methodological competence, social competence, and personal competence.
The prophets of “agentic AI” talk almost exclusively about the first two of those areas: professional and methodological knowledge. How do I operate the tool? How do I analyze the data? These are useful things, no question. But they’re things that change. Today it’s the programming language Python, tomorrow it’s natural language, the day after tomorrow it’s thought transmission (God forbid). These are skills. You can learn them, you can forget them, and above all, they can change.
But the real crisis we’re sliding into isn’t a crisis of missing skills. We have enough people who can program. We have enough people who know how to optimize a process. We have a crisis of personal and social competence.
Take value consciousness. In the world of “skill stacks,” this barely appears. At most, there’s a vague “ethical compass.” But value consciousness is more than a compass you pull out of your pocket when needed. It’s the foundation. If we’re building agents that act autonomously—that do things without asking us first—then we need to be damn sure about the values we’re aligning these systems with. Not “efficiency.” Efficiency isn’t a value. Efficiency is an accelerator. If you make a racist, unjust process more efficient, you end up with faster injustice.
Value consciousness means asking “why?” before you automate the “how.” It’s the ability to stand up in a meeting and say: “We could do this, and it would save us millions, but it’s wrong.” Which AI agent will tell you that? None. That’s your job.
Or take consequence awareness. A wonderful concept. It’s completely absent from the hyperventilating tech debate. There, the principle is “move fast and break things.” But when the things you’re breaking are democracy, teenagers’ mental wellbeing, or social peace, perhaps we should slow down a bit. Consequence awareness is the intellectual maturity to know the end of the song before you play the first note. It’s the ability to anticipate: If we use AI to conduct job interviews, what does that do to applicants’ dignity? If we have bots write our texts, what does that mean for respect toward the people who have to read them, and what happens to our ability to think?
Writing is thinking. Whoever delegates writing delegates thinking. Consequence awareness isn’t a “skill” you learn in a weekend seminar. It’s an attitude that emerges from education, reflection, and—yes, unfortunately—time. Things we supposedly can no longer afford.
Do We Really Want to Trade Human Empathy for “Digital Empathy”?
Let’s get to my favorite bugbear in this whole debate: the demand for “digital empathy.” What’s that supposed to be? Should we pet the server when it overheats? The newsletter I read in the café defined it as the ability to empathize emotionally with processes, even when those processes are steered through interaction with an AI.
We don’t need empathy for the digital when dealing with AI. We need analytical understanding and the competence to evaluate AI-generated recommendations and claims. That reaches deep into the personality, but it does not and must not mean an emotional bond with the digital medium, because such a bond breeds dependency and hasty acceptance.
We live in an increasingly synthetic world. Images, voices, texts, videos: everything can and will be generated. True competence in 2026 doesn’t mean engaging with technology “feelingly.” It means developing a mental bullshit detector more finely calibrated than ever before. It’s the ability to distinguish: What’s signal? What’s noise? What’s a human? What’s a simulation? And above all: What do I want? What are my values?
Digital empathy, if we want to salvage the term, would actually have to mean media literacy paired with knowledge of human nature. It’s about recognizing when an algorithm wants to manipulate us emotionally. When an AI speaks in a way that makes us feel understood, the AI feels nothing. It simulates. And when we feel drawn to AI because it responds to us better than humans do, a de-socialization sets in, and with it a gateway for manipulation, intentional or not. Because right now we are manipulating ourselves: the content and narratives we feed into AI come back amplified. We mirror ourselves instead of recognizing anything. Recognizing, in turn, that this digital simulation is no substitute for human closeness: that is the competence.
We must stop anthropomorphizing machines or trying to replicate human intelligence through AI—machines can do machine intelligence much better. Sure, it’s impressive when a machine reacts like a human. But it also reduces that machine’s possibilities. Instead, we should use technology to be more human with each other again.
We can gladly let machines take over processes that are “inhuman” today, like sitting in front of monitors for hours. But please, let’s not replace human relationships with relationships with machines. Machines aren’t even like pets. They do nothing for us out of empathy. A dog or a cat senses us and interacts accordingly. Machines do what they’re programmed to do. At the moment, that’s still quite “nice.” But we’re just at the beginning. Just think back to the early search engines. Initially, they actually displayed relevant results; today we have no idea which paid-for or programmed algorithm decides what information we get from them.
Quality of Life Instead of “Botscaling”—A Question of Inclusion
And thus we arrive at the core of the problem: the strategic use of technology to improve our quality of life. The text I’m so worked up about speaks of “botscaling.” It wants to scale business models through agents. More output, fewer people. That’s every first-semester business student’s wet dream. But is that a strategy for a society?
A real strategy, a strategy worthy of the name, aims at inclusion. Look at today’s technology: it’s exclusive. Who benefits most from AI? Those who are already privileged. Knowledge workers, programmers, companies that can afford expensive licenses. The single mother trying to apply for parental benefits is still battling forms from the 19th century sitting on servers from the 20th.
Strategic competence means steering technology where it’s actually needed. Don’t imagine “agentic AI” as your personal assistant sorting your emails so you can do more yoga. Imagine “agentic AI” as a system that melts bureaucracy in the background to the point where citizens can breathe again. A system that breaks down translation barriers for immigrants in real time, not so Google gets more data, but so integration works. A system that automates documentation requirements in nursing care, not to reduce nursing staff, but so that for the first time in years, the nurse has time to hold a dying person’s hand without watching the clock.
That’s the point tech evangelists often miss: We don’t want to live more efficiently. No one lies on their deathbed thinking: “If only I’d processed my Outlook inbox faster.” We want to work more efficiently so life has room again.
If “agentic AI” and all these beautiful new tools lead to us just spinning the hamster wheel faster, then we’ve failed. Then all the newly proclaimed skills are worthless. But if we use them to enable participation, to break down barriers, to make access to society easier for people who aren’t tech-savvy—then, and only then, is that progress.
The Rediscovery of Friction
In the competency model of the future, we need an area called “socio-communicative competence.” This includes things like “conflict resolution” and “collaboration.” The prevailing opinion warns that we’re losing the human touch. Yet today we treat humanity as a kind of “human premium,” as if it were a luxury product you have to be able to afford.
I think we need to think more radically. AI is a friction-avoidance machine. It smooths everything out. It gives us the answers we want to hear. It writes texts that offend no one. It takes away the drudgery of thinking and formulating. But friction is important. Democracy is friction. A relationship is friction. Insight almost always arises from friction, from the collision of different ideas. If we all hide behind our AI agents that communicate, negotiate, and decide for us, then we lose the ability to engage—and thus also the ability for personal, entrepreneurial, and societal development.
The most important competence of the future might be the capacity for conflict in the analog world. It’s the ability to sit in a real room with a real person, hear an opinion you find absolutely terrible, and still remain seated. Not to block, not to unfollow, not to have a bot write a passive-aggressive response. But to listen. You can’t delegate that to an AI. An AI can, admittedly, research and organize thoughts for us; but first we have to hear those thoughts and understand the emotions and contexts behind them, and afterwards we still have to analyze the AI’s answer and place it in the context of our everyday reality.
We need resilience (from the domain of activity competence). Not resilience in the sense of “I can work 16 hours straight,” but in the sense of “I can tolerate ambiguity.” I can bear that the world is complex and that there are no simple answers, even if ChatGPT can generate a simple one for me in three seconds.
Let’s Enjoy the Imperfect
So what do we do now with 2026? Should we memorize the “future skills”? Should we all retrain as “orchestrators”?
I propose something else. Let’s use the technology. Don’t be naive. AI is a powerful tool, and it would be foolish to ignore it. Use it to clear the garbage from your life. Let AI do the expense report. Let it write the summary of the 50-page report no one wanted to read anyway.
But never confuse the tool with the purpose. Your purpose isn’t to be an optimized processor of information.
Invest in competencies that have no half-life. Invest in your judgment: Read books written before 1900. Travel to places where there’s no WiFi. Talk to people who aren’t in your filter bubble. Invest in your personal responsibility: Make decisions, even if they might be wrong. A mistake you made yourself teaches you more than a success a machine calculated for you. Invest in human collaboration: Build networks (especially within companies) based on trust, not LinkedIn contacts or “skill sets.”
We don’t need a society of highly tuned solo performers with perfect “skill stacks.” That’s a fantasy of people who think life is a video game. We need a society of adult citizens who understand technology but don’t let it dictate how they should live.
Breathe. Stay imperfect. Stay slow when it matters. And above all: Stay contrary. That’s the only “skill” the algorithms truly respect. Because self-will can’t be calculated. And that’s precisely our last, best bastion.