The 5 White Rabbits of AI Adoption 🐇
- Dado Van Peteghem

- 3 days ago
- 5 min read
The ‘five white rabbits’ below often show up together in discussions around AI adoption. Not because people have deeply studied AI, but because they’re trying to buy time.

Time to postpone learning, time to avoid looking naïve, time to avoid confronting change.
Five white rabbits, five arguments that mostly pop up the moment AI stops being theoretical and starts becoming… personal. The moment people are asked how much they use AI, not how much they read about it.
They sound responsible, thoughtful and noble. But in practice, these five white rabbits often reveal something else entirely: an intellectual escape from AI adoption. Ways to talk about AI without having to use it.
Let’s be very clear: all five concerns are valid arguments. They just happen to be massively overused without real depth, especially by people who don’t understand AI, don’t feel confident with it, or are afraid of what it might reveal about their own relevance.
1. 🐇 “Too much energy!” 🔋⚡
→ it’s unsustainable
Yes, AI consumes a lot of energy. Most people raising this argument, however, have absolutely no idea how much energy AI actually consumes, nor how it compares to existing (digital) infrastructure. They just heard about it somewhere in the news.
How to deal with this white rabbit:
Turn the question around: “How much energy do you think it uses?” Most people can only say ‘A lot!’
AI workloads can replace other energy-heavy processes.
Energy efficiency per task is improving fast. The trend arguably matters more than today’s snapshot.
If energy is truly your concern, ask why it suddenly became one only now.
There are many energy consumers inside organizations that are rarely questioned. Think about the sheer volume of printed books and publications (trees cut at scale), or the amount of flying for meetings; airports remain full.
The real issue is not isolating one technology, but balancing all of these choices against an acceptable footprint and their actual impact.
2. 🐇 “What about ethics?” ⚖⚡
→ it’s immoral
Ethics are obviously very important, but yet again this argument is often put on the table without any solid reasoning about how to make it concrete.
How to deal with this white rabbit:
Ask exactly which ethical concern around AI: bias? labor? transparency? accountability? something else?
Ask what the concrete action could be to navigate the concerns.
Point out that not using AI is also an ethical choice, often with real human cost.
Ethics isn’t a reason to stop experimenting; it’s a reason to design guardrails while doing so.
Most ethical failures come from humans misusing tools. Ethics shouldn’t freeze action, it should shape it.
In the end, this is a recurring question that surfaces with every major technology. Electricity can power cities or electrocute people. Social media can connect societies or undermine democracy. We didn’t ban knives because they can kill; through collective dialogue, we agreed on what responsible and acceptable use looks like.
AI deserves the same treatment. Avoiding the discussion won’t help, only raising AI literacy to the highest possible level will allow us to engage in this debate constructively, until the next transformative technology comes along.
3. 🐇 “Our data!” 💾⚡
→ it’s dangerous
This one sounds scary to business leaders, and that’s why it’s effective: “AI is going to steal all our data.”
But again: most people shouting “data!” have never opened a settings page, read a data policy, or configured access controls. They have also never heard of local LLMs.
How to deal with this white rabbit:
Show that users have a degree of control over what data is shared, model by model, setting by setting.
Explain the difference between public LLMs in the cloud, local LLMs, private workspaces, and enterprise-grade setups.
Ask why email, cloud storage, Slack, and Google Docs suddenly feel “safe” by comparison.
If data is your concern, learn what you can do with local controls.
And if you care about data sovereignty, for example as a European, you can make deliberate choices: use European AI providers like Mistral AI, choose EU-based data centers, and configure tools so data isn’t stored or reused for training. If data sovereignty truly matters to you, design for it.
4. 🐇 “We’ll become dumber!” 🧠⚡
→ it weakens us
This argument assumes intelligence equals memorization, manual effort, or doing everything the hard way.
Calculators didn’t make us worse at math. Writing didn’t destroy memory. GPS didn’t erase spatial thinking; we just changed where cognition happens.
People should not offload all their cognitive work to AI and become lazy, but that is also a personal responsibility. In many ways, AI acts as a collective mirror. What you get out is only as good as what you put in: the classic ‘garbage in, garbage out’ principle.
The concern about cognitive offloading often stems from an overly passive approach to AI. Because the system accepts even minimal input, that’s where things tend to go wrong: weak or passive input produces output that converges toward mediocrity and comfortable averageness, and that’s exactly what we should resist.

How to deal with this white rabbit:
Separate outsourcing thinking from augmenting thinking.
AI frees up cognitive bandwidth for empathy, creativity, connecting the dots.
The real risk isn’t becoming dumber. It’s refusing to evolve your definition of smart.
AI doesn’t make us weaker if used the right way; avoiding it might. I strongly believe in augmented thinking, but I would go a step further and advocate for antagonistic AI: systems that don’t serve up easy answers, but act as critical companions, challenging assumptions, asking better questions, and forcing us to think. Not tools to copy from, but partners that sharpen thinking.
5. 🐇 “We’ll lose control!” 🎮⚡
→ it threatens us
This is about power, identity, and status. Experts fear losing authority. Managers fear losing informational asymmetry. Professionals fear that what made them special… no longer will. And some go full Terminator thinking.
How to deal with this white rabbit:
Acknowledge it, don’t mock it. This fear is very human.
Control doesn’t disappear; it shifts to those who can frame problems, ask better questions (the art of the future!), and make decisions.
AI exposes brittle expertise; that’s uncomfortable but necessary.
The goal isn’t to stay in control of tools; it’s to stay in control of outcomes.
Managers don’t really fear AI itself; they fear no longer being the smartest person in the room 😉. This instinct to control in a world full of uncertainty isn’t new; it’s deeply psychological. Even without AI, we’re afraid of losing control, and in times like these, that fear only intensifies.

Think about the classic ambiguous rabbit–duck illusion. The point isn’t which one you see; it’s how fast you can switch. We need to move from seeing rabbits to seeing ducks. Not sitting ducks, but active ducks: foraging, swimming, socializing, constantly in motion, rarely idle for long.
The real pattern
AI adoption doesn’t fail because of technology. It fails because of psychology.
And the uncomfortable truth is this:
People don’t avoid AI because it’s dangerous. They avoid it because not knowing how to use it feels dangerous to their identity. And because people simply don’t like change.
The way forward isn’t to dismiss these concerns. It’s to stop hiding behind them and start engaging with AI hands-on.
White rabbits are seductive, but progress only happens once you go down the rabbit hole and form solid thinking around it. 🐇
Stay curious
If you want to book me for a keynote or check out my new book ‘Scale vs Soul’, go to www.dadovanpeteghem.com


