What I learned replacing my colleagues with AI
An accidental early warning for AI isolation
I accidentally experimented on myself.
Without realising it, almost every part of my day where I’d normally talk to colleagues was now a conversation with an AI. Thursday’s design reviews became Claude conversations. An engineering feasibility check became a ChatGPT deep dive.
It was faster: no scheduling, no context-setting. And since I was working fully remote, it felt natural, like any other Slack message with a colleague. I felt like these tools let me get bad ideas out of the way faster. Turning concepts into code, spotting holes in my thinking: it kinda felt like a superpower.
But then it got weird.
I found myself having duplicate conversations, feeding one AI’s answers into another, like I’d become part AI manager, part matchmaker, passing generated details between models like gossip.
I'd ask Claude to review an essay draft, then copy its response into ChatGPT: 'What's missing from this analysis?' Then I'd take GPT's critique back to Claude: 'Where are the holes in this?' Back and forth, back and forth, like I was moderating a debate between two experts rather than writing my thoughts.
Except they weren’t experts. They were language models performing, and I was the idiot in the middle, feeling productive when I was really just playing with very expensive puppets.
This pattern also crept into other parts of my life. I’m part of a writing group that meets weekly on Zoom to exchange drafts and spar over ideas. But I started skipping these sessions; again, the AI felt more productive. My essays stopped being writing projects and morphed into something else entirely, more debate and discussion than writing support.
I’d then step back and realise I’d spent the morning managing, not making. I had four draft essays, all abandoned after the AI convinced me they weren’t worth pursuing or belonged to some bigger topic.
But when I looked back at them, I realised the AI hadn’t actually understood what I was trying to say; I’d just let its confidence override me. I started forgetting where my core ideas were, and whether they were even still in the essay.
I’d become so hooked on validation from a machine that I stopped trusting my own ideas. Even with my emails, I’d drop in a quick “how can I make this more concise?”. Was this me leaning into convenience and being lazy, actually trying to improve, or a sign of something else?
But something felt off. I had planned a three-day trip to see a friend in London, and in the week leading up to it I wasn’t particularly happy. Nothing dark, just off. My days felt thinner: I was dropping my own ideas and following whatever I got feedback on, moving between topics too quickly and losing the depth that comes with working a problem.
Maybe I really was cutting bad ideas faster, but it’s hard to know for sure.
If I’m working on something new, how would I know, especially when I don’t know the material beforehand? With the confidence and speed of an AI, I think it’s hard to gauge. Am I heading in the right direction, or straight off a cliff?
An illusion of closeness
In London, I stayed on my friend’s couch, and we spent the days co-working on our own stuff. We both work from home, so why not do it together? I’d expected a huge contrast between our lives: him, single and in London, versus my life with a small kid in a little Arctic city of 50,000 people.
But here he was, equally heads-down, staring at black glass from his kitchen table — his work life mirroring my own life but 3,000 kilometres apart.
Something stood out to me on this trip that I hadn’t noticed before.
In a major city, we can have the illusion of closeness—surrounded by people while fading into the invisibility of the crowd. When I lived in the middle of Berlin, I didn’t feel like I was alone, or that I needed to schedule meeting people in my week.
In the Arctic, the “isolation” of the home office is a lot more obvious. My decision to move north so my family could be near grandparents means I have to be intentional about doing the things that make me “human”.
I learned this during the pandemic: logging on and off my laptop like a worker drone, day in and day out, not seeing people.
Now I’m very intentional about meeting people: hosting my monthly brunches with strangers, taking trips to cities to recharge (like my trip to London), or finding something new and interesting, like exhibitions and concerts worth traveling to.
What do we lose when AI becomes too good?
I haven’t yet met people in Norway who want to dig into AI and design ideas with me, and there's a limit to how much my fiancée wants to hear me talk about ideas.
Chatting through ideas with AI and phone calls with friends are filling those gaps for now, but it’s concerning.
The “human” reviews in my writing group consistently give me a different perspective, one I like more, but I can’t articulate clearly what it is about sparring over ideas with people that I miss and AI can’t replicate. I think that in itself is interesting.
I’ve spent so much time breaking ideas down with these models that I found myself getting lost in them, reading my writing out loud and checking with my fiancée, only to hear her say, “That doesn’t sound like you” or “I don’t get it”. My ideas had become so buried in AI feedback and critique that I was over-editing to the extreme.
Each time I’d write a new section I’d paste it into Claude to check “if I did okay”. I didn't like the realisation that I was doing this.
In London, it clicked for me what I'd been missing. These moments made me feel more like myself.
Starting up conversations with strangers in a sunny beer garden. Ending up at a theme party as the plus-one’s plus-one. Meeting someone for coffee that turned into brunch, and leaving me more motivated to write. Walking through an exhibition and thinking I should really frame my son’s drawings like modern art.
They were unpredictable and unplanned, but they all came from being around people. AI doesn’t create those moments. Like right now: I wouldn’t have guessed I’d end up writing this on a park bench by a koi pond, watching ducks waddle over while the city hums around me.
I think we might have to choose the human pace over AI efficiency—not because AI isn’t good enough, but because it’s too good. We can end up sitting by ourselves, more and more isolated as these tools continue to get better.
I think the risk isn’t that AI will replace us. It’s that we’ll replace ourselves—one connection-free day at a time.