The people who are most skeptical of AI are often those with the highest standards for quality. They’ve spent years honing their craft and see their work as deeply personal. To them, AI-produced work feels empty and soulless.
I understand where they are coming from. Two years after the public release of ChatGPT, I am skeptical of the way so many people still treat it as a magic shortcut that eliminates the need for critical thought or insight. At the same time, I use ChatGPT throughout every day and find it makes everything I do better. I spent this past Thanksgiving trying (and failing) to convince several skeptical family members, writers and architects among them, of AI’s transformative potential. I’m very much on board with the technology, but it took me a while to get here.
The problem is that most people misunderstand what AI is good at. They talk about it “taking over” writing, planning, and problem-solving—as if these were simple, mechanical tasks that could be fully automated without any loss in quality. But while AI can approximate qualities like empathy, judgment, and contextual awareness—sometimes with impressive accuracy—it lacks the deeper agency and intent that make human work meaningful. AI doesn’t understand why something matters, can’t independently prioritize what’s most important, and doesn’t bring the accountability or personal investment that gives work its depth and resonance. When we rely too heavily on AI to “take over” tasks, we risk losing the human ownership and intentionality that elevate work from functional to impactful.
Instead, I think of AI more as a collaborative partner – like having a “thought-check” for your ideas, similar to how spell-check works for your writing. I know this might sound abstract, especially if you’re skeptical. This newsletter alone won’t change your mind, but spending enough time using AI might.
The only thing I have seen reliably convince people is spending a few weeks to a month intentionally exploring different aspects of AI as a tool in your daily life and work. There’s a learning curve, but you will get past it if you keep at it.
👉 To get started, you can find the illustrative 30-day calendar of exercises I use with my team here.
We all have frustrating encounters with AI in the wild. AI customer service agents fail to solve our problems. Viral stories, like someone using AI to write a eulogy, inspire deep existential dread. These examples often highlight misuse rather than potential, and they don’t reflect what AI can truly offer when used thoughtfully and collaboratively.
I am here to help. By the end of this guide, you’ll have a framework for collaborating with AI in a way that complements your expertise and enhances your work, rather than replacing or diminishing it.
Many people think of AI as a tool to automate tedious tasks, like taking meeting notes or writing performance reviews. The idea is appealing: delegate these jobs to AI and free yourself up for other work. But in practice, this approach often leads to disappointing results because AI lacks judgment, emotional intelligence, and any sense of organizational context that is not easily codified. It’s not a coincidence that those things are the hallmarks of your favorite colleagues.
Most of the tasks we do in our jobs require some level of human insight. So when you attempt to fully “automate” them, you get results that feel generic, shallow, or even disrespectful.
The personal touch matters
Take performance reviews. Yes, they are time-consuming, and it is tempting to delegate them to AI. But for the person receiving one, a review is deeply personal. A generic statement like, “John has consistently delivered strong results and is a valuable team player,” might check the box, but it sends a troubling message: your manager didn’t care enough to have something specific to say about you, let alone invest time in your growth. This lack of effort can erode trust and drive talented employees to leave the company.
Context is critical, and not straightforward
AI often misses what really matters. Take meeting notes, for example. On the surface, they seem like an ideal task for automation: record what was said, summarize, and move on. However, capturing the essence of a meeting—especially a high-stakes one—requires much more than transcription. It demands the ability to observe key dynamics, interpret subtle and often unspoken signals, and understand the emotional and contextual undercurrents that don’t exist in written form. Was the CEO disengaged, scrolling on their phone the entire time? Did the group genuinely reach a consensus, or were there lingering doubts that could lead to problems later? These nuances are critical, but without specific context, AI will completely overlook them.
AI is not reliable in the way we expect machines to be reliable
Finally, it’s important to understand how ChatGPT works at a high level. It generates each response probabilistically rather than retrieving a stored answer, so if you ask it the same question twice, you’ll get different answers. This makes it unreliable for tasks where accuracy and consistency are crucial. It won’t excel at consistently citing specific papers, building codes, or case law correctly. (Advanced techniques exist for these tasks, but they’re not worth learning when you’re just starting out. For now, consider them out of scope.)
OK… So what is it good for?
So if we’re not going to use AI to fully automate tasks or handle anything requiring personal insight, unwritten context, or absolute reliability, what does that leave?
The answer is to think of it as a conversation. I sometimes explain it this way: I start the work (say, the first 20%), then AI helps develop it (getting it to about 90%), and I finish it (the last mile to 100%). But even this is too simple.
In reality, I go back and forth with AI constantly—sometimes dozens of times on a single piece of work. I refine, iterate, and improve each part through ongoing dialogue. It’s like having a thoughtful and impossibly fast colleague who’s always available to help me develop and sharpen my ideas.
I realize this can sound like a lot of extra and unnecessary work. Can’t I just do this on my own? Well, yes. But I’ve found that this back-and-forth with AI helps me think more clearly and thoroughly. It pushes me to articulate my ideas more precisely, challenges my assumptions, and helps me spot gaps in my thinking.
The goal isn’t speed at first—it’s quality. But here’s what’s interesting: once you get comfortable with this approach, everything changes. It becomes as natural as having a second brain to bounce ideas off of, and suddenly you’re working much faster and in completely different ways than before.
Think of it like learning to type. At first, the effort of memorizing the keyboard layout seems ridiculous when you could just write faster by hand. But master typing and you’ll work so much faster that your entire sense of what’s possible expands. Your ambitions grow with your capabilities.
Where are we going?
Over the next few weeks, I’ll show you exactly how to develop this skill, especially if you’re someone who values quality and craft. I’ll break down how to engage iteratively and collaboratively with AI in writing, unstructured problem solving, and other areas. Your thinking and interaction patterns may start to look quite a lot like this:
We’ll start from skepticism and work our way to mastery. No technical background required.
No recipe has ever done more work in my life than Mark Bittman’s More-Vegetable-Than-Egg Frittata. It is easy, it is filling, it is tasty, it is healthy. You can use literally whatever you have on hand (my favorite filling is cherry tomatoes cut in half and seeded, frozen peas, cauliflower rice, and grated parmesan). It is a perfect recipe in that you can prep each ingredient as you go, and the moment you finish prepping one, it is time for it to join the others in the pan. I eat it several times a week. I owe so much to the More-Vegetable-Than-Egg Frittata!
Thanks for reading. See you next week!
Hilary