"You're cheating!" and other unkind things I say to myself

Confidently create content using AI that only you can make

I was staring at my screen last week, watching ChatGPT draft an email that would have taken me an hour to compose. The words weren't mine yet, but the draft captured the heart of what I needed to say in response to a complicated (and frustrating) issue. In twenty seconds, I was propelled into shaping a message that was calm, clear, and effective.

And instead of feeling relief, I felt... guilty.

Is this cheating? The thought crept in, carrying with it all the weight of my academic training, all those red-inked papers, all the years of believing that if something didn't feel hard, it probably wasn't honorable.

Maybe you've felt this too—that strange mix of awe and unease when AI does in moments what used to take hours. That wondering if you're somehow cutting corners by accepting help from something so capable, so inexplicably magical.

I've been thinking about this a lot lately, both in my own work and in conversations with the coaches and therapists I support. There's something profound happening beneath our AI anxiety, something that goes deeper than just learning new tools.

Is math without calculators more "authentic"?

When I first started playing with large language models, I kept thinking about calculators.

For most of my educational experience, we didn't think twice about reaching for a calculator. In fact, we wondered why anyone would waste time on long division when there were more interesting problems to solve. The calculator didn't make us less capable; it freed us to think about bigger questions.

But I think there is something overwhelming about computer models being so adept at language and communication. We seem to feel less ownership over numbers than we do over our words: math feels universal, but language feels deeply personal and uniquely human. Numbers can be calculated; our voice feels like it should be earned.

I think it feels safer when a robot speaks and sounds like a robot, in both tone and word choice. We feel comfortable when animals demonstrate an understanding of language, or even "sing" I love you through groany howls (imagine huskies here).

What LLMs can do with our language really can blow our minds. We're holding something that feels too powerful, too magical, and we're not sure if we're allowed this much ease.

Invisible Trophies

Here's what I've been noticing: our discomfort with AI often isn't really about the technology at all. It's about something much deeper—this belief that we carry that effort equals integrity, that if it didn't feel hard, it might not be honorable.

As much as we pushed back against the ideas of meritocracy in my academic training in social justice, those concepts still seemed to apply to our scholarly work. There was the constant assumption that you did well because you worked hard (even when you were super smart). There's strength in that. There's identity in it. The ability to point to our effort and say, "I earned this."

But here's the thing—so much of the meaningful work we do as helpers doesn't come with visible trophies or clear markers of achievement.

When it comes to coaches and therapists trying to build and market their practices, we forget what matters in our content creation: you are already doing the hard work with your clients.

You spent years building an intellectual model about how you help people. You developed frameworks for understanding growth and change. You sat with clients in their hardest moments and learned to hold complexity with grace. You've integrated theory and practice in ways that are uniquely yours.

That foundational work—that's what makes AI powerful in your hands rather than just generically helpful or creating a false expertise.

The guilt comes when we don't see ourselves as the originators, when we let AI lead us instead of learning to lead with AI.

Red Ink Galore

For me, the resistance to using AI runs deeper than business concerns. It goes back to those formative years in school, when writing felt like a mystery accessible only to some special club I was never quite admitted to.

I remember the anxiety of turning in papers, never knowing what invisible standard I was failing to meet. The red ink that came back felt like judgment not just of my ideas, but of my right to have ideas worth sharing.

Writing became something I couldn't truly do myself—at least not well enough to be readable.

It's strange how AI has actually helped heal some of that old wound. Instead of red ink and mysterious standards, it gives me space to experiment without shame. To play with ideas. To see my thoughts reflected back clearly, without waiting for someone else to grant me credibility.

But even with that healing, the old voices sometimes creep in: Is this cheating? Are you really writing if you're getting help?

The Leadership Shift

I think this happens especially in coaching and therapy business contexts, because often AI does know more than we do about marketing or systems or scale—areas that weren't part of our training as helpers and healers.

So we defer. We let it tell us how to write our copy or build lead magnets, following its suggestions without questioning whether they align with our values or our vision for our work.

But what if we flipped that dynamic?

What if instead of asking, "What should I do?" we started asking, "How can I use this tool to express what I already know?"

What if instead of feeling led by AI, we stepped into leadership with it?

This isn't about becoming tech experts overnight. It's about remembering that you are the expert in your own work, your own voice, your own values. AI can help amplify that—but only if you're clear about what you want to amplify.

The Experimental Mindset

I've learned to approach AI the way I approach therapy or coaching—with experimental curiosity rather than perfectionist pressure. I try to align my behaviors with my values (you know, ACT says it's good for your mental health). So, embracing learning means I've gotta try to be okay with mistakes.

Yes, you'll have conversations that go sideways. Yes, you'll get outputs that miss the mark completely. Yes, it can be frustrating when you can't get consistent results because AI is always trying new approaches.

But here's the thing: because AI helps you move faster overall, you can afford to be more patient with the learning process. You can give yourself grace to experiment.

The key isn't getting it perfect immediately. It's learning to be thoughtful about your inputs, curious about the outputs, and compassionate with yourself when things get messy.

And sometimes, when people get frustrated with AI, it's because they're being impatient in ways that mirror how they rush through other important processes. They want instant answers without taking time to train the tool, or provide clear direction, or read their own inputs carefully.

The relationship with AI teaches us the same thing good therapy or coaching does: slow down, be intentional, pay attention.

Where can you experiment?

The most powerful AI applications I've seen aren't the ones that replace human insight—they're the ones that amplify it.

What I've learned in my years of doing therapy is that the most powerful learning comes from really good questions. So, often I will build assets for my business where I create prompts for humans and prompts for AI.

In a few of the custom GPTs I've built (like "Path to Scale"), I have instructed the model to ask one question at a time, to wait before drawing conclusions, to gather lots of data before offering insights. The outputs move me because they reflect the same patience and curiosity I try to bring to my own assessment work.

I've also created AI-assisted journals that mirror how I actually work with people—asking follow-up questions, encouraging deeper reflection, honoring the complexity of whatever someone is working through.

For areas that make people anxious—like visibility or sales conversations—I've found AI can be remarkably encouraging (hopefully not sycophantic) while still offering honest feedback. It can honor someone's desire not to be pushy while also gently challenging them to consider why they're avoiding certain necessary conversations.

These tools work because they're not trying to be something other than what they are. They're extensions of approaches that already feel true to me, amplified and made more accessible.

What This Means for Your Work

If you're feeling that familiar tension between curiosity and concern about AI, let me offer this: you already have everything you need to use these tools with integrity.

Your expertise isn't in knowing how to prompt an AI perfectly. It's in knowing what questions matter, what responses serve your people, what approaches align with your values.

The work you've already done—understanding how people grow, learning to hold complexity, developing frameworks that actually help—that work makes you uniquely qualified to use AI with integrity. Your years of training taught you to notice patterns, ask good questions, sit with uncertainty, and guide people through change. Those same skills make you a thoughtful collaborator with AI rather than a passive consumer of its outputs.

Start small. Pick one area where you find yourself doing repetitive work—maybe email responses, or client summaries, or content ideation. See if AI can help with the mechanics while you focus on the strategy and the heart.

Pay attention to what feels aligned and what doesn't. Trust your instincts about when something captures your voice and when it misses the mark.

And if you find yourself feeling guilty about accepting help, remember this: every tool we use—from calculators to spell-check to GPS—extends our natural capabilities so we can focus on what matters most.

Your capacity to understand, connect, and guide—that's irreplaceable. AI can just help those gifts reach more people who need them.

What would become possible if you stopped seeing AI as competition or threat, and started seeing it as collaboration? What would you create if you led with your values rather than letting technology lead you?

I'm curious to find out. And I suspect you are too.


If you're ready to explore how AI might support your work while honoring your voice and values, I'd love to have that conversation. I work with therapists and coaches who want to build sustainable practices that feel authentic and aligned—whether that includes AI tools or not. The technology is just one possibility; what matters most is creating work that reflects who you are and serves the people you're called to help.
