I'm not saying it's wrong or bad to use AI to write sales outreach emails, LinkedIn DMs or call scripts. But there are a handful of factors you'd better account for when you do, and that's what I'll cover today. First, though, let's improve your thinking about the output ChatGPT, Claude or whatever tool gives you. Let's dive in...
You don't need better wording
“I need better wording.” Four words you should never think. Because you don't need better wording. You need to throw out all persuasive thinking. That's where "I need better wording" comes from. In most cases, the persuasive baby and bathwater need to go.
The underlying persuasive psychology is poisoning your thinking. Instead, you need to provoke.
Because AI is trained on the crowd. And the crowd is trained on conventional (weak) persuasion tactics.
That's why your prompt asking for the most engaging email is a problem. You may be encoding your own biases into the machine, mixing them with the crowd's mediocre concepts, and scaling them.
The problem isn’t AI. It’s the ignorant belief system powering your prompt. Do you accept the first drafts the AI gives you?
How well are you constraining the AI to think in terms of success principles it currently underappreciates (and won't otherwise conform its output to)? Here we go...
A prompting blind spot
The output tools like ChatGPT provide feels professional. But it often lands as needy, low-status persuasion with high-status decision-makers.
The output looks polished to you. And AI helps you customize and personalize, too. But 98% of the scripts GPT knows about (and researches) lean on the same templates, frameworks and “best practices” decision-makers are deleting daily.
It's your blind spot.
Consider: AI outputs are shaped by what AI is being fed.
Recycled subject lines, spammy “value add” statements and repulsive calls to action. These are just a few of the low-class tropes dragging you into the mud.
Result: Your messages persuade at best and beg at worst. Your AI-powered words are pushing clients away rather than pulling them in. Because the AI is heavily biased toward the “average” sales slob.
AI doesn't know exceptional sales tactics
So far, AI isn’t surfacing exceptional, non-persuasive sales outreach tactics. It’s producing what the crowd already does. And the crowd is operating on “best practices” that weren’t effective before AI — and now get spammed even faster.
The result is a self-reinforcing loop:
- Sellers ask AI for outreach help.
- AI generates content mirroring the masses.
- Sellers accept it as “professional” because it looks familiar.
- Decision-makers delete/spam it — because it's simple pattern recognition.
The sales and marketing crowd trains AI. AI trains the crowd. And inboxes fill with polished mediocrity.
Meanwhile, exceptionally talented sellers are out there provoking conversations using tactics they don't share publicly.
That's what makes them exceptional. They don't behave "normally." They're above average.
So don't blame me. And don't blame AI tools. Blame your thinking.
To change this, it’s worth asking yourself better questions:
- What, exactly, is AI learning from?
- How are sellers judging whether its output is “good”?
- And are prompts themselves reinforcing flawed assumptions?
These are the overlooked factors we'll explore. Spoiler alert: They explain why so much AI-powered outreach fails to engage people worth engaging.
Factor A: AI learns from the masses
AI doesn’t invent. It predicts. Think about it. When you prompt AI to “write a cold email” or “draft a LinkedIn message,” you’re not getting creativity. You’re getting a statistical average.
Every time you ask AI to write a personalized outreach email, you’re getting what, statistically, satisfies most sales people — dressed up a little.
Yes, you may ask it to customize message(s) based on specific context — who you're calling on, etc. But the knowledge pool AI draws from is derived from average.
Average sales people comprise 95% of the ecosystem. They are stuck in “persuasion mode,” too afraid to provoke and upset prospects (which actually works!).
That's why ChatGPT, Jasper and Claude predict (mimic) what the lower-skilled crowd is practicing. The more a phrase or framework gets used, the more likely it shows up in your “sharpened” message draft.
Read that again! It's that important.
Ever notice how many people read Chris Voss's outstanding sales negotiation book — only to abuse his "going for no" concept?
"Are you opposed to _____" [insert what the seller wants] is now an established spam pattern. It happened nearly overnight.
If you'd like more details on how getting prospects angry or upset helps them engage more, consider diving in via the Stay Curious Blog.
ChatGPT is blending you in
AI like ChatGPT is not helping you break through. It's blending you in. If I'm wrong, push back. Let me know your experience in the comments. Meanwhile, here are three examples of common phrases that sabotage you.
- “Would you be open to a quick 15-minute call next week?” AI loves this line. Prospects hate it, because it attempts to skip steps in the courtship process.
- “We help companies like yours unlock hidden value and drive efficiency.” Sounds polished, but reads like generic brochure copy. Being direct often backfires.
- “I’d love to hear your thoughts when you have a chance.” Sellers think it sounds energetic and polite. Prospects read it as needy and low status. Love your family, not prospects.
Average sellers think these are best practices because they appear so often. That's the problem! They appear so often because they’re mediocre. Statistically average.
That’s the danger: AI learns what’s common, not what pulls. AI can help you think and communicate exceptionally well. But it's not as easy as giving it context to customize messages.
Because even customized results get shaped into the same familiar patterns.
So when you prompt AI to “write a cold email” or “draft a LinkedIn message,” you’re getting a statistical average.
AI writes the opposite of a creative message — unless you educate the tool to avoid average.
Ah, but here's the catch: To do that, you need to know what "above average" looks, feels, sounds like!
To do that, you need to know the formula for arriving at "above average."
You need to understand and clearly articulate:
- The difference between persuasion and provocation (push vs. pull)
- Actual low- and high-status behaviors among sales people & customers
- The difference between a conventional and a Facilitative Question
- Why closing with a call-to-action lowers status, what to do instead and why
- The practical application of behavioral science tools like Transactional Analysis
- How and why confusing prospects often causes them to engage
- Why educating early-stage prospects and providing value usually leads to a condescending tone
- Why leading with pain usually pushes prospects away, and how to pull instead
- Why creative messages are less impactful than provocative ones, and how to design provocations
These are just a few of the things you need to understand about "above average."
Whether you're using AI or not!
How would you describe the odds of standing out — when your AI-authored message looks exactly like the ones training the AI?
What if you used AI to spot and remove a message's weak spots, the persuasive "tells," faster and better than you can? Spoiler alert: It can!
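If you want to test that claim, here's what a "tell detector" could look like in practice. This is only a minimal sketch, assuming the OpenAI Python SDK; the model name, the tell list and the sample draft are illustrative placeholders, not a recipe.

```python
# A minimal sketch, not a drop-in tool. Assumes the OpenAI Python SDK
# ("pip install openai") and an API key in your environment. The model name,
# the tell list, and the sample draft are all illustrative placeholders.
from openai import OpenAI

client = OpenAI()

TELLS = [
    "Would you be open to a quick 15-minute call",
    "Just circling back",
    "I'd love to hear your thoughts",
    "companies like yours",
]

draft = (
    "Hi Dana, just circling back to see if you had a chance to review. "
    "Would you be open to a quick 15-minute call next week?"
)

prompt = (
    "Review the outreach draft below. Flag every phrase that reads as needy, "
    "low-status persuasion (for example: " + "; ".join(TELLS) + "). "
    "For each flag, briefly explain why a senior decision-maker would tune it out. "
    "Do NOT rewrite the draft.\n\nDRAFT:\n" + draft
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

# Prints the flagged "tells" and the reasoning. You still judge the result yourself.
print(response.choices[0].message.content)
```

Notice the constraint at the end: flag and explain, don't rewrite. You keep the judgment; the tool just does the pattern-spotting faster than you can.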
Hungry for details on how to engage more conversations while avoiding being direct? Consider diving into my Stay Curious Blog, where I publish my best tips, insights and surprising psychological triggers.
Factor B: Judging requires mastery
Here’s your blind spot: you likely cannot tell if AI’s output is good. Certainly, the members of our Spark Selling Academy can. But judging communication quality requires some level of mastery. And mastery is rare.
At minimum you must appreciate the necessity of studying how the very best sales people communicate.
Period.
To the average seller, AI-generated messaging looks professional. Polished. “Sendable.”
To a decision-maker, it's often received as needy, predictable, or worse — manipulative. Here are three average examples for you to study, as you re-think what defines average.
- “I’d love to share a few insights with you over a quick 15-minute call.” You may think it sounds generous. Decision-makers read it as you undervaluing your time, far too eager to spend 15 minutes before both sides have qualified the conversation. Worse, this kind of phrase invites prospects to think, “Your insights are probably already known to me; besides, this is a ploy so you can pitch me.”
- “We specialize in helping companies like yours achieve their goals.” You may think it sounds customer-focused. Decision-makers see a vague pitch written for anyone. They see the message as a spam template, because that's what it is.
- “I wanted to introduce myself as your new point of contact here at [Company].” Sounds professional and helpful to the average ear. But decision-makers receive it as an unnecessary announcement that wastes their time. There's no point of relevance or urgency.
Notice the gap: sellers judge by what they intend. Prospects judge by how it lands.
That’s the problem with outsourcing judgment to AI: if you can’t already spot the difference between persuasion (pushy, needy, assumptive) and provocation (curiosity, self-reflection, high status), you’ll accept average outputs as “effective.”
Take the “I’ll close your file” technique invented by John Barrows, a sales trainer. This final "break-up" email was typically sent at the end of a sequence to non-responsive leads. It once worked — especially in sectors where prospects respected a blunt, no-nonsense tone. But inside software sales it burned out almost overnight.
Buyers learned the pattern, tagged it as pressure and tuned it out. That’s why mastery isn’t a solo sport. Working with other sellers who take communication psychology seriously keeps you current on what language is sparking curiosity — and what’s already been spammed into irrelevance.
So what’s the alternative? Start by shifting how you define “effective.”
- Stop persuading. Start provoking. The goal isn’t to convince — it’s to spark curiosity.
- Test language that triggers inward reflection. Instead of “We can help you improve…,” test “How would you describe your ability to…?” That tiny shift puts the prospect in the driver’s seat.
- Measure quality by the conversations sparked, not how professional the draft looks. A message that provokes one thoughtful reply is worth more than fifty polished deletes.
What if your ability to judge “effective” is built on the same flawed beliefs that are training the AI?
Factor C: Prompting skill ≠ selling skill
Here’s another blind spot: prompting well doesn’t translate to effective communication using Claude or ChatGPT. Your prompts are probably producing outputs in which what you say is still too persuasive.
You can spend hours crafting clever prompts. But if what you’re asking AI to polish is mediocre, you’ll just get shinier mediocrity. Prompts must constrain the AI's thinking and shape it around sales success principles it does not currently understand.
Think of it like handing a rookie the keys to a Formula 1 car. The machine is world-class. The driver isn’t. The result is predictable.
Here are three examples of prompts that look smart but produce weak messages:
- “Write a compelling follow-up email to a prospect who hasn’t responded.” Sounds harmless. AI spits out: “Just circling back to see if you had a chance to review…” Polished persistence. Equally ignorable. And easily detected as a spam marker, sending you directly to the junk folder.
- “Draft a LinkedIn message that builds trust and provides value.” AI obliges with: “We help companies like yours unlock hidden value and drive efficiency.” Vague brochure-speak. This is how marketers speak, not customers. Feels like mass-emailed garbage.
- “Create a professional voicemail script for a decision-maker.” Output: “Hi [Name], I wanted to introduce myself and share how we’re helping companies like yours…” Sellers think it’s concise. Decision-makers hear another templated pitch.
Notice the pattern: the seller thinks the problem is “I need better wording.” AI delivers polished wording. But the underlying psychology — persuade, push, pitch — hasn’t changed.
The result isn’t better outreach. It’s bad outreach, faster.
If the raw idea is weak, does polishing it with AI change the outcome — or just make failure more efficient?
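To make "constrain the AI's thinking" concrete: instead of asking for a "compelling" email, put the success principles in front of the model as hard constraints. Here's a minimal sketch, again assuming the OpenAI Python SDK; the constraint list, the prospect context and the model name are illustrative assumptions, not a formula.

```python
# A minimal sketch of a constraint-laden prompt, assuming the OpenAI Python SDK.
# The constraints paraphrase principles discussed in this article; the prospect
# context and model name are made-up placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM = """You draft first-touch outreach to senior decision-makers.
Hard constraints:
- Provoke curiosity; never persuade, pitch, or "add value."
- No call to action, no meeting request, no "quick 15-minute call."
- Close with one Facilitative Question that directs the buyer's attention inward.
- Maximum 35 words. Plain language. No brochure phrases like "unlock hidden value."
"""

context = (
    "Prospect: VP of Operations at a mid-size logistics firm who recently "
    "posted about driver retention."  # hypothetical context, for illustration only
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Draft one message. Context: " + context},
    ],
)

print(response.choices[0].message.content)
```

The point isn't this particular wording. It's that the constraints, not the request for polish, carry the thinking. If you can't articulate constraints like these yourself, no prompt can supply them for you.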
Discover how to prompt AI (like GPT, Claude and others) to write effective provocations by reading my guide. It's part of the Stay Curious blog subscription.
Factor D: Garbage in, garbage out
AI doesn’t just mirror the crowd. It mirrors you. The problem isn’t only what AI has been trained on — it’s what you tell it to do. And how you are instructing it.
Prompts often rely on your flawed assumptions. For example, the belief that "What can I say to get him or her to lean in?" is a good starting point. It may seem like one. But for most people, it's the worst place you can start!
Sellers ask AI to “provide value,” “build trust,” or “sound persuasive.” And AI obliges, producing polished emails and DMs built on the same persuasion-heavy tactics decision-makers already resist.
Here are examples of flawed prompts and predictable output:
- “Write an email that provides value to a busy CEO.” Output: a generic template including tips and “education,” thinly veiled as strategic advice; often received as condescending.
- “Draft a message that builds trust quickly.” Output: an over-familiar tone, forced compliments, or name-drops, exactly the kind of moves that erode trust.
- “Create a call script that overcomes objections.” Output: a manipulative sequence of rebuttals. Sellers see it as professional. Prospects feel cornered.
This is persuasion in → persuasion out.
The seller thinks they’re asking for “best practice.” In reality, they’re encoding their own biases into the machine and scaling them.
So the problem isn’t just AI. It’s the belief system behind the prompt.
What assumptions are you teaching AI to mirror back at you — and how confident are you that those assumptions are working?
Reflection point
AI isn’t broken. It’s doing its job: predicting the next most likely persuasive catch-phrase, based on your (and the crowd's) belief in it.
The problem is what AI is learning from — and what you’re feeding it.
- Factor A: It learns from the masses.
- Factor B: Judging its output requires mastery most sellers don’t have.
- Factor C: Prompting skill isn’t selling skill.
- Factor D: Prompts themselves encode flawed (and persuasive) assumptions.
Put them together and you see the loop: sellers ask AI for help, AI mirrors mediocrity, sellers can’t spot the flaws, and the cycle reinforces itself.
This is why inboxes are full of polished but predictable outreach messages. They go straight to spam.
So what do you do? Don’t look to AI for sharper persuasion. Look to it as a mirror for your own thinking.
If what you’re feeding in is persuasion-driven, needy, or low-status, that’s what you’ll get back.
Take action. Reflect and act on these questions:
- How would you describe the odds of standing out if your messages sound like everyone else’s?
- What assumptions (of your own and in AI's 'brain') are you scaling without realizing it?
- If the goal isn’t to persuade, but to provoke — how will you prompt AI to think & create differently? How will you define "differently" in terms of specific guidance?
- How would you instruct AI to spot and remove low-status persuasion "tells"... faster and better than you can?
- What would a 35-word, non-persuasive AI draft look like if it asks only one inward-directed Facilitative Question and names one risk the buyer already suspects?
Until you confront these kinds of questions, AI will keep producing what the crowd already does. And decision-makers will keep deleting it or marking it as spam.
Ready to start prompting AI to think & create differently? Get in touch, leave a comment or sign up for my Stay Curious blog to access next steps right now.