January 7, 2026
Content Strategy
Thought Leadership
Business Impact

What actually converts in AI marketing (at every stage of the funnel)

What running 10–15 messaging variants taught Lindy about turning curiosity into pipeline.

# IT / Tech  # Marketing / Martech
Jen Levisen, Copywriter @ storyarb


Most AI companies decide how to position themselves in a Google Doc. Everett Butler tested his positioning in-market and let the results determine whether it worked.

Everett has spent more than 15 years scaling marketing teams at companies like Tesla, Uber, Affirm, and Thumbtack. Today, as Head of Marketing at Lindy, he’s applying those lessons to one of the hardest problems facing B2B AI companies: figuring out how to talk about AI in a way that leads to revenue, not just curiosity. 

For Lindy — an AI assistant for work that saves professionals two hours a day by proactively managing their inbox, meetings, and calendar — the challenge was explaining a category that didn't exist yet. Prospects needed education before they could evaluate. They were curious enough to book demos, but showed up confused. 

When Everett joined Lindy, the product was working but the messaging wasn’t. The team had made a reasonable call on how much AI language to lead with, but every sales call followed the same script. Prospects showed up interested, then sales spent the first 10 minutes trying to describe what Lindy actually does.

If your sales team has a well-rehearsed “what we actually do” explanation, your positioning hasn’t landed yet. So, Everett replaced guesswork with rapid testing and let real buyer behavior decide which language turned curiosity into pipeline.


Key takeaways: What actually works when positioning AI products

  • Most AI companies haven’t tested their AI messaging—89% of marketing leaders admit it (more coming soon in our Trade Secrets report).
  • AI-heavy language might attract attention, but in Lindy’s case, outcome-driven language is what actually converts.
  • The fastest path to clear positioning is running tests in tandem, in a concentrated time period, and watching how deals actually move.


Old funnel tactics meet a new industry 

When Everett joined Lindy, the company experimented with positioning like “Zapier for AI” or “no-code workflow automation for AI.” The language sounded innovative. In practice, it slowed sales down.

Sales calls opened with explanations. The team spent valuable time translating before showing a demo, delaying the most important part of any sales conversation—showing how the product would help the buyer.

The problem wasn’t demand. Prospects booked calls, showed up on Zoom, and stayed engaged for 30–60 minutes trying to understand the product. That level of commitment signaled something important.

People don’t spend an hour on Zoom out of politeness.

As Everett puts it, “Booking the call, getting on a Zoom, talking to a salesperson, going through a demo—those are strong signals that there’s a real market here.”

The real issue was category maturity. Lindy wasn’t selling into an established market with clear buyer intent. No one searched for “AI employees” because most buyers didn’t yet know that category existed.

Traditional funnels assume buyers already understand both the problem and the solution. Lindy’s prospects didn’t. They were curious and willing to invest time to learn.

“Traditional funnels assume intent exists,” Everett explains. “But in a new category, there isn’t a clear path. You have to create it.”

That confusion wasn’t a liability. It was proof of an open market.


Find existing demand, then test the language

Instead of rewriting the brand or overhauling the website, Everett focused on one question: Where was demand already showing up?

Early signals clustered around specific use cases—AI email drafting, meeting notetaking, inbox triaging, and virtual assistants. These use cases weren’t abstract. They were problems teams already felt and understood in their day-to-day work.

“We focused on use cases where we were already seeing interest,” Everett says. “Those areas are still some of our strongest today.”

The working theory was straightforward: identify a small set of problems prospects already care about, then test different ways of explaining how the product helps solve them.

The goal wasn’t elegance. It was speed.

Perfect language can wait. Education can’t.

As testing progressed, the feedback loop tightened. Sales calls got shorter. Prospects arrived with clearer expectations. Buyers started using Lindy’s own language to describe the product.


Before you test: Map the decisions you need to make

Before running any tests, Everett’s team clarified what those tests needed to decide. Positioning isn’t one decision—it’s a sequence. The space you’re operating in matters, and the buyers you’re speaking to require different approaches.

  • Horizontal vs. vertical: Lindy supported multiple teams—execs, sales, support, and client services—which required different messaging than a single-purpose product.
  • Go-to-market motion: A hybrid PLG/SLG model created different expectations for self-serve users versus enterprise buyers.
  • Audience segments: A Head of Sales evaluating sales efficiency tools cared about different outcomes than a Customer Support leader reducing time to resolution.
  • Campaign messaging: Only after those layers were clear did it make sense to test individual use cases.

“We needed to get the company-level messaging right first,” Everett says. Before testing campaign copy, the team needed to define what Lindy fundamentally was: a horizontal platform serving multiple teams through a hybrid PLG/SLG model. Those decisions shaped everything else.

Step 1: Identify what to test

The team started by examining where Lindy already showed traction. From there, they mapped two inputs.

  1. Goals: Concrete outcomes prospects wanted—booking more sales meetings, reducing support workload, or getting more done with the same team.
  2. Pain points: Daily frustrations that pushed buyers to act—rewriting the same emails, jumping between tools, or burning budget on headcount they couldn’t justify.

Each goal and pain point combination became material for messaging tests. As Everett explains:

Interest usually starts with pain, not technology.


Step 2: Run 10–15 messaging variants at the same time

Instead of slow, sequential tests, Everett ran 10–15 messaging variants in parallel across Google, Meta, and LinkedIn—“anywhere we could launch quickly and get a signal,” he says.

They tracked 3 things:

  • Click-through rate
  • Demo bookings / sign-ups
  • Lead quality

Patterns emerged fast. Messages like “Build AI Employees in Minutes” drove heavy click volume, but stalled at demo bookings. Meanwhile, outcome-driven messages like “Get 2 hours of your day back” generated fewer clicks but converted to demos at 2–3x the rate.
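
To make that trade-off concrete with purely hypothetical numbers: an AI-forward ad that pulls 1,000 clicks and converts 2% of them produces 20 demos, while an outcome-led ad that pulls 500 clicks and converts 6% produces 30. The lower-traffic message still wins where it counts: booked demos.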

“AI grabs attention,” Everett says. “But outcomes move deals forward.”

Step 3: Watch how deals move, not just dashboard metrics

With multiple tests running, Everett focused on signals that predicted real momentum.

3 indicators mattered most:

  • Shorter demos: When sales no longer needed 10 minutes of explanation, the message had landed.
  • Cold traffic converting: When unknown visitors booked qualified demos, the messaging carried its weight.
  • Prospects repeating the language: When buyers described Lindy using the same phrasing as the ads, understanding clicked.

“I love hearing prospects use our own words on sales calls,” Everett says. “That’s when you know it’s working.”

When those signals didn’t appear, the team changed course immediately.

“If you’re not seeing those signs early,” Everett adds, “don’t wait. Test something else.”

What the market told them

Within 90 days, Lindy had clear answers about how to use AI language at different stages of the funnel.

AI-forward messaging worked best at the awareness stage. Outcome-driven messaging converted that interest into qualified pipeline.

Sales calls got shorter. Cold traffic converted at competitive benchmarks. Prospects arrived ready to talk about implementation instead of definitions.

While competitors debated positioning in conference rooms, Everett’s team tested it in ads and landing pages across multiple channels. Testing in-market allowed Lindy to define its messaging before the category caught up.

The takeaway: What attracts attention and what converts are rarely the same thing—test to find your specific balance. The fastest way to a marketer’s enlightenment? Testing in-market (not in a Google Doc). In emerging AI categories, the teams that test fastest discover the language that resonates with their buyers first.


FAQ

Should AI companies lead with AI in their marketing positioning?

It depends on your buyers and where they are in the funnel. For Lindy, AI language attracted attention at the awareness stage, but outcome-driven language converted better once prospects started evaluating solutions. Test what works for your specific audience—curiosity and conversion often require different language.

How many positioning variants should teams test at once?

Running 10–15 variants in parallel produces faster, clearer signals than slow, sequential A/B tests. Test enough variants simultaneously to see patterns emerge quickly, especially in fast-moving categories where the market shifts monthly.

What metrics matter most when testing brand positioning?

Look beyond clicks. Lindy tracked demo bookings, lead quality, and pipeline velocity signals—like whether sales calls got shorter and prospects started using their language. The goal: find positioning that moves deals forward, not just generates traffic.

How quickly should teams pivot if messaging isn’t working?

Watch for stalling signals. If demos aren't getting shorter or prospects can't repeat your positioning back to you, the message isn't working—pivot and test again. In emerging categories, speed matters more than perfection—test, learn, and iterate in weeks, not quarters.
