Outreach Experiments: Improve Win Rate Systematically
Stop guessing what works. This guide shows you how to run simple A/B tests on your outreach without overcomplicating things or needing a statistics degree.
The Experiment Mindset
Every outreach message is a hypothesis. "I think [this approach] will get more responses." Testing turns guesses into knowledge.
What to Test
Focus on variables that have real impact on outcomes:
Targeting
Who you reach out to: industry, company size, job title, budget range
Example: "Tech startups with 10-50 employees" vs. "Any startup"
Offer Angle
What you lead with: speed, quality, price, expertise, guarantee
Example: "Fast delivery" vs. "Premium quality"
Proposal Structure
How you present your pitch: length, format, and whether you include examples
Example: Short (3 paragraphs) vs. Detailed (1 page with case study)
Follow-Up Timing
When and how often you follow up
Example: Follow up on day 3 vs. day 5
Subject Line / Opening
The first thing they see
Example: Question opener vs. Direct statement
How to Set a Baseline
Before experimenting, know your current numbers:
Baseline Checklist
Write down your current response rate (responses ÷ sends) and win rate (projects won ÷ qualified leads). These numbers are what you're trying to beat; the sketch below shows one way to compute both.
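If it helps, here's a tiny Python sketch for computing the two baseline numbers from your most recent batch of outreach. The counts are illustrative, not real data:

```python
# Baseline from your most recent batch of outreach (illustrative numbers)
sends = 120            # proposals or messages sent
responses = 22         # replies received
qualified_leads = 15   # conversations that became real opportunities
wins = 4               # projects actually won

response_rate = responses / sends          # 22 / 120 ≈ 18%
win_rate = wins / qualified_leads          # 4 / 15 ≈ 27%

print(f"Baseline response rate: {response_rate:.0%}")
print(f"Baseline win rate: {win_rate:.0%}")
```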
Minimum Sample Size
Don't declare a winner too early. Here are practical minimums, with a quick sanity-check sketch after the rules:
Sample Size Rules of Thumb
For Response Rate Tests
At least 50 sends per variation (100+ is better)
With a 20% response rate, you need ~50 sends per variation to see meaningful differences
For Win Rate Tests
At least 20 qualified leads per variation
Win rate tests take longer because you need final outcomes
For Quick Directional Tests
30 sends can show big differences (2x or more)
Good for testing dramatically different approaches
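If you want something slightly more rigorous than the rules of thumb, the sketch below runs a standard two-proportion z-test in plain Python (standard library only). The function name and the example counts are made up for illustration; this guide doesn't require a formal test, it's just a handy sanity check.

```python
import math

def compare_response_rates(sent_a, responses_a, sent_b, responses_b):
    """Two-proportion z-test: how likely is this gap if A and B really perform the same?"""
    rate_a = responses_a / sent_a
    rate_b = responses_b / sent_b
    # Pooled rate under the assumption that A and B are equal
    pooled = (responses_a + responses_b) / (sent_a + sent_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_b - rate_a) / se if se else 0.0
    # Two-sided p-value from the normal approximation
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return rate_a, rate_b, p_value

# Illustrative numbers: 100 sends each, 18 vs. 24 responses
rate_a, rate_b, p = compare_response_rates(100, 18, 100, 24)
print(f"A: {rate_a:.0%}  B: {rate_b:.0%}  p-value: {p:.2f}")
# A p-value under ~0.05 suggests a real difference; here it's ~0.30,
# so 100 sends per variation isn't conclusive yet -- keep collecting data.
```

A low p-value means the gap is unlikely to be luck; a high one means keep sending before you decide.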
Running an Experiment
Follow this simple process:
Pick ONE thing to test
Don't change multiple variables at once, or you won't know which change made the difference.
Create two versions (A and B)
A = your current approach. B = the new idea you're testing.
Alternate or randomize
Send A to every other lead and B to the rest, or flip a coin for each lead (see the assignment sketch after these steps).
Tag your sends
Add a tag like "exp-short-proposal" so you can filter later.
Wait for enough data
Hit your minimum sample size before drawing conclusions.
Decide: keep, revert, or iterate
If B wins, make it your new default. If A wins, try a different B.
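As a concrete illustration of the "randomize" and "tag your sends" steps, here's a minimal Python sketch. The lead names and tag strings are hypothetical; your platform or CRM will have its own way to store tags.

```python
import random

def assign_and_tag(leads, tag_a="exp-control", tag_b="exp-short-proposal"):
    """Flip a coin for each lead, then attach a tag so results can be filtered later."""
    plan = []
    for lead in leads:
        variant = random.choice(["A", "B"])            # the coin flip
        tag = tag_a if variant == "A" else tag_b
        plan.append({"lead": lead, "variant": variant, "tag": tag})
    return plan

# Hypothetical leads for illustration
for row in assign_and_tag(["Acme Studio", "Northwind Labs", "Bright & Co"]):
    print(row)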
Experiment Log Template
Experiment: [Name]
Start Date
[Date]
End Date
[Date]
Hypothesis
"I believe [change] will improve [metric] because [reason]"
Version A (Control)
[Description of current approach]
Version B (Test)
[Description of new approach]
A Results
Sent: [X] | Responses: [Y] | Rate: [Z%]
B Results
Sent: [X] | Responses: [Y] | Rate: [Z%]
Decision
[Keep B / Revert to A / Need more data]
Learnings
[What did you learn? What to try next?]
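If you prefer to keep the log as structured data rather than free text, here's one possible shape in Python. This is only a sketch: the field names are my own, and a spreadsheet works just as well.

```python
from dataclasses import dataclass

@dataclass
class ExperimentLog:
    name: str
    hypothesis: str
    sent_a: int
    responses_a: int
    sent_b: int
    responses_b: int
    decision: str = "Need more data"

    def rate(self, responses, sent):
        return responses / sent if sent else 0.0

    def summary(self):
        return (f"{self.name}: A {self.rate(self.responses_a, self.sent_a):.0%} "
                f"vs. B {self.rate(self.responses_b, self.sent_b):.0%} -> {self.decision}")

log = ExperimentLog(
    name="exp-short-proposal",
    hypothesis="Shorter proposals will improve response rate because busy clients can scan them",
    sent_a=100, responses_a=18,
    sent_b=100, responses_b=24,
)
print(log.summary())   # exp-short-proposal: A 18% vs. B 24% -> Need more data
```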
Example Experiments for Freelance Platforms
Experiment 1: Proposal Length
Short (3 paragraphs) vs. Detailed (with case study)
Hypothesis: Busy clients prefer shorter proposals they can scan quickly.
Result: Short proposals got a 24% response rate vs. 18% for detailed. Kept short as the default.
Experiment 2: Opening Line
Question opener vs. Compliment opener
Hypothesis: Questions engage readers and feel more personal.
Result: Question openers got a 21% response rate vs. 15% for compliments. Switched to question openers.
Experiment 3: Follow-Up Timing
Day 3 follow-up vs. Day 5 follow-up
Hypothesis: Earlier follow-ups catch clients before they hire someone else.
Result: Day 3 follow-ups brought in an additional 8% of responses vs. 5% for day 5. Made day 3 the new default.
Related Guides
Statuses & Outcomes
Clean data enables accurate experiments
Reporting Dashboard
Track experiment results over time
