Hey, welcome to 3:03 by Reply
Two, a hooman and AI collab,
powered by Notebook LM.
Ever hit send on a newsletter
and feel like it just vanishes
into the digital ether?
Oh, absolutely.
You might see some opens, maybe, but
you're left wondering, you know, what
actually connected, what got them
clicking, engaging, taking that next step.
It's a common frustration.
Yeah.
That feeling of your emails just
disappearing into a black hole.
It's something so many creators deal with.
Yeah, and it can be seriously
frustrating when you're just
guessing at what works.
And the truth is, if you're just
relying on hunches, um, when it comes
to your newsletter strategy, you're
probably spinning your wheels a bit.
You might be putting in tons of effort,
but without really understanding what
truly connects with your readers, you're
just not seeing the progress you want.
Exactly.
So today we're tackling
that frustration head on.
We wanna look at how you can move
your newsletter strategy from,
let's say, 'I hope this works,'
to 'I know this works,' or at least,
'I know this works better.'
Yes.
Making decisions based on real data.
Our goal here is to give you a clear
system, really to improve how your
newsletters perform and you know,
ultimately help you make tangible
progress on those creator goals.
Yeah.
Things like more engagement,
maybe increase revenue,
getting that momentum going
Precisely.
And the key to this whole
data-driven approach is a really
effective method called AB testing.
Or split testing,
some people call it.
Exactly, split testing.
It's basically about running these
small experiments with your newsletters.
You send out two slightly
different versions to, uh,
a segment of your audience.
Okay.
And you see which one performs
better based on what you care about.
Okay, so let's unpack this a bit.
So instead of just sending out
what you think is great, you're
actually getting direct feedback,
right?
From your readers.
It's like they're voting,
you know, with their clicks.
Yeah, with their engagement.
They're telling you, Hey, this
version, it was more interesting,
or, this got me to act.
And this whole idea of having a solid
framework for this kind of testing,
we should definitely give a shout out
here to Dan Oshinsky of Inbox Collective.
Oh, absolutely.
His insights have been really
foundational in this space.
Yeah, his work's been
incredibly valuable, hasn't it?
Getting creators to
think more strategically.
Totally.
His understanding of the
landscape, the practical advice.
Mm-hmm.
It really forms a lot of the basis
for what we're talking about today.
Okay, so let's dive in.
Why should creators even
bother with testing?
It sounds like maybe more work on top of
everything else they're already doing.
Well, think of it this way.
Sending newsletters without
testing is kind of like trying
to hit a target blindfolded.
Mm-hmm.
You might get lucky sometimes.
Sure.
But the odds are definitely
stacked against you.
Mm-hmm.
Testing takes off that blindfold.
Okay.
It replaces those assumptions we
all make, like, oh, I bet my readers
prefer really long emails, or, mm-hmm,
this subject line is definitely a winner.
It replaces that with actual evidence.
So instead of 'I think,' it's 'I know.'
Exactly.
We know our readers click more when
the subject line asks a question.
For example, based on
data, not just a feeling.
That makes perfect sense.
Shifting from guesswork to actual
knowledge about your specific audience.
Can you give us some,
like, real world examples?
Sometimes seeing the
results makes it click.
Yeah, definitely.
We've seen some really
interesting results.
Uh, for instance, one creator assumed
their audience was most active, you
know, typical morning workday
time, and sent their newsletter
every Tuesday at 10:00 AM.
Sounds standard,
right?
But after testing they discovered that
moving their send to Thursday at, get this,
3:03 PM.
3:03? Specific.
Okay.
Yeah.
3:03 PM. It resulted in a,
uh, 37% increase in clicks.
Wow.
37%. Just from changing the time,
just the timing.
It's a huge jump in engagement.
It shows you can't just assume the
standard times work best for everyone.
Okay.
That's a powerful example.
What about other kinds of
things, not just timing?
Sure.
Another creator was, like,
hyper-focused on their
call to action button.
Mm-hmm.
You know, agonizing over the color,
the exact wording.
Been there.
Right, we all have.
But what they found
actually had a much bigger impact
was simply personalizing the greeting
using the subscriber's first name.
It's 'Hi, [first name],'
basically.
Yeah.
That simple change led to a
22% increase in conversions.
It taps into that, uh,
mere exposure effect.
Makes it feel more personal, more direct.
Interesting.
The button color.
The button color, they were so
worried about, made almost no
noticeable difference in their tests.
Huh.
That's a great reminder, isn't it?
Sometimes the things we obsess
over aren't the actual levers
that drive the big changes.
Exactly.
Okay.
So this testing process, it really
helps you build this unique,
personalized playbook that's
specifically tailored to your audience.
Not just copying what someone else does.
Precisely. It's not about blindly
following what works for, I don't
know, a massive company or some other
creator in a totally different niche.
It's about understanding the
specific nuances of your readers,
what makes them click, what motivates
them to engage and help you make
that progress you're looking for.
Okay, so we're sold on why we should test.
Now let's get practical.
What are the actual elements we
can start experimenting with?
A really helpful way to think about
this is the newsletter envelope.
The envelope.
Yeah.
Basically everything your subscriber
sees before they even open the email.
Mm-hmm.
This is prime real estate for testing.
Ah, okay.
Like the subject line.
That's the first thing.
Right,
exactly.
Subject lines are crucial
for that first impression.
Now, um, while they definitely influence
opens, we need to be careful here.
Open rates themselves.
They aren't always the most
reliable metric these days,
right?
Lots of things can affect those.
Apple Mail privacy, et cetera.
Precisely.
So we wanna focus more on what happens
after the open clicks, deeper engagement.
But subject lines are still vital for
getting that open in the first place.
And there's a lot you can test.
Like what?
Well,
different tones.
Serious versus playful.
Maybe try posing a question, see
how that works. Short and punchy versus
maybe something more descriptive.
Emojis?
Love 'em or hate 'em.
Definitely test emojis.
See if using them strategically helps
your audience connect. Some newsletters
even find using a consistent
emoji helps build recognition.
You can also test format consistency,
or personalization, like
using the subscriber's name,
though again, track meaningful
engagement there, not just opens.
Okay.
Lots to play with just
in the subject line.
What else is part of this envelope?
The pre-header text.
That little snippet you see right
after the subject line in your inbox.
Oh yeah.
That often just pulls in the
first line of the email, right?
Often.
Yeah, and it's usually wasted space.
Instead, you could test using
it to say, extend your subject
line, give a bit more context,
or create some intrigue.
Exactly.
Create curiosity, hint at what's inside.
Yeah.
Or you could summarize
the key value proposition.
Why should they open this email?
You could even test having
no preheader text at all.
Maybe it just adds
clutter for your audience.
Ah, I've definitely
overlooked preheader text.
That's a really good one.
And the last part of the envelope.
The From name.
That's right.
It's often overlooked, but
it's a huge trust signal.
Who is this email actually from?
So test my name versus the brand name.
Yeah.
Or a combination.
Yeah.
Like Ambreen from Reply Two.
Or even adding a title like
Sam Newsletter Insights.
See how that perception influences whether
people open and more importantly, engage.
Okay, so that covers the outside.
Now let's get inside the newsletter.
The body content itself.
What can we test there
to boost engagement?
Right?
So once they've opened it, the
goal is keeping them interested,
getting them interacting.
And a big part of that is just
the format, how you present the
information, how it looks and feels.
Exactly.
So experiment.
Do your readers prefer links just
embedded naturally in the text, or do
they respond better to say headline
style links with little descriptions?
Maybe image blocks work
better if it's visual?
Could be. Test image,
headline, description blocks.
Pay attention to emphasis.
How does using bold or italics
affect what people focus on?
Even simple things like varying
paragraph length and structure can
impact readability and engagement.
So
it's not just what you say,
but how you lay it out.
Makes sense.
What about the actual length
of the whole newsletter?
I feel like I hear different
advice all the time.
Yeah.
Newsletter length is definitely
worth testing because there's
no single right answer.
There's probably a sweet spot
that's unique to your audience.
Like some people might click
way down at the bottom.
Exactly.
BuzzFeed famously found that a significant
chunk of their readers clicked links
way down in their longer newsletters.
But then you have others like
Morning Brew who've likely found an
optimal, maybe shorter, word count
for their specific audience.
So what can you actually
test related to length?
You could try adding or
removing whole sections.
See how that affects engagement metrics,
or test shorter versus longer intros.
Mm.
Do your readers want you to get
straight to the point or do they
appreciate a bit more context setting?
That's fascinating.
So related to that, the order
you put things in the content
hierarchy, that matters too, right?
Absolutely.
Think strategically, what's
the most important thing?
Should it go right at the top?
Or maybe you build up to it, test putting
your key content first versus maybe saving
something really compelling for the end.
What about grouping things?
Yeah.
Test grouping related topics together
versus mixing different types of content.
And don't forget visual hierarchy.
Using formatting headings, white
space to guide the eye to what you
really want them to see and act on.
Okay.
This is all incredibly useful stuff,
but it also feels like, wow, there
are a lot of things you could test.
How do you avoid getting totally
overwhelmed and actually get
results that help you make progress?
Ah, yes.
This brings us to the
crucial 'one thing' rule.
This is probably the single
biggest mistake people make
when they start AB testing,
which is
changing multiple things at once.
You tweak the subject line
and the send time and the CTA
button wording all in one test,
and then you see a change,
right?
Maybe performance goes up, maybe
it goes down, but you have no idea
which change actually caused it.
You've just muddied the waters completely,
right?
You can't isolate the variable.
So focus is key.
What's the right way to run a test, then?
The correct process is change
only one element per test.
Just one.
Then split your list randomly.
Your email platform should help with this.
Send version A, the original, your
control, and version B, the one with the
single change, at the exact same time.
Same time is important,
crucial.
Then analyze the results based
on one clear success metric
you decided on beforehand.
Is it clicks? Click-through rate?
Is it deeper engagement?
Maybe track engaged clicks,
like, did they spend more than 30
seconds on the page after clicking?
Got it.
Define success first.
Yes.
Once you have clear results, implement
the winner as your new baseline, your
new control version, and then move on
to testing the next single element.
Okay.
That systematic approach makes sense.
Test one thing, learn, implement,
test the next thing.
That disciplined approach is
what turns testing from confusing
guesswork into a reliable way to
improve and make real progress.
Now, how do you know if
the results are actually
meaningful, like,
statistically significant?
Do I need a PhD in stats?
Uh, thankfully, no.
No advanced degree needed, but
you do need enough data for
the results to be reliable.
Okay.
What's enough?
Well, based on what we see work, a
good practical minimum is probably
around 2000 subscribers on your list
total, and you'd want at least, say,
500 subscribers getting each version
of your test, version A and version B.
500 per variant.
Got it.
Yeah.
And you'll want to aim for a
confidence level of 95% or higher.
Most email tools actually
calculate this for you.
Basically means you're 95%
sure the difference you're
seeing isn't just random luck.
Okay.
So the tools can help there.
Anything else?
Consistency is really important too.
Try to run your tests on the same day
and time for at least three cycles,
like three consecutive sends or three
consecutive weeks depending on the test.
That helps smooth out any
weekly variations in behavior,
right?
So list size, variant
size, confidence level, and
consistency over multiple sends.
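If your tool doesn't report significance, the underlying check is a two-proportion z-test, which you can sketch with nothing but Python's standard library; the click counts below are invented purely for illustration.

```python
from math import erf, sqrt

def click_rate_significant(clicks_a, sent_a, clicks_b, sent_b, confidence=0.95):
    """Two-proportion z-test: is the click-rate gap between A and B unlikely to be noise?"""
    p_a, p_b = clicks_a / sent_a, clicks_b / sent_b
    pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = abs(p_a - p_b) / std_err
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # two-sided, via the normal CDF
    return p_value < (1 - confidence), p_value

# 500 recipients per variant, 20 vs 35 clicks (4% vs 7% click rate): significant at 95%.
significant, p = click_rate_significant(20, 500, 35, 500)
print(significant, round(p, 4))
```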
What if someone has a
smaller list under 2000?
Should they just forget testing?
Not necessarily.
If your list is smaller, yeah, it
might take longer for results to
become statistically significant.
So maybe your primary focus
should still be on growth, but you
can still do limited testing on
those really high impact things.
We talked about subject line, send times.
You might not get definitive proof as
quickly, but you can still start to
see trends and learn valuable things
about how your small audience behaves.
Okay, that's good advice.
Focus on growth, but maybe dip
your toe in with the big stuff.
Mm-hmm.
Are there any common mistakes, uh,
pitfalls people fall into when they
start testing things to watch out for?
Oh, definitely.
We see a few common
ones pretty frequently.
One is the one and done approach.
Just try it once.
Yeah.
Run one test, maybe see a
small improvement, maybe see
nothing, and then just give up.
Testing isn't a one-time task, it's an
ongoing system of learning and refining.
Okay, so it's a habit, not a project.
Exactly.
Another mistake is, uh,
random acts of testing.
Just testing whatever idea pops
into your head without a strategy,
like testing button color
before subject lines.
Right.
It's usually better to start
with those high impact elements
first: subject lines, send times,
maybe overall format, before you
get super granular.
Also, ignoring outside variables.
Like holidays or big news?
Exactly.
Those things can definitely skew
your results, so be mindful.
Another one is moving too quickly.
Not letting a test run long
enough to get enough data.
Patience is important.
Give it time.
Yep.
And finally, the big one:
the 'it worked for them' trap.
Just because some tactic worked wonders
for another newsletter doesn't mean
it'll work for your unique audience.
You have to test it yourself.
These are really crucial warnings.
Avoid random testing.
Be patient, be strategic,
and focus on your audience.
So if we do this consistently, if
we make these small data backed
improvements, what kind of overall
impact are we talking about?
Does it really add up?
Oh, it absolutely does.
This is where the power of
compounding small gains really shines.
Even seemingly tiny improvements, like
2 to 3% boosts in your click rate or
meaningful engagement or conversions.
They add up significantly over time.
Can you give us an
example, like with numbers?
Sure.
Let's take a newsletter
with say, 3000 subscribers.
If you consistently improve your
click rate through testing, maybe from
3% to 4%, that's an extra 30 people
clicking your links every single send.
Okay,
30 more clicks each time.
Now, scale that up.
If you have 15,000 subscribers,
that same 1% increase
is an extra 150 clicks per send.
Wow.
Okay.
And think about meaningful engagement.
Maybe you track who spends over
30 seconds on your linked content.
If you can bump that from 2% to 4%.
Mm-hmm.
For the 3000 list that's 60 more
meaningfully engaged readers each time
for the 15,000 list, that's 300 more
people really digging into your stuff.
That's significant engagement.
It really is.
And even small bumps in conversion
rate, maybe from 1% to just 1.2%.
That can mean an extra six conversions
for the smaller list, or 30
more for the larger one every
time you promote something.
So these small percentages consistently
applied, they really translate into
tangible progress towards those goals.
More engagement, more revenue.
Exactly.
It's not about hitting
home runs with every test.
Yeah.
It's about consistent base hits that
add up over the season, you know?
Yeah.
That's a great way to put it.
Those numbers are really encouraging.
It shows how these small data-driven
tweaks can have a real cumulative effect.
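As a quick sanity check on that arithmetic, here's a tiny Python sketch that reproduces those per-send numbers; the list sizes and rates are just the examples from the conversation.

```python
def extra_actions(list_size, old_rate, new_rate):
    """Extra readers taking an action per send, given a rate improvement."""
    return round(list_size * (new_rate - old_rate))

# Click rate 3% -> 4%
print(extra_actions(3_000, 0.03, 0.04))     # 30 extra clicks per send
print(extra_actions(15_000, 0.03, 0.04))    # 150 extra clicks per send

# Meaningful engagement (30+ seconds on page) 2% -> 4%
print(extra_actions(3_000, 0.02, 0.04))     # 60 more engaged readers
print(extra_actions(15_000, 0.02, 0.04))    # 300 more engaged readers

# Conversions 1% -> 1.2%
print(extra_actions(3_000, 0.010, 0.012))   # 6 extra conversions
print(extra_actions(15_000, 0.010, 0.012))  # 30 extra conversions
```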
So for someone listening who's
feeling inspired, ready to start,
what are a good first few tests?
Where should they begin?
Yeah, great question.
A fantastic starting point is to
focus on three pretty straightforward
tests designed to get you some
initial wins and insights.
Okay.
Test number one,
test your subject line format.
Keep your typical style as version A.
For version B, try using a question-based
subject line. Your main metric:
click-through rate.
Run this for three consecutive sends.
Simple enough.
Test two,
experiment with send time.
Version A is your current time.
Version B is an alternative day
or time, maybe that Thursday at
3:03 PM. For this one, maybe focus
on engaged clicks as your metric.
Track time on page, something like that.
Run it for three consecutive
weeks to see patterns.
All right, subject line format, send time.
Number three.
Test your call to action
presentation. Version A is your
current CTA style. Version B:
try making it more prominent or
use slightly different wording,
maybe more benefit oriented.
Metric is click-through rate again, and
run for three consecutive sends.
Okay.
Subject line format, send
time, CTA presentation.
Those seem like really
solid starting points.
Exactly.
They cover fundamental areas and should
give you valuable first insights into
what actually moves your audience.
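One way to keep yourself honest about the one-variable rule is to write the plan down before sending; here's a hypothetical sketch of those three starter tests as a simple Python structure (the field names are made up, not from any particular tool).

```python
# Hypothetical test plan: one entry per experiment, one variable changed per test.
test_plan = [
    {"element": "subject line format", "variant_b": "question-based subject line",
     "metric": "click-through rate", "cycles": 3},
    {"element": "send time", "variant_b": "Thursday 3:03 PM",
     "metric": "engaged clicks (30+ seconds on page)", "cycles": 3},
    {"element": "CTA presentation", "variant_b": "more prominent, benefit-oriented wording",
     "metric": "click-through rate", "cycles": 3},
]

for test in test_plan:
    print(f"Test {test['element']}: B = {test['variant_b']}, "
          f"measure {test['metric']} over {test['cycles']} consecutive sends")
```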
This has been incredibly helpful.
It really feels like the key is
shifting away from just guessing,
from throwing things at the wall
and starting to use actual data.
Yeah, yeah.
To really understand your readers
and make that meaningful progress
towards whatever your goals are.
Right.
Consistent, focused testing
seems like the absolute key to
unlocking that understanding.
It really is.
It helps you figure out your
audience's preferences, their
behaviors, and ultimately lets
you make real strides towards, you
know, deeper engagement, stronger
relationships, maybe more revenue,
whatever progress looks like for you.
So as you the listener, are thinking
about your very next newsletter, what
small data informed change could you make?
What will you implement first to start
learning more about what genuinely
resonates with your readers and
actually propels your progress forward?
What's the first element
you're gonna put to the test?