“Is _____ a ranking factor?”
If you’re like me, you’ve received that question more times than you can count. And if you’re like me, your answer to it has become a lot more subjective over the years.
In my last article for Search Engine Journal, I said that “it depends” is actually a responsible and appropriate answer to most marketing questions. I think the same applies to most SEO questions.
Let me explain.
Living in the Age of Subjective SEO
I remember the frustration I’d feel after running an SEO experiment. I would apply the same change to multiple websites, only to get a positive result on some and a negative one on others.
It was completely deflating — did I do something wrong? Did some other change throw off the experiment?
I would spend hours trying to find an explanation for my inconclusive results, only to throw up my hands in defeat, resigning myself to the depressing belief that SEO would forever remain an enigma.
Getting Comfortable with Nuance
It wasn’t until much later that I realized that, although SEO was much more nuanced than I had originally thought, it wasn’t impossible to figure out.
The solution, in my mind, is to consider SEO as subjective, in that different pages need different factors to rank for different queries.
That’s not to say there aren’t best practices we should follow (like having an up-to-date XML sitemap) or confirmed ranking factors (like links or mobile-friendliness). It’s just that the degree to which these are effective for producing our desired result will vary by factors like the size of your website, your competition, your industry, and even the time of year.
For example, Tom Capper’s presentation on the two-tiered SERP showed how volatile the page 1 SERP was for the term “Mother’s Day flowers” in the two weeks leading up to Mother’s Day.
And Botify’s data shows how crawl budget optimizations can have substantial ranking and traffic benefits on large sites while making little-to-no difference on smaller sites (disclaimer: I work for Botify).
We need to get comfortable with nuance if we want to be effective SEOs.
Same Test, Different Results
Rob Ousbey recently gave a great presentation at MozCon 2019 on this very topic.
He explained that he and the team at Distilled would set up SEO A/B tests (not to be confused with A/B tests for CRO: the former splits pages into groups, while the latter splits users into groups) to see, for example, whether removing content on product category pages would help or hurt their organic search traffic.
What did they learn?
The same change led to improved traffic on some sites and decreased traffic on others.
Data like this tells us that the same change (even one considered an “SEO best practice”) won’t necessarily have the same impact on any two sites.
So what are we supposed to do?
How to Apply the Scientific Method to Your SEO
We know that Google wants to serve the most relevant answer to searchers’ queries. We also know that “relevance” is a subjective quality. So if you want to find out what works for your unique site, you’re going to have to test.
And what better way to test our SEO theories than the scientific method?
Step 1: Make an Observation
What SEO mystery do you want to get to the bottom of?
Write it down.
Documenting your observation or question can help keep your SEO experiments on track. Try to keep a single focus, testing one thing at a time.
If you’re at a larger organization where you need to get executive buy-in before you can run experiments, it’s a good idea to use your own website’s data to point you in a direction where the likelihood of positive impact is high.
If your existing data indicates that your highest-ranked pages are those with a low page depth, you can use that to make a case to your boss.
For example: “Our data indicates that low page depth correlates with better rankings. We’d like to test that hypothesis by running an experiment where we reduce the depth of a group of low-ranking pages. If it’s successful on our test group, we’ll work on reducing the depth of all our key pages.”
When you do this, your boss is much more likely to approve your experiment.
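If you want to sanity-check that kind of correlation before making your pitch, a few lines of code will do it. Here’s a minimal sketch in Python, assuming a hypothetical pages.csv export with a click_depth column from your crawler and an avg_position column from your rank tracker (your tools will almost certainly name these differently):

```python
# A minimal sketch of the depth-vs-rankings check described above.
# Assumes a hypothetical "pages.csv" export with one row per page, a
# "click_depth" column from your crawler, and an "avg_position" column
# from your rank tracker -- your tools will name these differently.
import pandas as pd

pages = pd.read_csv("pages.csv")  # columns: url, click_depth, avg_position

# Spearman correlation suits two ordinal-ish metrics like depth and position.
correlation = pages["click_depth"].corr(pages["avg_position"], method="spearman")

print(f"Click depth vs. average position (Spearman): {correlation:.2f}")
# A positive value means deeper pages tend to hold worse (higher-numbered)
# positions -- a correlation worth testing, not proof of causation.
```

Remember, even a strong correlation here is only the justification for running the experiment, not the result of it.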
Step 2: Do Your Research
Next, you’ll want to research your topic. Look for any existing documentation on the subject – Google Webmaster blog posts, Google patents, third-party research, etc.
Research will help you narrow down to the most realistic hypothesis.
Step 3: Form a Hypothesis
What is your educated guess to explain your observation?
The rest of this process will seek to prove or disprove your hypothesis.
Step 4: Conduct an Experiment
Now onto the hard (or fun, depending on how you look at it) stuff. It’s time to run an experiment.
It can be difficult to run an experiment and get clean results.
For example, how do you know if the traffic increase or decrease was caused by your test, and not an algorithm update? Or seasonality? Or some other change made to the website at the same time?
One good way to control for this is to apply your change to one group of multiple, similar pages on your site while leaving a second, comparable group unchanged – a test group and a control group.
Not only does this help give you more conclusive results, but it also ensures you’re not wasting your time rolling out a bad or neutral change site-wide. You’ll end up only spending time on changes you’re confident will work in your favor.
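To make that concrete, here’s a rough sketch of how you might split a set of similar pages into a test group and a control group. The URLs and the hashing rule are placeholders I’ve made up for illustration; dedicated SEO split-testing platforms stratify pages far more carefully, but the principle is the same: you split pages, not users.

```python
# A rough sketch of splitting similar pages into test and control groups.
# The URLs and the hashing rule are made up for illustration; dedicated SEO
# split-testing platforms stratify pages more carefully, but the principle
# is the same: you split pages, not users.
import hashlib

def assign_group(url: str) -> str:
    """Deterministically assign a URL to the test or control group."""
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return "test" if int(digest, 16) % 2 == 0 else "control"

category_pages = [
    "/category/red-shoes/",
    "/category/blue-shoes/",
    "/category/running-shoes/",
    "/category/hiking-boots/",
]

for url in category_pages:
    print(f"{assign_group(url):>7}: {url}")
# Apply the change (e.g., trimming category copy) to the 'test' URLs only,
# leave the 'control' URLs untouched, and compare organic traffic over time.
```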
Step 5: Analyze Your Data
Now it’s time to analyze your data so that you can draw a logical conclusion. You’re essentially trying to uncover, based on the data, whether your hypothesis was right or wrong.
If you ran your experiment like an SEO A/B test, for example, improved metrics on the test group relative to the control group will tell you that your hypothesis was correct.
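As a rough illustration, here’s one way you might compare the two groups, assuming a hypothetical clicks.csv export of daily organic clicks per URL (for example, from Search Console) plus the group assignments from the previous sketch. Dedicated platforms typically forecast a counterfactual rather than doing a raw before/after comparison, but this shows the idea.

```python
# A simplified before/after comparison of test vs. control, assuming a
# hypothetical "clicks.csv" export of daily organic clicks per URL (e.g.,
# from Search Console) and the group assignments from the previous sketch.
# Dedicated platforms typically forecast a counterfactual instead of a raw
# before/after split, but this illustrates the idea.
import pandas as pd

groups = {
    "/category/red-shoes/": "test",
    "/category/blue-shoes/": "control",
    "/category/running-shoes/": "test",
    "/category/hiking-boots/": "control",
}
launch_date = pd.Timestamp("2019-09-01")  # assumed date the change went live

clicks = pd.read_csv("clicks.csv", parse_dates=["date"])  # columns: url, date, clicks
clicks["group"] = clicks["url"].map(groups)
clicks["period"] = clicks["date"].apply(lambda d: "after" if d >= launch_date else "before")

summary = clicks.groupby(["group", "period"])["clicks"].sum().unstack("period")
summary["pct_change"] = (summary["after"] - summary["before"]) / summary["before"] * 100
print(summary)
# If the test group's percentage change clearly outpaces the control group's,
# the data supports your hypothesis; if not, it doesn't.
```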
But what metrics should you be looking at?
Not all metrics are created equal, even in SEO where keyword rank position seems to reign supreme.
Every metric measures something different, so make sure you’re picking the ones that most directly measure the specific work that you did.
For example, if your hypothesis was “reducing redirect chains on our internal links will improve Google’s crawl of our site,” then the metric you’ll want to use is crawl ratio (the share of your pages Google is actually crawling vs. missing), which you can calculate from your log files.
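For illustration, here’s a simplified sketch of that calculation, assuming a standard combined-format access log and a flat file of the URLs you expect Google to crawl (both file names are placeholders). Real log analysis should also verify Googlebot via reverse DNS lookup; this sketch skips that for brevity.

```python
# A simplified crawl-ratio calculation from server logs, assuming a standard
# combined-format access log ("access.log") and a flat file of the URLs you
# expect Google to crawl ("known_urls.txt") -- both file names are placeholders.
# Production log analysis should also verify Googlebot via reverse DNS;
# this sketch skips that for brevity.
import re

GOOGLEBOT = re.compile(r"Googlebot", re.IGNORECASE)
REQUEST_PATH = re.compile(r'"(?:GET|HEAD) (\S+) HTTP')

crawled = set()
with open("access.log") as log:
    for line in log:
        if GOOGLEBOT.search(line):
            match = REQUEST_PATH.search(line)
            if match:
                crawled.add(match.group(1))

with open("known_urls.txt") as f:
    known = {line.strip() for line in f if line.strip()}

crawl_ratio = len(known & crawled) / len(known) * 100
print(f"Googlebot crawled {crawl_ratio:.1f}% of known URLs in this log window")
```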
This isn’t to say that each SEO activity only impacts a single metric – not at all!
Even technical changes like crawl budget optimizations can have a positive impact on your rankings and traffic. It’s just always a good idea to pick the most direct measures of your activities in order to determine whether the experiment was a success or failure.
Step 6: Report on Your Results
Finally, it’s time to report on your results.
This is an important step, because the conclusions you draw can affect how other people think about SEO.
Here are some tips to keep in mind when publishing your results:
- If you found a correlation, don’t phrase it as a causation. For example, if you found that low bounce rate correlated with high rankings, don’t report that low bounce rate will cause you to rank higher.
- Explain your methodology. How big was your test? What type of site(s) did you run it on? How did you collect the data? All these factors can influence the results, so people deserve to know these answers.
- Avoid sweeping generalities. Remember, the same factor could have different effects on different sites. When reporting your results, avoid being overly prescriptive. Instead of saying “content pruning worked on my site, therefore everyone should do it” you could say “content pruning worked on my site, therefore it may be worth testing on your own site.”
We Can Do Better Than ‘SEO Best Practices’ Lists
There are plenty of “SEO best practices” and “ranking factors” lists out there – too many to count.
While these listicles might offer you the temporary relief of thinking “If I just follow this list, I’ll be successful!” they disappoint in the long run.
Why? A few reasons:
- Ranking factor studies reveal what factors correlate with high rankings, not what changes cause higher rankings.
- Ranking factor lists and best practice checklists typically aren’t very specific. For example, a study might conclude that “pages with 2,000 words or more correlate with higher rankings,” but following that advice would be complete overkill for an ecommerce product page.
- SEO best practice lists typically focus on tips to help you rank better, with little mention of the other SEO metrics that matter.
Following a checklist is simply not sufficient for SEOs living in the era of the modern web and a search engine that learns faster than we ever could.
We need to embrace a “test everything” spirit to see what works and doesn’t work to improve key SEO metrics (not just rankings!) on our own unique websites.
There are likely even variations in what works and doesn’t work within a single website! For example, your product pages might need much different treatment than your blog pages or your forum pages.
Test…
Test…
And test again.
Using the scientific method on your website is a much more definitive way to reach educated conclusions about what works and what doesn’t.
Now go out there and be the best SEO scientists you can be.
More Resources:
- Real World Ranking Factors
- Google Discusses Ranking Factors
- A Complete Guide to SEO: What You Need to Know