
CTR as a Ranking Factor: 4 Research Papers You Need to Read

Is click-through rate REALLY a ranking factor? Here are four important research papers that will help you understand the role of CTR in search engine rankings and SEO.

There have been many discussions about click-through rate (CTR) and rankings. Some say CTR is a ranking factor; others insist it’s part of machine learning and quality control. A third group claims it’s all three, plus a bag of chips.

Regardless of which camp you pitch your tent in, here are four research papers I believe are helpful for understanding the role of CTR in search engine rankings and SEO.

Thorsten Joachims and the Study of CTR

Thorsten Joachims is a researcher associated with Cornell University. He has produced many influential research papers, among them research on the use of CTR in search engine algorithms. Check out his Google Scholar page, which lists all the research he has done on this topic.

If you want to learn how search engines use CTR, Joachims is the person to start with. The following four research papers, all of which he authored or co-authored, will prove enlightening.

1. Optimizing Search Engines with CTR

Optimizing Search Engines Using Clickthrough Data (PDF) is a research paper from 2002. It introduced the concept of using clickthrough data as an indicator of how relevant the links in the search results are, and of using that information to rank webpages better.

That this research paper is from 2002 shows just how old research into CTR is. Studying CTR for relevance information is a mature area of research, and search engine research has since progressed far beyond it.

Nevertheless, it’s important to understand how CTR is actually used. That foundation of understanding will make you less likely to be fooled by baseless speculation about click-through rates and their role in ranking webpages.

Here’s what the research paper states:

“The goal of this paper is to develop a method that utilizes clickthrough data for training, namely the query-log of the search engine in connection with the log of links the users clicked on in the presented ranking.

…The key insight is that such clickthrough data can provide training data in the form of relative preferences.”
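
The “relative preferences” idea is easy to make concrete. In the sketch below (my own Python illustration; the function name and data layout are not from the paper), a click on the third result while the first two are skipped is read as “result three is preferred over results one and two,” not as an absolute vote of relevance:

```python
# Minimal sketch of the 2002 paper's key insight: clicks yield
# *relative* preferences, not absolute relevance judgments.
# Illustrative only; names and data layout are not from the paper.

def preferences_from_clicks(ranking, clicked):
    """Given a ranked list of doc IDs and the set of clicked docs,
    infer (preferred, less_preferred) pairs: a clicked document is
    preferred over every unclicked document ranked above it."""
    prefs = []
    for pos, doc in enumerate(ranking):
        if doc in clicked:
            for above in ranking[:pos]:
                if above not in clicked:
                    prefs.append((doc, above))
    return prefs

# Example: the user skipped the top two results and clicked the third.
ranking = ["doc_a", "doc_b", "doc_c", "doc_d"]
clicked = {"doc_c"}
print(preferences_from_clicks(ranking, clicked))
# [('doc_c', 'doc_a'), ('doc_c', 'doc_b')]
```

Pairs like these are the training data the paper proposes feeding to a ranking algorithm, which then learns a scoring function that satisfies as many of the inferred preferences as possible.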

In my opinion, this paper recognizes a limitation in the approach. The algorithm is limited to learning which of the top 10 links are most relevant; it learns nothing about the webpages on the second, third, or fourth pages of the search engine results pages (SERPs).

This is what the research paper observes:

“…there is a dependence between the links presented to the user, and those for which the system receives feedback.”

Right from the beginning of CTR research, it was understood that CTR data from the top 10 of the SERPs was of limited but important value. The research paper also notes that this kind of algorithm was open to spamming and that steps would need to be taken to make it robust against spam.

This is what Thorsten Joachims noted:

“..it might also be possible to explore mechanisms that make the algorithm robust against “spamming”. It is currently not clear in how far a single user could maliciously influence the ranking function by repeatedly clicking on particular links.”

This is important information because it shows that even in 2002 researchers were thinking about how to prevent click spamming. What this means is that the advice to click on one’s own listings to upvote one’s own site probably doesn’t work.

2. The Concept of CTR as Biased Feedback

This paper, co-authored with a researcher from Stanford University, is entitled Accurately Interpreting Clickthrough Data as Implicit Feedback – 2005 (PDF). It is an important research paper because it introduces the concept that CTR data may not be all that reliable.

Here is how the CTR research paper expresses the idea that CTR data is noisy:

“This paper examines the reliability of implicit feedback generated from clickthrough data in WWW search. Analyzing the users’ decision process using eyetracking and comparing implicit feedback against manual relevance judgments, we conclude that clicks are informative but biased. While this makes the interpretation of clicks as absolute relevance judgments difficult, we show that relative preferences derived from clicks are reasonably accurate on average.”

This paper is concerned with understanding which links users scanned, whether users scan from top to bottom, which links users linger over before clicking, and how the title and meta description in the SERPs influence the decision to click one link over another. That influence of the title and meta description on user behavior is the bias this research paper discovered.

Yet the paper was optimistic that, because there is a large amount of data to be mined, machine learning could be applied to accurately determine which links are more relevant than others.

The research paper on CTR reached this conclusion:

“Our results indicate that user’s clicking decisions are influenced by the relevance of the results, but that they are biased by the trust they have in the retrieval function, and by the overall quality of the result set. This makes it difficult to interpret clicks as absolute feedback.

However, we examine several strategies for generating relative feedback signals from clicks, which are shown to correspond well with explicit judgments. …The fact that implicit feedback from clicks is readily available in virtually unlimited quantity might more than overcome this quality gap, if implicit feedback is properly interpreted using machine learning methods…”

I believe it is important to note that this research paper is not concerned with finding spam or with finding low quality sites to exclude. It is simply concerned with finding relevant sites that satisfy users.
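
To see what “correspond well with explicit judgments” might look like in practice, here is a minimal Python sketch (my own illustration, not code from the study) that scores click-derived preference pairs against manual relevance labels, the kind of comparison the eye-tracking study carried out:

```python
# Illustrative sketch: measure how often click-derived preference
# pairs agree with explicit relevance judgments. The data here is
# hypothetical; the study used real eye-tracking sessions.

def preference_accuracy(pref_pairs, relevance):
    """pref_pairs: (preferred, less_preferred) doc-ID pairs from clicks.
    relevance: manual judgments, higher = more relevant.
    Returns the fraction of pairs the judgments agree with."""
    agree = sum(1 for good, bad in pref_pairs
                if relevance[good] > relevance[bad])
    return agree / len(pref_pairs) if pref_pairs else 0.0

# Hypothetical judgments for the preference pairs inferred earlier.
pairs = [("doc_c", "doc_a"), ("doc_c", "doc_b")]
judgments = {"doc_a": 1, "doc_b": 0, "doc_c": 2}
print(preference_accuracy(pairs, judgments))  # 1.0
```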

3. Machine Learning and Simulated CTR

The third research paper is also from 2005. It is titled Evaluating the Robustness of Learning from Implicit Feedback. The goal of this paper is to understand when CTR data is useful and when it is biased and less useful.

This is how the paper framed the problem and the solution:

“…this data tends to be noisy and biased… In this paper, we consider a method for learning from implicit feedback and use modeling to understand when it is effective.”

This paper is especially interesting because it introduces the possibility of modeling user behavior and using that data instead of actual user behavior. It also mentions reinforcement learning, a form of machine learning.

Here’s a link to an introduction to reinforcement learning. It uses the example of a child learning that a fire is good because it gives off heat, but later learning that the fire is bad if you get too close.

This is how the research paper presented it:

“This type of interactive learning requires that we either run systems with real users, or build simulations to evaluate algorithm performance.

The alternative, often used in reinforcement learning, is to build a simulation environment. Obviously this has the drawback that it is merely a simulation, but it also has significant advantages. It allows more rapid testing of algorithms than by relying on user participation. It also allows exploration of the parameters of user behavior. In particular, we can use a model to explore the robustness of a learning algorithm to noise in the training data.”

This is really cool. It shows how a search engine can use machine learning to understand user behavior and then train the algorithm not with actual CTR data but with simulated CTR.

This means that a search engine can theoretically model user behavior on webpages even if those pages do not rank on the first page of the SERPs. This overcomes the limitations noted in the research way back in 2002.
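
As a rough illustration of what such a simulation environment can look like, here is a toy click model in Python. It is my own construction, not the model from the paper: a simulated user scans results from top to bottom, clicks noisily in proportion to true relevance, and may abandon the scan after any result:

```python
# Toy simulated-user click model, in the spirit of the paper's
# simulation environment. My own construction; the paper's model
# and parameters differ.
import random

def simulate_session(ranking, true_relevance, noise=0.1, patience=0.7):
    """Scan results top to bottom. Click each result with probability
    equal to its true relevance, occasionally flipped by `noise`;
    stop scanning with probability 1 - `patience` after each result."""
    clicks = set()
    for doc in ranking:
        p_click = true_relevance[doc]
        if random.random() < noise:           # noisy user: sometimes
            p_click = 1.0 - p_click           # misjudges a result
        if random.random() < p_click:
            clicks.add(doc)
        if random.random() > patience:        # user abandons the scan
            break
    return clicks

random.seed(0)
relevance = {"doc_a": 0.1, "doc_b": 0.9, "doc_c": 0.5}
for _ in range(3):
    print(simulate_session(["doc_a", "doc_b", "doc_c"], relevance))
```

Synthetic sessions like these can be generated in unlimited quantity, which is what makes simulation attractive for testing how robust a learning algorithm is to noise in the training data.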

4. User Intent and CTR – 2008

The final research paper I want to introduce is Learning Diverse Rankings with Multi-Armed Bandits (PDF). This research paper does not use the phrase “user intent.” It uses the phrase “user satisfaction.”

This paper is focused on the importance of showing results that satisfy the most users. And satisfying the most users means understanding which results lead to the fewest clicks back to the search engine, also known as abandonment.

Satisfying all users means showing different kinds of webpages. The user intent for many search queries is different.

What’s relevant for one user is less relevant to another. Thus, it’s important to show diverse search results, not the same kind of answer ten times.

Here’s what the paper says about showing multiple kinds of results:

“…user studies have shown that diversity at high ranks is often preferred. We present two online learning algorithms that directly learn a diverse ranking of documents based on users’ clicking behavior. We show that these algorithms minimize abandonment, or alternatively, maximize the probability that a relevant document is found in the top k positions of a ranking.”

And this is what the paper says about user satisfaction:

“…previous algorithms for learning to rank have considered the relevance of each document independently of other documents. In fact, recent work has shown that these measures do not necessarily correlate with user satisfaction…”

And here is the part that really nails the problem that search engines today have solved:

“…web queries often have different meanings for different users… suggesting that a ranking with diverse documents may be preferable.”

The only downside to this kind of CTR algorithm for determining user satisfaction is that it may not work well for topics where what users want is in a state of change.

“We expect such an algorithm to perform best when few documents are prone to radical shifts in popularity.”
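
To give a flavor of the ranked-bandits idea behind the paper, here is a simplified Python sketch: one multi-armed bandit per rank position, each learning which document earns a click at its slot, trained against a simulated population with two different intents. The epsilon-greedy bandits, the toy user model, and the crude reward rule (credit goes to the rank that received the first click) are my own simplifications, not the paper’s exact algorithms:

```python
# Simplified sketch of "one bandit per rank." Illustrative only;
# the paper's algorithms differ in bandit strategy and rewards.
import random
from collections import defaultdict

class EpsilonGreedyBandit:
    def __init__(self, arms, epsilon=0.1):
        self.arms, self.epsilon = list(arms), epsilon
        self.value = defaultdict(float)   # running mean reward per arm
        self.count = defaultdict(int)

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(self.arms)                 # explore
        return max(self.arms, key=lambda a: self.value[a])  # exploit

    def update(self, arm, reward):
        self.count[arm] += 1
        self.value[arm] += (reward - self.value[arm]) / self.count[arm]

def build_ranking(bandits, docs):
    """Each rank's bandit picks a document; duplicates at lower
    ranks are replaced by a random unused document."""
    ranking, used = [], set()
    for bandit in bandits:
        choice = bandit.select()
        if choice in used:
            choice = random.choice([d for d in docs if d not in used])
        ranking.append(choice)
        used.add(choice)
    return ranking

# Simulated users with different intents: 60% want doc_b, 40% want
# doc_c. A user clicks their document if shown, otherwise abandons.
random.seed(1)
docs = ["doc_a", "doc_b", "doc_c", "doc_d"]
bandits = [EpsilonGreedyBandit(docs) for _ in range(3)]
for _ in range(2000):
    wanted = "doc_b" if random.random() < 0.6 else "doc_c"
    ranking = build_ranking(bandits, docs)
    first_click = ranking.index(wanted) if wanted in ranking else None
    for rank, bandit in enumerate(bandits):
        bandit.update(ranking[rank], 1.0 if rank == first_click else 0.0)
print(build_ranking(bandits, docs))  # doc_b and doc_c should top the list
```

Because a ranking that leads with doc_b alone would lose the doc_c users entirely, the bandits converge toward a diverse top of the ranking, which is exactly how minimizing abandonment produces diversity.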

Read the CTR Research

And there you have it. These are, in my opinion, four important research papers to read before forming an opinion about the role of CTR in ranking webpages.

It’s important to note that the first research paper cited in this article is from 2002 and the last is from 2008. That gives an idea of how mature research into CTR is. Most research today is no longer focused on CTR; it is focused on artificial intelligence.

Nevertheless, if you are interested in CTR data and how it may play a role in ranking, you will benefit from reading these four research papers.
