SEO anxiety? Don’t worry. How to live peacefully in a world where Google’s organic CTR drops and keywords lose importance.

These days, specialized sites and amateur blogs alike are overflowing with SEO trends for 2020.
And there is no better occasion to proclaim the mother of all trends: the death of SEO as we have known it, now that so-called keyword research and the old Google logic have supposedly been superseded. It is, after all, a classic start-of-year exercise, practiced by many with a more or less authoritative voice. Only to concede, a few months later, that SEO is not dead. To paraphrase the telegram Mark Twain sent to the Associated Press after learning that his obituary had been published prematurely: the announcement was “greatly exaggerated.”

No, it won’t be the end of SEO

We have no wish to swell the ranks of unwary prophets here. That is not because SEO requires no updating: it must keep pace with the innovations introduced by Google and other search engines. It is rather a question of keeping a critical attitude and evaluating things on the basis of empirical results. Catchphrases like “nothing will be as before” grab an audience easily, but they do not help us understand what is going on. Below we have drawn up a list of things an SEO specialist should keep an eye on in 2020, knowing that it is incomplete and that on some points we may even be wrong.

Index

  • Google evolution and its impact on SEO
  • The road from Hummingbird to BERT
  • Work on search intent
  • Hunting for the featured snippet
  • Restorative care for organic CTR
  • The growing weight of voice search

Google evolution and its impact on SEO

Google adjusts its algorithm daily, although the impact of each change often goes unnoticed. Then, at least two or three times a year, come the so-called core updates (in a recent post we discussed the January 2020 one) and the even more substantial changes, such as Hummingbird (2013), RankBrain (2015) and BERT (2019).

The point is that, as often happens, the short-term impact tends to be exaggerated while the medium- to long-term impact is underestimated. What is being questioned today is the SEO approach based on the positioning and frequency of individual keywords within a document, which many consider obsolete. The increased semantic ability of Google's algorithms now allows the search engine to understand the meaning of what we publish and, just as important, the intent behind each query.

So, if Google understands what our document is about and what the user performing the search needs, regardless of the position and density of individual keywords, why should we still care about keyword strategy?

The point is that no semantic understanding will ever happen “regardless” of the terms in the document.

Semantics works on three levels: 1) the single word (or phrase), 2) the simple or complex sentence, 3) the context. Understanding the meaning of a sentence presupposes semantic competence at the level of the phrase, which in turn presupposes the ability to establish a relationship between each term and all the others. The meaning of a word depends on the other words that surround it and on the context that “surrounds” the sentence.

Strictly speaking, it would even be wrong to talk about the search engine's semantic capacity, especially from a cognitive perspective. It is a serious mistake to fall into the misunderstanding that the algorithm can interpret the meaning of a linguistic act the way a human being does.

The road from Hummingbird to BERT

So let’s try to clarify a bit. Google’s first attempt to recognize the meaning of a text on the basis of the relationship of each term with the other terms of the same text, rather than on the basis of their frequency, dates back to the launch of Hummingbird. Hummingbird is based on a patent filed by Google in 2012 under the title Synonym identification based on co-occurring terms. This technology makes it possible, starting from a specific term, to identify its synonyms within a text by considering all the other terms that text contains, and to assign a confidence score to each semantic link identified.

Two years later, Google Search introduced a new algorithm called RankBrain. Again, this is a machine learning procedure that learns to recognize semantic similarities between different terms. Given an input term, the algorithm formulates hypotheses about terms that could have a similar meaning. Each term is placed within a vector, called a distributed representation: a group of words connected by semantic relationships.

A further step towards the semantic competence of Google Search has been taken with the introduction of BERT. This is not an entirely new project, even if the Mountain View engineers made it official only on October 25, 2019. BERT, at the moment, is not implemented in the Italian edition of Google.

Work on search intent

What are the impacts on the SEO side? It is not a question of decreeing the end of the keyword strategy, but of recognizing the need for a new type of keyword strategy. Google is learning to perceive the semantic significance of a document, in the sense that it identifies the relationships of meaning between different terms and thus reconstructs the underlying semantic field. Our strategy will therefore have to create dense and rich semantic fields, working not only on the frequency of a single term but also, and above all, on the presence of synonyms and semantically related expressions.

We must expect Google to reward the most cohesive texts, although it is very difficult to quantify this growing capacity of the search engine. Probably, today, four occurrences of the word ‘fish’ are worth less than a single occurrence of that word associated with terms such as ‘fishing’, ‘salmon’, ‘sea’, ‘aquatic animal’, etc.

Setting up a new type of keyword strategy also means working better with other keywords, different from those on which we have focused our efforts so far. It means reasoning about terms that match users’ possible intent. These two things are not necessarily in conflict: so-called intent research does not replace keyword research. So we agree with John Mueller, Google’s webmaster trends analyst, when he says that “showing specific words to users can make it a little easier for them to understand what our pages are about and can sometimes guide them through the conversion process” (Google’s John Mueller on intent research vs keyword research for 2020).

So SEO is not dead, in short. If anything, all this suggests reactivating old content from an SEO perspective, seeking a new balance between the different keywords. After all, bringing fresh traffic to old glories is always a noble goal (see the good practices suggested in the interesting post How to breathe fresh life into evergreen content (and get fresh traffic, too), by George Nguyen).

Hunting for the featured snippet

Another hot topic to pay attention to from 2020 onwards. One thing is for sure: featured snippets carry increasing weight in the composition of SERPs, as shown by the dynamic analysis of SEMrush Sensor. Today over 12% of SERPs in the United States contain a featured snippet, and almost 8% in Italy.

Featured snippets come in three types: paragraph snippets, containing a short description of the document’s content; list snippets, either bulleted or numbered; and table snippets. So we must decide in which of the three scenarios we want to compete, and organize our document accordingly. And, before that, we have to decide whether the featured snippet suits us at all. We need to be aware, in fact, that winning it can have a negative impact on click-through rate, as shown by an extensive Ahrefs study from three years ago, although Christian Carere seems to express a different point of view in this post on Ryte Magazine.

Finally, SEMrush research carried out in 2018 suggests a series of good practices:

  • Publish documents of a certain length (at least 2,000 words)
  • Use a featured image with horizontal orientation (4:3, 600×425 pixels)
  • Include links to authoritative external resources
  • Stimulate social engagement on the page
  • Guarantee users an excellent experience on mobile devices
  • Add a 40-60 word “snippet bait” to the page (if you are targeting the paragraph snippet)
  • Use H2 or H3 header tags for each element of the list (for list snippets)
  • Insert a table in the text (for the table snippet)
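As a minimal sketch of some of these practices, a page competing for a list or table snippet might be structured like this (a hypothetical fragment; headings and content are placeholders):

```html
<!-- Each step gets its own H3 heading, so the search engine can lift
     the headings into a numbered list snippet -->
<h2>How to optimize a page for featured snippets</h2>
<p>A 40-60 word summary of the whole procedure can serve as
   “snippet bait” for the paragraph snippet.</p>

<h3>1. Identify the question your page answers</h3>
<p>...</p>
<h3>2. Add a short summary near the top of the page</h3>
<p>...</p>
<h3>3. Mark up tabular data with a real table element</h3>
<table>
  <tr><th>Snippet type</th><th>Page element it is built from</th></tr>
  <tr><td>Paragraph</td><td>Short summary paragraph</td></tr>
  <tr><td>List</td><td>H2/H3 headings</td></tr>
  <tr><td>Table</td><td>&lt;table&gt; markup</td></tr>
</table>
```

The point of the heading-per-item structure is that Google assembles list snippets from the page’s own heading hierarchy, so each list element needs to be machine-recognizable rather than buried in prose.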

Another trend that should consolidate in 2020 is the increase in featured snippets in video format. So we need to make the videos we publish more easily indexable by Google Search. This means organizing them into dedicated sections of our YouTube channel, optimizing them from an SEO point of view (taking care with the title, description and tags), providing a transcription of the audio track, and embedding the videos within the articles we publish on our site.
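A video embedded in an article can also be described with schema.org VideoObject structured data, which helps Google Search understand and surface it. A hedged sketch (all URLs, dates and text values here are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "Optimizing content for featured snippets",
  "description": "A short walkthrough of snippet-friendly page structure.",
  "thumbnailUrl": "https://example.com/thumbs/snippets.jpg",
  "uploadDate": "2020-01-15",
  "embedUrl": "https://www.youtube.com/embed/VIDEO_ID",
  "transcript": "Full text of the audio track goes here..."
}
</script>
```

Note that the transcript recommended above maps directly onto the `transcript` property, so the same asset serves both accessibility and indexing.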

Restorative care for organic CTR

It’s official: the organic click-through rate on SERPs is decreasing, especially in Google’s mobile version. In 2018, Rand Fishkin documented a very significant drop measured over three years, from 66% to 39% (see New Data: How Google’s Organic & Paid CTRs Have Changed 2015-2018, on SparkToro). However, this does not correspond to a particularly significant increase in sponsored CTR, which does not reach 4%.

How can we counter this trend? We need to work on the factors that most influence CTR. The most important, of course, is position in the SERP. According to a 2019 analysis by Brian Dean for Backlinko, based on ClickFlow data, the average CTR of snippets in first position is 31.7%, ten times higher than in tenth position. Moving from third to second position can lead to a CTR increase of over 30%. The effect is much less pronounced further down, and there is little difference between seventh and tenth position.

Other important factors for stimulating CTR are:

  • the presence of a question in the title tag (+ 14.1% compared to the average CTR)
  • the presence of the main keyword in the URL (+ 45%)
  • an emotional slant in the title of our content (+ 7%)
  • the presence of content in the meta description tag (+ 5.8%)
  • a title tag length between 15 and 40 characters (+ 8.6%)
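Several of these factors live in the page head and the URL. A hypothetical example combining them (a question in the title tag, within the 15-40 character range; the main keyword in the URL; a populated meta description):

```html
<!-- URL: https://example.com/blog/voice-search-seo -->
<head>
  <title>Is Voice Search the Future of SEO?</title>
  <meta name="description"
        content="What voice search means for your keyword strategy,
                 and some practical ways to adapt your content in 2020.">
</head>
```

The title here is 34 characters, sits inside the favored range, and poses a question; the slug carries the main keyword.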

The growing weight of voice search

Voice search is entering its maturity phase, thanks to two trends that feed each other: on the one hand, the rapid progress of speech recognition technologies; on the other, the spread of personal assistants (Google Assistant, Amazon Alexa, Microsoft Cortana and Apple Siri) on smartphones and smart speakers (Amazon Echo, Google Home).

This does not mean that voice recognition has overcome every application challenge. In particular, it is not yet ready for many IoT contexts. Think, for example, of the persistent difficulty of managing a device through voice commands issued simultaneously from different sources: two people talking in the same room are enough to put systems like Alexa or Siri in difficulty. Not to mention the limited customization options.

In the field of online search, however, the use of voice is now well established. We do not believe that voice accounts for 50% of searches, as is sometimes claimed. The misunderstanding was probably generated by the results of an Adobe study from July 2019, according to which 48% of Americans use a voice assistant for their online searches, and in almost half of those cases on a daily basis.

What does it mean to optimize a web document for voice search? Here are some tips for those working on SEO content:

  • Adopt a natural and conversational writing style
  • Lengthen the tail of your keyword strategy and don’t forget the old stop-words
  • Always try to put your content into context
  • Integrate structured data into the documents we publish
  • Work with the speakable property of Schema.org, allowing the search engine to identify the sections suitable for audio playback via text-to-speech (TTS)
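As a sketch of the last point, the speakable property points the search engine, via CSS selectors or XPath expressions, at the sections of a page best suited to TTS playback (selectors, URL and name below are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "name": "SEO trends for 2020",
  "url": "https://example.com/seo-trends-2020",
  "speakable": {
    "@type": "SpeakableSpecification",
    "cssSelector": [".article-summary", ".key-takeaways"]
  }
}
</script>
```

The selected sections should read well aloud on their own: short, self-contained passages rather than fragments that depend on surrounding layout.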