About frans

frans has written 4,625 articles so far; you can find them below.

Wake Up, SEOs – the NEW New Google is Here

Posted by gfiorelli1

In 2011 I wrote a post here on Moz. The title was “Wake Up SEOs, the New Google is Here.”

In that post I presented some concepts that, in my personal opinion, we SEOs needed to pay attention to in order to follow the evolution of Google.

Sure, I also presented a theory which ultimately proved incorrect; I was much too confident about things like rel=”author”, rel=”publisher”, and the potential decline of the Link Graph influence.

However, the premises of that theory were substantially correct, and they remain correct five years later:

  1. Technical SEO is foundational to the SEO practice;
  2. The user is king, which means that Google will focus more and more on delivering the best user search experience — hence, SEO must evolve from “Search Engine Optimization” into “Search Experience Optimization”;
  3. Web performance optimization (site speed), 10X content, and semantics would play a big role in SEO.

Many things have changed in our industry in the past 5 years. The time has come to pause, take a few minutes, and assess what Google is and where it’s headed.

I’ll explain how I “study” Google and what I strongly believe we, the SEOs, should pay attention to if we want not only to survive, but to anticipate Google’s end game, readying ourselves for the future.

Obviously, consider that, while I believe it’s backed up by data, facts, and proof, this is my opinion. As such, I kindly ask you not to take what I write for granted, but rather as an incentive for your own investigations and experiments.

Exploring the expanded universe of Google

Credit: Robson Ribeiro

SEO is a kingdom of uncertainty.

However, one constant never changes: almost every SEO dreams of being a Jedi at least once in her life.

I, too, fantasize about using the Force… Gianlu Ka Fiore Lli, Master Jedi.

Honestly, though, I think I’m more like Mon Mothma.

Like her, I am a strategist by nature. I love to investigate, to see connections where nobody else seems to see them, and to dig deeper into finding answers to complex questions, then design plans based on my investigations.

This way of being means that, when I look at the mysterious wormhole that is Google, I examine many sources:

  1. The official Google blogs;
  2. The “Office Hours” hangouts;
  3. The sometimes contradictory declarations Googlers make on social media (when they don’t share an infinite loop of GIFs);
  4. The Google Patents and the ones filed by people now working for Google;
  5. The news (and stories) about the companies Google acquires;
  6. The biographies of the people Google employs in key areas;
  7. The “Google Fandom” (aka what we write about it);
  8. Rumors and propaganda.

Now, when examining all these sources, it’s easy to create amazing conspiranoiac (conspiracy + paranoia) theories. And I confess: I helped create, believed, and defended some of them, such as AuthorRank.

In my opinion, though, this methodology for finding answers about Google is the best one for understanding the future of our beloved industry of search.

If we don’t dig into the “Expanded Universe of Google,” what we have is a timeline composed only of updates (Panda 1.N, Penguin 1.N, Pigeon…), which is totally useless in the long term:


Instead, if we create a timeline with all the events related to Google Search (which we can discover simply by being well-informed), we begin to see where Google’s heading:


The timeline above confirms what Google itself openly declared:

“Machine Learning is a core, transformative way by which we’re rethinking how we’re doing everything.”
– (Sundar Pichai)

Google is becoming a “Machine Learning-First Company,” as defined by Steven Levy in this post.

Machine learning is becoming so essential to the evolution of Google and search that perhaps we should go beyond listening only to official Google spokespeople like Gary Illyes or John Mueller (nothing personal, just to be clear… for instance, read this enlightening interview of Gary Illyes by Woj Kwasi). Maybe we should start paying more attention to what people like Christine Robson, Greg Corrado, Jeff Dean, and the staff of Google Brain write and say.

The second timeline tells us that, starting in 2013, Google began investing money, intellectual effort, and energy on a sustained scale in:

  • Machine learning;
  • Semantics;
  • Context understanding;
  • User behavior (or “Signals/Semiotics,” as I like to call it).

2013: The year when everything changed

Google rolled out Hummingbird only three years ago, but it already feels like decades ago.

Let’s quickly rehash: what’s Hummingbird?

Hummingbird is the Google algorithm as a whole. It’s composed of four phases:

  1. Crawling, which collects information on the web;
  2. Parsing, which identifies the type of information collected, sorts it, and forwards it to a suitable recipient;
  3. Indexing, which identifies and associates resources in relation to a word and/or a phrase;
  4. Search, which…
    • Understands the queries of the users;
    • Retrieves information related to the queries;
    • Filters and clusters the information retrieved;
    • Ranks the resources; and
    • Paints the search result page and so answers the queries.

This last phase, Search, is where we can find the “200+ ranking factors” (RankBrain included) and filters like Panda or anti-spam algorithms like Penguin.

Remember that there are as many Search phases as there are vertical indices (documents, images, news, video, apps, books, maps…).

We SEOs tend to fixate almost exclusively on the Search phase, forgetting that Hummingbird is more than that.

This approach to Google is myopic and does not withstand a very simple logical square exercise.

  1. If Google is able to correctly crawl a website (Crawling);
  2. to understand its meaning (Parsing and Indexing);
  3. and, finally, if the site itself responds positively to the many ranking factors (Search);
  4. then that website will be able to earn the organic visibility it aims to reach.

If even one of the first three elements of this logical square is missing, organic visibility is missing too; think about non-optimized AngularJS websites, and you’ll understand the logic.

The website on the left in a non-JS enabled browser. On the right, JS enabled reveals all of the content. Credit: Builtvisible.com

How can we be SEO Jedi if we only see one facet of the Force?

Parsing and indexing: often forgotten

Over the past 18 months, we’ve seen a sort of technical SEO renaissance, as described by Mike King in this fundamental deck, despite attempts to classify technical SEOs as makeup artists.

At the same time, we’re still struggling to fully understand the importance of the Parsing and Indexing phases.

Of course, we can justify that by claiming that parsing is the most complex of the four phases. Google agrees, as it openly declared when announcing SyntaxNet.


However, if we don’t optimize for parsing, then we’re not going to fully benefit from organic search, especially in the months and years to come.

How to optimize for parsing and indexing

As a premise to parsing and indexing optimization, we must remember an oft-forgotten aspect of search, which Hummingbird highlighted and enhanced: entity search.

If you remember what Amit Singhal said when he announced Hummingbird, he declared that it had “something of Knowledge Graph.”

That part was — and I’m simplifying here for clarity’s sake — entity search, which is based on two kinds of entities:

  1. Named entities are what the Knowledge Graph is about, such as persons, landmarks, brands, historic movements, and abstract concepts like “love” or “desire”;
  2. Search entities are “things” related to the act of searching. Google uses them to determine the answer for a query, especially in a personalized context. They include:
    • Query;
    • Documents and domains answering the query;
    • Search session;
    • Anchor text of links (internal and external);
    • Time when the query is executed;
    • Advertisements responding to a query.

Why does entity search matter?

It matters because entity search is the reason Google better understands the personal and almost unique context of a query.

Moreover, thanks to entity search, Google better understands the meaning of the documents it parses. This means it’s able to index them better and, finally, to achieve its main purpose: serving the best answers to the users’ queries.

This is why semantics is important: semantic search is optimizing for meaning.

Credit: Starwars.com

It’s not a ranking factor, it’s not needed to improve crawling, but it is fundamental for Parsing and Indexing, the big forgotten-by-SEOs algorithm phases.

Semantics and SEO

First of all, we must consider that there are different kinds of semantics and that, sometimes, people tend to get them confused.
  1. Logical semantics, which is about the relations between concepts/linguistic elements (e.g.: reference, presupposition, implication, et al)
  2. Lexical semantics, which is about the meaning of words and their relation.

Logical semantics

Structured data is the big guy right now in logical semantics, and Google (both directly and indirectly) is investing a lot in it.

A couple of months ago, when the mainstream marketing gurusphere was discussing the 50 shades of the new Instagram logo or the average SEO was (justifiably) shaking his fists against the green “ads” button in the SERPs, Google released the new version of Schema.org.

This new version, as Aaron Bradley aptly commented here, improves the ability to disambiguate between entities and/or better explain their meaning.


At the same time, we shouldn’t forget to always use the most important property of all: “SameAs”, one of the few properties present in every Schema.org type.

Finally, as Mike Arnesen recently explained quite well here on the Moz blog, take advantage of the semantic HTML attributes ItemRef and ItemID.

How do we implement Schema.org in 2016?

It is clear that Google is pushing JSON-LD as the preferred method for implementing Schema.org.

The best way to implement JSON-LD Schema.org is to use the Knowledge Graph Search API, which uses the standard Schema.org types and is compliant with JSON-LD specifications.
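As a rough illustration of what that output looks like, here is a minimal sketch in Python that assembles a Schema.org snippet as JSON-LD; every name and URL is a placeholder, not a recommendation from Google or from this post.

```python
import json

# Minimal Schema.org Organization markup expressed as JSON-LD.
# Every name and URL below is a placeholder, purely for illustration.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Lightsaber Shop",
    "url": "https://www.example.com/",
    # "sameAs" points to the entity's other profiles and helps disambiguation.
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example",
        "https://twitter.com/example",
        "https://www.facebook.com/example",
    ],
}

# The resulting string belongs inside a <script type="application/ld+json"> tag.
print(json.dumps(organization, indent=2))
```

In practice you would generate these objects from your CMS data and then validate them in the Structured Data Testing Tool mentioned below.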

As an alternative, you can use the recently rolled out JSON-LD Schema Generator for SEO tool by Hall Analysis.

To solve a common complaint about JSON-LD (its volume and how it may affect the performance of a site), we can:

  1. Use Tag Manager in order to fire Schema.org when needed;
  2. Use prerender hints so the browser begins loading the pages your users may visit after the one they’re currently on, fetching the JSON-LD elements of those pages ahead of time.

The importance Google gives to Schema.org and structured data is confirmed by the new and radically improved version of the Structured Data Testing Tool, which is now more actionable for identifying mistakes and testing solutions thanks to its JSON-LD (again!) and Schema.org contextual autocomplete suggestions.

Semantics is more than structured data #FTW!

One mistake I foresee is thinking that semantic search is only about structured data.

It’s the same kind of mistake people make in international SEO when they reduce it to hreflang alone.

The reality is that semantics is present from the very foundations of a website, found in:

  1. Its code, specifically HTML;
  2. Its architecture.

HTML


Since its beginnings, HTML has included semantic markup (e.g.: title, H1, H2…).

Its latest version, HTML5, added new semantic elements, the purpose of which is to semantically organize the structure of a web document and, as W3C says, to allow “data to be shared and reused across applications, enterprises, and communities.”

A clear example of how Google is using the semantic elements of HTML is its Featured Snippets, or answer boxes.

As declared by Google itself (“We do not use structured data for creating Featured Snippets”) and explained well by Dr. Pete, Richard Baxter, and very recently Simon Penson, the documents that tend to be used for answer boxes usually display these three factors:

  1. They already rank on the first page for the query pulling out the answer box;
  2. They positively answer using basic on-page factors;
  3. They have clean — or almost clean — HTML code.

The conclusion, then, is that semantic search starts in the code and that we should pay more attention to those “boring,” time-consuming, not-a-priority W3C error reports.
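If you want a quick, programmatic feel for how clean and semantic a page’s markup is before diving into the full W3C report, a small script can count semantic elements and headings. This is only a sketch under my own assumptions (requests and BeautifulSoup installed, placeholder URL); it is no substitute for a proper validator.

```python
import requests
from bs4 import BeautifulSoup

# Placeholder URL, purely for illustration.
url = "https://www.example.com/"
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

# Count HTML5 semantic elements and heading levels as a rough structure check.
semantic_tags = ["header", "nav", "main", "article", "section", "aside", "footer"]
counts = {tag: len(soup.find_all(tag)) for tag in semantic_tags}
headings = {f"h{level}": len(soup.find_all(f"h{level}")) for level in range(1, 7)}

print("Semantic elements:", counts)
print("Headings:", headings)
if headings["h1"] != 1:
    print("Warning: a page should normally have exactly one <h1>.")
```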

Architecture

The semiotician in me (I studied semiotics and the philosophy of language at university with the likes of Umberto Eco) cannot help but consider information architecture itself as semantics.

Let me explain.

Open http://www.starwars.com/ in a tab of your browser to follow along below

Everything starts with the right ontology

Ontology is a set of concepts and categories in a subject area (or domain) that shows their properties and the relations between them.

If we take the Starwars.com site as example, we can see in the main menu the concepts in the Star Wars subject area:

  1. News/Blog;
  2. Video;
  3. Events;
  4. Films;
  5. TV Shows;
  6. Games/Apps;
  7. Community;
  8. Databank (the Star Wars Encyclopedia).

Ontology leads to taxonomy (because everything can be classified)

If we look at Starwars.com, we see how every concept included in the Star Wars domain has its own taxonomy.

For instance, the Databank presents several categories, like:

  1. Characters;
  2. Creatures;
  3. Locations;
  4. Vehicles;
  5. Et cetera, et cetera.

Ontology and taxonomy, then, lead to context

If we think of Tatooine, we tend to think about the planet where Luke Skywalker lived his youth.

However, if we visit a website about deep space exploration, Tatooine would be one of the many exoplanets that astronomers have discovered in the past few years.

As you can see, ontology (Star Wars vs celestial bodies) and taxonomies (Star Wars planets vs exoplanets) determine context and help disambiguate between similar entities.

Ontology, taxonomy, and context lead to meaning

The better we define the ontology of our website, structure its taxonomy, and offer better context to its elements, the better we explain the meaning of our website — both to our users and to Google.

Starwars.com, again, is very good at doing this.

For instance, if we examine how it structures a page like the one on TIE fighters, we see that every possible kind of content is used to help explain what a TIE fighter is:

  1. Generic description (text);
  2. Appearances of the TIE fighter in the Star Wars movies (internal links with optimized anchor text);
  3. Affiliations (internal links with optimized anchor text);
  4. Dimensions (text);
  5. Videos;
  6. Photo gallery;
  7. Soundboard (famous quotes by characters. In this case, it would be the classic “zzzzeeewww” sound many of us used as the ring tone on our old Nokias :D);
  8. Quotes (text);
  9. History (a substantial article with text, images, and links to other documents);
  10. Related topics (image plus internal links).

In the case of characters like Darth Vader, the information can be even richer.

The effectiveness of the information architecture of the Star Wars website (plus its authority) is such that its Databank is one of the very few non-Wikidata/Wikipedia sources that Google is using as a Knowledge Graph source.

Click to enlarge

What tool can we use to semantically optimize the structure of a website?

There are, in fact, several tools we can use to semantically optimize the information architecture of a website.

Knowledge Graph Search API

The first one is the Knowledge Graph Search API, because in using it we can get a ranked list of the entities that match given criteria.

This can help us better define the subjects related to a domain (ontology) and can offer ideas about how to structure a website or any kind of web document.
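As a minimal sketch of such a query (assuming you have enabled the API and hold an API key; the key and query below are placeholders), a request might look like this:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: requires a Google API key

params = {
    "query": "lightsaber",  # the entity or topic you are researching
    "limit": 10,
    "languages": "en",
    "key": API_KEY,
}
response = requests.get(
    "https://kgsearch.googleapis.com/v1/entities:search", params=params, timeout=10
)
response.raise_for_status()

# Each item carries a Schema.org-typed entity plus a relevance score,
# which is handy for sketching the ontology around a topic.
for item in response.json().get("itemListElement", []):
    result = item.get("result", {})
    print(result.get("name"), result.get("@type"), item.get("resultScore"))
```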

RelFinder

A second tool we can use is RelFinder, which is one of the very few free tools for entity research.

As you can see in the screencast below, RelFinder is based on Wikipedia. Its use is quite simple:

  1. Choose your main entity (eg: Star Wars);
  2. Choose the entity you want to see connections with (eg: Star Wars Episode IV: A New Hope);
  3. Click “Find Relations.”

RelFinder will detect entities related to both (e.g.: George Lucas or Marcia Lucas), their disambiguating properties (e.g.: George Lucas as director, producer, and writer) and factual ones (e.g.: lightsabers as an entity related to Star Wars and first seen in Episode IV).

RelFinder is very useful if we must do entity research on a small scale, such as when preparing a content piece or a small website.
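RelFinder builds its graph on top of DBpedia, the structured counterpart of Wikipedia, so you can script similar lookups yourself. Here is a rough sketch against the public DBpedia SPARQL endpoint; the two resource URIs are examples and may need adjusting to DBpedia’s current naming.

```python
import requests

# Example DBpedia resources; swap in the entities you are researching.
subject = "http://dbpedia.org/resource/Star_Wars_(film)"
target = "http://dbpedia.org/resource/George_Lucas"

query = f"SELECT ?property WHERE {{ <{subject}> ?property <{target}> }}"
response = requests.get(
    "https://dbpedia.org/sparql",
    params={"query": query, "format": "application/sparql-results+json"},
    timeout=15,
)
response.raise_for_status()

# Each binding is one direct relation linking the two entities.
for binding in response.json()["results"]["bindings"]:
    print(binding["property"]["value"])
```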

However, if we need to do entity research on a bigger scale, it’s much better to rely on the following tools:

AlchemyAPI and other tools

AlchemyAPI, which was acquired by IBM last year, uses machine and deep learning in order to do natural language processing, semantic text analysis, and computer vision.

AlchemyAPI, which offers a 30-day trial API Key, is based on the Watson technology; it allows us to extract a huge amount of information from text, with concepts, entities, keywords, and taxonomy offered by default.
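If you prefer an open source route for the same kind of extraction, a general-purpose NLP library works too. The sketch below uses spaCy, which is my own substitution rather than anything recommended here, and assumes the small English model has been downloaded; the sample text is a placeholder.

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

text = (
    "The lightsaber first appeared in Star Wars: Episode IV - A New Hope, "
    "directed by George Lucas."
)
doc = nlp(text)

# Named entities with their types (PERSON, WORK_OF_ART, ORG, and so on).
for ent in doc.ents:
    print(ent.text, ent.label_)
```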

Resources about AlchemyAPI

Other tools that allow us to do entity extraction and semantic analysis on a big scale are:

Lexical semantics

As noted before, lexical semantics is the branch of semantics that studies the meaning of words and their relations.

In the context of semantic search, this area is usually defined as keyword and topical research.

Here on Moz you can find several Whiteboard Friday videos on this topic:

How do we conduct semantically focused keyword and topical research?

Despite its recent update, Keyword Planner still can be useful for performing semantically focused keyword and topical research.

In fact, that update could even be deemed as a logical choice, from a semantic search point of view.

Terms like “PPC” and “pay-per-click” are synonyms, and even though each one surely has a different search volume, it’s evident how Google presents two very similar SERPs if we search for one or the other, especially if our search history already exhibits a pattern of searches related to SEM.

Yet this dimming of keyword data is less helpful for SEOs in that it makes for harder forecasting and prioritization of which keywords to target. This is especially true when we search for head terms, because it exacerbates a problem that Keyword Planner had: combining stemmed keywords that — albeit having “our keyword” as a base — have nothing in common because they mean completely different things and target very different topics.

However (and this is a pro tip), there is a way to discover the most useful keyword even when several have the same search volume: look at how much advertisers bid for it. Trust the market ;-).

(If you want to learn more about the recent changes to Keyword Planner, go read this post by Bill Slawski.)

Keyword Planner for semantic search

Let’s say we want to create a site about Star Wars lightsabers (yes, I am a Star Wars geek).

What we could do is this:

  1. Open Keyword Planner / Find new Keywords and get (AH!) search volume data;
  2. Describe our product or service (“News” in the snapshot above);
  3. Use the Wikipedia page about lightsabers as a landing page (if your site were in Spanish, you would use the Spanish Wikipedia page);
  4. Indicate our product category (Movies & Films above);
  5. Define the targeting and, if needed, indicate negative keywords;
  6. Click on “Get Ideas.”

Google will offer us these Ad Groups as results:


The Ad Groups are a collection of semantically related keywords. They’re very useful for:

  1. Identifying topics;
  2. Creating a dictionary of keywords that can be given to writers, so their text is both natural and semantically consistent.

Remember, then, that Keyword Planner allows us to do other kinds of analysis too, such as breaking down how the discovered keywords/Ad Groups are used by device or by location. This information is useful for understanding the context of our audience.

If you have one or a few entities for which you want to discover topics and grouped keywords, working directly in Keyword Planner and exporting everything to Google Sheets or an Excel file can be enough.

However, when you have tens or hundreds of entities to analyze, it’s much better to use the Adwords API or a tool like SEO Powersuite, which allows you to do keyword research following the method I described above.
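If all you have is a large keyword export with no Ad Group labels, you can roughly approximate the grouping step yourself. The sketch below, my own stand-in rather than anything Keyword Planner provides, clusters keywords with scikit-learn; the keyword list is a placeholder for your export.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder keywords; in practice, load your Keyword Planner export here.
keywords = [
    "lightsaber replica", "buy lightsaber replica", "custom lightsaber hilt",
    "lightsaber colors meaning", "what do lightsaber colors mean",
    "lightsaber toy for kids", "best lightsaber toys",
]

# Character n-grams handle short phrases better than whole-word tokens.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
matrix = vectorizer.fit_transform(keywords)

n_clusters = 3  # tune by inspecting the output
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=42).fit_predict(matrix)

for cluster in range(n_clusters):
    group = [kw for kw, label in zip(keywords, labels) if label == cluster]
    print(f"Topic {cluster}: {group}")
```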

Google Suggest, Related Searches, and Moz Keyword Explorer

Alongside Keyword Planner, we can use Google Suggest and Related Searches: not simply to identify topics that people search for and then write an instant blog post or a landing page about them, but to reaffirm and perfect our site’s architecture.

Continuing with the example of a site or section specializing in lightsabers, if we look at Google Suggest we can see how “lightsaber replica” is one of the suggestions.

Moreover, amongst the Related Searches for “lightsaber,” we see “lightsaber replica” again, which is a clear signal of its relevance to “lightsaber.”

Finally, we can click through and discover “lightsaber replica”-related searches, thus creating what I call the “search landscape” around a topic.
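Collecting those suggestions by hand gets tedious, so here is a small sketch that pulls them programmatically. It relies on Google’s unofficial, undocumented suggest endpoint, which may change or be rate-limited at any time, so treat it strictly as an exploratory helper.

```python
import requests

def google_suggestions(seed, lang="en"):
    """Return autocomplete suggestions for a seed term (unofficial endpoint)."""
    response = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "hl": lang, "q": seed},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()[1]  # payload shape: [query, [suggestions, ...]]

# Expand one level deep to sketch the "search landscape" around a seed term.
seed = "lightsaber"
landscape = {seed: google_suggestions(seed)}
for suggestion in landscape[seed][:5]:
    landscape[suggestion] = google_suggestions(suggestion)

for term, related in landscape.items():
    print(term, "->", related)
```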

The model above is not scalable if we have many entities to analyze. In that case, a tool like Moz Keyword Explorer can be helpful thanks to the options it offers, as you can see in the snapshot below:


Other keywords and topical research sources

Recently, Powerreviews.com presented survey results stating that Internet users tend to prefer Amazon over Google when searching for information about a product (38% vs. 35%).

So, why not use Amazon for doing keyword and topical research, especially if we are doing it for ecommerce websites or for the MOFU and BOFU phases of our customers’ journey?

We can use Amazon Suggest:

Or we can use a free tool like the Amazon Keyword Tool by SISTRIX.

The Suggest function, though, is present in (almost) every website that has a search box (your own site, even, if you have it well-implemented!).

This means that if we’re searching for more mainstream and top-of-the-funnel topics, we can use the suggestions of social networks like Pinterest (e.g.: explore the voluptuous universe of “lightsaber cakes” and related topics):

Pinterest, then, is a real topical research goldmine thanks to its tagging system:

Pinterest Lightsaber Tags

On-page

Once we’ve defined the architecture and the topics and prepared our keyword dictionaries, we can finally work on the on-page facet of our work.

The details of on-page SEO are another post for another time, so I’ll simply recommend you read this evergreen post by Cyrus Shepard.

The best way to grade the semantic search optimization of a written text is to use TF-IDF analysis, offered by sites like OnPage.org (which also offers a clear guide to the advantages and disadvantages of TF-IDF analysis).

Remember that TF-IDF can also be used for doing competitive semantic search analysis and to discover the keyword dictionaries used by our competitors.
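If you want to run that kind of comparison without a third-party tool, a few lines of scikit-learn get you a rough version. The page texts below are placeholders; in practice you would fetch and clean the main content of your page and of the competing pages ranking for the same query.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder copy; replace with the cleaned main content of each page.
pages = {
    "our_page": "lightsaber replicas, custom hilts and blade colors for collectors",
    "competitor_a": "buy force fx lightsaber replicas with metal hilts and stands",
    "competitor_b": "lightsaber colors and their meaning across the Star Wars saga",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(pages.values())
terms = vectorizer.get_feature_names_out()

# The top-weighted terms per page hint at each page's keyword dictionary.
for name, row in zip(pages.keys(), matrix.toarray()):
    top = sorted(zip(terms, row), key=lambda pair: pair[1], reverse=True)[:5]
    print(name, [term for term, weight in top if weight > 0])
```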

User behavior / Semiotics and context

At the beginning of this post, we saw how Google is heavily investing in better understanding the meaning of the documents it crawls, so as to better answer the queries users perform.

Semantics (and semantic search) is only one of the pillars on which Google is basing this tremendous effort.

The other pillar consists of understanding user search behaviors and the context of the users performing a search.

User search behavior

Recently, Larry Kim shared two posts based on experiments he ran, demonstrating his theory that RankBrain is about factors like CTR and dwell time.

While these posts are super actionable, present interesting information with original data, and confirm other tests conducted in the past, these so-called user signals (CTR and dwell time) may not be directly related to RankBrain but, instead, to user search behaviors and personalized search.

Be aware, however, that the statement above should be taken as a personal theory, because Google itself doesn’t really know how RankBrain works.

AJ Kohn, Danny Sullivan, and David Harry wrote additional interesting posts about RankBrain, if you want to dig into it (for the record, I wrote about it too here on Moz).

Even if RankBrain may be included in the semantic search landscape due to its use of Word2Vec technology, I find it better to concentrate on how Google may use user search behaviors to better understand the relevance of the parsed and indexed documents.

Click-through rate

Since Rand Fishkin presented his theory — backed up with tests — that Google may use CTR as a ranking factor more than two years ago, a lot has been written about the importance of click-through rate.

Common sense suggests that if people click more often on one search snippet than another that perhaps ranks in a higher position, then Google should take that users’ signal into consideration, and eventually lift the ranking of the page that consistently receives higher CTR.

Common sense, though, is not so easy to apply when it comes to search engines, and repeatedly Googlers have declared that they do not use CTR as a ranking factor (see here and here).

And although Google has long since developed a click fraud detection system for Adwords, it’s still not clear if it would be able to scale it for organic search.

On the other hand — let me be a little bit conspiranoiac — if CTR is not important at all, then why has Google changed the pixel widths of the title tag and meta description? Just for “better design”?

But as Eric Enge wrote in this post, one of the few things we know is that Google filed a patent (Modifying search result ranking based on a temporal element of user feedback, May 2015) about CTR. It’s surely using CTR in testing environments to better calculate the value and grade of other rankings factors and — this is more speculative — it may give a stronger importance to click-through rate in those subsets of keywords that clearly express a QDF (Query Deserves Freshness) need.

What’s less discussed is the importance CTR has in personalized search, as we know that Google tends to paint a custom SERP for each of us depending on both our search history and our personal click-through rate history. They’re key in helping Google determine which SERPs will be the most useful for us.

For instance:

  1. If we search something for the first time, and
  2. for that search we have no search history (or not enough to trigger personalized results), and
  3. the search presents ambiguous entities (i.e.: “Amber“),
  4. then it’s only thanks to our personal CTR/search history that Google will determine which search results related to a given entity to show or not (amber the stone or Amber Rose or Amber Alerts…).

Finally, even if Google does not use CTR as a ranking factor, this doesn’t mean it’s not an important metric and signal for SEOs. We have years of experience and hundreds of tests proving how important it is to optimize our search snippets (and now Rich Cards) with the appropriate use of structured data in order to earn more organic traffic, even if we rank worse than our competitors.

Watch time

Having good CTR metrics is totally useless if the pages our visitors land on don’t fulfill the expectation the search snippet created.

This is similar to the difference between a clickbait headline and a persuasive one. The first will probably cause a click back to the search results page; the second, instead, will retain and engage visitors.

The ability of a site to retain its users is what we usually call dwell time, but which Google defines as watch time in this patent: Watch Time-Based Ranking (March 2013).

This patent is usually cited in relation to video because the patent itself uses video as content example, but Google doesn’t restrict its definition to videos alone:

In general, “watch time” refers to the total time that a user spends watching a video. However, watch times can also be calculated for and used to rank other types of content based on an amount of time a user spends watching the content.

Watch time is indeed a more useful user signal than CTR for understanding the quality of a web document and its content.

Are you skeptical and don’t trust me? Trust Facebook, then, because it also uses watch time in its news feed algorithm:

We’re learning that the time people choose to spend reading or watching content they clicked on from News Feed is an important signal that the story was interesting to them.

We are adding another factor to News Feed ranking so that we will now predict how long you spend looking at an article in the Facebook mobile browser or an Instant Article after you have clicked through from News Feed. This update to ranking will take into account how likely you are to click on an article and then spend time reading it. We will not be counting loading time towards this — we will be taking into account time spent reading and watching once the content has fully loaded. We will also be looking at the time spent within a threshold so as not to accidentally treat longer articles preferentially.

With this change, we can better understand which articles might be interesting to you based on how long you and others read them, so you’ll be more likely to see stories you’re interested in reading.

Context and the importance of personalized search

I usually joke that the biggest mistake a gang of bank robbers could make is bringing along their smartphones. It’d be quite easy to do PreCrime investigations simply by checking their activity board, which includes their location history on Google Maps.

A conference day in Adelaide.

In order to fulfill its mission of offering the best answers to its users, Google must not only understand the web documents it crawls so as to index them properly, and not only improve its own ranking factors (taking into consideration the signals users give during their search sessions), but it also needs to understand the context in which users perform a search.

Here’s what Google knows about us:

It’s because of this compelling need to understand our context that Google hired the entire Behav.io team back in 2013.

Behav.io, if you don’t know it already, was a company that developed alpha-test software based on its open source framework Funf (still alive), the purpose of which was to record and analyze the data that smartphones keep track of: location, speed, nearby devices and networks, phone activity, noise levels, et al.

All this information is required in order to better understand the implicit aspects of a query, especially if done from a smartphone and/or via voice search, and to better process what Tom Anthony and Will Critchlow define as compound queries.

However, personalized search is also determined by (again) entity search, specifically by search entities.

The relation between search entities creates a “probability score,” which may determine if a web document is shown in a determined SERP or not.

For instance, let’s say that someone performs a search about a topic (e.g.: Wookies) for which she never clicked on a search snippet of our site, but on another that had content about that same topic (e.g.: Wookieepedia) and which linked to the page about it on our site (e.g.: “How to distinguish one wookiee from another?”).

Those links — specifically their anchor texts — would help our site and page to earn a higher probability score than a competitor site that isn’t linked to by those sites present in the user’s search history.

This means that our page will have a better probability of appearing in that user’s personalized SERP than our competitors’.

You’re probably asking: what’s the actionable point of this patent?

Link building/earning is not dead at all, because it’s relevant not only to the Link Graph, but also to entity search. In other words, link building is semantic search, too.

The importance of branding and offline marketing for SEO

One of the classic complaints SEOs have about Google is how it favors brands.

The real question, though, should be this: “Why aren’t you working to become a brand?”

Be aware! I am not talking about “vision,” “mission,” and “values” here — I’m talking about plain and simple semantics.

Throughout this post I have spoken of entities (named and search ones), cited Word2Vec (vectors are “vast amounts of written language embedded into mathematical entities”), talked about lexical semantics, meaning, ontology, and personalized search, and implied topics like co-occurrences and knowledge bases.

Branding has a lot to do with all of these things.

I’ll try to explain it with a very personal example.

Last May in Valencia I debuted as conference organizer with The Inbounder.

One of the problems I faced when promoting the event was that “inbounder,” which I thought was a cool name for an event targeting inbound marketers, is also a basketball term.

The problem was obvious: how do I make Google understand that The Inbounder was not about basketball, but digital marketing?

The strategy we followed from the very beginning was to work on the branding of the event (I explain more about The Inbounder story here on Inbound.org).

We did this:

  • We created small local events, so as to
    • develop presence in local newspapers online and offline, a tactic that also obliged marketers to search on Google about the event using branded keywords (e.g.: “The Inbounder conference,” “The Inbounder Inbound Marketing Conference,” etc…), and
    • click on our search results snippets, hence activating personalized search
  • We worked with influencers (the speakers themselves) to trigger branded searches and direct traffic (remember: Chrome stores every URL we visit);
  • We did outreach and published guest posts about the event on sites visited by our audience (and recorded in its search history).

As a result, right now The Inbounder occupies the entire first page of Google for its brand name and, more importantly in semantic terms, Google presents The Inbounder events as suggested and related searches, associating it with all the searches I could ever want:

Another example is Trivago and its global TV advertising campaigns:

Trivago was very smart in constantly showing “Trivago” and “hotel” in the same phrase, even making their motto “Hotel? Trivago.”

This is a simple psychological trick for creating word associations.

As a result, people searched on Google for “hotel Trivago” (or “Trivago hotel”), especially just after the ads were broadcast:

One of the results is that now, Google suggests “hotel Trivago” when we start typing “hotel” and, as in the case of The Inbounder, it presents “hotel Trivago” as a related search:

Wake up SEOs, the new new Google is here

Yes, it is. And it’s all about better understanding web documents and queries in order to provide the best answers to its users (and make money in the meantime).

To achieve this objective, ideally becoming the long-desired “Star Trek computer,” Google is investing money, people, and efforts into machine/deep learning, neural networks, semantics, search behavior, context analysis, and personalized search.

Remember, SEO is no longer just about “200 ranking factors.” SEO is about making our websites become the sources Google cannot help but use for answering queries.

This is exactly why semantic search is of utmost importance and not just something worth the attention of a few geeks passionate about linguistics, computer science, and patents.

Work on parsing and indexing optimization now, seriously implement semantic search in your SEO strategy, take advantage of the opportunities personalized search offers you, and always put users at the center of everything you do.

In doing so you’ll build a solid foundation for your success in the years to come, both via classic search and with Google Assistant/Now.




301 Redirects Rules Change: What You Need to Know for SEO

Posted by Cyrus-Shepard

Is it time to rewrite the SEO playbooks?

For what seems like forever, SEOs have operated by a set of best practices that dictate how to best handle redirection of URLs. (This is the practice of pointing one URL to another. If you need a quick refresher, here’s a handy guide on HTTP status codes.)

These tried and true old-school rules included:

  1. 301 redirects result in around a 15% loss of PageRank. Matt Cutts confirmed this in 2013 when he explained that a 301 loses the exact same amount of PageRank as a link from one page to another.
  2. 302s don’t pass PageRank. By definition, 302s are temporary, so it makes sense for search engines to treat them differently.
  3. HTTPS migrations lose PageRank. This is because they typically involve lots of 301 redirects.

These represent big concerns for anyone who wants to change a URL, deal with an expired product page, or move an entire website.

The risk of losing traffic can mean that making no change at all becomes the lesser of two evils. Many SEOs have delayed site migrations, kept their URLs ugly, and have put off switching to HTTPS because of all the downsides of switching.

The New Rules of 3xx Redirection

Perhaps because of the downsides of redirection — especially with HTTPS — Google has worked to chip away at these axioms over the past several months.

  • In February, Google’s John Mueller announced that no PageRank is lost for 301 or 302 redirects from HTTP to HTTPS. This was largely seen as an effort by Google to increase webmaster adoption of HTTPS.
  • Google’s Gary Illyes told the SEO world that Google doesn’t care which redirection method you use, be it 301, 302, or 307. He explained Google will figure it out and they all pass PageRank.
  • Most recently, Gary Illyes cryptically announced on Twitter that 3xx (shorthand for all 300-series) redirects no longer lose PageRank at all.
30x redirects don’t lose PageRank anymore.
— Gary Illyes (@methode) July 26, 2016

Do these surprising changes mean all is well and good now?

Yes and no.

While these are welcome changes from Google, there are still risks and considerations when moving URLs that go way beyond PageRank. We’ll cover these in a moment.

First, here’s a diagram that attempts to explain the old concepts vs. Google’s new announcements.

Let’s cover some myths and misconceptions by answering common questions about redirection.

Q: Can I now 301 redirect everything without risk of losing traffic?

A: No

All redirects carry risk.

While it’s super awesome that Google is no longer “penalizing” 301 redirects through loss of PageRank, keep in mind that PageRank is only one signal out of hundreds that Google uses to rank pages.

Ideally, if you 301 redirect a page to an exact copy of that page, and the only thing that changes is the URL, then in theory you may expect no traffic loss with these new guidelines.

That said, the more moving parts you introduce, the more things start to get hairy. Don’t expect your redirects to non-relevant pages to carry much, if any, weight. Redirecting your popular Taylor Swift fan page to your affiliate marketing page selling protein powder is likely dead in the water.

In fact, Glenn Gabe recently uncovered evidence that Google treats redirects to irrelevant pages as soft 404s. In other words, it’s a redirect that loses both link equity and relevance.

See: How to Completely Ruin (or Save) Your Website With Redirects
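A quick way to sanity-check your own redirects before worrying about link equity is to follow the chain hop by hop. Here is a minimal sketch using Python’s requests library; the URL is a placeholder.

```python
import requests

def audit_redirect_chain(url):
    """Follow a URL's redirect chain and print each hop's status code."""
    response = requests.get(url, allow_redirects=True, timeout=10)
    for hop in response.history:
        print(hop.status_code, hop.url)
    print(response.status_code, response.url, "(final)")
    if len(response.history) > 1:
        print("Note: chained redirects; consider pointing straight to the final URL.")

# Placeholder URL, purely for illustration.
audit_redirect_chain("http://www.example.com/old-page")
```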

Q: Is it perfectly safe to use 302 for everything instead of 301s?

A: Again, no

A while back, we heard that the reason Google started treating 302 (temporary) redirects like 301s (permanent) is that so many websites were implementing the wrong type (302s when they meant 301s) that it caused havoc with how Google ranked pages.

The problem is that while we now know that Google passes PageRank through 302s, we still have a few issues. Namely:

  1. We don’t know if 301s and 302s are equal in every way. In the past, we’ve seen 302s eventually pass PageRank, but only after considerable time has passed. In contrast to 301s that pass link signals fairly quickly, we don’t yet know how 302s are handled in this manner.
  2. 302 is a web standard, and Google isn’t the only player on the block. 302s are meant to indicate a temporary redirect, and it’s quite possible that other search engines (Baidu, Bing, DuckDuckGo) and social services (Facebook, Twitter, etc) treat 302s differently than Google.

Rand Fishkin summed it up nicely.

On Google’s announcement that “30xs pass pagerank” — be wary. Test. Don’t assume. Pagerank isn’t the only or most important ranking signal.
— Rand Fishkin (@randfish) July 26, 2016
Google’s made announcements like this before that later showed to work differently in the real world. Pays to be a skeptic in our field.
— Rand Fishkin (@randfish) July 26, 2016

Q: If I migrate my site to HTTPS, will I keep all my traffic?

A: Maybe

Here’s the thing about HTTPS migrations: they’re complicated.

A little backstory. Google wants the entire web to switch to HTTPS. To this end, they announced a small rankings boost to encourage sites to make the switch.

The problem was that a lot of webmasters weren’t willing to trade a tiny rankings boost for the 15% loss in link equity they would experience by 301 redirecting their entire site. This appears to be the reason Google changed course so that 301s no longer lose PageRank.

Even without PageRank issues, HTTPS migrations can be incredibly complicated, as Wired discovered to their dismay earlier this year. It’s been over a year since we migrated Moz.com, and we’re glad we did, but there were lots of moving parts in play and the potential for lots of things to go wrong. So as with any big project, be aware of the risks as well as the rewards.

Case study: Does it work?

Unknowingly, I had the chance to test Google’s new 3xx PageRank rules when migrating a small site a few months ago. (While we don’t know when Google made the change, it appears it’s been in place for a while now.)

This particular migration not only moved to HTTPS, but to an entirely new domain as well. Other than the URLs, every other aspect of the site remained exactly the same: page titles, content, images, everything. That made it the perfect test.

Going in, I fully expected to see a drop in traffic due to the 15% loss in PageRank. Below in the image, you can see what actually happened to my traffic.

Instead of a decline as expected, traffic actually saw a boost after the migration. Mind. Blown. This could possibly be from the small boost that Google gives HTTPS sites, though we can’t be certain.

Certainly this one small case isn’t enough to prove decisively how 301s and HTTPS migrations work, but it’s a positive sign in the right direction.

The New Best Practices

While it’s too early to write the definitive new best practices, there are a few salient points to keep in mind about Google’s change to how PageRank passes through 3xx redirects.

  1. All redirects carry a degree of SEO risk.
  2. While 3xx redirects preserve PageRank, 301s remain the preferred method for permanent redirects. (It is unknown whether all search engines treat every type of redirect equally.)
  3. Keep in mind that PageRank — and other link equity signals — are only a portion of the factors used by Google in ranking web pages.
  4. Beyond PageRank, all other rules about redirection remain. If you redirect to a non-relevant page, or buy a website in order to redirect 1,000 pages to your homepage, you likely won’t see much of a boost.
  5. The best redirect is where every other element stays the same, as much as possible, except for the URL.
  6. Successful migrations to HTTPS are now less prone to lose PageRank, but there are many other crawling and indexing issues that may negatively impact traffic+rankings.
  7. Changing URLs for SEO purposes, including…
    • Removing multiple query parameters
    • Improving directory/subfolder structure
    • Including keywords in the URL
    • Making URLs human-readable
    … is less risky now that 3xx redirects preserve PageRank. That said, always proceed with caution when redirecting.

When in doubt, see Best Practice #1.

Happy redirecting!



Should SEOs and Marketers Continue to Track and Report on Keyword Rankings? – Whiteboard Friday

Posted by randfish

Is the practice of tracking keywords truly dying? There’s been a great deal of industry discussion around the topic of late, and some key points have been made. In today’s Whiteboard Friday, Rand speaks to the biggest challenges keyword rank tracking faces today and how to solve for them.


Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about keyword ranking reports. There have been a few articles that have come out recently on a number of big industry sites around whether SEOs should still be tracking their keyword rankings.

I want to be clear: Moz has a little bit of a vested interest here. So the question is: can you actually trust me? Obviously, I’m a big shareholder in Moz and I’m the founder, so I care a lot about how Moz does as a software business. We help people track rankings. Does that mean I’m biased? I’m going to do my best not to be. So rather than saying you absolutely should track rankings, I’m instead going to address what most of these articles have brought up as the problems of rank tracking and then talk about some solutions by which you can do this.

My suspicion is you should probably be rank tracking. I think that if you turn it off and you don’t do it, it’s very hard to get a lot of the value that we need as SEOs, a lot of the intelligence. It’s true there are challenges with keyword ranking reports, but not true enough to avoid doing it entirely. We still get too much value from them.

The case against — and solutions for — keyword ranking data

A. People, places, and things

So let’s start with the case against keyword ranking data. First off, “keyword ranking reports are inaccurate.” There’s personalization, localization, and device type, and that biases and has removed what is the “one true ranking.” We’ve done a bunch of analyses of these, and this is absolutely the case.

Personalization, turns out, doesn’t change ranking that much on average. For an individual it can change rankings dramatically. If they visited your website before, they could be historically biased to you. Or if they visited your competitor’s, they could be biased. Their previous search history might have biased them in a single session, those kinds of things. But with the removal of Google+ from search results, personalization is actually not as dramatically changing as it used to be. Localization, though, still huge, absolutely, and device differences, still huge.

Solution

But we can address this, and the way to do that is by tracking these things separately. So here you can see I’ve got a ranking report that shows me my mobile rankings versus my desktop rankings. I think this is absolutely essential. Especially if you’re getting a lot of traffic from both mobile and desktop search, you need to be tracking those separately. Super smart. Of course we should do that.

We can do the same thing on the local side as well. So I can say, “Here, look. This is how I rank in Seattle. Here’s how I rank in Minneapolis. Here’s how I rank in the U.S. with no geographic personalization,” if Google were to do that. Those types of rankings can also be pretty good.

It is true that local rank tracking has gotten a little more challenging, but we’ve seen that folks like, well, Moz itself, but also folks like STAT (GetStat), SERPs.com, and Searchmetrics have all adjusted their rank tracking methodologies in order to have accurate local rank tracking. It’s pretty good. Same with device type, pretty darn good.

B. Keyword value estimation

Another big problem that is expressed by a number of folks here is we no longer know how much traffic an individual keyword sends. Because we don’t know how much an individual keyword sends, we can’t really say, “What’s the value of ranking for that keyword?” Therefore, why bother to even track keyword rankings?

I think this is a little bit of spurious logic. The leap there doesn’t quite make sense to me. But I will say this. If you don’t know which keywords are sending you traffic specifically, you still know which pages are receiving search traffic. That is reported. You can get it in your Google Analytics, your Omniture report, whatever you’re using, and then you can tie that back to keyword ranking reports showing which pages are receiving traffic from which keywords.

Most all of the rank tracking platforms, Moz included, have a report that shows you something like this. It says, “Here are the keywords that we believe are likely to have sent these percentages of traffic to this page based on the keywords that you’re tracking, based on the pages that are ranking for them, and how much search traffic those pages receive.”

Solution

So let’s track that. We can look at pages receiving visits from search, and we can look at which keywords they rank for. Then we can tie those together, which gives us the ability to then make not only a report like this, but a report that estimates the value contributed by content and by pages rather than by individual keywords.

In a lot of ways, this is almost superior to our previous methodology of tracking by keyword. Keyword can still be estimated through AdWords, through paid search, but this can be estimated on a content basis, which means you get credit for how much value the page has created, based on all the search traffic that’s flowed to it, and where that’s at in your attribution lifecycle of people visiting those pages.

C. Tracking rankings and keyword relevancy

Pages often rank for keywords that they aren’t specifically targeting, because Google has gotten way better with user intent. So it can be hard or even impossible to track those rankings, because we don’t know what to look for.

Well, okay, I hear you. That is a challenge. This means basically what we have to do is broaden the set of keywords that we look at and deal with the fact that we’re going to have to do sampling. We can’t track every possible keyword, unless you have a crazy budget, in which case go talk to Rob Bucci up at STAT, and he will set you up with a huge campaign to track all your millions of keywords.

Solution

If you have a smaller budget, what you have to do is sample, and you sample by sets of keywords. Like these are my high conversion keywords — I’m going to assume I have a flower delivery business — so flower delivery and floral gifts and flower arrangements for offices. My long tail keywords, like artisan rose varieties and floral alternatives for special occasions, and my branded keywords, like Rand’s Flowers or Flowers by Rand.

I can create a bunch of different buckets like this, sample the keywords that are in them, and then I can track each of these separately. Now I can see, ah, these are sets of keywords where I’ve generally been moving up and receiving more traffic. These are sets of keywords where I’ve generally been moving down. These are sets of keywords that perform better or worse on mobile or desktop, or better or worse in these geographic areas. Right now I can really start to get true intelligence from there.

Don’t let your keyword targeting — your keyword targeting meaning what keywords you’re targeting on which pages — determine what you rank track. Don’t let it do that exclusively. Sure, go ahead and take that list and put that in there, but then also do some more expansive keyword research to find those broad sets of search terms and phrases that you should be monitoring. Now we can really solve this issue.
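If your rank tracker lets you export raw rows (keyword, bucket, device, rank), a few lines of pandas will turn them into the segmented view described above. The column names and numbers below are made up for illustration; adapt them to whatever your tool exports.

```python
import pandas as pd

# Hypothetical export rows; adapt the columns to your rank tracker's format.
rows = pd.DataFrame([
    {"keyword": "flower delivery", "bucket": "high conversion", "device": "mobile", "rank": 4},
    {"keyword": "flower delivery", "bucket": "high conversion", "device": "desktop", "rank": 3},
    {"keyword": "artisan rose varieties", "bucket": "long tail", "device": "mobile", "rank": 12},
    {"keyword": "flowers by rand", "bucket": "branded", "device": "desktop", "rank": 1},
])

# Average rank and keyword count per bucket and device: the sampled, segmented view.
summary = rows.groupby(["bucket", "device"])["rank"].agg(["mean", "count"])
print(summary)
```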

D. Keyword rank tracking with a purpose

This one I think is a pretty insidious problem. But for many organizations ranking reports are more of a historical artifact. We’re not tracking them for a particular reason. We’re tracking them because that’s what we’ve always tracked and/or because we think we’re supposed to track them. Those are terrible reasons to track things. You should be looking for reasons of real value and actionability. Let’s give some examples here.

Solution

What I want you to do is identify the goals of rank tracking first, like: What do I want to solve? What would I do differently based on whether this data came back to me in one way or another?

If you don’t have a great answer to that question, definitely don’t bother tracking that thing. That should be the rule of all analytics.

So if your goal is to say, “Hey, I want to be able to attribute a search traffic gain or a search traffic loss to what I’ve done on my site or what Google has changed out there,” that is crucially important. I think that’s core to SEO. If you don’t have that, I’m not sure how we can possibly do our jobs.

We attribute search traffic gains and losses by tracking broadly, a broad enough set of keywords, hopefully in enough buckets, to be able to get a good sample set; by tracking the pages that receive that traffic so we can see if a page goes way down in its search visits. We can look at, “Oh, what was that page ranking for? Oh, it was ranking for these keywords. Oh, they dropped.” Or, “No, they didn’t drop. But you know what? We looked in Google Trends, and the traffic demand for those keywords dropped,” and so we know that this is a seasonality thing, or a fluctuation in demand, or those types of things.

And we can track by geography and device, so that we can say, “Hey, we lost a bunch of traffic. Oh, we’re no longer mobile-friendly.” That is a problem. Or, “Hey, we’re tracking and, hey, we’re no longer ranking in this geography. Oh, that’s because these two competitors came in and they took over that market from us.”

Something else we could look at would be identifying pages that are in need of work, but that only require a small amount of work to see a big change in traffic. So we could do things like track pages that rank on page two for given keywords. If we have a bunch of those, we can say, “Hey, maybe just a few on-page tweaks, a few links to these pages, and we could move up substantially.” We had a Whiteboard Friday where we talked about how you could do that with internal linking previously and have seen some remarkable results there.

We can track keywords that rank in position four to seven on average. Those are your big wins, because if you can move up from position four, five, six, seven to one, two, three, you can double or triple your search traffic that you’re receiving from keywords like that.

You should also track long tail, untargeted keywords. If you’ve got a long tail bucket, like we’ve got up here, I can then say, “Aha, I don’t have a page that’s even targeting any of these keywords. I should make one. I could probably rank very easily because I have an authoritative website and some good content,” and that’s really all you might need.

We might look at some up-and-coming competitors. I want to track who’s in my space, who might be creeping up there. So I should track the most common domains that rank on page one or two across my keyword sets.

I can track specific competitors. I might say, “Hey, Joel’s Flower Delivery Service looks like it’s doing really well. I’m going to set them up as a competitor, and I’m going to track their rankings specifically, or I’m going to see…” You could use something like SEMrush and see specifically: What are all the keywords they rank for that you don’t rank for?

This type of data, in my view, is still tremendously important to SEO, no matter what platform you’re using. But if you’re having these problems or if these problems are being expressed to you, now you have some solutions.

I look forward to your comments. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com



Case Study: How We Created Controversial Content That Earned Hundreds of Links

Posted by KelseyLibert

Content marketers, does the following scenario sound familiar?

You’re tasked with creating content that attracts publicity, links, and social shares. You come up with great ideas for content that you’re confident could accomplish these goals. However, any ideas that push the envelope or might offend anyone in the slightest get shot down by your boss or client. Even if a provocative idea gets approved, after feedback from higher-ups and several rounds of editing, you end up with a boring, watered-down version of what you originally envisioned.

Given the above, you’re not surprised when you achieve lackluster results. Repeat this cycle enough times, and it may lead to the false assumption that content marketing doesn’t work for the brand.

In this post, I’ll answer two questions:

  1. How can I get my boss or clients to sign off on envelope-pushing content that will attract the attention needed to achieve great results?
  2. How can we minimize the risk of backlash?

Why controversy is so powerful for content marketing

To get big results, content needs to get people talking. Oftentimes, the best way to do this is by creating an emotional reaction in the audience. Content that deals with a controversial or polarizing topic can be a surefire way to accomplish this.

On the other hand, when you play it too safe with your content, it becomes extremely difficult to ignite the emotional response needed to drive social sharing. Ultimately, you don’t attract the attention needed to earn high-quality links.

Below is a peek at the promotions report from a recent controversial campaign that resulted in a lot of high-quality links, among other benefits.

abodo-promotions-report.png

Overcoming a client’s aversion to controversy

We understand and respect a client’s fierce dedication to protecting their brand. The thought of attaching their company to anything controversial can set off worst-case-scenario visions of an angry Internet mob and bad press (which isn’t always a terrible thing).

One such example of balancing a sensitive topic while minimizing the potential risk is a recent campaign we created for apartment listing site Abodo. Our idea was to use Twitter data to pinpoint which states and cities had the highest concentration of prejudiced and tolerant tweets. Bigotry in America is an extremely sensitive topic, yet our client was open to the idea.

Want to get a contentious idea approved by your boss or client? Here’s how we did it.

1. Your idea needs to be relevant to the brand, either directly or tangentially.

Controversy for the sake of controversy is not going to provide value to the brand or the target audience.

I asked Michael Taus, VP of Growth and Business Development at Abodo, why our campaign idea got the green light. He said Abodo’s mission is to help people find a home, not to influence political discourse. But they also believe that when you’re moving to a new community, there’s more to the decision than what your house or apartment looks like, including understanding the social and cultural tone of the location.

So while the campaign dealt with a hot topic, ultimately this information would be valuable to Abodo’s users.

2. Prove that playing it safe isn’t working.

If your “safe” content is struggling to get attention, make the case for taking a risk. Previous campaign topics for our client had been too conservative. We knew that by creating something worth talking about, we'd see greater results.

3. Put safeguards in place for minimizing risk to the brand.

While we couldn’t guarantee there wouldn’t be a negative response once the campaign launched, we could guarantee that we’d do everything in our power to minimize any potential backlash. We were confident in our ability to protect our client because we’d done it so many times with other campaigns. I’ll walk you through how to do this throughout the rest of the post.

On the client’s end, they can get approval from other internal departments; for example, having the legal and PR teams review and give final approval can help mitigate the uncertainty around running a controversial campaign.

Did taking a risk pay off?

The campaign was a big success, with results including:

  • More than 620 placements (240 dofollow links and 280 co-citation links)
  • Features on high-authority sites including CNET, Slate, Business Insider, AOL, Yahoo, Mic, The Daily Beast, and Adweek
  • More than 67,000 social shares
  • A whole lot of discussion

cnet-coverage.png

Beyond these metrics, Abodo has seen additional benefits, such as partnership opportunities. Since this campaign launched, they've been approached by a nonprofit organization to collaborate on a similar type of piece. They hope to repeat their success by leveraging the nonprofit's substantial audience and PR capabilities.

Essential tips for minimizing risk around contentious content

We find that good journalism practices can greatly reduce the risk of a negative response. Keep the following five things in mind when creating attention-grabbing content.

1. Presenting data vs. taking a stance: Let the data speak

Rather than presenting an opinion, just present the facts. Our clients are usually fine with controversial topics as long as we don’t take a stance on them and instead allow the data we’ve collected to tell the story for us. Facts are facts, and that’s all your content needs to offer.

If publishers want to put their own spin on the facts you present, or audiences see the story the data tell and want to respond, the conversation opens up and can generate a lot of engagement.

For the Abodo campaign, the data we presented weren’t a direct reflection of our client but rather came from an outside source (Twitter). We packaged the campaign on a landing page on the client’s site, which includes the design assets and an objective summary of the data.

abodo-landing-page.png

The publishers then chose how to cover the data we provided, and the discussion took off from there. For example, Slate called out Louisiana’s unfortunate achievement of having the most derogatory tweets.

slate-coverage.png

2. Present more than one side of the story

How do you feel when you watch a news report or documentary that only shares one side of the story? It takes away credibility from the reporting, doesn’t it?

To keep the campaign topic from being too negative and one-sided, we looked at the most prejudiced and least prejudiced tweets. Including states and cities with the least derogatory tweets added a positive angle to the story. This made the data more objective, which improved the campaign’s credibility.

least-derogatory.png

Regional publishers showed off that their state had the nicest tweets.

idaho-article.png

And residents of these places were proud to share the news.

Pleased WI was one of the top-10 least nasty places for pejorative tweets! Stay away from Louisiana. https://t.co/ijoAMsmKao
— Sam Million-Weaver (@millionweaver) March 9, 2016

If your campaign topic is negative, try to show the positive side of it too. This keeps the content from being a total downer, which is important for social sharing since people usually want to pass along content that will make others feel good. Our recent study on the emotions behind viral content found that even when viral content evokes negative emotions, it’s usually not purely negative; the content also makes the audience feel a positive emotion or surprise.

Aside from objective reporting, a huge benefit of telling more than one side of the story is that you're able to pitch the story from multiple angles, maximizing your potential coverage. Because of this, we ended up creating 18 visual assets for this campaign, far more than we typically do.

3. Don’t go in with an agenda

Be careful of twisting the data to fit your agenda. It’s okay to have a thesis when you start, but if your aim is to tell a certain story you’re apt to stick with that storyline regardless of what the data show. If your information is clearly slanted to show the story you want to tell, the audience will catch on, and you’ll get called out.

Instead of gathering research with an intent of “I’m setting out to prove XYZ,” adopt a mindset of “I wonder what the reality is.”

4. Be transparent about your methodology

You don’t want the validity of your data to become a point of contention among publishers and readers. This goes for any data-heavy campaign but especially for controversial data.

To combat any doubts around where the information came from or how the data were collected and analyzed, we publish a detailed methodology alongside all of our campaigns. For the Abodo campaign, we created a PDF document of the research methodology which we could easily share with publishers.

methodology-example.png

Include the following in your campaign's methodology:

  • Where and when you received your data.
  • What kind and how much data you collected. (Our methodology went on to list exactly which terms we searched for on Twitter.)
  • Any exceptions within your collection and analysis, such as omitted information.
  • A list of additional sources. (We only use reputable, recent sources, ideally published within the last year.)

sources-example.png

For even more transparency, make your raw data available. This gives publishers a chance to comb through the data to find additional story angles.
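If you go that route, one way to make the methodology genuinely reproducible is to publish the short script that turns the raw data into the rankings. The sketch below is purely illustrative (the CSV layout and column names are assumptions, not how this campaign's data was actually structured): it ranks states by flagged tweets per 100,000 total tweets, so differences in tweet volume don't skew the results.

```python
import csv
from collections import defaultdict

def rate_per_state(path, per=100_000):
    """Rank states by flagged tweets per 100k tweets, normalizing for volume differences."""
    totals = defaultdict(int)
    flagged = defaultdict(int)
    with open(path, newline="") as f:
        # Assumed columns: state, matched ("1" if the tweet hit a search term, else "0").
        for row in csv.DictReader(f):
            totals[row["state"]] += 1
            flagged[row["state"]] += int(row["matched"])
    rates = {
        state: flagged[state] / totals[state] * per
        for state in totals if totals[state] > 0
    }
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)

for state, rate in rate_per_state("tweets_tagged.csv")[:10]:
    print(f"{state}: {rate:.1f} flagged tweets per 100k")
```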

5. Don’t feed the trolls

This is true for any content campaign, but it’s especially important to have an error-free campaign when dealing with a sensitive topic since it may be under more scrutiny. Don’t let mistakes in the content become the real controversy.

Build multiple phases of editing into your production process to ensure you’re not releasing inaccurate or low-quality content. Keep these processes consistent by creating a set of editorial guidelines that everyone involved can follow.

We put our campaigns through fact checking and several rounds of quality assurance.

Fact checking should play a complementary role to research and involves verifying accuracy by making sure all data and assertions are true. Every point in the content should have a source that can be verified. Writers should be familiar with best practices for making their work easy to fact-check; this fact-checking guide from Poynter is a good resource.

Quality assurance looks at both the textual and design elements of a campaign to ensure a good user experience. Our QA team reviews things like grammar, clarity (Is this text clearly making a point? Is a design element confusing or hard to read?), and layout/organization.

Include other share-worthy elements

Although the controversial subject matter helped this campaign gain attention, we also incorporated other proven elements of highly shareable content:

  • Geographic angle. People wanted to see how their state or city ranked. Many took to social media to express their disappointment or pride in the results.
  • Timeliness. Bigotry is a hot-button issue in the U.S. right now amidst racial tension and a heated political situation.
  • Comparison. Rankings and comparisons stimulate discussion, especially when people have strong opinions about the rankings.
  • Surprise factor. The results were somewhat shocking, since some cities and states that ranked “most PC” or “most prejudiced” were unexpected.

The more share-worthy elements you can tack onto your content, the greater your chances for success.

Have you seen success with controversial or polarizing content? Did you overcome a client’s objection to controversy? Be sure to share your experience in the comments.



Ranking #0: SEO for Answers

Posted by Dr-Pete

It’s been over two years since Google launched Featured Snippets, and yet many search marketers still see them as little more than a novelty. If you’re not convinced by now that Featured Snippets offer a significant organic opportunity, then today is my attempt to change your mind.

If you somehow haven’t encountered a Featured Snippet searching Google over the past two years, here’s an example (from a search for “ssl”):

This is a promoted organic result, appearing above the traditional #1 ranking position. At minimum, Featured Snippets contain an extracted answer (more on that later), a display title, and a URL. They may also have an image, bulleted lists, and simple tables.

Why should you care?

We’re all busy, and Google has made so many changes in the past couple of years that it can be hard to sort out what’s really important to your customer or employer. I get it, and I’m not judging you. So, let’s get the hard question out of the way: Why are Featured Snippets important?

(1) They occupy the “#0” position

Here’s the top portion of a SERP for “hdmi cable,” a commercial query:

There are a couple of interesting things going on here. First, Featured Snippets always (for now) come before traditional organic results. This is why I have taken to calling them the “#0” ranking position. What beats #1? You can see where I'm going with this… #0. In this case, the first organic result is pushed down even more, below a set of Related Questions (the “People also ask” box). So the “#1” organic position is really third in this example.

In addition, notice that the “#0” (that’s the last time I’ll put it in quotes) position is the same URL as the #1 organic position. So, Amazon is getting two listings on this result for a single page. The Featured Snippet doesn’t always come from the #1 organic result (we’ll get to that in a minute), but if you score #0, you are always listed twice on page one of results.

(2) They’re surprisingly prevalent

In our 10,000-keyword tracking data set, Featured Snippets appeared on approximately 2% of the queries we track when they rolled out. As of mid-July, they appear on roughly 11% of the keywords we monitor. We don't have good historical data from the first few months after roll-out, but here's a 12-month graph (July 2015 – July 2016):

Featured Snippets have more than doubled in prevalence in the past year, and they’ve increased by a factor of roughly 5X since launch. After two years, it’s clear that this is no longer a short-term or small-scale test. Google considers this experiment to be a success.

(3) They often boost CTR

When Featured Snippets launched, SEOs were naturally concerned that, by extracting and displaying answers, click-through rates to the source site would suffer. While extracting answers from sites was certainly uncharted territory for Google, and we can debate their use of our content in this form, there’s a growing body of evidence to suggest that Featured Snippets not only haven’t harmed CTR, but they actually boost it in some cases.

In August of 2015, Search Engine Land published a case study by Glenn Gabe that tracked the loss of a Featured Snippet for a client on a competitive keyword. In the two-week period following the loss, that client lost over 39K clicks. In February of 2016, HubSpot did a larger study of high-volume keywords showing that ranking #0 produced a 114% CTR boost, even when they already held the #1 organic position. While these results are anecdotal and may not apply to everyone, evidence continues to suggest that Featured Snippets can boost organic search traffic in many cases.

Where do they come from?

Featured Snippets were born out of a problem that dates back to the early days of search. Pre-Google, many search players, including Yahoo, were human-curated directories first. As content creation exploded, humans could no longer keep up, especially in anything close to real-time, and search engines turned to algorithmic approaches and machine curation.

When Google launched the Knowledge Graph, it was based entirely on human-curated data, such as Freebase and Wikidata. You can see this data in traditional “Knowledge Cards,” sometimes generically called “answer boxes.” For example, this card appears on a search for “Who is the CEO of Tesla?”:

The answer is short and factual, and there is no corresponding source link for it. This comes directly from the curated Knowledge Graph. If you run a search for “Tesla,” you can see this more easily in the Knowledge Panel on that page:

In the middle, you can see an entry for “CEO: Elon Musk.” This isn't just a block of display text; each of these line items is a factoid that exists individually as structured data in the Knowledge Graph. You can test this by running searches against other factoids, like “When was Tesla founded?”

While Google does a decent job of matching many forms of a question to answers in the Knowledge Graph, they can’t escape the limits of human curation. There are also questions that don’t easily fit the “factoid” model. For example, if you search “What is ludicrous mode Tesla?” (pardon the weird syntax), you get this Featured Snippet:

Google’s solution was obvious, if incredibly difficult — take the trillions of pages in their index and use them to generate answers in real-time. So, that’s exactly what they did. If you go to the source page on Engadget, the text in the Featured Snippet is taken directly from on-page copy (I’ve added the green highlighting):

It’s not as simple as just scraping off the first paragraph with a spatula and flipping it onto the SERP, though. Google does seem to be parsing content fairly deeply for relevance, and they’ve been improving their capabilities constantly since the launch of Featured Snippets. Consider a couple of other examples with slightly different formats. Here’s a Featured Snippet for “How much is a Tesla?”:

Note the tabular data. This data is being extracted and reformatted from a table on the target page. This isn’t structured data — it’s plain-old HTML. Google has not only parsed the table but determined that tabular data is a sensible format in response to the question. Here’s the original table:

Here’s one of my favorite examples, from a search for “how to cook bacon.” For any aspiring bacon wizards, please pay careful attention to step #4:

Note the numbered (ordered) list. As with the table, not only has Google determined that a list is a relevant format for the answer, but they've created this list themselves. Now look at the target page:

There’s no HTML ordered list (<ol></ol>) on this page. Google is taking a list-like paragraph style and converting it into a simpler list. This content is also fairly deep into a long page of text. Again, there is no structured data in play. Google is using any and all content available in the quest for answers.
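If you want a quick way to audit your own pages for these answer-friendly structures, a rough sketch using requests and BeautifulSoup (my tooling assumption, not anything Google provides) can count the elements that tend to get lifted into snippets: a concise lead paragraph, ordered lists, tables, and question-style headings.

```python
import requests
from bs4 import BeautifulSoup

def snippet_friendly_features(url):
    """Report structures Google tends to lift into Featured Snippets."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    first_p = soup.find("p")
    lead_len = len(first_p.get_text(strip=True)) if first_p else 0

    return {
        "lead_paragraph_chars": lead_len,  # a short, direct lead paragraph tends to fit the snippet box
        "ordered_lists": len(soup.find_all("ol")),
        "unordered_lists": len(soup.find_all("ul")),
        "tables": len(soup.find_all("table")),
        "question_headings": sum(
            1 for h in soup.find_all(["h1", "h2", "h3"])
            if h.get_text(strip=True).endswith("?")
        ),
    }

print(snippet_friendly_features("https://moz.com/learn/seo/page-authority"))
```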

How do you get one?

So, let’s get to the tactical question — how can you score a Featured Snippet? You need to know two things. First, you have to rank organically on the first page of results. Every Featured Snippet we’ve tracked also ranks on page one. Second, you need to have content that effectively targets the question.

Do you have to rank #1 to get the #0 position? No. Ranking #1 certainly doesn’t hurt, but we’ve found examples of Featured Snippet URLs from across all of page one. As of June, the graph below represents the distribution of organic rankings for all of the Featured Snippets in our tracking data set:

Just about one-third of Featured Snippets are pulled from the #1 position, with the bulk of the remainder coming from positions #2–#5. In theory, there are opportunities across all of page one, but searches where you rank in the top five are going to be your best targets. The team at STAT produced an in-depth white paper on Featured Snippets across a very large data set that showed a similar pattern, with about 30% of Featured Snippet URLs ranking in the #1 organic position.

If you’re not convinced yet, here’s another argument for the “Why should you care?” column. Once you’re ranking on page one, our data suggests that getting the Featured Snippet is more about relevance than ranking/authority. If you’re ranking #2–#5 it may be easier to compete for position #0 than it is for position #1. Featured Snippets are the closest thing to an SEO shortcut you’re likely to get in 2016.
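A simple way to surface those targets from whatever rank-tracking export you have (all of the column names below are hypothetical) is to keep the keywords where a Featured Snippet is present, you rank in the top five, and the snippet URL isn't yours:

```python
import csv

def snippet_opportunities(path, my_domain="moz.com", max_rank=5):
    """Keywords where a Featured Snippet exists, we rank in the top five, but someone else holds #0."""
    opportunities = []
    with open(path, newline="") as f:
        # Assumed columns: keyword, my_rank, snippet_url
        for row in csv.DictReader(f):
            if not row["snippet_url"]:
                continue  # no Featured Snippet on this SERP
            if my_domain in row["snippet_url"]:
                continue  # we already own position #0
            if row["my_rank"] and float(row["my_rank"]) <= max_rank:
                opportunities.append((row["keyword"], float(row["my_rank"]), row["snippet_url"]))
    return sorted(opportunities, key=lambda t: t[1])

for kw, rank, holder in snippet_opportunities("snippet_report.csv"):
    print(f"{kw} (we rank #{rank:.0f}), snippet held by {holder}")
```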

The double-edged sword of Featured Snippets (for Google) is that, since the content comes from our websites, we ultimately control it. I showed in a previous post how we fixed a Featured Snippet with updated data, but let’s get to what you really want to hear — can we take a Featured Snippet from a competitor?

A while back, I did a search for “What is Page Authority?” Page Authority is a metric created by us here at Moz, and so naturally we have a vested interest in who’s ranking for that term. I came across the following Featured Snippet.

At the time, DrumbeatMarketing.net was ranking #2 and Moz was ranking #1, so we knew we had an opportunity. They were clearly doing something right, and we tried to learn from it. Their page title addressed the question directly. They jumped quickly to a concise answer, whereas we rambled a little bit. So, we rewrote the page, starting with a clear definition and question-targeted header:

This wasn't the only change, but I think it's important to structure your answers for brevity, or at least summarize them somewhere on the page. A general format of a quick summary at the top, followed by a deeper dive, seems to be effective. Journalists sometimes call this an “inverted pyramid” structure, and it's useful for readers as well, especially Internet readers, who tend to skim articles.


In very short order, our changes had the desired impact, and we took the #0 position:

This didn’t take more authority, deep structural changes, or a long-term social media campaign. We simply wrote a better answer. I believe we also did a service to search users. This is a better page for people in a hurry and leads to a better search snippet than before. Don’t think of this as optimizing for Featured Snippets, or you’re going to over-optimize and be haunted by the Ghost of SEO Past. Think of it as being a better answer.


What should you target?

Featured Snippets can require a slightly different and broader approach to keyword research, especially since many of us don’t routinely track questions. So, what kind of questions tend to trigger Featured Snippets? It’s helpful to keep in mind the 5 Ws (Who, What, When, Where, Why) + How, but many of these questions will generate answers from the Knowledge Graph directly.

To keep things simple, ask yourself this: is the answer a matter of simple fact (or a “factoid”)? For example, a question like “How old is Beyoncé?” or “When is Labor Day?” is going to be pulled from the Knowledge Graph. While human curation can’t keep up with the pace of the web, WikiData and other sources are still impressive and cover a massive amount of territory. Typically, these questions won’t produce Featured Snippets.

What and implied-what questions

A good starting point is “What…?” questions, such as our “What is Page Authority?” experiment. This is especially effective for industry terms and other specialized knowledge that can’t be easily reduced to a dictionary definition.

Keep in mind that many Featured Snippets appear on implied “What…” questions. In other words, “What” never appears in the query. For example, here’s a Featured Snippet for “PPC”:

Google has essentially decided that this fairly ambiguous query deserves an answer to “What is PPC?” In other words, they’ve implied the “What.” This is fairly common now for industry terms and phrases that might be unfamiliar to the average searcher, and is a good starting point for your keyword research.

Keep in mind that common words will produce a dictionary entry. For example, here’s a Knowledge Card for “What is search?”:

These dictionary cards are driven by human-curated data sources and are not organic, in the typical sense of the word. Google has expanded dictionary results in the past year, so you’ll need to focus on less common terms and phrases.

Why and how questions

“Why… ?” questions are good fodder for Featured Snippets because they can’t easily be answered with factoids. They often require some explanation, such as this snippet for “Why is the sky blue?”:

Likewise, “How…?” questions often require more in-depth answers. An especially good target for Featured Snippets is “How to… ?” questions, which tend to have practical answers that can be summarized. Here’s one for “How to make tacos”:

One benefit of “Why,” “How,” and “How to” questions is that the Featured Snippet summary often just serves as a teaser to a longer answer. The summary can add credibility to your listing while still attracting clicks to in-depth content. “How… ?” may also be implied in some cases. For example, a search for “convert PDF to Word” brings up a Featured Snippet for a “How to…” page.
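To apply this to a keyword list at scale, a crude first pass is to bucket queries by question type and set aside the likely factoids. The patterns below are just illustrative assumptions you'd tune for your own space:

```python
import re

# Hypothetical patterns for queries likely answered straight from the Knowledge Graph.
FACTOID_HINTS = re.compile(r"\b(how old|when is|when was|birthday|population|capital of)\b", re.I)

def question_bucket(keyword):
    """Crudely bucket a query by the question type most likely to earn a Featured Snippet."""
    kw = keyword.lower()
    if FACTOID_HINTS.search(kw):
        return "likely factoid (Knowledge Graph, skip)"
    if kw.startswith("how to"):
        return "how-to"
    if kw.startswith("how"):
        return "how"
    if kw.startswith("why"):
        return "why"
    if kw.startswith("what"):
        return "what"
    return "implied-what / other"

for kw in ["what is page authority", "why is the sky blue", "how to cook bacon",
           "how old is beyonce", "ppc"]:
    print(f"{kw!r:35} -> {question_bucket(kw)}")
```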

What content is eligible?

Once you have a question in mind, and that question/query is eligible for Featured Snippets, there’s another piece of the targeting problem: which page on your site is best equipped to answer that question? Let’s take, for example, the search “What is SEO?”. It has the following Featured Snippet from Wikipedia:

Moz ranks on page one for that search, but that still raises two questions: (1) is the ranking page the best answer to the question (in Google's eyes), and (2) what content on the page does Google see as best matching the question? Fortunately, you can use the “site:” operator along with your search term to help answer both. Here's a Featured Snippet for [site:moz.com “what is seo”]:

Now, we know that, within just our own site, Google is seeing The Beginner’s Guide as the best match to the question, and we have an idea of how they’re parsing that page for an answer. If we were willing to rewrite the page just to answer this question (and that certainly involves trade-offs), we’d have a much better sense of where to start.
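If you want to run that check across a batch of questions, a tiny sketch can at least build the [site: + query] search URLs for you to review manually (scraping Google results programmatically is against their terms of service, so this only prepares the queries):

```python
from urllib.parse import quote_plus

def site_check_urls(domain, questions):
    """Build Google search URLs for site: checks, so you can eyeball which page Google picks as the answer."""
    return [
        f"https://www.google.com/search?q={quote_plus(f'site:{domain} {q}')}"
        for q in questions
    ]

for url in site_check_urls("moz.com", ["what is seo", "what is page authority"]):
    print(url)
```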

What about Related Questions?

Featured Snippets have a close cousin that launched more recently, known to Google as Related Questions and sometimes called the “People Also Ask” box. If I run a search for “page authority,” it returns the following set of Related Questions (nestled into the organic results):

Although Related Questions have a less dominant position in search results than Featured Snippets (they’re not generally at the top), they’re more prevalent, occurring on almost 17% of the searches in our tracking data set. These boxes can contain up to four related questions (currently), and each question expands to look something like this:

At this point, that expanded content should look familiar — it’s being generated from the index, has an organic link, and looks almost exactly like a Featured Snippet. It also has a link to a Google search for the related question. Clicking on that search brings up the following Featured Snippet:

Interestingly, and somewhat confusingly, that Featured Snippet doesn’t exactly match the snippet in the Related Questions box, even though they’re answering the same question from the same page. We’re not completely sure how Featured Snippets and Related Questions are connected, but they share a common philosophy and very likely a lot of common code. Being a better answer will help you rank for both.

What’s the long game?

If you want to know where all of this is headed in the future, you have to ask a simple question: what’s in it for Google? It’s easy to jump to conspiracy theories when Google takes our content to provide direct answers, but what do they gain? They haven’t monetized this box, and a strong, third-party answer draws attention and could detract from ad clicks. They’re keeping you on their page for another few seconds, but that’s little more than a vanity metric.

I think the answer is that this is part of a long shift toward mobile and alternative display formats. Look at the first page of a search for “what is page authority” on an Android device:

Here, the Featured Snippet dominates the page — there’s just not room for much more on a mobile screen. As technology diversifies into watches and other wearables, this problem will expand. There’s an even more difficult problem than screen space, though, and that’s when you have no screen at all.

If you do a voice search on Android for “what is page authority,” Google will read back to you the following answer:

“According to Moz, Page Authority is a score developed by Moz that predicts how well a specific page will rank on search engines.”

This is an even more truncated answer, and voice search appends the attribution (“According to Moz…”). You can still look at your phone screen, of course, but imagine if you had asked the question in your car or on Google’s new search appliance (their competitor to Amazon’s Echo). In those cases, the Featured Snippet wouldn’t just be the most prominent answer — it would be the only answer.

Google has to adapt to our changing world of devices, and often those devices require succinct answers and aren't well-suited to a traditional SERP. This may not be so much about Google profiting from direct answers as it is about survival. New devices will demand new formats.

How do you track all of this?

After years of tracking rich SERP features, watching the world of organic search evolve, and preaching that evolution to our customers and industry, I’m happy to say that our Product Team has been hard at work for months building the infrastructure and UI necessary to manage the rich and complicated world of SERP features, including Featured Snippets. Spoiler alert: expect an announcement from us very soon.

