Off with Your Head Terms: Leveraging Long-Tail Opportunity with Content

Posted by SimonPenson

Running an agency comes with many privileges, including a first-hand look at large amounts of data on how clients’ sites behave in search, and especially how that behavior changes day-to-day and month-to-month.

While every niche is different and can have subtle nuances that frustrate even the most hardened SEOs or data analysts, there are undoubtedly trends that stick out every so often which are worthy of further investigation.

In the past year, the Zazzle Media team has been monitoring one in particular, and today’s post is designed to shed some light on it in hopes of creating a wider debate.

What is this trend, you ask? In simple terms, it’s what we see as a major shift in the way results are presented, and it’s resulting in more traffic for the long tail.

2014 growth

It’s a conclusion supported by a number of client growth stories throughout the last 12 months, all of which show significant growth coming not from head terms, but from an increasing number of URLs earning organic search traffic.

The Searchmetrics visibility chart below is just one example: a brand in the finance space seeing year-over-year digital growth as a direct result of this phenomenon, even as some head terms dropped back by a couple of places.

To understand why this may be happening, we need a very quick crash course in how Google has evolved over the past two years.

Keyword matching

Google built its empire on a smart system: one that matched “documents” (webpages) to keywords by scanning and organizing those documents based upon keyword mentions.

It’s an approach that looks increasingly simplistic in a “big data” world.

The answer, it seems, is to focus more on the user intent behind that query and get at exactly what it is the searcher is actually looking for.

Hummingbird

The solution to that challenge is Hummingbird, Google’s new “engine” for sorting the results we see when we search.

In the same way that Caffeine, the former search architecture, allowed the company to produce fresher results and roll out worldwide algorithm changes (such as Panda and Penguin) faster, Hummingbird is designed to do the same for personalized results.

And while we are only at the very beginning of that journey, from the data we have seen over the past year it seems to be crystallizing into more traffic for deeper pages.

Why is this happening? The answer lies in further analysis of what Google is trying to achieve.

Implicit vs. explicit

To better explain this change let’s look at how it is affecting a search for something obvious, like “coffee shop.”

Go back two or so years and a search for this may well have presented 10 blue links of the obvious chains and their location pages.

For the user, however, this isn’t useful—and the search giant knows it. Instead, they want to understand the user intent behind the query, or the “implicit query,” as previously explained by Tom Anthony on this blog.

What that means, in practice, is that a search for “coffee shop” will actually have context, and one of the reasons for wanting you signed in is to allow the search engine to collect further signals from you to help understand that query in detail. That means things like your location, perhaps even your brand preferences, etc.

Knowing these things allows the search to be personalized to your exact needs, surfacing the details of the closest Starbucks to your current location (if that is your favourite coffee).

If you then expand this trend out into billions of other searches you can see how deeper-level pages, or even articles, present a better, more refined option for Google.

Here we see how a result for something like “Hotels” may change if Google knows where you are, what you do for a living and therefore what kind of disposable income you have. The result may look completely different, for instance, if Google knows you are a company CEO who stays in nice hotels and has a big meeting the following day, thus requiring a quiet room so you can get some sleep.

Instead of the usual “best hotels in London” result we get something much more personalised and, critically, something more useful.

The new long-tail curve

What this appears to be doing is reshaping the traditional long-tail curve we all know so well. It is beginning to change shape along the lines of the chart below:

That’s a noteworthy shift. With another client of ours, we have seen a 135% increase in the number of pages receiving traffic from search, delivering a 98% increase in overall organic traffic because of it.

The primary factor behind this rise is the creation of the “right” content to take advantage of this changing marketplace. Getting that right requires an approach reminiscent of the way traditional marketing has worked for decades—before the web even existed.

In practice, that means understanding the audience you are attempting to capture and, in doing so, outlining the key questions they are asking every day.

This audience-centric marketing approach is something I have written about previously on this blog and others, as it is critical to understanding that “context” and what your customers or clients are actually looking for.

The way to do that? Dive into data, and also speak to those who may already be buying from or working with you.

Digging into available data

The first step of any marketing process is to collect and process any and all available information about your existing audience and those you may want to attract in the future.

This is a huge subject area—one I could easily spend the next 10,000 words writing about—but it has been covered brilliantly on the more traditional research side by sites like this and this.

The latter of those two links breaks this side of the research process into the two critical elements you will need to master to ensure you have a thorough understanding of who you are “talking” to in search.

Quantitative research concentrates on the numbers. The focus is on larger data sets and statistical information, as opposed to painting a rich picture of the likes and dislikes of your audience.

Qualitative research focuses on the words, painting in the “richness”: the way your customers speak and explain problems, likes, and dislikes. It’s more a study of human behavior than of stats.

This information can be combined with a plethora of other data sources from CRMs, email lists, and other customer insight pots, but where we are increasingly seeing more opportunity is in the social data arena.

Platforms such as Facebook can give all brands access to hugely valuable big-data insight about almost any audience you could possibly imagine.

What I’d like to do here is explain how to go about extracting that data to form rich pictures of those we are either already speaking to or the very people we want to attract.

There is also little doubt that the amount of insight you have into your audience is directly proportional to the success of your content, hence the importance of this research cycle.

Persona creation

Your data comes to life through the creation of personas, which are designed to put a human face on that data and group it into a small number of shared interest sets.

Again, the point of this post is not to explain how to best manage this process. Posts like this one and this one go over that in great detail—the point here is to go over what having them in place allows you to do.

We’ve also created a free persona template, which can help make the process of pulling them together much easier.

When you’ve got them created, you will soon realize that your personas each have very different needs from a content perspective.

To give you an example of that let’s look at these example profiles below:

Here we can see three very distinct segments of the audience, and immediately it is easy to see how each of them is looking for a different experience from your brand.

Take the “Maturing Spender” for example. In this fictional example for a banking brand we can see he not only has very different content needs but is actually “activated” by a different approach to the buying cycle too.

While the traditional buyer will follow a process of awareness, research, evaluation and purchase, a new kind of purchase behaviour is materializing that’s driven by social.

In this new world we are seeing consumers make more impulsive purchases, often triggered by social sharing. They’ll see something in their social feeds and are more likely to buy there and then (or at least within a few days), especially if there is a limited offer attached.

Much of this is driven by our increasingly “disposable” culture that creates an accelerated buying process.

You can learn this and other data-driven insights from the personas, and we recommend using a good persona template, then adding further descriptive detail and “colour” to each one so that everyone understands whom it is they are writing for.

It can also work well to align those characters to famous people, if possible, as doing so makes it much easier to scale understanding across whole organizations.

Having them in place and universally adopted allows you to do many things, including:

  • Create focus on the customer
  • Allow teams to make and defend decisions
  • Create empathy with the audience

Ultimately, however, all of this is designed to ensure you have a better understanding of those you want to converse with, and in doing so you can map out the key questions they ask and understand their individual needs.

If you want to dig into this area more then I highly recommend Mike King’s post from 2014 here on Moz for further background.

New keyword research – personas

Understanding the specific questions your audience is asking is where the real win can be found, and the next stage is to utilize the info gleaned from the persona process in the next phase: keyword research.

To do that, let’s walk through an example for our Happy Couple persona (the first from the above graphic) and see how things play out for this fictional banking brand.

The first step is to gather a list of tools to help unearth related keywords. Here are the ones we use:

  • SEMRush
  • Soovle
  • Google Autocomplete / KeywordTool.io
  • Forum searches
  • Buzzsumo and the ahrefs Content Explorer (for competitive research)

There are many more that can help, but it is very easy to complicate the process with data, so we like to limit that as much as possible and focus on where we can get the most benefit quickly.

Before we get into the data mining process, however, we begin with a group brainstorm to surface as many initial questions as possible.

To do this, we will gather four people for a quick 15-minute stand-up conversation around each persona. The aim is to gather five questions from which the main research phase can be constructed.

Some possibilities for our Happy Couple example may include:

  • How much can I borrow for a mortgage?
  • How do I buy a house?
  • How large a deposit do I need to buy a house?
  • What is the best regular savings account?

From here we can use this framework as the basis for the keyword research, and there is no better place to begin than with our first tool.

SEMRush

For those unfamiliar with this tool it is designed to make it easier to accurately assess competitor and market opportunity by plugging into search data. In this example we will use it to highlight longer-tail keyword opportunity based upon the example questions we have just unearthed.

To uncover related keyword opportunity around the first question we type in something similar to the below:

This will highlight a number of phrases related to our question:

As you can see, this gives us a lot of ammunition from a content perspective to enable us to write about this critical subject consistently without repeating the same titles.

Each of those long-tail terms can be analyzed ever deeper by clicking on them individually. That will generate a further list of even more specifically related terms.
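
If you prefer to pull the same report programmatically, SEMrush also exposes it through its API. Below is a minimal Python sketch; the endpoint and parameters follow SEMrush’s published analytics API as we understand it, but treat the exact report name, column codes, and the “uk” database value as assumptions to verify against the current documentation (you’ll also need your own API key).

```python
import csv
import io

import requests

API_KEY = "YOUR_SEMRUSH_KEY"  # placeholder; substitute your own key


def related_phrases(seed, database="uk"):
    """Fetch related keyword phrases and volumes for a seed question."""
    params = {
        "type": "phrase_related",   # the related-keywords report
        "key": API_KEY,
        "phrase": seed,
        "database": database,       # regional database, e.g. "uk" or "us"
        "export_columns": "Ph,Nq",  # phrase and average monthly searches
    }
    resp = requests.get("https://api.semrush.com/", params=params)
    resp.raise_for_status()
    rows = csv.reader(io.StringIO(resp.text), delimiter=";")
    next(rows, None)  # skip the header line
    return [(phrase, int(volume)) for phrase, volume in rows]


for phrase, volume in related_phrases("how much can i borrow for a mortgage"):
    print(f"{volume:>8}  {phrase}")
```

Each phrase the report returns can be fed back in as a new seed, mirroring the click-through drill-down described above.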

Soovle

The next stage is to use this vastly underrated tool to further mine user search data. It allows you to gather regular search phrases from sites such as YouTube, Yahoo, Bing, Answers.com and Wikipedia in one place.

The result is something a little like the below. It may not be the prettiest but it can save a lot of time and effort as you can download the results in a single CSV.

Google Autocomplete / KeywordTool.io

There are several ways you can tap into Google’s Autocomplete data and with an API in existence there are a number of tools making good use of it. My current favourite is KeywordTool.io, which actually has its own API, mashing data from Google, YouTube, Bing, and the Apple App Store.

The real value is in how it spits out that data, as you are able to see suggestions by letter or number, creating hundreds of potential areas for content development. The App Store data is particularly useful, as you will often see greater refinement in search behavior here and as a result very specific ‘questions’ to answer.

A great example for this would be “how to prequalify yourself for a mortgage,” a phrase which would be very hard to surface using Google Autocomplete tools alone.
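
Most of these tools sit on top of Google’s unofficial suggest endpoint, and you can replicate the letter-by-letter expansion yourself. Here is a rough Python sketch; the endpoint is undocumented, so assume it may change, throttle, or disappear without notice.

```python
import json
import string

import requests

# Unofficial suggest endpoint; the "firefox" client value returns plain JSON.
SUGGEST_URL = "https://suggestqueries.google.com/complete/search"


def autocomplete(query):
    """Return Google's autocomplete suggestions for one query string."""
    resp = requests.get(
        SUGGEST_URL,
        params={"client": "firefox", "q": query},
        timeout=10,
    )
    resp.raise_for_status()
    return json.loads(resp.text)[1]  # element 0 just echoes the query


seed = "how do i buy a house"
suggestions = set(autocomplete(seed))
for letter in string.ascii_lowercase:  # the a-z expansion trick
    suggestions.update(autocomplete(f"{seed} {letter}"))

for suggestion in sorted(suggestions):
    print(suggestion)
```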

Forum searches

Another fantastic area worthy of research focus is forums. We use these to ask our peers and topic experts questions, so spending some time understanding what is being asked within the key ones for your market can be very helpful.

One of the best ways of doing this is to perform a simple advanced Google search as outlined below:

“keyword” + “forum”

For our example we might type:

"mortgage" + "forum"

This then presents us with more than 85,000 results, many of which will be questions that have been asked on this subject.

Examples include:

  • First-time buyer’s mortgage guide
  • Getting a Mortgage: Boost your Mortgage Chances
  • Mortgage Arrears: What help is available?
  • Are Fixed Rate Mortgages best?

As you can see, this also opens up a myriad of content opportunities.

Competitive research

Another way of laterally expanding your reach is to look at the content your best competitors are producing.

In this example we will look at two ways of doing that, firstly by analyzing top content and then by looking at what those competitors rank for that you don’t.

Most shared content

There are several tools that can give you a view on the most-shared content, but my personal favourites are Buzzsumo and the awesome new ahrefs Content Explorer.

Below, we see a search for “mortgages” using the tool, and we are presented with a list of content on that subject sorted by “most shared.” The result can be filtered by time frame, language, or even by specific domain inclusions or exclusions.

This data can be exported and titles extracted to be used as the basis of further keyword research around that specific topic area, or within a brainstorm.

For example, I might want to look at where the volume is from an organic search perspective for something like “mortgage paperwork.”

I can type this term into SEMRush and search through related phrases for long-tail opportunity on that specific area.

Competitor terms opportunity

A smart way of working out where you can gain further market share is to dive a little deeper into your key competitors and understand what they rank for and, critically, what you don’t.

To do this, we return to SEMRush and make use of a little-publicized but hugely useful tool within the suite called Domain Comparison Tool.

It allows you to compare two domains and visualize the overlap they have from a keyword ranking perspective. For this example, we will choose to compare two UK banks – Lloyds and HSBC.

To do that simply type both domains into the tool as below:

Next, click on the chart button and you will be presented with two overlapping circles, representing the keywords that each domain ranks for. As we can see, both rank for a similar number of keywords (the overall number affects the size of the circles) with some overlap but there are keywords from both sides that could be exploited.

If we were working for HSBC, for instance, the blue portion of the chart is what we would be most interested in for this scenario. We can download a full list of keywords that both banks rank for, and then sort by those that HSBC don’t rank for.

You can see in the snapshot below that the data includes columns on where each site ranks for each keyword, so sorting is easy.

Once you have the raw data in spreadsheet format, we would sort by the “HSBC” column so the terms at the top are those we don’t rank for, and then strip away the rest. This leaves you with the opportunity terms that you can create content to cover, and this can be prioritized by search volume or topic area if there are specific sub-topics that are more important than others within your wider plan.
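
With the export in hand, the filtering itself is quick to script. Here is an illustrative Python/pandas sketch; the file name and column headers are assumptions, so adjust them to match whatever your SEMRush export actually contains.

```python
import pandas as pd

# Hypothetical export of the domain-vs-domain keyword report
df = pd.read_csv("lloyds_vs_hsbc_keywords.csv")

# Keep keywords where Lloyds ranks but HSBC does not rank at all
gaps = df[df["hsbc.co.uk"].isna() & df["lloydsbank.com"].notna()]

# Prioritise the opportunity list by monthly search volume
gaps = gaps.sort_values("Search Volume", ascending=False)

print(gaps[["Keyword", "Search Volume", "lloydsbank.com"]].head(20))
```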

Create the calendar

By this point in the process you should have hundreds, if not thousands, of title ideas, and the next job is to ensure that you organise them in a way that makes sense for your audience and also for your brand.

Content flow

To do this properly requires not just knowledge of your audience via extensive research, but also a grasp of content strategy.

One of the biggest rules is something we call content flow. In a nutshell, it is the discipline of creating a content calendar that delivers variation over time in a way that keeps the audience engaged. 

If you create the same content all of the time it can quickly become a turn-off, so varying the type (video, image-led piece, infographics, etc.), the read time, and the amount of effort you put into creating each piece will produce that “flow.”

This handy tool can help you sense check it as you go.

Clearly, the “other” content requirements that form part of your wider strategy will need to fit into this calendar, too. The vast majority of the output here will be article-focused, and it is critical to ensure that other elements of your strategy are also covered to round out your content output.

This free content strategy toolkit download gives you everything you need to ensure you get the rest of it right.

The result

This is a strategy we have followed for many of our search-focused clients over the last 18 months, and we have some great real-world case studies to prove that it works.

Below you can see how just one of those has played out in search visibility improvement terms over that period as proof of its effectiveness.

All of that growth directly correlates with a huge increase in the number of URLs receiving traffic from search, and that is a key metric in measuring the effectiveness of this strategy.

In this example we saw a 15% monthly increase in the number of URLs receiving traffic from search, with organic traffic up 98% year-on-year despite head terms staying relatively static.

Give it a go for yourself as part of your wider strategy and see what it can do for your brand.



Get Unbeatable Insights into Local SEO: Buy the LocalUp Advanced Video Bundle

Posted by EricaMcGillivray

Missed LocalUp Advanced 2015? Forgot to take notes, or just want to relive the action? You can now purchase the bundle of 13 videos and get all the knowledge our speakers shared about local SEO. Dive deep and learn how to wrangle content, get reviews, overcome technical mobile challenges, and so much more from top industry leaders.

Moz and Local U are offering a super-special $99 deal for everyone—an unbeatable value for anyone looking to level-up their local SEO skills.

Buy the LocalUp Advanced Video Bundle

Get a preview of what you’ll learn and hear what attendees had to say about how much they enjoyed it:

LocalUp Advanced 2015 Video Sales Promo


In addition to the videos, you also get the slide decks. Follow along and go back as you start implementing these tips into your strategy and work. You can watch these videos and download them to any device you use: desktop, laptop, tablet, and mobile.

Watch the following great talks and more:

Getting Local Keyword Research and On-page Optimization Right with Mary Bowling
Local keyword data is often difficult to find, analyze, and prioritize. Get tips, tools, and processes for zeroing in on the best terms to target when optimizing your website and directory listings, and learn how and why to structure your website around them.

Exposing the Non-Obvious Elements of Local Businesses That Dominate on the Web with Rand Fishkin
In some categories and geographies, a local small business wholly dominates the rankings and visibility across channels. What are the secrets to this success, and how can small businesses with remarkable products/services showcase their traits best online? In this presentation, Rand digs deep into examples and highlights the recurring elements that help the best of the best stand out.



Local Content + Scale + Creativity = Awesome with Mike Ramsey
If you are wondering who is crushing it with local content and how you can scale such efforts, then tune in as Mike Ramsey walks through ideas, examples, and lessons he has learned along the way.



Don’t Just Show Up, Stand Out with Dana DiTomaso
Learn how to destroy your competitors by bringing personality to your marketing. Confront the challenges of making HiPPOs comfortable with a unique voice, keep brand standards while injecting some fun, and stay at the forefront of your audience’s mind.



Playing to Your Local Strengths with David Mihm
Historically, local search has been one of the most level playing fields on the web with smaller, nimbler businesses having an advantage as larger enterprises struggled to adapt and keep up. Today, companies of both sizes can benefit from tactics that the other simply can’t leverage. David shares some of the most valuable tactics that scale—and don’t scale—in a presentation packed with actionable takeaways, no matter what size business you work with.


Wondering if it’s truly “advanced?”

Seventy-nine percent of attendees found the LocalUp Advanced presentations to be at just the right level.

Buy the LocalUp Advanced Video Bundle



Check Your Local Business Listings in the UK

Posted by David-Mihm

One of the most consistent refrains from the Moz community as we’ve released features over the last two years has been the desire to see Moz Local expand to countries outside the U.S. Today I’m pleased to announce that we’re embarking on our journey to global expansion with support for U.K. business listing searches in our Check Listing tool.

Some of you may remember limited U.K. functionality as part of GetListed.org, but as a very small company we couldn’t keep up with the maintenance required to present reliable results. It’s taken us longer than we would have liked to get here, but now with more resources, the Moz Local team has the bandwidth and important experience from the past year of Moz Local in the U.S. to fully support U.K. businesses.

How It Works

We’ve updated our search feature to accept both U.S. and U.K. postal codes, so just head on over to moz.com/local/search to check it out!

After entering the name of your business and a U.K. postcode, we go out and ping Google and other important local search sites in the U.K., and return what we found. Simply select the closest-matching business and we’ll proceed to run a full audit of your listings across these sites.

You can click through and discover incomplete listings, inconsistent NAP information, duplicate listings, and more.

This check listing feature is free to all Moz community members.

You’ve no doubt noted in the screenshot above that we project a listing score improvement. We do plan to release a fully-featured U.K. version of Moz Local later this spring (with the same distribution, reporting, and duplicate-closure features that are available in the U.S.), and you can enter your email address—either on that page or right here—to be notified when we do!

U.K.-Specific Partners

As I’ve mentioned in previous blog comments, there are a certain number of global data platforms (Google, Facebook, Yelp, Bing, Foursquare, and Factual, among others) where it’s valuable to be listed correctly and completely no matter which country you’re in.

But every country has its own unique set of domestically relevant players as well, and we’re pleased to have worked with two of them on this release: Central Index and Thomson Local. (Head on over to the Moz Local Learning Center for more information about country-specific data providers.)

We’re continuing discussions with a handful of other prospective data partners in the U.K. If you’re interested in working with us, please let us know!

What’s Next?

I’m sure requests for further expansion, especially to Canada and Australia, will be loud and clear in the comments below! Further expansion is on our roadmap, but it’s balanced against a more complete feature set in the (more populous) U.S. and U.K. markets. We’ll continue to use our experience in those markets as we prioritize when and where to expand next.

A few lucky members of the Moz Local team are already on their way to BrightonSEO. So if you’re attending that awesome event later this week, please stop by our booth and let us know what you’d like to see us work on next.



9 Things You Need to Know About Google’s Mobile-Friendly Update

Posted by Suzzicks

Rumors are flying about Google’s upcoming mobile-friendly update, and bits of reliable information have come from several sources. My colleague Emily Grossman and I wanted to cut through the noise and bring online marketers a clearer picture of what’s in store later this month. In this post, you’ll find our answers to nine key questions about the update.

1. What changes is Google making to its algorithm on April 21st?

Answer: Recently, Google has been rolling out lots of changes to apps, Google Play, the presentation of mobile SERPS, and some of the more advanced development guidelines that impact mobile; we believe that many of these are in preparation for the 4/21 update. Google has been downplaying some of these changes, and we have no exclusive advanced knowledge about anything that Google will announce on 4/21, but based on what we have seen and heard recently, here is our best guess of what is coming in the future (on 4/21 or soon thereafter):

We believe Google will launch a new mobile crawler (probably with an Android user-agent) that can do a better job of crawling single-page web apps, Android apps, and maybe even Deep Links in iOS apps. The new Mobile-Friendly guidelines that launched last month focus on exposing JS and CSS because Android apps are built in Java, and single-page web apps rely heavily on JavaScript for their fluid, app-like experience.

Some example sites that use Responsive Design well in a single-page app architecture are:

Also, according to Rob Ousbey of Distilled, Google has been testing this kind of architecture on Blogspot.com (a Google property).

Google has also recently been pushing for more feeds from Trusted Partners, which are a key component of both mobile apps and single-page web apps, since PhantomJS and Prerender.io (and similar technologies) together essentially generate crawlable feeds for indexing single-page web apps. We think this increased focus on JS, CSS, and feeds is also the reason why Google needs the additional mobile index that Gary Illyes mentioned in his “Meet the Search Engines” interview at SMX West a couple weeks ago, and why suddenly Google has been talking about apps as “first class citizens,” as called out by Mariya Moeva in the title of her SMX West presentation.

A new mobile-only index to go with the new crawler also makes sense because Google wants to index and rank both app content and deep links to screens in apps, but it does not necessarily want to figure them into the desktop algorithm or slow it down with content that should never rank in a desktop search. We also think that the recent increased focus on deep links and the announcement from Google about Google Play’s new automated and manual review process are related. This announcement indicates, almost definitively, that Google has built a crawler that is capable of crawling Android apps. We believe that this new crawler will also be able to index more than one content rendering (web page or app screen data-set) to one URL/URI, and it will probably focus more on feeds, schema, and sitemaps for its own efficiency. Most of the native apps that would benefit from deep linking are driven by data feeds, and crawling the feeds instead of the apps would give Google the ability to understand the app content, especially for iOS apps (which they are still not likely able to crawl), without having to crawl the app code. Then, it can crawl the deep-linked web content to validate the app content.

FYI: Gary Illyes mentioned that Google is retiring their old AJAX indexing instructions, but did not say how they would be replaced, except to specify in a Google+ post that Google would not click links to get more content. Instead, they would need an OnLoad event to trigger further crawling. These webmaster instructions for making AJAX crawlable were often relied on as a way to make single-page web apps crawlable, and we think that feeds will play a role here, too, as part of the replacement. Relying more heavily on feeds also makes it easier for Google to scrape data directly into SERPS, which they have been doing more and more. (See the appendix of this slide deck, starting on slide 30, for lots of mobile examples of this change in play already.) This will probably include the ability to scrape forms directly into a SERP, à la the form markup for auto-complete that Google just announced.

We are also inclined to believe that the use of the new “Mobile-Friendly” designation in mobile SERPS may be temporary, lasting only as long as SEOs and webmasters need an incentive to make their CSS and JavaScript crawlable and get into the new mobile index. “Mobile-Friendly” in the SERP is a bit clunky, and takes up a lot of space, so Google may decide to switch to something else, like the “slow” tag shown to the right, originally spotted in testing by Barry Schwartz. In fact, showing the “Slow” tag might make sense later in the game, after most webmasters have made the updates, and Google instead needs to create a more serious and impactful negative incentive for the stragglers. (This is Barry’s image; we have not actually seen this one yet.)

In terms of the Mobile-Friendly announcement, it is surprising that Google has not focused more on mobile page speed, minimizing redirects and avoiding mobile-only errors—their historical focus for mobile SEO. This could be because page speed does not matter as much in the evaluation of content if Google is getting most of its crawl information from feeds. Our guess is that things like page speed and load time will rebound in focus after 4/21. We also think mobile UX indicators that are currently showing at the bottom of the Google PageSpeed tool (at the bottom of the “mobile” tab) will play into the new mobile algorithm—we have actually witnessed Google testing their inclusion in the Mobile-Friendly tool already, as shown below, and of course, they were recently added to everyone’s Webmaster Tools reports. It is possible that the current focus on CSS and JavaScript is to ensure that as many pages are in the new index as possible at launch.
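
One practical takeaway while we wait for 4/21: make sure your robots.txt is not blocking the CSS and JavaScript Google now wants to render. A minimal illustrative snippet follows (the paths and rules are examples, not a universal recommendation); the usual culprits are legacy rules like “Disallow: /js/” or “Disallow: /assets/”.

```
# robots.txt - let Googlebot fetch the assets it needs to render pages
User-agent: Googlebot
Allow: /*.css$
Allow: /*.js$
```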

2. If my site is not mobile-friendly, will this impact my desktop rankings as well?

Answer: On a panel at SMX Munich (2 weeks after SMX West) Zineb from Google answered ‘no’ without hesitation. We took this as another indication that the new index is related to a new crawler and/or a major change to the infrastructure they are using to parse, index, and evaluate mobile search results but not desktop results. That said, you should probably take some time soon to make sure that your site works—at least in a passable way—on mobile devices, just in case there are eventual desktop repercussions (and because this is a user experience best practice that can lead to other improvements that are still desktop ranking factors, such as decreasing your bounce rate).

3. How much will mobile rankings be impacted?

Answer: On the same panel at SMX Munich (mentioned above), Zineb said that this 4/21 change will be bigger than the Panda and Penguin updates. Again, we think this fits well with an infrastructure change. It is unclear if all mobile devices will be impacted in the change or not. The change might be more impactful for Android devices or might impact Android and iOS devices equally—though currently we are seeing significant differences between iOS and Android for some types of search results, with more significant changes happening on Android than on iOS.

Deep linking is a key distinction between mobile SERPs on the Android OS and SERPs on iOS (currently, SERPs only display Android app deep links, and only on Android devices). But there is reason to believe this gap will be closing. For example, in his recent Moz post and in his presentation at SMX West, Justin Briggs mentioned that a few sample iOS deep links were validating in Google’s deep link tool. This may indicate that iOS apps with deep links will be easier to surface in the new framework, but it is still possible that won’t make it into the 4/21 update. It is also unclear whether or not Google will maintain its stance on tablets being more like desktop experiences than they are like mobile devices, and what exactly Google is considering “mobile.” What we can say here, though, is that Android tablets DO appear to be including the App Pack results, so we think they will change their stance here, and start to classify tablets as mobile on 4/21.

Emails are also increasingly impacting SERPs (particularly mobile SERPs), since mobile email opens have grown by 180% in three years, and Google is trying to take advantage of this increased engagement on mobile devices. As of now, schema can be included in emails to drive notifications in the Google Now app, and also to let Google surface marked-up emails in a browser-based search. This all happens by virtue of Google crawling all emails that come into your Gmail account and indexing them to your user profile so that they are accessible and able to rank like this across all of your devices (even if you aren’t currently logged into your Gmail account on your phone). Optimizing emails for mobile search is also becoming more important, and in the 4/21 update Google could do more to push the use of Schema markup in emails to drive personalized search results like the one shown to the right.
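
For reference, Gmail’s email markup takes schema.org vocabulary embedded as JSON-LD in the message body. Here is a minimal sketch of a “Go-To Action”; the URL and names are hypothetical, and note that Gmail only honors this markup from senders who have registered with Google.

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "EmailMessage",
  "description": "Review your mortgage offer",
  "potentialAction": {
    "@type": "ViewAction",
    "name": "View offer",
    "url": "https://bank.example.com/offers/12345"
  }
}
</script>
```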

Inclusions like this mean that even if you are able to maintain your keyword rankings in mobile search after April 21, you may not necessarily be able to sustain your mobile traffic.

4. What about sites that redirect to a mobile subdomain? Will they be considered mobile-friendly?

Answer: This is an interesting question, because immediately after the roll-out of the Mobile-Friendly tagging, we actually saw significantly more mDot (‘m.’) websites ranking well in the mobile SERPS. It’s almost like they counted the mobile subdomain as a Mobile-Friendly signal, but started the algorithm fresh, with no historical data to indicate which other sites had fewer obvious signals of mobility, like a responsive design, or an adaptive or dynamically served mobile site. It is also interesting to note that many of the Google representatives seem to have recently backed off of their strong insistence on responsive design. They still say that it is the least error-prone, and easiest to crawl and index, but they also now seem to be more willing to acknowledge the other viable mobile site architectures.

5. How do I know if my site meets Google’s requirements for mobile friendliness?

Answer: Google has created a Mobile-Friendliness tool that will give you a ‘yes’ or ‘no’ answer on a per-url basis. Pages are evaluated individually, so another quick way to get a sense for how your top pages perform is to do a “site:” query for the domain in question on your phone. That will allow you to see all the pages indexed to the domain, and evaluate which ones are considered Mobile-Friendly and which are not, without having to submit them to the tool one at a time.

Google has been clear that Mobile-Friendly test results are binary, meaning that your page is either Mobile-Friendly or it is not. There is no 50% or 70% Mobile-Friendly result possible—no middle ground. They have also taken care to specify that Google’s Mobile-Friendly evaluations are somewhat instant, implying that there is no proving-time or “sandbox” associated with the tag, but this could be somewhat misleading. There may be no intentional time-delay before a page is awarded the Mobile-Friendly notation, but it will only change after a crawl of the site indicates that the page is now Mobile-Friendly, so it is close to instantaneous if the pages are getting crawled on a very regular basis.

We have found that the tool result does not necessarily match up with what we are seeing on our phones. We have occasionally also noticed that sometimes two pages in the same page template will perform differently, even though the content that changes within the template is primarily text. Both of these variations could simply be an indication of real-time delay between the tool and the crawler—the tool does an ad-hoc check on the URL to assess mobile-friendliness, but if the bot has not been by the site to evaluate its mobile-friendliness recently, then the page in question would not yet have the Mobile-Friendly designation in the SERP. With this in mind, remember that when you are updating a page and pushing it live for testing, you must use the tool to see if the update has been successful, until the site is re-crawled. This also means that once you see success in the tool, the best way to get the Mobile-Friendly designation to show up in the results faster might just be to push a sitemap in Webmaster Tools and try to trigger a fresh crawl.
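
On that last point, one lightweight way to request a re-crawl, alongside resubmitting in the Webmaster Tools UI, is Google’s sitemap ping endpoint. A quick sketch with a hypothetical sitemap URL:

```python
import requests

# Google's sitemap ping endpoint; Webmaster Tools submission does the
# same job with reporting attached.
sitemap_url = "https://www.example.com/sitemap.xml"  # hypothetical
resp = requests.get("https://www.google.com/ping", params={"sitemap": sitemap_url})
print(resp.status_code)  # 200 means the ping was received
```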

6. How does having a mobile app impact my mobile rankings?

Answer: There are two things to consider here. First, if a mobile search query is highly correlated with mobile app listings (the app “download pages” in the Google Play and iOS App Stores), your app could see significantly more visibility within mobile search results pages. This is because Google has started treating apps as a new kind of universal search result, returning an “App Pack” of Google Play results for certain searches on Android devices (shown at the right), and adding an Apps drop-down to the main nav-bar on iOS devices (not shown).

An “App Pack” is a group of related apps that rank together for a given query, shown together in a box separate from the inline organic search results. It has different formatting and an “Apps” header. These often float to the top of a mobile search result, pushing the second or sometimes even the first organic result below the fold. This is also discussed in Justin Briggs’ article about apps. Currently, there is a high correlation between Google Play “App Pack” rankings and exact-match keywords in the app title. Google also seems to be evaluating app quality here and tries to serve only higher-than-average rated apps in the App Pack (this generally tends to be around a 3.5 – 4 star minimum for common keyword phrases).

If Google starts to serve these App Packs on iOS device searches as well, all apps that have keyword-optimized titles and have high-quality ratings and reviews could jump up to the top of the mobile web SERPs, increasing their visibility and likely downloads. Conversely, mobile websites that currently enjoy an above-the-fold #1 or #2 organic ranking may get pushed below the fold in mobile SERPs, especially for queries that are highly correlated with mobile app results. This could cause a negative impact on mobile website visibility (without necessarily changing standard numeric rankings), in cases where a query returns a mobile App Pack—regardless of whether or not an app within that pack is yours.

Second, Mariya Moeva (Google Webmaster Trends Analyst) recently announced at SMX West that Google will be considering “high quality” apps to be a positive ranking factor in mobile search. We took this to mean that Deep Links between your website and your app will improve your website rankings in mobile search. Deep Links are different from app store listings in the App Store or Google Play, because they link directly to a specific screen within your app experience. They look just like regular links in the mobile search result, but when you click them, you are given the option of opening the link on the web or in the app.

Currently, if you add Deep Links to your Android mobile app and associate your app URIs with corresponding (content-matching) webpages, Google will recognize the connection between your app content and your web content (and allow users who have your app installed to access your content directly in the mobile app). As it is now, the only way for Deep Links to your app contents to appear in search results is:

  • For app screens to have a 1-1 content parity with webpages
  • For those screens to have proper Deep Link coding that associates them with the corresponding pages on the website (a sketch of that markup follows this list), AND
  • For your app to be installed on the searcher’s device. If the app is not installed or there is no corresponding web content, the links in the SERP will just behave as normal web links.
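
As promised above, here is a sketch of the deep link association markup as documented for App Indexing at the time of writing; the package name, scheme, and path are all made up for illustration.

```html
<!-- On the web page: advertise the Android app screen that mirrors it -->
<link rel="alternate"
      href="android-app://com.examplebank.android/https/www.examplebank.com/mortgages/first-time-buyers" />
```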

Mariya didn’t state exactly how Google will be evaluating the quality of apps, but we can guess that Google will be considering signals like star ratings, reviews, and +1s. And if what we assume about the 4/21 update proves to be true, it is possible that app URIs without corresponding Deep Linked web content may rank independently in a mobile SERP from information that Google acquired via app feeds. In this case, “app quality” could be a positive mobile ranking signal for its own URIs/screens, and not just for the website it is associated with. This would be a great boon for app discovery.

7. Do I need an app, and if so, should it be Android, iOS or both? What if I have a limited budget?

Answer: If you have the budget to develop both a mobile app and a mobile website, there can be significant value to maintaining both, particularly if you leverage the mobile app as a “value add” for your customers and not just a website duplicate (though enabling some functionality duplication is necessary for deep linking). If you have a limited budget, you will have to make a choice, but it is important to consider this a business choice and not primarily an SEO choice. Your business might be well served by a mobile website or might be better served by a mobile app with only a promotional mobile web landing page meant to send web traffic to app stores (e.g., Tinder). In general, most businesses can be extremely well served by a mobile website and should focus their budget on making that experience great across many devices. We only recommend going “app-first” if you are trying to offer an experience that cannot be delivered well on a mobile website. Experiences that offer valuable offline utility (think photo-editing apps), take advantage of heavy computing (like gaming apps), or rely on non-web input elements such as device accelerometers or GPS are often better suited to an app.

Apps are generally riskier because they require more up-front investment, and have to be tightly in sync with app store guidelines and approval processes that you have no control over. There are a lot of barriers to entry; just building and maintaining an experience can cost an average of $100k per platform, so it’s important that you know this is the right experience for your customers before you choose this path.

If you decide that an app experience is the best choice for your business (or you have budgeted an app in addition to your mobile web experience), you can use the operating system data in Google Analytics to help you determine which Operating System is more popular among your users. If you don’t have this data because you don’t have a website yet or you have too limited a mobile audience to determine a trend, you should choose the platform that best matches with your monetization strategies. iOS users tend to spend more money than their Android counterparts, but there are more total Android users around the world than iOS users. The implication is that if you plan to monetize your app with user transactions like In App Purchases (IAPs) or Subscriptions, iOS may be the way to start, but if you plan to monetize your app with advertisements, Android could be just as lucrative, if not more so. If Android app discovery is made easier with the 4/21 update but iOS app discovery is not, that could also factor into the decision process.

8. How is mobile traffic impacted by the user search query? Is there a way I can find out if my top keywords are mostly desktop or mobile keywords?

Answer: Search queries actually matter more and more for mobile, because Google is trying to do a much better job of anticipating and embracing a user’s intent from the query. This means that often, Google is presenting the information a searcher requests directly in the search result above the organic rankings. SEOs are used to this for local-mobile searches, but it is now happening for all kinds of searches, so it can steal traffic that would otherwise go to the site and can skew success metrics.

Google has expanded the types of information that they scrape and pull from a site directly into an answer box, especially in mobile. They have also increased and diversified the number of aggregator-style “Sponsored” results that show up in mobile—especially on Android. The top mobile search result for most flight, hotel, music, and TV show queries is now a specially designed, sponsored, aggregated result that pushes the old organic results below the fold. Whenever you see a little grey ‘i’ in the upper right-hand corner of a mobile search result (especially a specially formatted list of results that Google has aggregated for you), Google is probably getting a small portion of any related transaction, even if it is just the website paying for the click. Simple blue-link search results may soon be a thing of the past—especially above the fold.

Even regular, non-aggregator-style PPC results are taking up more room and looking more compelling with click-to-call, star ratings, app icons, links for directions and ad extensions, so these may be more of a threat for SEO moving forward (shown on the right). There is a long list of examples that we shared in the Appendix of Cindy’s SMX Munich deck about the Future of Mobile SEO. With all the scraping, PPC may be the only way to out-rank Google and get above the fold for some queries in the mobile SERP.

If you have not seen Dr. Pete’s presentation from SMX West this year about the Changing of Google SERPS, you really must! It addresses this question in the desktop format, but I think the crux of what he is saying is even truer in mobile. This Dr. Pete quote from a related interview is very telling:

“Google is essentially competing against us with our own information, and I think that’s a turning point in the relationship between Google and webmasters.” -Dr. Pete

In terms of which keywords are more mobile-oriented than desktop-oriented, this can be a difficult question. You can get some basic information from Webmaster Tools by filtering the keyword information to show mobile-only queries, and you can do something similar in Google Analytics. Beyond that, there are some more sophisticated solutions, like those from Searchmetrics and BrightEdge, but those are often out of reach for smaller operations.

9. What is Google’s goal with all of these mobile-friendly changes?

Answer: There are obviously a lot of goals in the mix here, but we do believe that Google is making these changes primarily to provide a better mobile experience for searchers, and give people exactly what they want. That said, though, they are also in it to make money. Being able to easily surface apps in a search result will help them drive more and better app development for Google Play and monetize their other content like TV shows, books, magazines, movies, and music—all of which have been threatened recently by competitors like Hulu, Amazon, and of course the iOS App Store and iTunes.

Google has been encouraging publishers to include transcripts with videos and song lyrics with songs. In the long run, those will help Google scrape and show those things in answer boxes, as shown at the right, but eventually they will probably also surface their own version of the content from Google Play, with links just below the answer box, so that you can watch the video or download the song directly to your phone on Google Play. When you think about Google’s intentions on this front, and try to envision the future, it is important to note that Google is actually already offering Google Play for iOS, which currently just provides the Google Music cloud-storage and a music subscription model. We expect this to expand as well, so that Google can expand their level of competitiveness here too. 



Understanding and Applying Moz’s Spam Score Metric – Whiteboard Friday

Posted by randfish

This week, Moz released a new feature that we call Spam Score, which helps you analyze your link profile and weed out the spam (check out the blog post for more info). There have been some fantastic conversations about how it works and how it should (and shouldn’t) be used, and we wanted to clarify a few things to help you all make the best use of the tool.

In today’s Whiteboard Friday, Rand offers more detail on how the score is calculated, just what those spam flags are, and how we hope you’ll benefit from using it.


For reference, here’s a still of this week’s whiteboard. 

Understanding and Applying Moz's Spam Score Metric

Click on the image above to open a high resolution version in a new tab!

Video transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. This week, we’re going to chat a little bit about Moz’s Spam Score. Now I don’t typically like to do Whiteboard Fridays specifically about a Moz project, especially when it’s something that’s in our toolset. But I’m making an exception because there have been so many questions and so much discussion around Spam Score and because I hope the methodology, the way we calculate things, the look at correlation and causation, when it comes to web spam, can be useful for everyone in the Moz community and everyone in the SEO community in addition to being helpful for understanding this specific tool and metric.

The 17-flag scoring system

I want to start by describing the 17-flag system. As you might know, Spam Score is shown as a score from 0 to 17. You either fire a flag or you don’t. You can see a list of those 17 flags in the blog post. Essentially, the count of flags (not which specific flags, just how many fired) correlates to the percentage of sites with that count that were penalized or banned by Google. I’ll show you a little bit more in the methodology.

Basically, what this means is that for sites with 0 spam flags (none of the 17 flags fired), 99.5% were not penalized or banned, on average, in our analysis, and 0.5% were. At 3 flags, 4.2% of sites were penalized or banned; that’s actually still a huge number, probably in the millions of domains or subdomains that Google has potentially banned. All the way down here at 11 flags, 87.3% were found to be penalized or banned. That seems pretty risky. But the remaining 12.7% is still a very big number, again probably in the hundreds of thousands of unique websites that are not banned but still fire these flags.

If you’re looking at a specific subdomain and you’re saying, “Hey, gosh, this only has 3 flags or 4 flags on it, but it’s clearly been penalized by Google, Moz’s score must be wrong,” no, that fits comfortably into those kinds of numbers. Same thing down here. If you see a site that is not penalized but has a number of flags, that’s potentially an indication that you’re in that percentage of sites that we found not to be penalized.

So this is an indication of percentile risk, not a “this is absolutely spam” or “this is absolutely not spam.” The only caveat is anything with, I think, more than 13 flags, we found 100% of those to have been penalized or banned. Maybe you’ll find an odd outlier or two. Probably you won’t.
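
To make that percentile framing concrete, here is a tiny illustrative Python lookup built only from the handful of rates quoted in this post; the full 0-17 table lives in the Spam Score blog post, so treat this as a sketch rather than the real scale.

```python
# Flag count -> share of sites with that count we saw penalized or banned.
# Only the data points quoted in this post are included.
OBSERVED_PENALIZED_RATE = {
    0: 0.005,   # 0.5%
    3: 0.042,   # 4.2%
    4: 0.075,   # 7.5% (cited later in this post)
    11: 0.873,  # 87.3%
    14: 1.0,    # ~100% at 14 or more flags
}


def percentile_risk(flag_count):
    """Read a Spam Score as percentile risk, not a spam/not-spam verdict."""
    nearest = max(k for k in OBSERVED_PENALIZED_RATE if k <= flag_count)
    return OBSERVED_PENALIZED_RATE[nearest]


print(f"{percentile_risk(11):.1%} of 11-flag sites were penalized or banned")
```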

Correlation ≠ causation

Correlation is not causation. This is something we repeat all the time here at Moz and in the SEO community. We do a lot of correlation studies around these things. I think people understand those very well in the fields of social media and in marketing in general. Certainly in psychology and electoral voting and election polling results, people understand those correlations. But for some reason in SEO we sometimes get hung up on this.

I want to be clear. Spam flags and the count of spam flags correlates with sites we saw Google penalize. That doesn’t mean that any of the flags or combinations of flags actually cause the penalty. It could be that the things that are flags are not actually connected to the reasons Google might penalize something at all. Those could be totally disconnected.

We are not trying to say with the 17 flags these are causes for concern or you need to fix these. We are merely saying this feature existed on this website when we crawled it, or it had this feature, maybe it still has this feature. Therefore, we saw this count of these features that correlates to this percentile number, so we’re giving you that number. That’s all that the score intends to say. That’s all it’s trying to show. It’s trying to be very transparent about that. It’s not trying to say you need to fix these.

A lot of flags and features that are measured are perfectly fine things to have on a website, like no social accounts or email links. That’s a totally reasonable thing to have, but it is a flag because we saw it correlate. A number in your domain name, I think it’s fine if you want to have a number in your domain name. There’s plenty of good domains that have a numerical character in them. That’s cool.

TLD extension that happens to be used by lots of spammers, like a .info or a .cc or a number of other ones, that’s also totally reasonable. Just because lots of spammers happen to use those TLD extensions doesn’t mean you are necessarily spam because you use one.

Or low link diversity. Maybe you’re a relatively new site. Maybe your niche is very small, so the number of folks who point to your site tends to be small, and lots of the sites that organically naturally link to you editorially happen to link to you from many of their pages, and there’s not a ton of them. That will lead to low link diversity, which is a flag, but it isn’t always necessarily a bad thing. It might still nudge you to try and get some more links because that will probably help you, but that doesn’t mean you are spammy. It just means you fired a flag that correlated with a spam percentile.

The methodology we use

The methodology that we use, for those who are curious — and I do think this is a methodology that might be interesting to potentially apply in other places — is we brainstormed a large list of potential flags, a huge number. We cut that down to the ones we could actually do, because there were some that were just unfeasible for our technology team, our engineering team to do.

Then, we got a huge list, many hundreds of thousands of sites, that were penalized or banned. When we say banned or penalized, what we mean is they didn’t rank on page one for either their own domain name or their own brand name, the thing between the www and the .com or .net or .info or whatever it was. If a site didn’t rank for either its full domain name (the www plus the .com) or its brand name (like Moz), we said, “Hey, you’re penalized or banned.”

Now you might say, “Hey, Rand, there are probably some sites that don’t rank on page one for their own brand name or their own domain name, but aren’t actually penalized or banned.” I agree. That’s a very small number. Statistically speaking, it probably is not going to be impactful on this data set. Therefore, we didn’t have to control for that. We ended up not controlling for that.

Then we found which of the features that we ideated, brainstormed, actually correlated with the penalties and bans, and we created the 17 flags that you see in the product today. There are lots of things that I thought were going to correlate, for example spammy-looking anchor text or poison keywords on the page, like Viagra, Cialis, Texas Hold’em online, pornography. Not all of those turned out to correlate well, and so they didn’t make it into the 17-flags list. I hope over time we’ll add more flags. That’s how things worked out.

How to apply the Spam Score metric

When you’re applying Spam Score, I think there are a few important things to think about. Just like domain authority, or page authority, or a metric from Majestic, or a metric from Google, or any other kind of metric that you might come up with, you should add it to your toolbox and to your metrics where you find it useful. I think playing around with Spam Score, experimenting with it, is a great thing. If you don’t find it useful, just ignore it. It doesn’t actually hurt your website. It’s not like this information goes to Google or anything like that. They have way more sophisticated stuff to figure out things on their end.

Do not just disavow everything with seven or more flags, or eight or more flags, or nine or more flags. I think that we use the color coding to indicate 0% to 10% of these flag counts were penalized or banned, 10% to 50% were penalized or banned, or 50% or above were penalized or banned. That’s why you see the green, orange, red. But you should use the count and line that up with the percentile. We do show that inside the tool as well.

Don’t just take everything and disavow it all. That can get you into serious trouble. Remember what happened with Cyrus. Cyrus Shepard, Moz’s head of content and SEO, disavowed all the backlinks to his site. It took more than a year for him to rank for anything again. Google almost treated it like he was banned, not completely, but they seriously took away all of his link power and didn’t let him back in, even though he changed the disavow file and all that.

Be very careful submitting disavow files. You can hurt yourself tremendously. The reason we offer it in disavow format is because many of the folks in our customer testing said that’s how they wanted it so they could copy and paste, so they could easily review, so they could get it in that format and put it into their already existing disavow file. But you should not do that. You’ll see a bunch of warnings if you try and generate a disavow file. You even have to edit your disavow file before you can submit it to Google, because we want to be that careful that you don’t go and submit.
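
For context, the disavow format itself is plain text: one domain or URL per line, with lines starting with “#” treated as comments. An entirely made-up example:

```
# Hypothetical disavow file - every domain below is invented
domain:spammy-link-network.info
domain:cheap-directory.cc
# A single URL can also be disavowed on its own:
http://low-quality-blog.example/comment-page-7/
```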

You should also set your expectations around Spam Score’s accuracy. If you’re doing spam investigation, you’re probably looking at spammier sites. If you’re looking at a random hundred sites, you should expect that the flags would correlate with the percentages. If I look at a random hundred 4-flag Spam Score sites, I would expect 7.5% of those, on average, to be penalized or banned. If you are therefore seeing sites that don’t fit those numbers, they probably fit into the percentiles that were not penalized, or up here were penalized, down here weren’t penalized, that kind of thing.

Hopefully, you find Spam Score useful and interesting and you add it to your toolbox. We would love to hear from you on iterations and ideas that you’ve got for what we can do in the future, where else you’d like to see it, and where you’re finding it useful/not useful. That would be great.

Hopefully, you’ve enjoyed this edition of Whiteboard Friday and will join us again next week. Thanks so much. Take care.

Video transcription by Speechpad.com

ADDITION FROM RAND: I also urge folks to check out Marie Haynes’ excellent Start-to-Finish Guide to Using Google’s Disavow Tool. We’re going to update the feature to link to that as well.

