Panda Pummels Press Release Websites: The Road to Recovery

Posted by russvirante

This post was originally in YouMoz, and was promoted to the main blog because it provides great value and interest to our community. The author’s views are entirely his or her own and may not reflect the views of Moz, Inc.

Many of us in the search industry were caught off guard by the release of Panda 4.0. It had become common knowledge that Panda was essentially “baked into” the algorithm, refreshing several times a month, so a pronounced, named update was a surprise. While the impact seemed muted because it coincided with other releases, including a payday loans update and a potential manual penalty on eBay, there were notable victims of Panda 4.0, chief among them the major press release sites. Both Search Engine Land and Seer Interactive independently verified a profound traffic loss on major press release sites following the Panda 4.0 update. While we can’t be certain that Google did not roll out a handful of simultaneous manual actions, or that these sites weren’t impacted by the payday loans update, Panda remains the best explanation for their traffic losses.

So, what happened? Can we tease out why press release sites were seemingly singled out? Are they really that bad? And why are they particularly susceptible to the Panda algorithm? To answer these questions, we must first address a more fundamental one: what is the Panda algorithm?

Briefly: What is the Panda Algorithm?

The Panda algorithm was a ground-breaking shift in Google’s methodology for addressing certain search quality issues. Using patented machine learning techniques, Google used real, human reviewers to determine the quality of a sample set of websites. We call this sample the “training set”. Examples of the questions they were asked are below:

  1. Would you trust the information presented in this article?
  2. Is this article written by an expert or enthusiast who knows the topic well, or is it more shallow in nature?
  3. Does the site have duplicate, overlapping, or redundant articles on the same or similar topics with slightly different keyword variations?
  4. Would you be comfortable giving your credit card information to this site?
  5. Does this article have spelling, stylistic, or factual errors?
  6. Are the topics driven by genuine interests of readers of the site, or does the site generate content by attempting to guess what might rank well in search engines?
  7. Does the article provide original content or information, original reporting, original research, or original analysis?
  8. Does the page provide substantial value when compared to other pages in search results?
  9. How much quality control is done on content?
  10. Does the article describe both sides of a story?
  11. Is the site a recognized authority on its topic?
  12. Is the content mass-produced by or outsourced to a large number of creators, or spread across a large network of sites, so that individual pages or sites don’t get as much attention or care?
  13. Was the article edited well, or does it appear sloppy or hastily produced?
  14. For a health related query, would you trust information from this site?
  15. Would you recognize this site as an authoritative source when mentioned by name?
  16. Does this article provide a complete or comprehensive description of the topic?
  17. Does this article contain insightful analysis or interesting information that is beyond obvious?
  18. Is this the sort of page you’d want to bookmark, share with a friend, or recommend?
  19. Does this article have an excessive amount of ads that distract from or interfere with the main content?
  20. Would you expect to see this article in a printed magazine, encyclopedia or book?
  21. Are the articles short, unsubstantial, or otherwise lacking in helpful specifics?
  22. Are the pages produced with great care and attention to detail vs. less attention to detail?
  23. Would users complain when they see pages from this site?

Once Google had these answers from real users, they built a list of variables that might potentially predict those answers, and applied their machine learning techniques to build a model that predicts low performance on these questions. For example, having an HTTPS version of your site might predict high performance on the “trust with a credit card” question. This model could then be applied across their index as a whole, filtering out sites that would likely perform poorly on the questionnaire. This filter became known as the Panda algorithm.
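
To make that idea concrete, here is a purely illustrative sketch in JavaScript of what such a prediction could look like. The function name, features, and weights are all invented for this example; Google’s actual model is learned from the human-rated training set rather than hand-tuned, and its real inputs are unknown.

// Toy predictor for one questionnaire answer ("would you trust this site with
// your credit card?"). Every feature and weight below is hypothetical.
function predictTrustScore(site) {
  var score = 0;
  if (site.hasHttps)       score += 0.3;  // HTTPS might predict higher trust
  if (site.hasTrustBadges) score += 0.2;  // third-party security/reputation badges
  if (site.modernDesign)   score += 0.2;  // an up-to-date design
  score -= 0.1 * site.adDensity;          // heavy advertising might predict lower trust
  return score;                           // low scores flag pages likely to do poorly on the survey
}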

How do press release sites perform on these questions?

First, Moz has a great tutorial on running a Panda questionnaire on your own website, which is useful not just for Panda but for really any kind of user survey. The graphs and data in my analysis come from PandaRisk.com, though. Full disclosure: Virante, Inc., the company for which I work, owns PandaRisk. The graphs were built by averaging the results from several pages on each press release site, so they represent a sample of pages from each PR distributor.

So, let’s dig in. In the interest of brevity, I have chosen to highlight just four of the major concerns that came from the surveys, question-by-question.

Q1. Does this site contain insightful analysis?

Google wants to send users to web pages that are uniquely useful, not just unique and not just useful. Unfortunately, press release sites uniformly fail on this front. On average, only 50% of reviewers found that BusinessWire.com content contained insightful analysis. Compare this to Wikipedia, EDU and Government websites which, on average, score 84%, 79% and 94% respectively, and you can see why Google might choose not to favor their content.

But does this have to be the case? Of course not. Press release websites like BusinessWire.com have first-mover status on important industry information. They should be the first to release insightful analysis. Now, press release sites do have to be careful about editorializing the content of their users, but there are clearly improvements that could be made. For example, we know that the use of structured data and visual aids (i.e., graphs and charts) improves performance on this question. BusinessWire could extract stock exchange symbols from press releases and include graphs and data related to the business right in the post. This would separate their content from other press release sites that simply reproduce the content verbatim. There are dozens of other potential improvements that can be added either programmatically or by an editor. So, what exactly would these kinds of changes look like?

In this case, we simply inserted a graph from stock exchange data and included, on the right-hand side, some data from Freebase on the Securities and Exchange Commission, which could easily be extracted as an entity from the document using, for example, Alchemy API. These modest improvements to the page increased the “insightful analysis” review score by 15%.

Q2. Would you trust this site with your credit card?

This is one of the most difficult ideals to measure up to. E-commerce sites, in general, perform better automatically, but there are clear distinctions between sites people trust and don’t trust. Press release websites do have an e-commerce component, so one would expect them to fare well compared to non-commercial sites. Unfortunately, this is just not the case. PR.com failed this question in what can only be described as epic fashion. 91% of users said they would not trust the site with their credit card details. This isn’t just a Panda issue for PR.com; this is a survival-of-the-business issue.

Luckily, there are some really clear, straightforward solutions to address this problem.

  • Extend HTTPS/SSL Sitewide
    Not every site needs to have HTTPS enabled, but if you have a 600,000+ page site with e-commerce functionality, let’s just go ahead and assume you do. Users will immediately trust your site more if they see that pretty little lock icon in their browser. 
  • Site Security Solutions
    Take advantage of solutions like Comodo Hacker Proof or McAfee SiteAdvisor to verify that your site is safe and secure. Include the badges and link to them so that both users and the bots know that you have a safe site.
  • Business Reputation Badges
    Use at least one trade group or business reputation group (like the Better Business Bureau) or, at minimum, employ some form of schema review markup that makes it clear to your users that at least some person or group of persons out there trusts your site. If you use a trade group membership or the BBB, make sure you link to them so that, once again, it is clear to the bots as well as your users.
  • Up-to-date Design
    This is a clear issue time and time again. In the technology world, old means insecure. The site PR.com looks old-fashioned in every sense of the word, especially in comparison to the other press release websites. It is no wonder that it performs so horribly.

It is worth pointing out here that Google doesn’t need to find markup on your site to come to the conclusion that your site is untrustworthy. Because the Panda algorithm likely takes into account engagement metrics and behaviors (like pogo sticking), Google can use the behavior of users to predict performance on these questions. So, just because there isn’t a clear path between a change you make on your site and Googlebot’s ability to identify that change doesn’t mean the change cannot and will not have an impact on site performance in the search results. The days of thinking about your users and the bots as separate audiences are gone. The bots now measure both your site and your audience. Your impact on users can and will have an impact on search performance.

Q3. Do you consider this site an authority?

This question is particularly difficult for sites that both don’t control the content they create and have a wide variety of content. This places press release websites squarely in the bullseye of the Panda algorithm. How does a website that accepts thousands of press releases on nearly any topic dare claim to be an authority? Well, it generally doesn’t, and the numbers bear that out. 75% of respondents wouldn’t consider PRNewswire an authority. 

Notice, though, that Wikipedia performs poorly on this metric as well (at least compared to EDUs and GOVs). So what exactly is going on here? How can a press release site hope to escape from this authority vacuum? 

  • Topically Segment Content
    This was one of the very first reactions to Panda. Many of the sites that were hit with Panda 1.0 sub-domained their content into particular topic areas. This seemed to provide some relief but was never a complete or permanent solution. Whether you segment your content into sub-directories or sub-domains, what you are really doing here is making clear to your users that the specific content they are reading is part of a bigger piece of the pie. It isn’t some random page on your site; it fits in nicely with your website’s stated aims.
  • Create an Authority
    Just because you don’t write the content for your site doesn’t mean you can’t be authoritative. In fact, most major press release websites have some degree of editorial oversight sitting between the author and the website. That editorial layer needs to be bolstered and exposed to the end user, making it obvious that the website does more than simply regurgitate the writing of anyone with a few bucks. 

So, what exactly would this look like? Let’s return to the Businesswire press release we were looking at earlier. We started with a bland page comprised of almost nothing but the press release. We then added a graph and some structured data automagically. Now, we want to add in some editor creds and topic segmentation.

Notice in the new design that we have created the “Securities & Investment Division”, added an editor with a fancy title “Business Desk Editor” and a credentialed by-line. You could even use authorship publisher markup. The page no longer looks like a sparse press release but an editorially managed piece of news content in a news division dedicated to this subject matter. Authority done.

Q4. Would you consider bookmarking/sharing this site?

When I look at this question, I am baffled. Seriously, how do you make a site in which you don’t control the content worth bookmarking or sharing? Furthermore, how do you do this with overtly commercial, boring content like press releases? As you can imagine, press release sites fare quite poorly on this. Over 85% of respondents said they weren’t interested at all in bookmarking or sharing content from PRWeb.com. And why should they?

So, how exactly does a press release website encourage users to share? The most common recommendations are already in place on PRWeb. They are quite overt with the usage of social sharing and bookmarking buttons (placed right at the top of the content). Their content is constantly fresh because new press releases come out every day. If these techniques aren’t working, then what will?

The problem with bookmarking and sharing on press release websites is two-fold. First, the content is overtly commercial so users don’t want to share it unless the press release is about something truly interesting. Secondly, the content is ephemeral so users don’t want to return to it. We have to solve both of these problems.

Unfortunately, I think the answer to this question is some tough medicine for press release websites. The solution is multi-faceted. It starts with putting a meta expires tag on press releases. Sorry, but there is no reason for PRWeb to maintain a 2009 press release about a business competition in the search results. In its place, though, should be company and/or categorical pages which thoughtfully index and organize archived content. While LumaDerm may lose their press release from 2009, they would instead have a page on the site dedicated to their press releases, so that the content is still accessible, albeit one click away, and the search engines know to ignore it. With this solution, the pages that end up ranking in the long run for valuable words and phrases are the aggregate pages that truly do offer authoritative information on what is up-and-coming with the business. The page is sticky because it is updated as often as the business releases new information; you still get some of the shares out of new releases, but you don’t risk the problems of PR sprawl and crawl prioritization. Aside from the initial bump of fresh content, there is no good SEO reason to keep old press releases in the index.

So, I don’t own a press release site…

Most of us don’t run sites with thousands of pages of low quality content. But that doesn’t mean we shouldn’t be cognizant of Panda. Of all of Google’s search updates, Panda is the one I respect the most. I respect it because it is an honest attempt to measure quality. It doesn’t ask how you got to your current position in the search results (a classic genetic fallacy problem), it simply asks whether the page and site itself deserve that ranking based on human quality measures (as imperfect as it may be at doing so). Most importantly, even if Google didn’t exist at all, you should aspire to have a website that scores well on all of these metrics. Having a site that performs well on the Panda questions means more than insulation from a particular algorithm update, it means having a site that performs well for your users. That is a site you want to have.

Take a look again at the questionnaire. Does your site honestly meet these standards? Ask someone unbiased. If your site does, then congratulations – you have an amazing site. But if not, it is time to get to work building the site that you were meant to build.



Stop Worrying About the New Google Maps; These URL Parameters Are Gold

Posted by David-Mihm

I suspect I’m not alone in saying: I’ve never been a fan of the New Google Maps.

In the interstitial weeks between that tweet and today, Google has made some noticeable improvements. But the user experience still lags in many ways relative to the classic version (chief among them: speed).

Google’s invested so heavily in this product, though, that there’s no turning back at this point. We as marketers need to come to terms with a product that will drive an increasing number of search results in the future.

Somewhat inspired by this excellent Pete Wailes post from many years ago, I set out last week to explore Google Maps with a fresh set of eyes and an open mind to see what I could discover about how it renders local business results. Below is what I discovered.

Basic URL structure

New Google Maps uses a novel URL structure (novel for me, anyway) that is not based around the traditional ? and & parameters of Classic Google Maps, but instead uses /’s and something called hashbangs to tell the browser what to render.

The easiest way to describe the structure is to illustrate it:
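
Here is a rough sketch of how the pieces fit together, assembled in JavaScript. The example values are borrowed from the grocery-store search that appears later in this post, and the variable names are just labels for illustration:

var base    = "https://www.google.com/maps/search/";
var query   = "grocery+stores";                  // the search phrase, with + for spaces
var view    = "@45.5424364,-122.654422,11z";     // centroid latitude, longitude, and zoom level
var listMod = "am=t";                            // list-view modifier: after /search, before any hashbangs
var url = base + query + "/" + view + "/" + listMod + "/";
// => https://www.google.com/maps/search/grocery+stores/@45.5424364,-122.654422,11z/am=t/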

There are also some additional useful hashbang parameters relating to local queries that I’ll describe in further detail below.

Some actual feature improvements

Despite the performance issues, New Google Maps has introduced at least two useful URL modifiers I’ve grown to love.

/am=t

This generates a stack-ranked list of businesses in a given area that Google deems relevant for the keyword you’re searching. It’s basically the equivalent of the list on the lefthand panel in Classic Google Maps but much easier to get to via direct URL. Important: am=t must always be placed after /search and before the hashbang modifiers, or else the results will break.

by:experts

This feature shows you businesses that have been reviewed by Google+ experts (the equivalent of what we’ve long called “power reviewers” or “authority reviewers” on my annual Local Search Ranking Factors survey). To my knowledge it’s the first time Google has publicly revealed who these power users are, opening up the possibility of an interesting future study correlating PlaceRank with the presence, valence, and volume of these reviews. In order to see these power reviewers, it seems like you have to be signed into a Google+ account, but perhaps others have found a way around this requirement.

Combining these two parameters yields incredibly useful results like these, which could form the basis for an influencer-targeting campaign:

Above: a screenshot of the results for: https://www.google.com/maps/search/grocery+stores+by:experts/@45.5424364,-122.654422,11z/am=t/

Local pack results and the vacuum left by tbm=plcs

Earlier this week, Steve Morgan noticed that Google crippled the ability to render place-based results from a Google search (ex: google.com/search?q=realtors&tbm=plcs). Many local rank-trackers were based on the results of these queries.

Finding a replacement for this parameter in New Google Maps turns out to be a little more difficult than it would first appear. You’ll note in the summary of URL structure above that each URL comes with a custom-baked centroid. But local pack results on a traditional Google SERP each have their own predefined viewport — i.e. the width, height, and zoom level that most closely captures the location of each listing in the pack, making it difficult to determine the appropriate zoom level.

Above: the primary SERP viewport for ‘realtors’ with location set to Seattle, WA.

Note that if you click that link of “Map for realtors” today, and then add the /am=t parameter to the resulting URL, you tend to get a different order of results than what appears in the pack.

I’m not entirely sure why the order changes. One theory is that Google is now back to blending pack results (using both organic and maps algorithms). Another theory is that the aspect ratio of the viewport in the /am=t window is invariably square, which yields a different set of relevant results than the “widescreen” viewport on the primary SERP.

One thing I have found helps with replicability is to leave the @lat,lng,zoom parameters out of the URL, and let Google automatically generate them for you.

Here are a couple of variations that I encourage you to try:

https://www.google.com/maps/search/realtors/am=t/data=
followed by:
!3m1!4b1!1srealtors!2sSeattle,+WA!3s0x5490102c93e83355:0x102565466944d59a
or
!3m1!4b1!4m5!2m4!3m3!1srealtors!2sSeattle,+WA!3s0x5490102c93e83355:0x102565466944d59a

Take a closer look at those trailing parameters and you’ll see a structure that looks like this:
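
As a rough sketch of those trailing parameters (what each !-prefixed token means is my inference from the examples above, not something Google documents):

var flags     = "!3m1!4b1!4m5!2m4!3m3";                      // view/layout flags (the longer combination; the shorter is just !3m1!4b1)
var keyword   = "!1srealtors";                               // the keyword searched
var location  = "!2sSeattle,+WA";                            // the location searched
var featureId = "!3s0x5490102c93e83355:0x102565466944d59a";  // Feature ID of the centroid (Seattle)
var data = flags + keyword + location + featureId;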

The long string starting with 0x and ending with 9a is the Feature ID of the centroid of the area in which you’re searching (in this case, Seattle). Incidentally, this feature ID is also rendered by Google Mapmaker using a URL similar to http://www.google.com/mapmaker?gw=39&fid={your_fid}.

This is the easy part. You can find this string by typing the URL:

https://www.google.com/maps/place/seattle,+WA

waiting for the browser to refresh, and then copying it from the end of the resulting URL.

The hard part is figuring out which hashbang combo will generate which order of results, and I still haven’t been able to do it. I’m hoping that by publishing this half-complete research, some enterprising Moz reader might be able to complete the puzzle! And there’s also the strong possibility that this theory is completely off base.

In my research thus far, the shorter hashbang combination (!3m1!4b1) seems to yield the closest results to what tbm=plcs used to render, but they aren’t 100% identical.

The longer hashbang combination (!3m1!4b1!4m5!2m4!3m3) actually seems to predictably return the same set of results as a Local search on Google Plus — and note the appearance of the pushpin icon next to the keyword when you add this longer combination:

Who’s #1?

Many of us in the SEO community, even before the advent of (not provided), encouraged marketers and business owners to stop obsessing about individual rankings and start looking at visibility in a broader sense. Desperately scrambling for a #1 ranking on a particular keyword has long been a foolish waste of resources.

Google’s desktop innovations in local search add additional ammunition to this argument. Heat map studies have shown that the first carousel result is far from dominant, and that a compelling Google+ profile photo can perform incredibly well even as far down the “sixth or seventh” (left to right) spot.  Ranking #1 in the carousel doesn’t provide quite the same visual benefit as ranking #1 in an organic SERP or 7-pack.

The elimination of the lefthand list pane on New Google Maps makes an even stronger case. It’s literally impossible to rank these businesses visually no matter how hard you stare at the map:

Mobile, mobile, mobile

Paradoxically, though, just as Google is moving away from ranked results on the desktop, my view is that higher rankings matter more than ever in mobile search. And as mobile and wearables continue to gain market share relative to desktop, that trend is likely to increase.

The increasing ubiquity of Knowledge Panels in search results the past couple of years has been far from subtle. Google is now not only attempting to organize the world’s information, but condense each piece of it into a display that will fit on a Google Glass (or Google Watch, or certainly a Google Android phone).

Nowhere is the need to be #1 more dramatic than in the Google Maps app, in which users perform an untold number of searches each month. List view is completely hidden (I didn’t even know it existed until this week) and an average user is just as likely to think the first result is the only one for them as they are to figure out they need to swipe right to view more businesses.

Above: a Google Maps app result for ‘golf courses’, in which the first result has a big-time advantage.

The other issue that mobile results really bring to the fore is that the user is becoming the centroid.

This is true even when searching from the desktop. I performed some searches one morning from a neighborhood coffee shop with wifi, and a few minutes later from my house six blocks away. To my surprise, I got completely different results. From my house, Google is apparently only able to detect that I’m somewhere in “Portland.” But from the coffee shop, it was able to detect my location at a much more granular level (presumably due to the coffee shop’s wifi?), and showed me results specific to my ZIP code, with the centroid placed at the center of that ZIP. And the zoom setting for both adjusted automatically: the more granular ZIP code targeting defaulted to a zoom level of 15z or 16z, versus 11z to 13z from my home, where Google wasn’t as sure of my location.

Note, too, that I was unable to be exact about the zoom level in the previous paragraph. That’s because the centroid is category-dependent. It likely always has been category dependent but that fact is much more noticeable in New Google Maps.

Maps app visibility

Taking both of these into account, when it comes to replicating Google Maps app visibility, here is a case where specifying @lat,lng,zoom (with the zoom set to 17z) can be incredibly useful.

As an example, I performed the search below from my iPhone at the hotel I was staying at in Little Italy after a recent SEM SD event, and was able to replicate the results with this URL string on desktop:

http://google.com/maps/search/lawyers/@32.723278,-117.168528,17z/am=t/data=!3m1!4b1

Conclusions and recommendations

While I still feel the user experience of New Google Maps is subpar, as a marketer I found myself developing a very Strangelovian mindset over the past week or so — I have actually learned to stop worrying and love the new Google Maps. There are some incredibly useful new URL parameters that allow for a far more complete picture of local search visibility than the classic Google Maps provided.

With this column, I wanted to at least present a first stab to the Moz community to hopefully build on and experiment with. But this is clearly an area that is ripe for more research, particularly with an eye towards finding a complete replacement for the old tbm=plcs parameter.

As mobile usage continues to skyrocket, identifying the opportunities in your (or your client’s) competitive set using the new Google Maps will only become more important.



One Content Metric to Rule Them All

Posted by Trevor-Klein

Let’s face it: Measuring, analyzing, and reporting the success of content marketing is hard.

Not only that, but we’re all busy. In its latest report on B2B trends, the Content Marketing Institute quantified some of the greatest challenges faced by today’s content marketers, and a whopping 69% of companies cited a lack of time. We spend enough of our time sourcing, editing, and publishing the content, and anyone who has ever managed an editorial calendar knows that fires are constantly in need of dousing. With so little extra time on our hands, the last thing content marketers want to do is sift through a heaping pile of data that looks something like this:

Sometimes we want to dig into granular data. If a post does exceptionally well on Twitter, but just so-so everywhere else, that’s noteworthy. But when we look at individual metrics, it’s far too easy to read into them in all the wrong ways.

Here at Moz, it’s quite easy to think that a post isn’t doing well when it doesn’t have a bunch of thumbs up, or to think that we’ve made a horrible mistake when a post gets several thumbs down. The truth is, though, that we can’t simply equate metrics like thumbs to success. In fact, our most thumbed-down post in the last two years was one in which Carson Ward essentially predicted the recent demise of spammy guest blogging.

We need a solution. We need something that’s easy to track at a glance, but doesn’t lose the forest for the trees. We need a way to quickly sift through the noise and figure out which pieces of content were really successful, and which didn’t go over nearly as well. We need something that looks more like this:

This post walks through how we combined our content metrics for the Moz Blog into a single, easy-to-digest score, and better yet, almost completely automated it.

What it is not

It is not an absolute score. Creating an absolute score, while the math would be equally easy, simply wouldn’t be worthwhile. Companies that are just beginning their content marketing efforts would consistently score in the single digits, and it isn’t fair to compare a multi-million dollar push from a giant corporation to a best effort from a very small company. This metric isn’t meant to compare one organization’s efforts with any other; it’s meant to be used inside of a single organization.

What it is and what it measures

The One Metric is a single score that tells you how successful a piece of content was by comparing it to the average performance of the content that came before it. We made it by combining several other metrics, or “ingredients,” that fall into three equally weighted categories:

  1. Google Analytics
  2. On-page (in-house) metrics
  3. Social metrics

It would never do to simply smash all these metrics together, as the larger numbers would inherently carry more weight. In other words, we cannot simply take the average of 10,000 visits and 200 Facebook likes, as Facebook would be weighted far more heavily—moving from 200 to 201 likes would be an increase of 0.5%, and moving from 10,000 to 10,001 visits would be an increase of 0.01%. To ensure every one of the ingredients is weighted equally, we compare them to our expectations of them individually.

Let’s take a simple example using only one ingredient. If we wanted to get a sense for how well a particular post did on Twitter, we could obviously look at the number of tweets that link to it. But what does that number actually mean? How successful is a post that earns 100 tweets? 500? 2,000? In order to make sense of it, we use past performance. We take everything we’ve posted over the last two months, and find the average number of tweets each of those posts got. (We chose two months; you can use more or less if that works better for you.) That’s our benchmark—our expectation for how many tweets our future posts will get. Then, if our next post gets more than that expected number, we can safely say that it did well by our own standards. The actual number of tweets doesn’t really matter in this sense—it’s about moving up and to the right, striving to continually improve our work.
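
As a minimal sketch in plain JavaScript (the numbers and function name are just for illustration), scoring one ingredient against that expectation looks like this:

// Score one ingredient as a proportion of the average of past posts
function percentOfExpectation(value, pastValues) {
  var sum = 0;
  for (var i = 0; i < pastValues.length; i++) {
    sum += pastValues[i];
  }
  var average = sum / pastValues.length;
  return value / average;   // 1.0 = met expectations, 0.5 = half as well, 2.0 = twice as well
}

// e.g. a post that earned 460 tweets against a two-month average of 500 tweets
// comes out at 0.92, or 92% of expectations.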

Here’s a more visual representation of how that looks:

Knowing a post did better or worse than expectations is quite valuable, but how much better or worse did it actually do? Did it barely miss the mark, or did it completely tank? It’s time to quantify.

It’s that percentage of the average (92% and 73% in the examples above) that we use to seed our One Metric. For any given ingredient, if we have 200% of the average, we have a post that did twice as well as normal. If we have 50%, we have a post that did half as well.

From there, we do the exact same thing for all the other ingredients we’d like to use, and then combine them:

This gives us a single metric that offers a quick overview of a post’s performance. In the above example, our overall performance came out to 113% of what we’d expect based on our average performance. We can say it outperformed expectations by 13%.
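
Continuing that sketch, the combination step averages the ingredient scores within each category, then weights the three categories equally. The numbers below are made up, apart from the 92% and 73% figures mentioned above:

var analytics = (1.40 + 0.73) / 2;           // e.g. visits, time on page
var onPage    = (1.10 + 0.95) / 2;           // e.g. thumbs up, comments
var social    = (0.92 + 1.70 + 1.30) / 3;    // e.g. tweets, Facebook likes, +1s
var overall   = (analytics + onPage + social) / 3;   // ≈ 1.13, i.e. 113% of expectations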

We don’t stop there, though. This percent of the average is quite useful… but we wanted this metric to be useful outside of our own minds. We wanted it to make sense to just about anyone who looked at it, so we needed a different scale. To that end, we took it one step farther and applied that percentage to a logarithmic scale, giving us a single two-digit score much like you see for Domain Authority and Page Authority.

If you’re curious, we used the following equation for our scale (though you should feel free to adjust that equation to create a scale more suitable for your needs):

y = 27*ln(x) + 50, where y is the One Metric score, and x is the percent of a post’s expected performance it actually received, expressed as a decimal (so 113% is 1.13). Essentially, a post that exactly meets expectations (x = 1) receives a score of 50.

For the above example, an overall percentage of expectations that comes out to 113% translates as follows:
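
Working that through with the equation above: 27 × ln(1.13) + 50 ≈ 3.3 + 50, so a post that hits 113% of expectations earns a One Metric score of roughly 53.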

Of course, you won’t need to calculate the value by hand; that’ll be done automatically in a spreadsheet. Which is actually a great segue…

The whole goal here is to make things easy, so what we’re going for is a spreadsheet where all you have to do is “fill down” for each new piece of content as it’s created. About 10-15 seconds of work for each piece. Unfortunately, I can’t simply give you a ready-to-go template, as I don’t have access to your Google Analytics, and have no clue how your on-page metrics might be set up. 

As a result, this might look a little daunting at first.

Once you get things working once, though, all it takes is copying the formulas into new rows for new pieces of content; the metrics will be filled automatically. It’s well worth the initial effort.

Ready? Start here:

Make a copy of that document so you can make edits (File > Make a Copy), then follow the steps below to adjust that spreadsheet based on your own preferences.

  1. You’ll want to add or remove columns from that sheet to match the ingredients you’ll be using. Do you not have any on-page metrics like thumbs or comments? No problem—just delete them. Do you want to add Pinterest repins as an ingredient? Toss it in there. It’s your metric, so make it a combination of the things that matter to you.
  2. Get some content in there. Since the performance of each new piece of content is based on the performance of what came before it, you need to add the “what came before it.” If you’ve got access to a database for your organization (or know someone who does), that might be easiest. You can also create a new tab in that spreadsheet, then use the =IMPORTFEED function to automatically pull a list of content from your RSS feed (there’s an example formula just after this list).
  3. Populate the first row. You’ll use a variety of functionality within Google Spreadsheets to pull the data you need in from various places on the web, and I go through many of them below. This is the most time-consuming part of setting this up; don’t give up!
  4. Got your data successfully imported for the first row? Fill down. Make sure it’s importing the right data for the rest of your initial content.
  5. Calculate the percentage of expectations. Depending on how many ingredients you’re using, this equation can look mighty intimidating, but that’s really just a product of the spreadsheet smooshing it all onto one line. Here’s a prettier version:
    All this is doing (remember Step 2 above, where we combined the ingredients) is comparing each individual metric to past performance, and then weighting them appropriately.

    And, here’s what that looks like in plain text for our metric (yours may vary):
    =((1/3)*(E48/(average(E2:E47))))+((1/3)*((F48/(average(F2:F47)))+(G48/(average(G2:G47))))/2)+((1/3)*(((H48/(average(H2:H47)))+(I48/(average(I2:I47)))+(J48/(average(J2:J47))))/3))

    Note that this equation goes from row 2 through row 47 because we had 46 pieces of content that served to create our “expectation.”

  6. Convert it to the One Metric score. This is a piece of cake. You can certainly use our logarithmic equation (referenced above): y = 27*ln(x) +50, where x is the percent of expectations you just finished calculating. Or, if you feel comfortable adjusting that to suit your own needs, feel free to do that as well.
  7. You’re all set! Add more content, fill down, and repeat!
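
As promised in step 2, here is what pulling your existing posts in via RSS might look like. The feed URL is a placeholder for your own; "items url" asks the function for just the post URLs, FALSE suppresses the header row, and 100 caps the number of items returned:

=IMPORTFEED("http://feeds.feedburner.com/YourBlog", "items url", FALSE, 100)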

Update: A word of caution
After some great discussion in the comments below, I thought it prudent to include a word of caution about how this metric is used. For one thing, be smart about what the numbers actually mean, and keep the following points in mind:

  1. Make sure you have a sufficiently ample benchmark. If you only have three posts with which to set the “expected” values, then your performance against that expectation isn’t going to mean much. I’d recommend having at least 10-15 posts from which you calculate that expectation before you apply the One Metric score to any subsequent posts.
  2. Be smart about averages, and know that this doesn’t mean you can discard all the rest of your metrics. By smooshing all of these metrics together, we effectively soften the impact of any outliers. Thanks to Pete Wailes for this XKCD reference; while you shouldn’t lose the forest for the trees, sometimes individual trees are quite important.
  3. As with any metric, knowing what to do with the One Metric takes a keen awareness of why a certain piece of content performed the way it did, and checking any intended actions against your organization’s goals. There’s a reason marketers haven’t been replaced by algorithms: It’s up to you and your brain to turn these metrics into actual insights.

</caution>

Here are more detailed instructions for pulling various types of data into the spreadsheet:

Adding new rows with IFTTT

If This Then That (IFTTT) makes it brilliantly easy to have your new posts automatically added to the spreadsheet where you track your One Metric. The one catch is that your posts need to have an RSS feed set up (more on that from FeedBurner). Sign up for a free IFTTT account if you don’t already have one, and then set up a recipe that adds a row to a Google Spreadsheet for every new post in the RSS feed.

When creating that recipe, make sure you include “Entry URL” as one of the fields that’s recorded in the spreadsheet; that’ll be necessary for pulling in the rest of the metrics for each post.

Also, IFTTT shortens URLs by default, which you’ll want to turn off, since the shortened URLs won’t mean anything to the APIs we’re using later. You can find that setting in your account preferences.

Pulling Google Analytics

One of the beautiful things about using a Google Spreadsheet for tracking this metric is the easy integration with Google Analytics. There’s an add-on for Google Spreadsheets that makes pulling in just about any metric a simple process. The only downside is that even after setting things up correctly, you’ll still need to manually refresh the data.

To get started,  install the add-on. You’ll want to do so while using an account that has access to your Google Analytics.

Then, create a new report; you’ll find the option under “Add-ons > Google Analytics:”

Select the GA account info that contains the metrics you want to see, and choose the metrics you’d like to track. Put “Page” in the field for “Dimensions;” that’ll allow you to reference the resulting report by URL.

You can change the report’s configuration later on, and if you’d like extra help figuring out how to fiddle with it, check out Google’s documentation.

This will create (at least) two new tabs on your spreadsheet; one for Report Configuration, and one for each of the metrics you included when creating the report. On the Report Configuration tab, you’ll want to be sure you set the date range appropriately (I’d recommend setting the end date fairly far in the future, so you don’t have to go back and change it later). To make things run a bit quicker, I’d also recommend setting a filter for the section(s) of your site you’d like to evaluate. Last but not least, the default value for “Max Results” is 1,000, so if you have more pages than that, I’d change that, as well (the max value is 10,000).

Got it all set up? Run that puppy! Head to Add-ons > Google Analytics > Run Reports. Each time you return to this spreadsheet to update your info, you’ll want to click “Run Reports” again, to get the most up-to-date stats.

There’s one more step. Your data is now in a table on the wrong worksheet, so we need to pull it over using the VLOOKUP formula. Essentially, you’re telling the spreadsheet, “See that URL over there? Find it in the table on that report tab, and tell me what the number is next to it.” If you haven’t used VLOOKUP before, it’s well worth learning. There’s a fantastic explanation over at Search Engine Watch if you could use a primer (or a refresher).

One additional detail worth noting (thanks to rorynatkiel for pointing it out in the comments): You may need to use a =CONCAT to add the “http://” in while you’re pulling from that report, as the GA report doesn’t include it.
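
As a hedged sketch of how that lookup might look (the tab name, columns, and domain below are placeholders for however your own sheet is laid out): on the report tab, rebuild full URLs in a helper column with something like =CONCAT("http://www.example.com", A15), then pull the metric into your main tab with:

=VLOOKUP(B2, 'Pageviews Report'!D:E, 2, FALSE)

Here B2 holds the post's URL, and columns D:E on the report tab hold the rebuilt URL and its pageview count.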

Pulling in social metrics with scripts

This is a little trickier, as Google Spreadsheets doesn’t include a built-in way to pull in social metrics, and that info isn’t included in GA. The solution? We create our own functions for the spreadsheet to use.

Relax; it’s not as hard as you’d think. =)

I’ll go over Facebook, Twitter, and Google Plus here, though the process would undoubtedly be similar for any other social network you’d like to measure.

We start in the script editor, which you’ll find under the tools menu:

If you’ve been there before, you’ll see a list of scripts you’ve already made; just click “Create a New Project.” If you’re new to Google Scripts, it’ll plop you into a blank project—you can just dismiss the popup window that tries to get you started.

Google Scripts organizes what you create into “projects,” and each project can contain multiple scripts. You’ll only need one project here—just call it something like “Social Metrics Scripts”—and then create a new script within that project for each of the social networks you’d like to include as an ingredient in your One Metric.

Once you have a blank script ready for each network, go through one by one, and paste the respective code below into the large box in the script editor (make sure to replace the default “myFunction” code).

function fbshares(url) {
  // Fetch link share stats for the URL from Facebook's links.getStats endpoint
  var jsondata = UrlFetchApp.fetch("http://api.facebook.com/restserver.php?method=links.getStats&format=json&urls=" + url);
  var object = Utilities.jsonParse(jsondata.getContentText());
  Utilities.sleep(1000); // brief pause so we don't hammer the API when filling down many rows
  return object[0].total_count;
}

function tweets(url) {
  // Fetch the tweet count for the URL from Twitter's URL count endpoint
  var jsondata = UrlFetchApp.fetch("http://urls.api.twitter.com/1/urls/count.json?url=" + url);
  var object = Utilities.jsonParse(jsondata.getContentText());
  Utilities.sleep(1000);
  return object.count;
}

function plusones(url) {
  // Query Google's RPC endpoint for the +1 count of the URL
  var options = {
    "method" : "post",
    "contentType" : "application/json",
    "payload" : '{"method":"pos.plusones.get","id":"p","params":{"nolog":true,"id":"' + url + '","source":"widget","userId":"@viewer","groupId":"@self"},"jsonrpc":"2.0","key":"p","apiVersion":"v1"}'
  };
  var response = UrlFetchApp.fetch("https://clients6.google.com/rpc?key=AIzaSyCKSbrvQasunBoV16zDH9R33D88CeLr9gQ", options);
  var results = JSON.parse(response.getContentText());
  if (results.result != undefined) {
    return results.result.metadata.globalCounts.count;
  }
  return "Error";
}

Make sure you save these scripts—that isn’t automatic like it is with most Google applications. Done? You’ve now got the following functions at your disposal in Google Spreadsheets:

  • =fbshares(url)
  • =tweets(url)
  • =plusones(url)

The (url) in each of those cases is where you’ll point to the URL of the post you’re trying to analyze, which should be pulled in automatically by IFTTT. Voila! Social metrics.

Pulling on-page metrics

You may also have metrics built into your site that you’d like to use. For example, Moz has thumbs up on each post, and we also frequently see great discussions in our comments section, so we use both of those as success metrics for our blog. Those can usually be pulled in through one of the following two methods.

But first, obligatory note: Both of these methods involve scraping a page for information, which is obviously fine if you’re scraping your own site, but it’s against the ToS for many services out there (such as Google’s properties and Twitter), so be careful with how you use these.

=IMPORTXML

While getting it set up correctly can be a little tricky, this is an incredibly handy function, as it allows you to scrape a piece of information from a page using an XPath. As long as your metric is displayed somewhere on the URL for your piece of content, you can use this function to pull it into your spreadsheet.

Here’s how you format the function:
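
The general shape is =IMPORTXML(url, xpath_query). As a hedged sketch, where B2 holds the post's URL and the XPath is a placeholder for whatever element displays your metric:

=IMPORTXML(B2, "//div[@class='thumb-count']")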

If you’d like a full tutorial on XPaths (they’re quite useful), our friends at Distilled put together a really fantastic guide to using them for things just like this.  It’s well worth a look. You can skip that for now, if you’d rather, as you can find the XPath for any given element pretty quickly with a tool built into Chrome.

Right-click on the metric you’d like to pull, and click on “Inspect element.”

That’ll pull up the developer tools console at the bottom of the window, and will highlight the line of code that corresponds to what you clicked. Right-click on that line of code, and you’ll have the option to “Copy XPath.” Have at it.

That’ll copy the XPath to your clipboard, which you can then paste into the function in Google Spreadsheets.

Richard Baxter of BuiltVisible created a wonderful  guide to the IMPORTXML function a few years ago; it’s worth a look if you’d like more info.

Combining =INDEX with =IMPORTHTML

If your ingredient is housed in a <table> or a list (ordered or unordered) on your pages, this method might work just as well.

=IMPORTHTML simply plucks the information from a list or table on a given URL, and =INDEX pulls the value from a cell you specify within that table. Combining them creates a function something like this:
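
As a hedged sketch (the cell reference, the table index, and the row and column numbers are placeholders for your own page):

=INDEX(IMPORTHTML(B2, "table", 1), 3, 2)

This imports the first table on the page at B2 and returns the value sitting in its third row, second column.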

Note that without the INDEX function, the IMPORTHTML function will pull in the entire piece of content it’s given. So, if you have a 15-line table on your page and you import that using IMPORTHTML, you’ll get the entire table in 15 rows in your spreadsheet. INDEX is what restricts it to a single cell in that table. For more on this function, check out this quick tutorial.


Taking it to the next level

I’ve got a few ideas in the works for how to make this metric even better. 

Automatically check for outlier ingredients and flag them

One of the downsides of smooshing all of these ingredients together is missing out on the insights that individual metrics can offer. If one post did fantastically well on Facebook, for example, but ended up with a non-remarkable One Metric score, you might still want to know that it did really well on Facebook.

In the next iteration of the metric, my plan is to have the spreadsheet automatically calculate not only the average performance of past content, but also the standard deviation. Then, whenever a single piece differs by more than a couple of standard deviations (in either direction), that ingredient will get called out as an outlier for further review.

Break out the categories of ingredients

In the graphic above, we combined the ingredients into categories in order to calculate an overall average; it might also help to monitor those individual categories on their own. You might, then, have a spreadsheet that looked something like this:

Make the weight of each category adjustable based on current goals

As it stands, each of those three categories is given equal weight in coming up with our One Metric scores. If we broke the categories out, though, they could be weighted differently to reflect our company’s changing goals. For example, if increased brand awareness was a goal, we could apply a heavier weight to social metrics. If retention became more important, on-page metrics from the existing community could be weighted more heavily. That weighting would adapt the metric to be a truer representation of the content’s performance against current company goals.



I hope this comes in as handy for everyone else’s analysis as it has for my own. If you have any questions and/or feedback, or any other interesting ways you think this metric could be used, I’d love to hear from you in the comments!



Setting Up 4 Key Customer Loyalty Metrics in Google Analytics

Posted by Tom.Capper

Customer loyalty is one of the strongest assets a business can have, and one that any can aim to improve. However, improvement requires iteration and testing, and iteration and testing require measurement.

Traditionally, customer loyalty has been measured using customer surveys. The Net Promoter Score, for example, is based on a single question, answered on a scale of zero to ten: “How likely is it that you would recommend our company/product/service to a friend or colleague?” Regularly monitoring metrics like this with any accuracy is going to get expensive (and/or annoying to customers), and is never going to be hugely meaningful, as advocacy is only one dimension of customer loyalty. Even with a wider range of questions, there’s also some risk that you end up tracking what your customers claim about their loyalty rather than their actual loyalty, although you might expect the two to be strongly correlated.

Common mistakes

Google Analytics and other similar platforms collect data that could give you more meaningful metrics for free. However, they don’t always make them completely obvious – before writing this post, I checked to be sure there weren’t any very similar ones already published, and I found some fairly dubious recurring recommendations. The most common of these was using % of return visitors as a sole or primary metric for customer loyalty. If the percentage of visitors to your site who are return visitors drops, there are plenty of reasons that could be behind that besides a drop in loyalty—a large number of new visitors from a successful marketing campaign, for example. Similarly, if the absolute number of return visitors rises, this could be as easily caused by an increase in general traffic levels as by an increase in the loyalty of existing customers.

Visitor frequency is another easily misinterpreted metric;  infrequent visits do not always indicate a lack of loyalty. If you were a loyal Mercedes customer, and never bought any car that wasn’t a new Mercedes, you wouldn’t necessarily visit their website on a weekly basis, and someone who did wouldn’t necessarily be a more loyal customer than you.

The metrics

Rather than starting with the metrics Google Analytics shows us and deciding what they mean about customer loyalty (or anything else), a better approach is to decide what metrics you want, then work out how you can replicate them in Google Analytics.

To measure the various dimensions of (online) customer loyalty well, I felt the following metrics would make the most sense:

  • Proportion of visitors who want to hear more
  • Proportion of visitors who advocate
  • Proportion of visitors who return
  • Proportion of macro-converters who convert again

Note that a couple of these may not be what they initially seem. If your registration process contains an awkwardly worded checkbox for email signup, for example, it’s not a good measure of whether people want to hear more. Secondly, “proportion of visitors who return” is not the same as “proportion of visitors who are return visitors.”

1. Proportion of visitors who want to hear more

This is probably the simplest of the above metrics, especially if you’re already tracking newsletter signups as a micro-conversion. If you’re not, you probably should be, so see Google’s guidelines for event tracking using the analytics.js tracking snippet or Google Tag Manager, and set your new event as a goal in Google Analytics.
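
If you’re using the analytics.js snippet, the event itself can be as simple as the sketch below; the category and action names are placeholders you would choose yourself:

// Fire this when the newsletter signup form is successfully submitted
ga('send', 'event', 'Newsletter', 'Signup');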

2. Proportion of visitors who advocate

It’s never possible to track every public or private recommendation, but there are two main ways that customer advocacy can be measured in Google Analytics: social referrals and social interactions. Social referrals may be polluted as a customer loyalty metric by existing campaigns, but these can be segmented out if properly tracked, leaving the social acquisition channel measuring only organic referrals.

Social interactions can also be tracked in Google Analytics, although surprisingly, with the exception of Google+, tracking them does require additional code on your site. Again, this is probably worth tracking anyway, so if you aren’t already doing so, see Google’s guidelines for analytics.js tracking snippets, or this excellent post for Google Tag Manager analytics implementations.
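
For reference, an analytics.js social interaction hit looks something like the sketch below; the network, action, and target values here are placeholders:

// Record a share of the current page to Twitter
ga('send', 'social', 'Twitter', 'tweet', document.location.href);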

3. Proportion of visitors who return

As mentioned above, this isn’t the same as the proportion of visitors who are return visitors. Fortunately, Google Analytics does give us a feature to measure this.

Even though date of first session isn’t available as a dimension in reports, it can be used as a criterion for custom segments. This allows us to start building a data set of how many visitors who made their first visit in a given period have returned since.

There are a couple of caveats. First, we need to pick a sensible time period based on our frequency and recency data. Second, this data obviously takes a while to produce; I can’t tell how many of this month’s new visitors will make further visits at some point in the future.

In Distilled’s case, I chose 3 months as a sensible period within which I would expect the vast majority of loyal customers to visit the site at least once. Unfortunately, due to the 90-day limit on time periods for this segment, this required adding together the totals for two shorter periods. I was then able to compare the number of new visitors in each month with how many of those new visitors showed up again in the subsequent 3 months:

As ever with data analysis, the headline figure doesn’t tell the story. Instead, it’s something we should seek to explain. Looking at the above graph, it would be easy to conclude “Distilled’s customer loyalty has bombed recently; they suck.” However, the fluctuation in the above graph is mostly due to the enormous amount of organic traffic that’s been generated by Hannah’s excellent blog post 4 Types of Content Every Site Needs.

Although many new visitors who discovered the Distilled site through this blog post have returned since, the return rate is unsurprisingly lower than some of the most business-orientated pages on the site. This isn’t a bad thing—it’s what you’d expect from top-of-funnel content like blog posts—but it’s a good example of why it’s worth keeping an eye out for this sort of thing if you want to analyse these metrics. If I wanted to dig a little deeper, I might start by segmenting this data to get a more controlled view of how new visitors are reacting to Distilled’s site over time.

4. Proportion of macro-converters who convert again

While a standard Google Analytics implementation does allow you to view how many users have made multiple purchases, it doesn’t allow you to see how these fell across their sessions. Similarly, you can see how many users have had two sessions and two goal conversions, but you can’t see whether those conversions were in different visits; it’s entirely possible that some had one accidental visit that bounced, and one visit with two different conversions (note that you cannot perform the same conversion twice in one session).

It would be possible to create custom dimensions for first (and/or second, third, etc.) purchase dates using internal data, but this is a complex and site-specific implementation. Unfortunately, for the time being, I know of no good way of documenting user conversion patterns over multiple sessions using only Google Analytics, despite the fact that it collects all the data required to do this.
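
To give a flavour of what such an implementation might involve (a sketch only: 'dimension1' would need to be configured as a user-scoped custom dimension in your GA property, and the date would have to come from your own backend):

// Attach the user's first purchase date (from your own database) to their hits
ga('set', 'dimension1', '2014-03-15');
ga('send', 'pageview');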

Contribute

These are only my favourite customer loyalty metrics. If you have any that you’re already tracking or are unsure how to track, please explain in the comments below.



8 Ways to Use Email Alerts to Boost SEO – Whiteboard Friday

Posted by randfish

Link building is nowhere near dead, and some of the best link opportunities can be discovered by setting up email alerts for various things that are published on the web. In today’s Whiteboard Friday, Rand runs through eight specific types of alerts that you can implement today for improved SEO.

For reference, here’s a still of this week’s whiteboard!

Tools mentioned this week

Google Alerts

Fresh Web Explorer

Talkwalker

Mention

Trackur

Video transcription

Howdy Moz fans, and welcome to another edition of Whiteboard Friday. Today we’re going to chat about email alerts and using them to help with some of your SEO efforts, specifically content identification, competitive intelligence, some keyword research, and, of course, a lot of link building because email alerts are just fantastic for this.

Now here’s what we’ve got going on. There are a number of tools that you can use to do email alerts. Obviously, Google Alerts, very well-known. It’s free. It does have some challenges and some limitations in scope, so you won’t be able to do everything that I’m going to talk about today.

There’s Fresh Web Explorer from Moz. Of course, if you’re a Moz Pro subscriber, you’ve probably used Fresh Web Explorer. And Fresh Web Explorer’s alerts functionality, in particular, is kind of my favorite Moz feature period right now.

We also have some very strong competitors in this space: Talkwalker, Mention.net, and Trackur, all of which have many of the features that I’m going to be talking about here. So whatever program you’re using, this stuff can help.

That being said, I am going to be talking in terms of the operators that you would use for Fresh Web Explorer specifically. Google Alerts has some of these operators but not all of them, and the same goes for Talkwalker, Mention, and Trackur: they might not support all of these, or their versions might be slightly different. So make sure you take a look at how the search operators for each of those work before you go engaging in this.

The operators I’m going to specifically mention start with the minus command, which excludes results. I think that works in all of them. It’s essentially saying, “Show me this stuff, but don’t show me anything that contains this.”

Link:, which works in plenty of them, shows links to a specific URL. Then there’s RD:, which in Fresh Web Explorer shows links to the root domain, and SD:, which shows links to the subdomain.

Quotes, which match a phrase exactly, work in all of these. TLD: shows only results from a given domain extension; if I want to see only German websites, I can put TLD:DE and see only sites from Germany. Then there’s site:, which shows only results from a specific sub- or root domain, as opposed to SD: or RD:, which show links to a subdomain or root domain.

This will all make sense in a second. But what I want to impart is that you can be using these tools, these types of commands to get a ton of intelligence that’s updated daily.
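To make the operator syntax concrete, here's a small sketch that simply assembles the kinds of alert query strings discussed in the tips below. The helper function and the example queries are illustrative; you'd paste the resulting strings into whichever alert tool you use, after checking which operators it actually supports.

```python
# Illustrative, Fresh Web Explorer-style alert queries built from the operators
# described above. Exact operator support varies by tool, so treat these as sketches.

def exclude(query: str, *terms: str) -> str:
    """Append minus operators to a base query to filter out unwanted results."""
    return query + "".join(f" -{term}" for term in terms)

alerts = {
    "links to a competitor, not to me": exclude("RD:dogvacay.com", "RD:rover.com"),
    "brand mention without a link":     exclude('"rover" "dog sitting"', "RD:rover.com"),
    "topic covered on a specific site": '"dog sitting" site:humanesociety.org',
    "competitor links from the UK":     "RD:petsitters.org TLD:.co.uk",
}

for name, query in alerts.items():
    print(f"{name}: {query}")
```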

What I love about alerts is that whether you get them weekly or daily, whatever frequency works for you, they’re a constant nudge, a constant reminder to us as marketers to be concentrating on something: oh yeah, I should really be thinking about link building. I should really be thinking about what my competition’s writing about. I should really be thinking about what bloggers in this niche think about my keywords and who they’re talking about when they mention those keywords, all that kind of stuff.

That nudge phenomenon of just having the repetitive cycle is really important for marketers. I feel like it helps me a tremendous amount when I get my alerts every night just to remember oh, yeah, I should do this. I should take a look at that. It’s right in my email. I take care of it with the rest of my work. Very, very helpful.

#1: Links to my competitors, but not to me

I mean come on. It’s just a gimme. It’s an opportunity for a bunch of things. It shows you what types of keywords and content people are writing about in the field, and it almost always gives you a link opportunity or at least insight into how you might get a link from those types of folks. So I love this.

I’m going to imagine that I’m Rover.com. Rover is a startup here in Seattle. They essentially have a huge network. They’re sort of like Airbnb but for people who do dog sitting and pet sitting. Great little company.

Rover has got some competitors in the field, like DogVacay.com and PetSitters.org and some of these other ones. They might, for example, create an alert like RD:dogvacay.com minus RD:rover.com, and do the same for PetSitters.org: show me people who link anywhere on my competitor’s domain, but don’t show me people who also link to me. This will show them the subset of folks who are linking to their competition but not to them. What a beautiful link building opportunity.

#2: Mentions my brand, but doesn’t link to me

Number two, another gimme and one that I’ve mentioned previously in some link building videos on Whiteboard Friday, places that mention my brand but don’t link to me. A number of these services can help you with this. Unfortunately, tragically, Google Alerts is the only one that can’t. But mentions my brand, doesn’t link to me, this is great.

In this case, because Rover’s brand name is so generic and people might use it for a lot of different things, they’re not always referring to the company Rover. So they might combine a keyword like Rover with a mention of dog sitting, minus RD:rover.com. That means someone’s talked about Rover, talked about dog sitting, and didn’t link to them.

This happens all the time. I have an alert set up for Moz that’s something like “Moz minus RD:moz.com,” and actually for me I also put minus Morrissey, because the singer Morrissey is the most common thing that people mention alongside Moz. I think I have another one that’s like “moz marketing minus RD:moz.com.” Literally, every week I have at least some news sites or sites that have mentioned us but haven’t linked to us. A comment or a tweet at them almost always gets us the link. This is great. I mean it’s like free link building.

#3: Mentions my keywords, but doesn’t link to me

This is similar to the competitive one but a little broader in scope.

So I might, for example, say “dog sitting or pet sitting minus RD:rover.com.” Show me all the people in the space who are talking about dog sitting. What are they saying?

The nice thing is with Fresh Web Explorer, and I think Talkwalker and Mention both do this, they’re sorted in terms of authority. So you don’t just get a bunch of random jumble. You can actually see the most authoritative sites.

Maybe it is the case that The Next Web is covering pet sitting marketplaces, and they haven’t written about Rover, but they’re mentioning the word “dog sitting.” That’s a great outreach point of view, and it can help uncover new content and new keyword opportunities too.

#4: Shows content produced by a competitor or news site on a topic related to me

For example, in the case of Rover.com, they might be a little creative and go, “Man, I really want to see whenever the Humane Society mentions dog sitting, which they do maybe once every two or three months. Let me just get a reminder of that. I don’t want to subscribe to their whole blog and read every post they put out. But I do really care when they talk about my topic.”

So you can set up an alert like “dog sitting” site:humanesociety.org. Perfect. Brilliant. Now I’m getting those content ideas. Potentially there are some outreach opportunities here, link building opportunities, keyword opportunities. Awesome.

#5: Show links coming from a geographic region

Let’s say, hey, I saw PetSitters.org is going international. They just opened up their UK branch. They haven’t actually, but let’s say that they did. I could create an alert like “RD:petsitters.org TLD:.co.uk.” Now it shows me all the people who are linking to PetSitters.org from the U.K. Since I know they just expanded there, I can start to target all those people who are coming out.

#6: Links to me or my site

This is very important for two reasons. One is so you know when new links are coming, where they’re coming from, that kind of stuff, which is cool to see. Sometimes you can forward those on, see what people are saying about you. That’s great.

But my favorite part of this is so I can thank those people, usually via Twitter, or so I can promote it on social media networks. Seriously, if someone’s going to go and say something nice about Rover and link to me, and it’s a third party news source or a blogger or something, I totally want to share that with my audience, because it reminds them of me and is also great promotional content that’s coming from someone else, an authoritative external voice. That’s wonderful. This can also be extremely helpful, by the way, to find testimonials for your business and press mentions that you might want to put on your site or in your conversion funnel.

#7: Find blogs that are writing about topics relevant to my business

This is pretty slick.

It turns out that most of these alert systems will also look at the URL when they’re considering alerts, meaning that if someone has blog.domain.com, or domain.com/blog/whateverpost, you can search for the word “blog” and then something like “dog sitter.” Optionally, you could add things like site:wordpress.com or site:blogspot.com, so that you get more and more alerts showing you blogs that write about your topic, your keywords, that kind of stuff.

I especially like this one if you have a very broad topic area. I mean if you’re only getting a few results with your keywords anyway, then you can just keep an alert on that shows you everything. But if you have a very broad topic area, and dog sitting is probably one of those, you want to be able to narrow in on the blogs that you really care about or the types of sites that you really care about.

#8: Links to resources/data that I can compete with/offer a better version

I like this as a link building strategy, and I’ll use it on occasion. I don’t do it all the time, but I do care at certain points when we’re doing a campaign.

For example, someone might be linking to a resource or a piece of data that’s been collected out there on the Web that I can compete with or offer a better version of. Say somebody is linking to the Wikipedia page on dog sitting or, let’s say, a statistics page from a Chamber of Commerce or something like that, and I have data that’s better, because I’ve done a survey of dog owners and pet sitting and collected all this stuff. So I have more recent, more up-to-date, and more useful data than what Wikipedia or this other resource has.

I can reach out to these folks. I love seeing that. When you see these, they’re often really good link targets, targets for outreach. So there’s just a lot of opportunity in looking at those specific resources, why people link to them, and who does.

So, with all of this stuff, I hope you’re going to go set up those alerts, get your daily or weekly nudges, and improve your SEO based on all of it.

Thanks, everyone. See you again next week for another edition of Whiteboard Friday.

Take care.

Video transcription by Speechpad.com

