Pyscape: Your Best Friend for Using the Mozscape API

Posted by GeoffKenyon

If you do link building and have to pull a lot of metrics from Open Site Explorer, I would like to introduce you to your new best friend, Pyscape. You might use a Google Doc to take advantage of the Mozscape API, but this is better. I promise.

Pyscape, created by fellow Distiller Ben Estes, enables you to pull link data, such as an export of more than 10,000 backlinks, or to look up Moz metrics for a bulk URL list. And it’s super-fast.

What does Pyscape do?

When you run Pyscape, you must choose one of the following operating modes:

  • Metrics: This will simply show all the Moz metrics associated with the given URL, subdomain, or domain.
  • Bulk-metrics: This will show you the Moz metrics for a bulk URL list.
  • Anchor: This will give you all the anchors associated with a URL, subdomain, or domain.
  • Top: This will return the top pages on a site.
  • Links: This gives you a list of links pointing to a URL, subdomain, or domain.
  • Ose-style: This will return an Open Site Explorer formatted list of links for a URL, subdomain, or domain.

It is also worth noting that Pyscape does not cut off your reports at 10,000 URLs, and it can be much faster than using the Open Site Explorer interface — especially for sites with large link profiles.

Selecting Your Granularity

In addition to telling Pyscape which report you want to run, you need to give it a little more guidance. This is especially important because Pyscape will pick an intelligent set of fields to grab from Mozscape based on the options you specify. Start by telling Pyscape how it should interpret the URL(s) that you input:

  • -d (domain): interprets the URL(s) as domains only
  • -s (subdomain): interprets the URL(s) as subdomains only
  • -p (page): interprets the URL(s) as the specific pages only

Links Mode

If you are in the links operating mode, here are a few more commands you should know:

  • -o (one): will return one URL per linking domain in links mode
  • -m (many): will return up to 25 pages per linking domain in links mode (this is the default)

Anchor Mode

Finally, there are a couple more directives you should know if you are going to use the anchor mode:

  • -f (phrase): will return anchor text phrases
  • -t (term): will return term matches (default)

Setting Up Pyscape

So now that we’ve covered what Pyscape will do, let’s look at how we use it. The following steps will take you through setting up Pyscape.

  1. Go to the Pyscape homepage; download the zip file and then extract it.
  2. Enter your Mozscape API credentials (free or paid) in the keys.json file (a sketch of what that file likely looks like follows this list). Get your credentials here.
  3. Download and install Python, version 3.2 or above (download here).
  4. Go give Pyscape an upvote on inbound.org to say thank you to Ben, or follow him on Twitter.
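
Regarding step 2: the Mozscape API issues two credentials, an Access ID and a Secret Key, so keys.json most likely looks something like the sketch below. The field names here are an assumption on my part; match whatever the template file bundled with Pyscape actually uses.

    {
        "accessID": "member-xxxxxxxxxxxx",
        "secretKey": "your-secret-key-here"
    }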

Running Your First Pyscape Report

Ok, now that we’ve got Pyscape installed, I’m going to take you through how to run reports using the command line (it sounds technical, but it’s really pretty easy).

  1. Start up the “Command Prompt” application.
  2. Next, change the directory the command prompt is operating in to the folder where you extracted Pyscape: enter “cd” followed by the folder path leading to the extracted Pyscape directory and hit Enter.
  3. Now enter your request. Below are some sample requests that show how requests need to be structured.
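
The original post showed its sample requests in a screenshot, which isn’t reproduced here, so the commands below are a hedged sketch pieced together from the modes and flags described above. The script name (pyscape.py) and the exact argument order are assumptions; double-check them against the README included in the download.

    python pyscape.py metrics -d example.com
    python pyscape.py bulk-metrics -s urls.txt
    python pyscape.py top -d example.com
    python pyscape.py links -d -o example.com
    python pyscape.py anchor -p -f http://www.example.com/page
    python pyscape.py ose-style -d example.com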

Entering these commands will give you the data you need to start your analysis in a nice .csv output, lightning fast. This Google Doc contains examples of the output data from Pyscape to help you get a feel for which reports will work best for you.

The rest is up to you! If you like the tool, make sure to say thanks to Ben and to vote it up on inbound.org!



The Evolution of Search

Posted by dannysullivan

Knowing where we’re going often means knowing where we’ve come from. The history of search engines is a short one, but one of constant change.

In today’s Whiteboard Friday, Danny Sullivan takes a look at how search has evolved into the complicated engine it’s become, and what that means for its neon-lit, rocket-car future.

[Video: Whiteboard Friday – Evolution of Search – Danny Sullivan – June 10, 2013]


For reference, here is a still image of this week’s whiteboard.

Video Transcription

Hey Moz fans. Welcome to Whiteboard Friday. I’m not Rand. I’m Danny Sullivan, the founding editor of SearchEngineLand.com and MarketingLand.com. Because it’s 8,000 degrees here in Seattle, Rand has decided not to be around, and I am here sweating like a pig, because I walked over here. So I’m very excited to be doing a Whiteboard Friday. This is my first solo one, and I’m told I have to do it in 11 minutes, and in 1.5 takes. No, just one take. The topic today will be the evolution of search, trademark Google. No, they don’t own search.

There was a time when they didn’t own search, which brings us to Search 1.0. Did you know, kids, that search engines used to be multiple, that we didn’t talk about Googling things? We actually used things like Alta Vista, Lycos, and WebCrawler. Do you remember those names? There were things like OpenText, and what was that other one, Magellan. Well, these were search engines that existed before Google, and they went out onto the web and they crawled up all the pages, about a dozen pages that existed at the time, and then we would do our searches and try to find how to rank them all up.

That was all determined by just the words that were out on the page. So if you wanted to rank well for, I don’t know, something like movies, you would put movies on your page 100 times in a row. Then if somebody else wanted to outrank you, they’d put movies on their page 150 times in a row, because a search engine said, “Hey, we think relevancy is all about the number of words of the page, and a little bit about the location of those words.” The words at the top of the page would count for a little bit more than if they were further on down below.

Bottom line is this was pretty easy to spam. The search engines didn’t really want you to be doing better for movies because you said the word “movies” 150 times over somebody who said it 100 times. They needed to come up with a better signal. That signal, they took their time getting around to.

Long story short, they weren’t making a lot of money off of search so they really didn’t pay attention to it. But Google, they were sitting over there thinking, “You know what? If we create a search engine, someday someone might make a movie with Owen Wilson and Vince Vaughn. So let’s go out there and come up with a better system,” and that brought us into Search 2.0.

We are now here. Search 2.0 started looking at things that we refer to as off-the-page ranking factors, because all of the on-the-page stuff was in the complete control of the publisher. The publisher could change it all around. There was even a time, when you used Infoseek, where you could submit a web page, and it was instantly added to the index, and you could see how well you ranked. If you didn’t like it, you’d instantly make a change and put it back out again. Then you could move up that way. So off-the-page kind of said, “Let’s go out there and get some recommendations from beyond the publisher and decide what other people think about these web pages, because maybe that’s less spammable and would give us better quality search results.”

By the way, I said not Yahoo over here, because I’m talking about search engines in terms of crawler-based search engines, the ones that use automation to go out there and find web pages. Yahoo for the longest time – well it feels that way to me – was a directory, or a human-based search engine where they listed stuff because some human being actually went to a website, wrote up a review, and added it.

Now back to Search 2.0, Google came along and started making much more use of something called link analysis. So the other search engines kind of played with it, but hadn’t really gotten the formula right and didn’t really depend on it so much.

But Google, new kid on the block, said, “We’re going to do this a lot. We’re going to consider links to be like votes, and people with a lot of links pointing at them, maybe they got a lot of votes, and we should count them a little bit higher and rank them better.” It wasn’t just in sheer amount of numbers, however. Google also then wanted to know who has the best votes, who is the real authority out there. So they tried to look at the quality of those links as well.

You’ve got other people who were doing some off-the-page stuff. One of them, you might recall, was by the name of Direct Hit. They actually looked at things like click through. They would look and they’d say, “Well, we’ve looked at 10 search results, and we can see that people are clicking on the third search result completely out of proportion to the normal way that we would expect. Rather than it getting say 20% of the clicks, it’s pulling 80% of the clicks.” That might tell them that they should move it up to number one, and then they could move things that were down a bit further.

These are some of the things that we started doing, but it was really links that carried us along for about a decade. Now links, off-the-page stuff, that’s been powering and still to this day kind of powers the web search results and how they start ranking better, but we have a little bit of an intermission, which we would call or I call Search 3.0. By the way, I made all this stuff up, so you can disagree with it or you can figure out however you want to kind of go with it. But a few years ago I was trying to explain how I had seen the evolution of search and some of these changes that were coming along.

What happened in this Search 3.0 era is that, even though we were using these links and we were getting better quality results, it was also so much information that was coming in that the signals alone weren’t enough. You needed another way to get more relevancy, and the way the search engines started doing that was saying, “Let’s take, instead of having you search through 100 billion pages, let you search through a smaller collection of pages of just focused content.” That’s called vertical search.

Now in horizontal search, you’d do a search for things like news, sports, entertainment, shopping, and you just throw it all into one big search box. It goes out there, and it tries to come back with all the pages from across the web that it thinks are relevant to whatever you searched for. In vertical search, it’s like a vertical slice, and that vertical slice of the web is just only the news content. Then when you do a search for something like NSA, it’s only going to look through the news content to find the answers about news that is relating to the NSA at the moment. Not trying to go over there and see if maybe there is some sports information or shopping information that may match up with that as well.

That’s important right now, by the way. You have all this talk about something like PRISM that is happening. It’s a spy program or an eavesdropping program or a data mining program, depending on who you want to talk to, that the US government is running. Prism is also something that you use just to filter light, and so if you are doing a search and you are just trying to get information about filtering light, you probably don’t want to turn to a news search engine because right now the news stuff is full of the PRISM stuff. On the other hand, if you want the latest stuff that is happening just within this whole Prism area, then turning to the news search engine is important, because you won’t get all of the other stuff that is not necessarily related.

So we have this Search 3.0 thing, vertical search, and Google, in particular, referred to it as universal search. Trying to solve that problem that, if someone types into a box “pictures of flowers,” they should actually show you pictures of flowers, rather than 10 links that lead you to maybe pictures of flowers. Now we’re pretty solid on this right now. Bing does these sorts of things as well. They have their own blending that goes on there.

Then it’s Search 4.0. Now we are here, or right here just because I feel compelled to write something on that board. Search 4.0 is kind of a return to what Yahoo over here was using, which was human beings. By the way, I don’t write very much anymore because the typing thing.

To refer to using human beings, one of the biggest things that has happened with search engines is that they, in a very short period, completely changed how we sought out information. For thousands of years, if you needed to know something, you talked to a human being. Even when we had libraries and people had all that kind of data, typically you would go into a library and you would talk to a librarian and say, “Hey, I’m trying to find some information about such and such.” Or you would need a plumber, you would ask somebody, “Hey, you know a good plumber?” Babysitter, doctor, or is this a good product? Does anybody know this TV? Does this work well? Should I buy that? You would tend to turn to human beings or things that were written by human beings.

Then all of a sudden we had these search engines come along, and they just took all these pages out there, and they really weren’t using a huge amount of human data. Yeah, the links were put in there by human beings. Yeah, some human being had to write the content as well, but we kind of lost another aspect of the human element that was out there, the recommendations that were out there en masse.

That is kind of what has been going on with Search 4.0. The first thing that is going on with Search 4.0 is that they started looking at the things that we had searched for over time. If they can tell that you constantly go back to say a computing site, like The Verge or CNET, then they might say, “Well, the next time you search for something, let me give the weight of those sites a little bit higher bump, because you really seem to like the stuff that’s there. So let’s kind of reward them in that regard.” Or “I can see that you’re searching for travel right now, and I can see that you just searched for New York. Rather than me pretend that these things are disconnected, let me put them together on your subsequent searches because you are probably looking for information about New York travel, even though you didn’t put in all those words. So I’ll take use of your history that’s going there.”

The other thing that they have been doing, and some of this mixes across in the earlier times, but they are looking at your location. You do a search for football in the UK, you really don’t want to get information about the NFL for the most part. You want information about what Americans would call soccer. So looking and knowing that you’re in the UK when you do a search for football, it helps the search engine say, “We should go through and we should just come up with information that is relevant to the UK, or relevant to the US, based on where you’re at.” That greatly changed though, and these days it goes down even to your metropolitan area. You do a search for zoos, you’re in Seattle, you’re going to get information about zoos that are in Seattle rather than the Washington Zoo, or zoos that are in Detroit or so on.

The last thing, the really, really exciting thing is the use of social, which the search engines are still trying to get their head around. I talked earlier about the idea of links as being like votes, and I always like to use this analogy: if links are like votes and links are somehow the democracy of the web, which is how Google still will describe them on some of their pages, then the democracy of the web is like how the democracy in the United States started, when, to vote, you had to be 25 years and older, white, and own property. That wasn’t really representative of everybody that was out there.

In order for you to vote in this kind of system, you really have to say, “Wow, that was a great restaurant I went to. I want to go through now and I want to write a blog post about that restaurant, and I’m going to link to the restaurant, and I’m going to make sure that when I link to it, I’m going to use a platform that doesn’t automatically put things like no follow on top of the link so that the link doesn’t pass credit. Oh, and because it’s a great restaurant, I’m going to remember to make sure that the anchor text, or the words near the anchor text, say things like great restaurant because I need to make sure that the link is relevant and passing along that kind of context. Now when I’ve done all that, I’ve cast my vote.”

Probably the 99 other people that went to the restaurant are not going to do that. But what those people are likely to do is like it on Facebook, plus it on Google+, make a recommendation on Yelp, use any one of the number of social systems that effectively enable people to vote much more easily. So I think a lot of the future where we are going to be going is in this social direction. These social signals are very, very important in the future as to how the search engines will determine what are the best pages that are out there.

Unfortunately, they’ve put so much into this whole link system and figuring out that this is a good link, this is a bad link, this is a link that we are going to disavow, this is a link that you disavowed, and so on and so on and so on, that they still need to work on making all this social stuff better. That’s going to become important as well. Not saying the links are going to go away, but I think the social stuff is going to be coming up much more heavily as we go forward into the future.

Now on the way up here I was thinking, because I was asked, “Will you talk about the evolution of search?” I’m like, “Yeah, no problem because I’ve done this whole Search 1 through 4 thing before.” There’s a whole blog post if you search for Search 4.0. Search for Search 4.0 and you’ll find it.

I was thinking, “What is coming after that?” On the way up, as I was sweating coming up the staircase, not the staircase here. There’s a staircase, because I was at sea level and I had to apparently climb up to 300 feet here, where we are located in the Moz building. If there was a swear jar, I would put a dollar into it.

Search 5.0, and this is really about search where it’s no page at all. Remember on-the-page factors, off-the-page factors, which are really off this page but on some other page, this stuff is I don’t even care that it’s a page. I did a blog post, and I can’t remember the title of it. But if you search for “Google conversational search,” you’ll find it. If you don’t find it, clearly Google is a very bad search engine.

In the conversational search thing that I was demonstrating, if you have Chrome and you click on the microphone, you can talk to Google now on your desktop, kind of like how you can do it on the phone. You can say, “Barack Obama,” and Google will come along and it will show you results for Barack Obama, and it will talk back to you and say, “Barack Obama is President of the United States,” blah blah blah blah. It gives you a little box for him, and he appears and there is a little description they pull from Wikipedia.

Then you can say to it, “How old is he,” or something very similar to that. Then the search engine will come back, Google will come back and will say, “Barack Obama is . . .” I can’t remember how old he is. But you should Google it and use that voice search thing. It will come back and say Barack Obama is this age. You can go further and say, “Well, how tall is he?” It will say, “Barack Obama is . . .” I think he is 6 foot 1. And you say, “Who is he married to?” Then it comes back and it says, “Barack Obama is married to Michelle Obama.” And you say, “How old is she?” Then Google will come back and say, “It’s really an impolite thing to ask a woman, but she’s a certain age.” I believe 39. Yeah, you’re usually safe with that.

To do all of that it has to understand that Barack Obama, when you searched for him, wasn’t just these letters on a web page. It had to understand that he is a person, that he is an entity, if you will, a person, place, or thing, a noun, but an entity, that there is a thing out there called Barack Obama that it can link up to and know about. When you ask for its age, and you said, “How old is he,” it had to understand that “he” wasn’t just words, but that actually “he” refers to an entity that you had specified before, the entity being Barack Obama. When you said, “his age,” that age wasn’t just a bunch of letters that match on a web page, but age is equal to a value that it knows of because Barack Obama has an age value over here, and it’s connecting it there.

When you said, “How tall is he,” same thing. That tall wasn’t just letters, but tall is actually a height element that it knows. That says height, trust me. When you said, “Who’s his wife,” that wife, with an f kids, not a v, later we’ll do potatoes without an e, that his wife is a person that is equal to spouse, which is a thing that it understands, an entity. It’s not just words again. It’s like a thing that it actually understands, and that actually that that is Michelle and that she has all of these things about her, and [inaudible 15:38]. All those sorts of things along there.
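
To make the entity idea concrete (this is my own toy illustration, not anything Google has published), the difference is between matching letters on pages and looking up attribute values on a known thing:

    # A toy "knowledge graph": entities carry typed attribute values,
    # rather than being strings that happen to appear on web pages.
    entities = {
        "Barack Obama": {"spouse": "Michelle Obama", "height_m": 1.85},
    }

    last_entity = "Barack Obama"  # remembered from the previous query

    def answer(attribute):
        # "How tall is he?" first resolves "he" to the entity from the
        # earlier query, then reads a stored value instead of matching words.
        return entities[last_entity].get(attribute)

    print(answer("spouse"))    # Michelle Obama
    print(answer("height_m"))  # 1.85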

That is much different than Search 1.0 where, when we were searching, we were really just looking for letters on a page. When you typed in “movies,” it’s going, “How many pages out there do I have that have these six letters in this order? Start counting them up and putting it together.”

We are looking for entities, and the Google Knowledge Graph is a demonstration of where things are going forward. That’s all very exciting as well, because, for one thing as a marketer, it’s always exciting when your space changes because if you’re staying on top of things and you’re seeing where it’s going, there are always new opportunities that come along. It’s also exciting because some of these things are broken and they don’t work as well, so this has the opportunity to better reward things that are coming along.

It’s a little scary though because as Google learns about entities and it learns about things like facts, it also decides that, “You know what, you’re looking for movies in a place. I have a database of all those movies. I no longer need to point at a web page that has that sort of stuff.” The big takeaway from that is, if your job is just creating web pages that are all about known facts that are out there, it’s going to get harder, because people are no longer going to get pointed to facts that are off of Google. People are going to get pointed to facts that Google can answer directly. Your job is to make sure that you always have the information that Google doesn’t have, the facts that aren’t easily found that are out there.

As for Search 6.0, it involved this PRISM system, but we can’t talk about that anymore, so that’s sort of gone away, and we’ll leave that off. A few years from now, it won’t make any sense. Right now, hopefully, it’s still very timely.

I think that’s probably it. So I thank you for your indulgence with my first solo Whiteboard Friday. I hope I didn’t go too fast. I hope that all makes sense, and thank you very much.

Video transcription by Speechpad.com



Transcribe ALL The Things! Benefits, Strategies, and More

Posted by steviephil

This post was originally in YouMoz, and was promoted to the main blog because it provides great value and interest to our community. The author’s views are entirely his or her own and may not reflect the views of Moz, Inc.

It’s an SEO’s duty to try to utilise and leverage as many opportunities as possible for clients and employers in order to drive relevant traffic to their websites. One technique that I sometimes feel is overlooked — or at least not given the attention it deserves — is transcription, i.e. turning audio or other media into text.

Transcribe ALL The Things!

I was inspired to write about transcription for SEO (and more) after talking to a client at one of my previous agency roles. A few staff members at the top of the company are well known in their industry, and we wanted to leverage their popularity and standing by encouraging them to guest blog. For one of them (who’s practically a celebrity in his industry sector!), we were told this:

Client: “Well, he doesn’t want to write content on a regular basis. You see, he has enough on his plate as it is with his popular, weekly, hour-long podcast.”

Then the light-bulb moment happened…

Me: “Do you transcribe the podcasts into text and publish them on the site along with the audio?”

Client: “No. Why?”

Why?! Oh, my sweet, naïve client…
(I didn’t actually say that in reply to the client! That’d be silly.)

Ahem… Where was I?

Quite fittingly, my first instance of seeing regularly transcribed content was on this very site: Moz’s Whiteboard Friday videos are all transcribed on a weekly basis (or at least they have been every week for the last few years).

[Screenshot: a transcribed Moz Whiteboard Friday post]

For that reason, it only seemed right to talk about transcription in the form of a YouMoz submission!

I think there are benefits beyond SEO, as it also touches upon user experience (UX), and if you sit down and really think about it, there are a lot of different things you can transcribe, which is why I’ve also provided a list of ideas towards the end of this post.

The benefits of transcription for SEO

The main benefit of transcribing audio for SEO? Search engines cannot ‘read’ audio media. Yet. Properly.

Yes, you can add text to an image to help search engines deduce its content and purpose (in the form of the title and alt attributes), but that’s not necessarily the case with things like videos. Embed a YouTube video, look at the code and see for yourself — it isn’t full of the video’s text, ready to be crawled by a search engine spider.

And while search engines are getting wiser and more Skynet-esque by the day, they’re still a long way off from effectively turning audio into words. I can’t find the exact tweet right now, but someone recently tweeted @mattcutts asking if the Webmaster Videos were transcribed. He replied saying that they were automatically transcribed on YouTube, accessed via the “Transcript” button.

[Screenshot: YouTube’s “Transcript” button]

I checked a few of Matt’s videos and they weren’t too bad, but what about when the audio isn’t crystal clear and/or the speaker has a bit of an accent? I checked a video I made using my laptop’s webcam and inbuilt microphone, spoken with my unusual accent (which I’ve been told sounds Welsh, Cockney, and accentless all at the same time), and found that the line:

“…in this video I’m gonna talk you through how to implement rel author…”

had been transcribed into:

“…video onions will keep you have to impeachment gravel for…”

Nailed it. (And I honestly thought I spoke quite clearly in that video!)

[Screenshot: YouTube’s automatic transcript of the video]

So I think it’s safe to say for now that transcription through a more — how to put this — “traditional” method (i.e. through transcription service providers) is still essential at this stage.

The major benefit of transcription for SEO? Hitting the long tail. What if a video or podcast covers a topic that’s not talked about in a blog post or other supportive text? Or, what if people are searching for a spoken quotation, as opposed to a written text quotation? Without transcription, they’ll miss it. With transcription, they won’t.

When I created the previously linked-to video about impeaching — er, I mean implementing rel=”author”, I embedded it in a post on my own blog along with the transcription, potentially driving more people to my blog from organic search — especially those searching for something relevant to the video and/or the event at which I spoke.

[Screenshot: the video and its transcription embedded in the SEOno post]

Another good example: the Q&A at an event after a speaker has given their presentation. The speaker may share their slides and speaker notes, but Q&A is obviously quite impromptu and on-the-spot in nature. If a video has caught it, and that video has been transcribed, then people looking for the answer to one of the questions that was asked will be able to find it.

The benefits of transcription for UX

I also think that there are more benefits to transcription than just improving long tail SEO. It can vastly improve usability and UX, too.

There have been numerous times when I’ve wanted to watch a Moz Whiteboard Friday, but I’ve been in a public place and not had any headphones. The next best thing? I could read the transcript. In fact, some people I’ve spoken to prefer to read a transcript than watch or listen to something. Each to their own, I guess, but at least by providing both you’re giving your users the choice.

Additionally, when I revisit the Whiteboard Friday at a later stage and want to double-check something that Rand or whoever has said, I can use my browser’s “find” function, type in the relevant word(s) and find it right away. So it’s good for quick checks and references as well — much quicker than trying to find the exact moment in a 5-10 minute video when something was mentioned.

How to do it (and is it really worth it?)

I’m sure that there are plenty of transcription service providers out there. Wanting to try it out myself, I went for Moz’s provider: SpeechPad. It seemed pretty reasonable and I had no major problems with it. I had to tidy up a bit of the text (e.g. Gafyn’s name — which is the Welsh variation of Gavin — was spelt the non-Welsh way, some Twitter handles had been missed, etc.), but it was about 95%+ correct. All in all, $5 to transcribe my 5-minute rel=”author” YouTube video? Bargain.

I know what you might be thinking: Is it worth it if a) you produce (or have previously produced) lots of media, or b) your media is quite long, e.g. an hour-long podcast or an event?

Well, put it this way. I paid $5 for a 5-minute video to be turned into text, which was 645 words long. It’s unique text, and apart from a bit of a proofread and tidy-up afterwards, it was good to go. I know people who pay 10 times that amount (if not more) for 600 words of unique content. When you look at it that way, it’s pretty reasonable. An hour-long transcription is likely to be essay-sized — in the thousands of words — which should hit the long tail like crazy.
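
Extrapolating from that one data point (speech rates and prices vary, so treat these as ballpark figures): $5 per five minutes works out to roughly $60 for an hour-long recording, and 645 words per five minutes scales to somewhere around 7,500 words of unique text.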

Transcribe ALL the things! A list of things to consider transcribing

The list of things that you can transcribe is pretty much endless, so I wanted to put a shorter list together to spark ideas and make you think of what your business or your clients might have produced already that is transcribeable (and if that’s not a word, I’m totally coining it):

Events

  • Presentations, panels, keynotes
  • Q&A
  • Vox pops between sessions
  • PR stunts (if they’re filmed)

TV & Radio

  • Full TV/radio shows
  • Appearances on TV/radio shows (e.g. if your client only appears on a five-minute segment)
  • Adverts

Podcasts

  • Full podcasts
  • Appearances on podcasts

Music

  • Lyrics (especially if it’s an unsigned band — they might not yet have their lyrics plastered on every lyrics website ever)
  • Live shows (especially if there’s banter between songs and/or alternative lyrics)

Other

  • Interviews
  • Whiteboard videos (obviously!)
  • Corporate/promotional videos
  • Testimonial videos (as in testimonials from clients/customers)
  • Webinars
  • Google+ Hangouts (I’m thinking #maximpact…)
  • Videos with commentary/voice-overs
  • Documentaries
  • Pretty much everything/anything that has (or could have) audio!

Have I missed anything obvious? I’m sure I have! If you think of anything that I might’ve missed, leave a comment below!

Now if you don’t mind, I’m off to video my gravel and impeach some onions… or was that the other way around?



Early Look at Google’s June 25 Algo Update

Posted by Dr-Pete

If you follow our MozCast Google “weather” tracker, you may have noticed something unusual this morning – a record algorithm flux temperature of 113.3°F (the previous high was 102.2°, set on December 13, 2012). While the weather has been a bit stormy off and on since Penguin 2.0 and the announcement of 10-day rolling Panda updates, this one was still off the charts:

[Chart: MozCast temperatures]

I’m usually cautious about over-interpreting any single day’s data – measuring algorithm change is a very difficult and noisy task. Given the unprecedented scope, though, and reports coming in of major ranking shake-ups in some verticals, I’ve decided to post an early analysis. Please understand that the Google algorithm is incredibly dynamic, and we’ll know more over the next few days.

Temperatures by Category

Some industry verticals are naturally more volatile than others, but here’s a breakdown of the major categories we track, ordered by the largest percentage change over the 7-day average. The temperature for June 25th along with the 7-day average for each category is shown in parentheses:

  • 68.5% (125°/74°) – Home & Garden
  • 58.2% (119°/75°) – Computers & Consumer Electronics
  • 58.1% (114°/72°) – Occasions & Gifts
  • 57.8% (121°/77°) – Apparel
  • 54.8% (107°/69°) – Real Estate
  • 54.1% (107°/69°) – Jobs & Education
  • 50.6% (112°/74°) – Internet & Telecom
  • 49.4% (112°/75°) – Hobbies & Leisure
  • 49.4% (102°/68°) – Health
  • 44.9% (105°/73°) – Finance
  • 44.5% (116°/80°) – Beauty & Personal Care
  • 43.0% (116°/81°) – Vehicles
  • 39.7% (104°/74°) – Family & Community
  • 38.0% (109°/79°) – Sports & Fitness
  • 37.3% (89°/65°) – Retailers & General Merchandise
  • 34.7% (101°/75°) – Food & Groceries
  • 32.4% (107°/81°) – Arts & Entertainment
  • 25.9% (92°/73°) – Travel & Tourism
  • 25.6% (93°/74°) – Law & Government
  • 25.5% (92°/73°) – Dining & Nightlife
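
As a sanity check on how that percentage appears to be computed (this is a reading of the table, not a stated formula), it looks like the single-day temperature’s rise over the 7-day average. With the rounded Home & Garden figures:

    # Assumed formula: % change = (June 25 temp - 7-day average) / 7-day average
    day_temp, avg_temp = 125, 74
    print((day_temp - avg_temp) / avg_temp)  # ~0.689, i.e. ~69%
    # The listed 68.5% presumably reflects the unrounded temperatures.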

Every vertical we track showed a solid temperature spike, but “Home & Garden” led the way with a massive 51° difference between the single-day temperature and its 7-day average.

Some Sample Queries

There are so many reasons that a query can change that looking at individual cases is often a one-way ticket to insanity, but that doesn’t seem to stop me from riding the train. Just to illustrate the point, the query “gay rights” showed a massive temperature of 250°F. Of course, if you know about the Supreme Court rulings announced the morning of June 26th, then this is hardly surprising. News results were being churned out fast and furious by very high-authority sites, and the SERP landscape for that topic was changing by the hour.

Sometimes, though, we can spot an example that seems to tell a compelling story, especially when that example hasn’t historically been a high-temperature query. It’s not Capital-S Science, but it can help us look for clues in the broader data. Here are a couple of interesting examples…

Example 1: “limousine service”

On the morning of June 25th, a de-localized and de-personalized query for “limousine service” returned the following results:

  1. http://www.ultralimousineservice.com/
  2. http://www.uslimoservice.com/
  3. http://www.fivediamondslimo.com/
  4. http://www.davesbestlimoservice.com/
  5. http://www.aftonlimousine.com/
  6. http://www.awardslimo.com/
  7. http://www.lynetteslimousines.com/
  8. http://www.chicagolandlimo.com/
  9. http://www.a1limousine.com/
  10. http://www.sterlinglimoservice.com/

The following morning, the Top 10 for the same query was completely rewritten (yielding the maximum possible MozCast temperature of 280°):

  1. http://www.carmellimo.com/
  2. http://www.crestwoodlimo.com/
  3. http://www.dial7.com/
  4. http://www.telavivlimo.com/
  5. http://www.willowwindcarriagelimo.com/
  6. http://www.asavannahnite.com/
  7. http://www.markofelegance.com/
  8. http://tomscruz.com/
  9. https://www.legrandeaffaire.com/
  10. http://www.ohare-midway.com/

One possible pattern is that there are no domains in the new Top 10 with either the phrase “limousine service” or “limo service” in them, which could indicate a crack-down on partial-match domains (PMDs). Interestingly, the term “limousine” disappeared altogether in the post-update domain list, although “limo” still fares well. This could also indicate some sort of tweak in how Google treats similar words (“limo” vs. “limousine”).

Example 2: “auto auction”

Here’s another query that shows a similar PMD pattern, clocking in at a MozCast temperature of 239°. The morning of June 25th, “auto auction” showed the following Top 10:

  1. http://www.iaai.com/
  2. http://www.autoauctions.gsa.gov/
  3. http://www.americasautoauction.com/
  4. http://www.copart.com/
  5. http://www.interstateautoauction.com/
  6. http://www.indianaautoauction.net/
  7. http://www.houstonautoauction.com/
  8. http://www.ranchoautoauction.com/
  9. http://www.southbayautoauction.com/
  10. http://velocity.discovery.com/tv-shows/mecum-auto-auctions

Just one day later, all but the #1 spot had changed…

  1. http://www.iaai.com/
  2. http://www.copart.com/
  3. http://www.autoauctions.gsa.gov/
  4. http://www.barrett-jackson.com/
  5. http://www.naaa.com/
  6. http://www.mecum.com/
  7. http://www.desertviewauto.com/
  8. http://www.adesa.com/
  9. http://www.brasherssacramento.com/
  10. http://www.voaautoauction.org/

In the first SERP, eight of the top ten had “auto auction(s)” in the URL; in the second, only two remained, and one of those was an official US government sub-domain (even that site lost a ranking spot).

Top-View PMD Influence

Ultimately, these are anecdotes. The question is: do we see any pattern across the broader set? As luck would have it, we do track the influence of partial-match domains (PMDs) in the MozCast metrics. Our PMD Influence metric looks at the percentage of total Top 10 URLs where the root or sub-domain contains either “keywordstring” or “keyword-string”, but is not an exact match. Here’s a graph of PMD influence over the past 90 days:

[Chart: PMD influence over the past 90 days]

Please note that the vertical axis is scaled to more clearly show rises and falls over time. Across our data set, there’s been a trend toward steady decline of PMD influence in 2013, but today showed a fairly dramatic drop-off and a record low across our historical data (back to April 2012). This data comes from our smaller (1K) query set, but the pattern is also showing up in our 10K data set.
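
To make the metric’s definition concrete, here is a minimal sketch of such a PMD check; it is an illustration, not MozCast’s actual implementation, and the domain parsing is deliberately crude:

    from urllib.parse import urlparse

    def is_partial_match_domain(url, query):
        # Root or sub-domain contains "keywordstring" or "keyword-string",
        # but is not an exact match for either.
        host = (urlparse(url).hostname or "").lower()
        if host.startswith("www."):
            host = host[4:]
        domain = host.split(".")[0]  # crude: ignores multi-part TLDs like .co.uk
        joined = query.replace(" ", "")
        hyphenated = query.replace(" ", "-")
        if domain in (joined, hyphenated):
            return False  # exact-match domain, not a partial match
        return joined in domain or hyphenated in domain

    # e.g. is_partial_match_domain("http://www.interstateautoauction.com/",
    #                              "auto auction")  -> True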

For reference and further investigation, here are a few examples of PMDs that fell out of the Top 10, and the queries they fell out of (including some from the same queries):

  1. “appliance parts” – www.appliancepartscenter.com
  2. “appliance parts” – www.appliancepartscenter.us
  3. “appliance parts” – www.appliancepartssuppliers.com
  4. “bass boats” – www.phoenixbassboats.com
  5. “campagnolo” – www.campagnolorestaurant.com
  6. “divorce papers” – www.mydivorcepapers.com
  7. “driving school” – www.dollardrivingschool.com
  8. “driving school” – www.elitedrivingschool.biz
  9. “driving school” – www.ferraridrivingschool.com
  10. “driving school” – www.firstchoicedrivingschool.net
  11. “driving school” – www.fitzgeraldsdrivingschool.com
  12. “mario game” – www.mariogames98.com
  13. “monogrammed gifts” – www.monogrammedgiftshop.com
  14. “monogrammed gifts” – www.preppymonogrammedgifts.com
  15. “nickelback songs” – www.nickelback-songs.com
  16. “pressure washer” – www.pressurewashersdirect.com
  17. “tanzanite” – www.etanzanite.com
  18. “vibram” – www.vibramdiscgolf.com
  19. “wine racks” – www.wineracksamerica.com
  20. “yahtzee” – www.yahtzeeonline.org

I’m not making any statements about the quality of these sites (except nickelback-songs.com), since I haven’t dug into them individually. If anyone wants to take that on, though, please be my guest.

The “Multi-Week” Update

Recently, Matt Cutts warned of a multi-week algorithm update ending just after July 4th – could this be that update? The short answer is that we have no good way to tell, since Matt’s tweet didn’t tell us anything about the nature of the update. This single-day spike certainly doesn’t look like a gradual roll-out of anything, but it’s possible that we’ll see large-scale instability during this period.

Some (Quite a Few) Caveats

This is an imperfect exercise at best, and one day of data can be misleading. The situation is also constantly changing – Google claims Panda data is updating 10 days out of every 30 now, or 1/3 of the time, for example. At this early stage, I can only confirm that we’ve tracked this algorithm flux across multiple data centers and there is no evidence of any system errors or obvious data anomalies (we track many metrics, and some of them look relatively normal).

Finally, it’s important to note that, just because a metric drops, it doesn’t mean Google pulled a lever to directly impact that metric. In other words, Google could release a quality adjustment that just happened to hit a lot of PMDs, even though PMDs weren’t specifically the target. I would welcome any evidence people have seen on their own sites, in webmaster chatter, in unofficial Google statements, etc. (even if it’s evidence against something I’m saying in this post).

