
Sustainable Link Building: Increasing Your Chances of Getting Links – Whiteboard Friday

Posted by Paddy_Moogan

Link building campaigns shouldn’t have a start-and-stop date — they should be ongoing, continuing to earn you links over time. In this edition of Whiteboard Friday, please warmly welcome our guest host Paddy Moogan as he shares strategies to achieve sustainable link building, the kind that makes your content efforts lucrative far beyond your initial campaigns for them.

Sustainable Link Building: Increasing Your Chances of Getting Links

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Hi, Moz fans. Welcome to Whiteboard Friday. I’m not Rand. I’m Paddy Moogan. I’m the cofounder of Aira. We’re an agency in the UK, focusing on SEO, link building, and content marketing. You may have seen me write on the Moz Blog before, usually about link building. You may have read my link building book. If you have, thank you. Today, I’m going to talk about link building again. It’s a topic I love, and I want to share some ideas around what I’m calling “sustainable link building.”

Problems

Now, there are a few problems with link building that make it quite risky, and I want to talk through those before giving you some potential solutions that help make your link building less risky. So, a few problems first:

I. Content-driven link building is risky.

The problem with content-driven link building is that you’re producing some content and you don’t really know if it’s going to work or not. It’s quite risky, and you don’t actually know for sure that you’re going to get links.

II. A great content idea may not be a great content idea that gets links.

There’s a massive difference between a great idea for content and a great idea that will get links. Knowing that difference is really, really important. So we’re going to talk a little bit about how we can work that out.

III. It’s a big investment of time and budget.

Producing content, particularly visual content, doing design and development takes time. It can take freelancers. It can take designers and developers. So it’s a big investment of time and budget. If you’re going to put time and budget into a marketing campaign, you want to know it’s probably going to work and not be too risky.

IV. Link building is treated as campaign-led: it starts & stops.

So you do a link building campaign, and then you stop and start a new one. I want to get away from that idea. I want to talk about the idea of treating link building as an ongoing activity, not as a campaign that has a start date and a finish date, after which you forget about it and move on to the next one. So I'm going to talk a little bit about that as well.

Solutions

So those are some of the problems that we've got with content-driven link building. I want to talk about some solutions for offsetting the risk of content-driven link building and increasing the chances that you're actually going to get links and your campaign isn't going to fail.

I. Don’t tie content to specific dates or events

So the first one. Now, when you're coming up with content ideas, it's really easy to tie them into events or days of the year. If there are things going on in your client's industry that are quite important, like current festivals and things like that, it's a great way of hooking a piece of content into an event. Now, the problem with that is if you produce a piece of content around a certain date and then that date passes and the content hasn't worked, then you're kind of stuck with a piece of content that is no longer relevant.

So, as an example of what we've done at Aira, we launched a piece of content for a client around Internet of Things Day. It turns out there's a day celebrating the Internet of Things, which is actually April 9th this year. Now, we produced a piece of content for them around the Internet of Things, its growth, and the impact it's having on the world. But importantly, we didn't tie it exactly to that date. So the piece itself didn't mention the date, but we launched it around that time, and the outreach talked about Internet of Things Day. So the outreach focused on the date and the event, but the content piece itself didn't. What that meant was, after April 9th, we could still promote that piece of content because it was still relevant. It wasn't tied in with that exact date.

So it means that we’re not gambling on a specific event or a specific date. If we get to July 9th and we’ve got no links, it obviously matters, but we can keep going. We can keep pushing that piece of content. So, by all means, produce content tied into dates and events, but try not to include that too much in the content piece itself and tie yourself to it.

II. Look for datasets which give you multiple angles for outreach

Number two: lots of content ideas can come from data. You can get a dataset and produce content ideas off the back of it, building angles and stories using the data. Now, that can be quite risky, because you don't always know if the data is going to give you a story or an angle until you've dug into it. So something we try and do at Aira when producing content around data is to look for the different angles you can take from that dataset.

So, for example:

  • Locations. Can you pitch a piece of content into different locations throughout the US or the UK so you can go after local newspapers and magazines for different areas of the country using different data points?
  • Demographics. Can you target different demographics? Can you target females, males, young people, old people? Can you slice the data in different ways to approach different demographics, which will give you multiple ways of actually outreaching that content?
  • Years. Is it updated every year? So it’s 2018 at the moment. Is there a piece of data that will be updated in 2019? If there is and it’s like a recurring annual thing where the data is updated, you can redo the content next year. So you can launch a piece of content now. When the data gets updated next year, plug the new data into it and relaunch it. So you’re not having to rebuild a piece of a content every single time. You can use old content and then update the data afterwards.

III. Build up a bank of link-worthy content

Number three, now this is something which is working really, really well for us at the moment, something I wanted to share with you. This comes back to the idea of not treating link building as a start-and-stop campaign. You need to build up a bank of link-worthy content on your client websites or on your own websites. Try and build up content that's link-worthy rather than having content as a one-off piece of work. What you can do with that is outreach over and over and over again.

We tend to think of the content process as something like this. You come up with your ideas. You do the design, then you do the outreach, and then you stop. In reality, what you should be doing is actually going back to the start and redoing this over and over again for the same piece of content.

What you end up with is multiple pieces of content on your client’s website that are all getting links consistently. You’re not just focusing on one, then moving past it, and then working on the next one. You can have this nice big bank of content there getting links for you all the time, rather than forgetting about it and moving on to the next one.

IV. Learn what content formats work for you

Number four. Again, this is something that's worked really well for us recently. Because we're an agency, we work with lots of different clients in different industries and produce lots and lots of content, so what we've done recently is try to work out which content formats are working best for us. Which formats get the best results for our clients? The way we did this was a very, very simple chart plotting how easy or hard a format was to produce against whether it was a fail in terms of links and coverage or a really big win in terms of links, coverage, and traffic for the client.

Now, what you may find when you do this is that certain content formats fit within this grid. So, for example, you may find that doing data viz is actually really, really hard but gets you lots and lots of links, whereas producing maps and visuals around that same kind of data is just as hard but isn't very successful.

Identifying these content formats and knowing what works and what doesn't can then feed into your future content campaigns. So when you're working for a client, you can confidently say, "Well, actually, we know that interactives aren't too difficult for us to build because we've got a good dev team, and they're really likely to get links because we've done loads of them before and seen lots of success with them." Whereas if you come up with an idea for a map that you know is really, really hard to do and might lead to a big fail, that's not going to be so good, and you can say to a client, "Look, from our experience, we can see maps don't work very well. So let's try and do something else."

That’s it in terms of tips and solutions for trying to make your link building more sustainable. I’d love to hear your comments and your feedback below. So if you’ve got any questions, anything you’re not sure about, let me know. If you see it’s working for your clients or not working, I’d love to hear that as well. Thank you.

Video transcription by Speechpad.com



Want to Speak at MozCon 2018? Here’s Your Chance – Pitch to Be a Community Speaker!

Posted by Danielle_Launders

MozCon 2018 is nearing and it’s almost time to brush off that microphone. If speaking at MozCon is your dream, then we have the opportunity of a lifetime for you! Pitch us your topic and you may be selected to join us as one of our six community speakers.

What is a community speaker, you ask? MozCon sessions are by invite only, meaning we reach out to select speakers for the majority of our talks. But every year we reserve six 15-minute community speaking slots, where we invite anyone in the SEO community to pitch to present at MozCon. These sessions are both an attendee favorite and a fabulous opportunity to break into the speaking circuit.

Katie Cunningham, one of last year’s community speakers, on stage at MozCon 2017

Interested in pitching your own idea? Read on for everything you need to know:

The details

  • Fill out the community speaker submission form
  • Only one submission per person — make sure to choose the one you’re most passionate about!
  • Pitches must be related to online marketing and for a topic that can be covered in 15 minutes
  • Submissions close on Sunday, April 22nd at 5pm PDT
  • All decisions are final
  • All speakers must adhere to the MozCon Code of Conduct
  • You’ll be required to present in Seattle at MozCon

Ready to pitch your idea?

If you submit a pitch, you’ll hear back from us regardless of your acceptance status.

What you’ll get as a community speaker:

  • 15 minutes on the MozCon stage for a keynote-style presentation, followed by 5 minutes of Q&A
  • A free ticket to MozCon (we can issue a refund or transfer if you have already purchased yours)
  • Four nights of lodging covered by Moz at our partner hotel
  • Reimbursement for your travel — up to $500 for domestic and $750 for international travel
  • An additional free MozCon ticket for you to give away, plus a code for $300 off of one ticket
  • An invitation for you and your significant other to join us for the pre-event speakers dinner

The selection process:

We have an internal committee of Mozzers that reviews every pitch. In the first phase, we review only the topics to ensure that they're a good fit for our audience. After that, we look at each pitch in its entirety to get a comprehensive idea of what to expect from your talk on the MozCon stage.

Want some advice for perfecting your pitch?

  • Keep your pitch focused on online marketing. The more actionable the pitch, the better.
  • Be detailed! We want to know the actual tactics our audience will be learning about. Remember, we receive a ton of pitches, so the more you can explain, the better!
  • Review the topics already being presented — we’re looking for something new to add to the stage.
  • Keep the pitch to under 1,200 characters. We're strict with the character limit — even the best pitches will be disqualified if they don't abide by the rules.
  • No pitches will be evaluated in advance, so please don’t ask 🙂
  • Using social media to lobby your pitch won’t help. Instead, put your time and energy into the actual pitch itself!
  • Linking to a previous example of a slide deck or presentation isn’t required, but it does help the committee a ton.

You’ve got this!

This could be you.

If your pitch is selected, the MozCon team will help you along the way. Whether this is your first time on stage or your twentieth, we want this to be your best talk to date. We’re here to answer questions that may come up and to work with you to deliver something you’re truly proud of. Here are just a handful of ways that we’re here to help:

  • Topic refinement
  • Helping with your session title and description
  • Reviewing any session outlines and drafts
  • Providing plenty of tips around best practices — specifically with the MozCon stage in mind
  • Comprehensive show guide
  • Being available to listen to you practice your talk
  • Reviewing your final deck
  • A full stage tour on Sunday to meet our A/V crew, see your presentation on the big screens, and get a feel for the show
  • An amazing 15-person A/V team

Make your pitch to speak at MozCon!

We can’t wait to see what y’all come up with. Best of luck!



The Bot Plan: Your Guide to Making Conversations Convert

Posted by purna_v

Let’s start off with a quick “True or False?” game:

“By 2020, the average person will have more conversations with their bot than with their spouse.”

True, or false? You may be surprised to learn that speaking more with bots than with our spouses is precisely what Gartner is predicting.

And when Facebook’s Mark Zuckerberg says “messaging is one of the few things that people do more than social networking,” it requires no leap of faith to see that chatbots are an integral part of marketing’s future.

But you don’t need to stock up on canned peaches and head for the hills because “the robots are coming.” The truth is, the robots aren’t coming because they’re already here, and they love us from the bottom of their little AI-powered hearts.

Bots aren’t a new thing for many parts of the world such as China or India. As reported by Business Insider, sixty-seven percent of consumers worldwide have used a chatbot for customer support in the last year.

Within the United States, an impressive 60% of millennials have used chatbots, with 70% of those reporting positive experiences, according to Forbes.

There’s no putting bots back in the box.

And it’s not just that brands have to jump on board to keep up with those pesky new generations, either. Bots are great for them, too.

Bots offer companies:

  1. A revolutionary way to reach consumers. For the first time in history, brands of any size can reach consumers on a personal level. Note my emphasis on “of any size.” You can be a company of one and your bot army can give your customers a highly personal experience. Bots are democratizing business!
  2. Snackable data. This “one-to-one” communication gives you personal insights and specificity, plus a whole feast of snackable data that is actionable.
  3. Non-robot-like interaction. An intelligent bot can keep up with back-and-forth customer messages in a natural, contextual, human way.
  4. Savings. According to Juniper Research, the average time saving per chatbot inquiry compared to traditional call centers is over four minutes, which has the potential to make a truly extraordinary impact on a company’s bottom line (not to mention the immeasurable impact it has on customers’ feelings about the company).
  5. Always on. It doesn’t matter what time zone your customer is in. Bots don’t need to sleep, or take breaks. Your company can always be accessible via your friendly bot.

Here in the West, we are still in the equivalent of the Jurassic Period for bots. What they can be used for is truly limited only by our imagination.

One of my most recent favorites is an innovation from the BBC News Labs and Visual Journalism teams, who have launched a bot-builder app designed to, per Nieman Lab, “make it as easy as possible for reporters to build chatbots and insert them in their stories.”

So, in a story about President Trump from earlier this year, you see this:

Source: BBC.com

It’s one of my favorites not just because it’s innovative and impressive, but because it neatly illustrates how bots can add to and improve our lives… not steal our jobs.

Don’t be a dinosaur

A staggering 80% of brands will use chatbots for customer interactions by 2020, according to research. That means that if you don't want to get left behind, you need to join the bot arms race right now.

“But where do I start?” you wonder.

I’m happy you asked that. Building a bot may seem like an endeavor that requires lots of tech savvy, but it’s surprisingly low-risk to get started.

Many websites allow you to build bots for free, and then there’s QNAMaker.ai (created by Microsoft, my employer), which does a lot of the work for you.

You simply input your company's FAQ section, and it uses natural language processing to parse your FAQ and develop a list of questions your customers are likely to ask, building the foundation for a simple chatbot that can be taken live on almost any platform.
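To make the pattern concrete, here's a minimal sketch of the core idea behind an FAQ-driven bot, using simple string similarity in place of real natural language processing. The FAQ entries and threshold are hypothetical illustrations, not QnAMaker's actual behavior.

```python
# Toy illustration of an FAQ-driven bot: match an incoming question to the
# closest stored FAQ entry and return its answer. Real services use proper
# NLP; difflib's string similarity merely stands in for it here.
import difflib

# Hypothetical FAQ entries for a small business.
FAQ = {
    "What are your opening hours?": "We're open 9am to 7pm, Monday to Saturday.",
    "Do you take walk-ins?": "Yes, though booked appointments are seen first.",
    "How do I cancel a booking?": "Reply CANCEL here or call us 24 hours ahead.",
}

def answer(question: str, cutoff: float = 0.5) -> str:
    """Return the answer for the FAQ question most similar to the input."""
    match = difflib.get_close_matches(question, FAQ.keys(), n=1, cutoff=cutoff)
    if match:
        return FAQ[match[0]]
    return "Sorry, I didn't catch that. A member of staff will follow up."

print(answer("what are your opening hours"))
# -> We're open 9am to 7pm, Monday to Saturday.
```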

This is just the beginning — the potential for bots is wow-tastic.

That’s what I’m going to show you today — how you can harness bot-power to build strong, lasting relationships with your customers.

Your 3-step plan to make conversations convert

Step 1: Find the right place to start

The first step isn’t to build a bot straightaway. After all, you can build the world’s most elaborate bot and it is worth exactly nothing to you or your customer if it does not address their needs.

That’s why the first step is figuring out the ways bots can be most helpful to your customers. You need to find their pain points.

You can do this by pretending you're one of your customers and navigating through your purchase funnel. Or better yet, find data within your CRM system and analytics tools that can help you answer key questions about how your audience interacts with your business.

Here’s a handy checklist of questions you should get answers to during this research phase:

  • How do customers get information or seek help from your company? ☑
  • How do they make a purchase? ☑
  • Do pain points differ across channels and devices? ☑
  • How can we reduce the number of steps in each interaction? ☑

Next, you’ll want to build your hypothesis. And here’s a template to help you do just that:

I believe [type of person] needs to solve [problem] which happens while [situation], which will allow them to [get value].

For example, you’re the manager of a small spa, whose biggest time-suck is people calling to ask simple questions, meaning other customers are on hold for a long time. If those customers can ask a bot these simple questions, you get three important results:

  1. The hold time for customers overall will diminish
  2. The customer-facing staff in your spa will be able to pay more attention to clients who are physically in front of them
  3. Customers with lengthier questions will be helped sooner

Everybody wins.

Finally, now that you’ve identified and prioritized the situations where conversation can help, you’ll be ready to build a bot as well as a skill.

Wait a minute — what's a skill in this context, and how does it relate to bots? Here's a great explanation from Chris Messina:

  • A bot is an autonomous program on a network
  • A chatbot is a bot that uses human language to communicate
  • An AI assistant is a chatbot that performs tasks or services for an individual
  • A skill is a capability that an AI assistant can learn

Each of them can help look things up, place orders, solve problems, and make things happen more easily, better, and faster.

A few handy resources to build a bot are:

Step 2: Add conversation across the entire customer journey

There are three distinct areas of the customer decision journey where bots and skills can make a big difference.

Bot as introducer

Bots can help your company by being present at the very first event in a purchase path.

Adidas did this wonderfully when they designed a chatbot for their female-focused community Studio LDN, to help create an interactive booking process for the free fitness sessions offered. To drive engagement further, as soon as a booking was made the user would receive reminders and messages from influencer fitness instructors.

The chatbot was the only way for people to book these sessions and it worked spectacularly well.

In the first two weeks, 2,000 people signed up to participate, with repeat use at 80%. Retention after week one was 60%, which the brand claims is far better than an app.

Adidas did something really clever: they advertised the bot across many of their other channels to promote it and help with its discoverability.

You can do the same.

There are countless examples where bots can put their best suit on and act as the first introduction to your company:

  • Email marketing: According to MailChimp research, average email open rates are between 15% and 26%, with click rates just a fraction of that at approximately 2%–5%. That's pretty low when you compare it to Messenger messages, which can have an open rate of well over 90%. Why not make the call-to-action within your email an incentive to engage with your chatbot? For example, something like "message us for 10% off" could be a compelling reason for people to engage.
  • Social media: How about instead of running Facebook ads which direct people to websites, you run an ad connecting people to bots instead? For example, in the ad, advise people to “chat to see the latest styles” or “chat now to get 20% off” and then have your bot start a conversation. Instant engagement! Plus, it’s a more gentle call-to-action as opposed to a hard sell such as “buy now.”
  • Video: How about creating instructional YouTube videos on how to use your bot? Especially helpful since one of the barriers to using this new technology is a lack of awareness about how to use it. A short, quick video that demonstrates what your skill can do could be very impactful. Check out this great example from FitBit and Cortana:

  • Search: As you’ve likely seen by now, Bing has been integrating chatbots within the SERPs itself. You can do a search for bots across different platforms and you’ll be able to add relevant bots directly to your preferred platform right from the search results themselves:

Travel Bots

  • You can engage with local businesses such as restaurants via the Bing Business bot that shows up as part of the local listings:

Monsoon Seattle search with chatbot

The key lesson here is that when your bot is acting as an introducer, give your audience plenty of ways and reasons to chat. Use conversation to tell people about new stuff, and get them to kick off that conversation.

Bot as influencer

To see a bot acting as an effective influencer, let's turn to Chinese giant Alibaba. They developed a customizable chatbot store concierge that they offer free to brands and merchants.

Cutely named dian xiao mi, or “little shop bee,” the concierge is designed to be the most helpful store assistant you could wish for.

For example, if a customer interacting with a clothing brand uploads a photograph of a t-shirt, the bot buzzes in with suggestions of pants to match. Or, if a customer provides his height and weight, the bot can offer suggested sizing. Anyone who has ever shopped online for clothing knows exactly how much pain the latter offering could eliminate.

This helpful style is essentially changing the conversation from “BUY NOW!” to “What do you need right now?”

We should no longer ask: "How should we sell to customers?" The gazillion-dollar question instead is: "How can we connect with them?"

An interesting thing about this change is that, when you think about it for a second, it seems like common sense. How much more trust would you have for a brand that was only trying to help you? If you bought a red dress, how much more helpful would it be if the brand showed you a pic of complementary heels and asked if you want to “complete the look”?

For the chatbot to be truly helpful as an influencer, it needs to learn from each conversation. It needs to remember what you shared from the last conversation, and use it to shape future conversations.

So, say a chatbot from my favorite shoe store knew all about my shoe addiction (is there a cure? Would I even want to be cured of it?), then it could be more helpful via its remarketing efforts.

Imagine how much more effective it would be if we could have an interaction like this:

Shoestore Chatbot: Hi Purna! We’re launching a new collection of boots. Would you like a sneak peek?

Me: YES please!!!

Shoestore Chatbot: Great! I’ll email pics to you. You can also save 15% off your next order with code “MozBlog”. Hurry, code expires in 24 hours.

Me: *buys all the shoes, obvs*

This is Bot-topia. Your brand is being helpful, not pushy. Your bot is cultivating relationships with your customers, not throwing ads at them.

The key lesson here? For your bot to be a successful influencer, you must always consider how it can be helpful and how it can add value.

Bot as closer

Bot: “A, B, C. Always be closing.”

Imagine you want to buy flowers for Mother's Day, but you have very little interest in flowers. When you scroll through the endless options on the website, and then face a long checkout form, you just feel overwhelmed.

1-800-Flowers found your pain point, and acted on it by creating a bot for Facebook Messenger.

It asks you whether you want to select a bunch from one of their curated collections, instantly eliminating the choice paralysis that could see consumers leave the website without purchasing anything.

And once you’ve chosen, you can easily complete the checkout process using your phone’s payment system (e.g. Apple Pay) to make checkout a cinch. So easy, and so friction-free.

The result? According to Digiday, within two months of launch the company saw that 70% of the orders through the bot came from brand-new customers. By building a bot, 1-800-Flowers slam-dunked its way into the hearts of a whole new, young demographic.

Can you think of a better, less expensive way to unlock a big demographic? I can't.

To quote Mr. Zuckerberg again: “It’s pretty ironic. To order from 1-800-Flowers, you never have to call 1-800-Flowers again.”

Think back to that handy checklist of questions from Step 1, especially this one: “How can we reduce the number of steps in each interaction?”

Your goal is to make every step easy and empathetic.

Think of what people would want or need to know as they complete their tasks. For example, if you're looking to transfer money from your bank account, the banking chatbot could save you from overdraft fees by warning you, before you make the transfer, that your account could be overdrawn.
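As a minimal sketch of that kind of guard (the balance and fee figures are hypothetical, not any real bank's logic):

```python
# Hypothetical pre-transfer check: warn the user before an action that
# would overdraw the account, rather than letting the fee surprise them.
def transfer_warning(balance: float, amount: float, fee: float = 35.0) -> str:
    if amount > balance:
        shortfall = amount - balance
        return (f"Heads up: this transfer would overdraw your account by "
                f"${shortfall:.2f} and trigger a ${fee:.2f} fee. Send anyway?")
    return "You're all set. Confirm to send."

print(transfer_warning(balance=120.00, amount=150.00))
# -> Heads up: this transfer would overdraw your account by $30.00 ...
```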

The key lesson here: Leverage your bots to remove any friction and make the experience super relevant and empathetic.

Step 3: Measure the conversation with the right metrics

One of my favorite quotes around how we view metrics versus how we should view metrics comes from Automat CEO Andy Mauro, who says:

“Rather than tracking users with pixels and cookies, why not actually engage them, learn about them, and provide value that actually meets their needs?”

Again, this is common sense once you’ve read it. Of course it makes sense to engage our users and provide value that meets their needs!

We can do this because bots and skills give us information in our customers' own words.

Here’s a short list of KPIs that you should look at (let’s call it “bot-alytics”):

  • Delivery and open rates: If the bot starts a conversation, did your customer open it?
  • Click rates: If your bot delivered a link in a chat, did your customer click on it?
  • Retention: How often do they come back and chat with you?
  • Top messages: What messages are resonating with your customers more than others?
  • Conversion rates: Do they buy?
  • Sentiment analysis: Do your customers express happiness and enthusiasm in their conversation with the bot, or frustration and anger?

Using bot-alytics, you can easily build up a clear picture of what is working for you, and more importantly, what is working for your customer.
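As a rough illustration, here's a minimal sketch of computing a few of these KPIs from a chat event log. The event names and records are hypothetical stand-ins for whatever your bot platform actually reports.

```python
# Compute a few "bot-alytics" KPIs from a hypothetical event log.
from collections import Counter

# (user_id, event_type) pairs a bot platform might record.
events = [
    ("u1", "delivered"), ("u1", "opened"), ("u1", "clicked"), ("u1", "purchase"),
    ("u2", "delivered"), ("u2", "opened"),
    ("u3", "delivered"),
]

counts = Counter(event for _, event in events)
print(f"Open rate:  {counts['opened'] / counts['delivered']:.0%}")    # 67%
print(f"Click rate: {counts['clicked'] / counts['opened']:.0%}")      # 50%
print(f"Conversion: {counts['purchase'] / counts['delivered']:.0%}")  # 33%
```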

And don’t forget to ask: What can you learn from bot-alytics that can help other channels?

The future’s bright, the future’s bots

What were once dumb machines are now smart enough that we can engage with them in a very human way. It presents the opportunity of a generation for businesses of all shapes and sizes.

Our customers are beginning to trust bots and digital personal assistants for recommendations, needs, and more. They are the friendly neighborhood machines that the utopian vision of a robotic future presents. They should be available to people anywhere: from any device, in any way.

And if that hasn’t made you pencil in a “we need to talk about bots” meeting with your company, here’s a startling prediction from Accenture. They believe that in five years, more than half of your customers will select your services based on your AI instead of your traditional brand.

In three steps, you can start your journey toward bot-topia and having your conversations convert. What are you waiting for?



How the Mobile-First Index Disrupts the Link Graph

Posted by rjonesx.

It’s happened to all of us. You bring up a webpage on your mobile device, only to find out that a feature you were accustomed to using on desktop simply isn’t available on mobile. While frustrating, it has always been a struggle for web developers and designers alike to simplify and condense their site on mobile screens without stripping features or content that would otherwise clutter a smaller viewport. The worst-case scenario for these trade-offs is that some features are reserved for desktop environments, or perhaps a user can opt out of the mobile view. Below is an example of how my personal blog displays the mobile version using a popular plugin by ElegantThemes called HandHeld. As you can see, the page is heavily stripped down and is far easier to read… but at what cost? And at what cost to the link graph?

My personal blog drops 75 of the 87 links, and all of the external links, when the mobile version is accessed. So what happens when the mobile versions of sites become the primary way the web is accessed, at scale, by the bots which power major search engines?

Google’s announcement that it will proceed with a mobile-first index raises new questions about how the link structure of the web as a whole might be influenced once these truncated web experiences become the first (and sometimes only) version of the web Googlebot encounters.

So, what’s the big deal?

The concern, which no doubt Google engineers have studied internally, is that mobile websites often remove content and links in order to improve user experience on a smaller screen. This abbreviated content fundamentally alters the link structure which underlies one of the most important factors in Google’s rankings. Our goal is to try and understand the impact this might have.

Before we get started, one giant unknown variable which I want to be quick to point out is that we don’t know what percentage of the web Google will crawl with both its desktop and mobile bots. Perhaps Google will choose to be “mobile-first” only on sites that have historically displayed an identical codebase to both the mobile and desktop versions of Googlebot. However, for the purposes of this study, I want to show the worst-case scenario, as if Google chose not only to go “mobile-first,” but in fact to go “mobile-only.”

Methodology: Comparing mobile to desktop at scale

For this brief study, I grabbed 20,000 random websites from the Quantcast Top Million, then crawled each two levels deep, spoofing both the mobile and desktop versions of Googlebot. With this data, we can begin to compare how different the link structure of the web might look from each perspective.
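As a simplified sketch of that comparison (a single homepage fetch per user-agent rather than the study's full two-level crawl over 20,000 sites; the target URL is a hypothetical example, and the user-agent strings are the ones Google has publicly documented):

```python
# Fetch a homepage as both the mobile and desktop Googlebot user-agents,
# extract the anchor links, and diff the two sets.
import requests
from bs4 import BeautifulSoup

MOBILE_UA = ("Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
             "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 "
             "Mobile Safari/537.36 (compatible; Googlebot/2.1; "
             "+http://www.google.com/bot.html)")
DESKTOP_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

def extract_links(url: str, user_agent: str) -> set:
    """Return the set of href values found on the page for this user-agent."""
    resp = requests.get(url, headers={"User-Agent": user_agent}, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    return {a["href"] for a in soup.find_all("a", href=True)}

url = "https://example.com/"  # hypothetical site to compare
mobile_links = extract_links(url, MOBILE_UA)
desktop_links = extract_links(url, DESKTOP_UA)

print(f"shared:       {len(mobile_links & desktop_links)}")
print(f"mobile-only:  {len(mobile_links - desktop_links)}")
print(f"desktop-only: {len(desktop_links - mobile_links)}")
```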

Homepage metrics

Let’s start with some descriptive statistics of the home pages of these 20,000 randomly selected sites. Of the sites analyzed, 87.42% had the same number of links on their homepage regardless of whether the bot was mobile- or desktop-oriented. Of the remaining 12.58%, 9% had fewer links and 3.58% had more. This doesn’t seem too disparate at first glance.

Perhaps more importantly, only 79.87% had identical links on the homepage when visited by desktop and mobile bots. Just because the same number of links were found didn’t mean they were actually the same links. This is important to take into consideration because links are the pathways which bots use to find content on the web. Different paths mean a different index.

Among the homepage links, we found a 7.4% drop in external links. This could mean a radical shift in some of the most important links on the web, given that homepage links often carry a great deal of link equity. Interestingly, the biggest “losers” as a percentage tended to be social sites. In retrospect, it seems reasonable that one of the common types of links a website might remove from their mobile version would be social share buttons because they’re often incorporated into the “chrome” of a page rather than the content, and the “chrome” often changes to accommodate a mobile version.

The biggest losers as a percentage in order were:

  1. linkedin.com
  2. instagram.com
  3. twitter.com
  4. facebook.com

So what’s the big deal about 5–15% differences in links when crawling the web? Well, it turns out that these numbers tend to be biased by sites with lots of links that don’t have a separate mobile version. Moreover, most of those links are main navigation links, so when you crawl deeper, you just find the same links again. But the sites that do deviate end up producing radically different second-level crawls.

Second-level metrics

Now this is where the data gets interesting. As we continue to crawl out on the web using crawl sets that are influenced by the links discovered by a mobile bot versus a desktop bot, we’ll continue to get more and more divergent results. But how far will they diverge? Let’s start with size. While we crawled an identical number of home pages, the second-tier results diverged based on the number of links found on those original home pages. Thus, the mobile crawlset was 977,840 unique URLs, while the desktop crawlset was 1,053,785. Already we can see a different index taking shape — the desktop index would be much larger. Let’s dig deeper.

I want you to take a moment and really focus on this graph. Notice there are three categories:

  • Mobile Unique: Blue bars represent unique items found by the mobile bot
  • Desktop Unique: Orange bars represent unique items found by the desktop bot
  • Shared: Gray bars represent items found by both

Notice also that there are four tests:

  • Number of URLs discovered
  • Number of Domains discovered
  • Number of Links discovered
  • Number of Root Linking Domains discovered

Now here is the key point, and it’s really big. There are more URLs, Domains, Links, and Root Linking Domains unique to the desktop crawl result than there are shared between the desktop and mobile crawler. The orange bar is always taller than the gray. This means that by just the second level of the crawl, the majority of link relationships, pages, and domains are different in the indexes. This is huge. This is a fundamental shift in the link graph as we have come to know it.

And now for the big question, what we all care about the most — external links.

A whopping 63% of external links are unique to the desktop crawler. In a mobile-only crawling world, the total number of external links was halved.

What is happening at the micro level?

So, what’s really causing this huge disparity in the crawl? Well, we know it has something to do with a few common shortcuts to making a site “mobile-friendly,” which include:

  1. Subdomain versions of the content that have fewer links or features
  2. The removal of links and features by user-agent detecting plugins

Of course, these changes might make the experience better for your users, but they create a different experience for bots. Let’s take a closer look at one site to see how this plays out.

This site has ~10,000 pages according to Google and has a Domain Authority of 72 and 22,670 referring domains according to the new Moz Link Explorer. However, the site uses a popular WordPress plugin that abbreviates the content down to just the articles and pages on the site, removing links from descriptions in the articles on the category pages and removing most if not all extraneous links from the sidebar and footer. This particular plugin is used on over 200,000 websites. So, what happens when we fire up a six-level-deep crawl with Screaming Frog? (It’s great for this kind of analysis because we can easily change the user-agent and restrict settings to just crawl HTML content.)

The difference is shocking. First, notice that in the mobile crawl on the left, there is clearly a low number of links per page and that number of links is very steady as you crawl deeper through the site. This is what produces such a steady, exponential growth curve. Second, notice that the crawl abruptly ended at level four. The site just didn’t have any more pages to offer the mobile crawler! Only ~3,000 of the ~10,000 pages Google reports were found.

Now, compare this to the desktop crawler. It explodes in pages at level two, collecting nearly double the total pages of the mobile crawl at this level alone. Now, recall the graph before showing that there were more unique desktop pages than there were shared pages when we crawled 20,000 sites. Here is confirmation of exactly how it happens. Ultimately, 6x the content was made available to the desktop crawler in the same level of crawl depth.

But what impact did this have on external links?

Wow. 75% of the external, outbound links were culled in the mobile version: 4,905 external links were found in the desktop version, while only 1,162 were found in the mobile one. Remember, this is a DA 72 site with over twenty thousand referring domains. Imagine losing those links because the mobile index no longer finds the backlinks. What should we do? Is the sky falling?

Take a deep breath

Mobile-first isn’t mobile-only

The first important caveat to all this research is that Google isn’t giving up on the desktop — they’re simply prioritizing the mobile crawl. This makes sense, as the majority of search traffic is now mobile. If Google wants to make sure quality mobile content is served, they need to shift their crawl priorities. But they also have a competing desire to find content, and doing so requires using a desktop crawler so long as webmasters continue to abbreviate the mobile versions of their sites.

This reality isn’t lost on Google. In the original official mobile-first announcement, they write:

If you are building a mobile version of your site, keep in mind that a functional desktop-oriented site can be better than a broken or incomplete mobile version of the site.

Google took the time to state that a desktop version can be better than an “incomplete mobile version.” I don’t intend to read too much into this statement other than to say that Google wants a full mobile version, not just a postcard.

Good link placements will prevail

One anecdotal outcome of my research was that the external links which tended to survive the cull of a mobile version were often placed directly in the content. External links in sidebars like blogrolls were essentially annihilated from the index, but in-content links survived. This may be a signal Google picks up on: external links that appear in both the mobile and desktop versions tend to be the kinds of links people might actually click on.

So, while there may be fewer links powering the link graph (or at least there might be a subset that is specially identified), if your links are good, content-based links, then you have a chance to see improved performance.

I was able to confirm this by looking at a subset of known good links. Using Fresh Web Explorer, I looked up fresh links to toysrus.com, which is currently gaining a great deal of attention due to stores closing. We can feel confident that most of these links will be in-content because the articles themselves are about the relevant, breaking news regarding Toys R Us. Sure enough, after testing 300+ mentions, we found the links to be identical in the mobile and desktop crawls. These were good, in-content links and, subsequently, they showed up in both versions of the crawl.

Selection bias and convergence

It is probably the case that popular sites are more likely to have a mobile version than non-popular sites. Now, they might be responsive — at which point they would yield no real differences in the crawl — but at least some percentage would likely be m.* domains or utilize plugins like those mentioned above which truncate the content. At the lower rungs of the web, older, less professional content is likely to have only one version which is shown to mobile and desktop devices alike. If this is the case, we can expect that over time the differences in the index might begin to converge rather than diverge, as my study looked only at sites that were in the top million and only crawled two levels deep.

Moreover (and this one is a bit speculative), I think that over time there will be convergence between the mobile and desktop indexes. I don’t think the link graphs will grow exponentially different, as the linked web is only so big. Rather, the paths by which certain pages are reached, and the frequency with which they are reached, will change quite a bit. So, while the link graph will differ, the set of URLs making up the link graph will largely be the same. Of course, some percentage of the mobile web will remain wholly disparate. The large number of sites that use dedicated mobile subdomains or plugins that remove substantial sections of content will remain like mobile islands in the linked web.

Impact on SERPs

It’s difficult at this point to say what the impact on search results will be. It will certainly not leave the SERPs unchanged. What would be the point of Google making and announcing a change to its indexing methods if it didn’t improve the SERPs?

That being said, this study wouldn’t be complete without some form of impact assessment. Hat tip to JR Oakes for giving me this critique, otherwise I would have forgotten to take a look.

First, there are a couple of things which could mitigate dramatic shifts in the SERPs already, regardless of the veracity of this study:

  • A slow rollout means that shifts in SERPs will be lost to the natural ranking fluctuations we already see.
  • Google can seed URLs found by mobile or by desktop into their respective crawlers, thereby limiting index divergence. (This is a big one!)
  • Google could choose to consider, for link purposes, the aggregate of both mobile and desktop crawls, not counting one to the exclusion of the other.

Second, the relationships between domains may be less affected than other index metrics. What is the likelihood that the relationship between Domain X and Domain Y (more or less links) is the same for both the mobile- and desktop-based indexes? If the relationships tend to remain the same, then the impact on SERPs will be limited. We will call this relationship being “directionally consistent.”

To accomplish this part of the study, I took a sample of domain pairs from the mobile index and compared their relationship (more or less links) to their performance in the desktop index. Did the first have more links than the second in both the mobile and desktop? Or did they perform differently?

It turns out that the indexes were fairly close in terms of directional consistency. That is to say that while the link graphs as a whole were quite different, when you compared one domain to another at random, they tended in both data sets to be directionally consistent. Approximately 88% of the domains compared maintained directional consistency across the indexes. This test was only run comparing the mobile index domains to the desktop index domains. Future research might explore the reverse relationship.
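To make the test concrete, here's a minimal sketch of a directional-consistency check over hypothetical link counts: for random domain pairs, does "X has more links than Y" hold in both indexes? The domains and counts are invented stand-ins for the two crawls.

```python
# Directional consistency: for random domain pairs, check whether the
# "more links than" relationship agrees between two indexes.
import random

# Hypothetical link counts per domain from the two crawls.
mobile_links = {"a.com": 120, "b.com": 80, "c.com": 45, "d.com": 200}
desktop_links = {"a.com": 310, "b.com": 150, "c.com": 90, "d.com": 160}

domains = list(mobile_links)
trials = 10_000
consistent = 0
for _ in range(trials):
    x, y = random.sample(domains, 2)
    same_direction = (mobile_links[x] > mobile_links[y]) == \
                     (desktop_links[x] > desktop_links[y])
    consistent += same_direction

print(f"directionally consistent: {consistent / trials:.0%}")  # ~83% here
```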

So what’s next?: Moz and the mobile-first index

Our goal for the Moz link index has always been to be as much like Google as possible. It is with that in mind that our team is experimenting with a mobile-first index as well. Our new link index and Link Explorer beta seek to be more than simply one of the largest link indexes on the web: we want them to be the most relevant and useful, and we believe part of that means shaping our index with methods similar to Google’s. We will keep you updated!



Google Confirms Chrome Usage Data Used to Measure Site Speed

Posted by Tom-Anthony

During a discussion with Google’s John Mueller at SMX Munich in March, he told me an interesting bit of data about how Google evaluates site speed nowadays. It has gotten a bit of interest from people when I mentioned it at SearchLove San Diego the week after, so I followed up with John to clarify my understanding.

The short version is that Google is now using performance data aggregated from Chrome users who have opted in as a datapoint in the evaluation of site speed (and as a signal with regard to rankings). This is a positive move (IMHO), as it means we don’t need to treat optimizing site speed for Google as a separate task from optimizing it for users.

Previously, it has not been clear how Google evaluates site speed, and it was generally believed to be measured by Googlebot during its visits — a belief enhanced by the presence of speed charts in Search Console. However, the onset of JavaScript-enabled crawling made it less clear what Google is doing — they obviously want the most realistic data possible, but it’s a hard problem to solve. Googlebot is not built to replicate how actual visitors experience a site, and so as the task of crawling became more complex, it makes sense that Googlebot may not be the best mechanism for this (if it ever was the mechanism).

In this post, I want to recap the pertinent data around this news quickly and try to understand what this may mean for users.

Google Search Console

Firstly, we should clarify our understanding of what the “time spent downloading a page” metric in Google Search Console is telling us. Most of us will recognize graphs like this one:

Until recently, I was unclear about exactly what this graph was telling me. But handily, John Mueller comes to the rescue again with a detailed answer [login required] (hat tip to James Baddiley from Chillisauce.com for bringing this to my attention):

John clarified what this graph is showing:

It’s technically not “downloading the page” but rather “receiving data in response to requesting a URL” – it’s not based on rendering the page, it includes all requests made.

And that it is:

this is the average over all requests for that day

Because Google may be fetching a very different set of resources every day when it’s crawling your site, and because this graph does not account for anything to do with page rendering, it is not useful as a measure of the real performance of your site.

For that reason, John points out that:

Focusing blindly on that number doesn’t make sense.

With which I quite agree. The graph can be useful for identifying certain classes of backend issues, but there are also probably better ways for you to do that (e.g. WebPageTest.org, of which I’m a big fan).

Okay, so now we understand that graph and what it represents, let’s look at the next option: the Google WRS.

Googlebot & the Web Rendering Service

Google’s WRS is their headless browser mechanism, based on Chrome 41, which is used for things like “Fetch as Google” in Search Console and is increasingly what Googlebot uses when it crawls pages.

However, we know that this isn’t how Google evaluates pages because of a Twitter conversation between Aymen Loukil and Google’s Gary Illyes. Aymen wrote up a blog post detailing it at the time, but the important takeaway was that Gary confirmed that WRS is not responsible for evaluating site speed:

Twitter conversation with Gary Illyes

At the time, Gary was unable to clarify what was being used to evaluate site performance (perhaps because the Chrome User Experience Report hadn’t been announced yet). It seems as though things have progressed since then, however. Google is now able to tell us a little more, which takes us on to the Chrome User Experience Report.

Chrome User Experience Report

Introduced in October last year, the Chrome User Experience Report “is a public dataset of key user experience metrics for top origins on the web,” whereby “performance data included in the report is from real-world conditions, aggregated from Chrome users who have opted-in to syncing their browsing history and have usage statistic reporting enabled.”

Essentially, certain Chrome users allow their browser to report back load time metrics to Google. The report currently has a public dataset for the top 1 million+ origins, though I imagine they have data for many more domains than are included in the public data set.

In March I was at SMX Munich (amazing conference!), where along with a small group of SEOs I had a chat with John Mueller. I asked John about how Google evaluates site speed, given that Gary had clarified it was not the WRS. John was kind enough to shed some light on the situation, but at that point, nothing was published anywhere.

However, since then, John has confirmed this information in a Google Webmaster Central Hangout [15m30s, in German], where he explains they’re using this data along with some other data sources (he doesn’t say which, though notes that it is in part because the data set does not cover all domains).

At SMX John also pointed out how Google’s PageSpeed Insights tool now includes data from the Chrome User Experience Report:

The public dataset of performance data for the top million domains is also available in a public BigQuery project, if you’re into that sort of thing!
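If you want to poke at it yourself, here's a minimal sketch of pulling one origin's first contentful paint histogram out of that public dataset with the BigQuery Python client. The table name follows the report's YYYYMM convention, the origin is a hypothetical example, and the query assumes the schema documented at the report's launch.

```python
# Query the public Chrome User Experience Report dataset for one origin's
# first contentful paint histogram. Assumes BigQuery credentials are set up.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT bin.start, SUM(bin.density) AS density
    FROM `chrome-ux-report.all.201801`,
         UNNEST(first_contentful_paint.histogram.bin) AS bin
    WHERE origin = 'https://example.com'
    GROUP BY bin.start
    ORDER BY bin.start
"""

for row in client.query(query).result():
    print(f"FCP bin starting at {row.start} ms: density {row.density:.4f}")
```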

We can’t be sure what all the other factors Google is using are, but we now know they are certainly using this data. As I mentioned above, I also imagine they are using data on more sites than are perhaps provided in the public dataset, but this is not confirmed.

Pay attention to users

Importantly, this means that there are changes you can make to your site that Googlebot is not capable of detecting, which are still detected by Google and used as a ranking signal. For example, we know that Googlebot does not support HTTP/2 crawling, but now we know that Google will be able to detect the speed improvements you would get from deploying HTTP/2 for your users.
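One practical consequence: you can check what your server offers real browsers without involving Googlebot at all. Here's a small stdlib sketch (with a placeholder hostname) that asks a server, via TLS ALPN negotiation, whether it will speak HTTP/2:

```python
# Check whether a server negotiates HTTP/2 ("h2") over ALPN, which is how
# browsers such as Chrome select the protocol. Requires Python 3.5+.
import socket
import ssl

def negotiated_protocol(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])  # offer HTTP/2 first
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol() or "http/1.1"

print(negotiated_protocol("example.com"))  # "h2" means HTTP/2 is offered
```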

The same is true if you were to use service workers for advanced caching behaviors — Googlebot wouldn’t be aware, but users would. There are certainly other such examples.

Essentially, this means that there’s no longer a reason to worry about pagespeed for Googlebot, and you should instead just focus on improving things for your users. You still need to pay attention to Googlebot for crawling purposes, which is a separate task.

If you are unsure where to look for site speed advice, then you should look at:

That’s all for now! If you have questions, please comment here and I’ll do my best! Thanks!

