
App Search – Whiteboard Friday

Posted by Tom-Anthony

App search is growing and changing, and there’s more opportunity than ever to both draw customers in at the top of the funnel and retain them at the bottom. In today’s special British Whiteboard Friday, Tom Anthony and Will Critchlow of Distilled dig into everything app search and highlight a future where Google may have some competition as the search engine giant.

App Search Whiteboard

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Tom: Howdy, and welcome to another British Whiteboard Friday. I’m Tom Anthony, head of the R&D Department here at Distilled. This is Will Critchlow, founder and CEO. Today we’re going to be talking about app search. App search is really, really important at the moment because research shows that the average user is spending 85% of their time in apps on their mobile phone.
Will, tell us a bit about app search.

Will: When we say “app search,” we could potentially mean three things. The first is App Store Optimization or ASO, which is not what we’re going to be talking about today. It’s an important area, and it’s got its own quirks and intricacies, but it’s pretty far down the funnel. Most of the searches in app stores are either branded or high-level category searches.

What we want to spend more of our time on today is…

App indexing

This is right at the top of the funnel, typically, and it’s opening up opportunities to rank in long-tail search. So this gives you the chance to acquire new users via search, really for the first time in app marketing.
The third element that we’ll touch on later is the personal corpus, an idea right down at the bottom of the funnel that’s about retaining users once you have them.

The critical thing is app indexing. That’s what we want to spend most of our time on. What are the basics, Tom? What are the prerequisites for app indexing?

Tom: The first thing, the most important thing to understand is deep links.

Close-up of App Search whiteboard: a tree graph depicting Deep Links leading to the Distilled Twitter account.

Tom: People sometimes struggle to understand deep links, but it’s a very simple concept. It’s the parallel of what a normal URL is for a web page. A URL takes you to a specific web page rather than a website. Deep links allow you to open a specific screen in an app.
So you might click a deep link. It’s just a form of a URL. It might be on a web page. It might be in another app. It can open you to a specific point in an app, for example the @Distilled page in the Twitter app.
There have been various competing standards for how deep links should work on different platforms. But what’s important to understand is that everyone is converging on one format, so don’t bother trying to learn all the intricacies of the old ones.
The important format is what we call universal links. Will, tell us a bit about them.

Will: Universal links — this is actually Apple’s terminology but, as Tom said, it’s spreading everywhere. The idea is that you take a URL, just like the regular HTTP or HTTPS URLs we’re used to, and this URL would normally open up the web page on the desktop.

Close-up of App Search whiteboard: a URL pointing at a web page

Will: Now if instead we were on a mobile device — and we’ve brought our mobile whiteboard again to demonstrate this concept — then if you clicked on this same link on your mobile device, same URL, it would open up the deep view within the app like Tom mentioned.
So the critical thing about the universal link is that the form of this link is the same, and it’s shared across those different devices and platforms.
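To make that concrete, here’s a sketch of the two styles of link (the twitter:// scheme here stands in for any app-specific custom scheme; treat the exact URLs as illustrative):

    <!-- Old-style, platform-specific deep link using a custom URL scheme: -->
    <a href="twitter://user?screen_name=distilled">@Distilled in the Twitter app</a>

    <!-- Universal link: one ordinary HTTPS URL. A desktop browser opens the
         web page; a mobile device with the app installed opens the app view. -->
    <a href="https://twitter.com/distilled">@Distilled</a>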

Now before that was the case, in the world where we had different kinds of links, different kinds of link formats for the different devices and platforms, it was important that we mapped our web pages to those mobile URLs. There were various ways of doing that. So you could use Schema.org markup on your web pages. You could use JSON-LD. You could match them all up in your robots.txt. Or you could use rel=”alternate” links.

Tom: This is much like how you would’ve done the same thing for the mobile version of a desktop web page.

Will: Right. Yeah, if you had a different mobile website, an m-dot website for example, you would use rel=”alternate” to match those two together. In the old world of deep links, where there were the application-specific links, you could use this rel=”alternate” to map them together.

Close-up of whiteboard: a normal desktop page on the left with a two-sided arrow with "alternate" written underneath, a drawing of a mobile phone to the right
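In markup, that old-world mapping looked something like this (a sketch using the android-app:// URI notation from Google’s app indexing; the package name and path here are illustrative):

    <!-- On the web page: advertise the app screen that mirrors this URL -->
    <link rel="alternate"
          href="android-app://com.twitter.android/twitter/user?screen_name=distilled" />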

If you’re using universal links, it’s not so much about this mapping anymore, about saying the app version is over there; it’s about advertising the fact that you have an app that can open this particular view or web page. That’s obviously important for getting the app indexed and getting it ranking.

Tom: Google and Co. are encouraging you to have parity at the moment between your app and your website. So you’ve got your desktop site, your mobile site, and then you’ve got the same screen in the mobile application.

Will: Absolutely, and they’d like that all to be on these universal URLs. Now all of this so far is pretty familiar to us as search marketers. We understand the concept of having these URLs, having them crawled, having them indexed. But in the app world there’s more opportunity than just crawling because both Google and Apple on iOS have opened up APIs, which means that you can push information to the search engine about how the app is actually being used, which opens up all kinds of interesting possibilities.

Tom: Absolutely. The first one is new types of ranking factor, the big one being engagement. Apple have already confirmed that they’re going to use engagement as a ranking factor. We anticipate that Google will do the same thing.
This is the idea that users opening your app, using your app, spending time in your app is a signal of the value of that app. So it’s more likely to appear in search results. There are two layers to this. The first is appearing in personalized search results. If I use a specific app a lot, then I’ll expect to see it more.
Then, there’s the second level, which is the aggregated user statistics, which is where they see that most people like this app for this thing, so other people will see that in the search results.

The second point is taking us back to what Will mentioned at the start.

The personal corpus

This is the idea where you get search results specific to yourself coming from your data. So you might run a search and you’ll see things such as your messages, entries in your calendar, photos from your gallery. I’d see different results to Will, and I’d see them all in the same interface as where I’d see the public search results.

So I might do a search for a restaurant. I might see a link to the restaurant’s website in the public search results, but I might also see that Will sent me a message about going for dinner at that restaurant, and there might be an entry in my calendar, which other people wouldn’t see. It’s a really interesting way that we might start to appear in search results in a new format.

Then the third interesting thing here is the idea of app-only indexing.

Closeup of whiteboard: Showing the top of the funnel (app indexing) and the bottom of the funnel (a personal corpus).

With universal links, we talked about needing parity between the desktop site, the mobile site, the app. With app-only indexing, we could be looking at a model where there are screens in apps that don’t have a web equivalent. So you might start to see search results where there’s no possibility of a website actually appearing for that. That’s also a fascinating new model. Apple already do this. Google have confirmed that they’re going to be doing this. So it’s definitely coming.

Then further out into the future one of the important things is going to be app streaming. So Will, are you going to tell us a bit about that?

Will: Right. App streaming is another thing that Google has announced. It’s kind of available in limited trials, but we think it’s going to be a bigger thing, because they’re trying to attack a core problem: to use an app (and so for an app to be worth surfacing in search results), if you haven’t already got it, you have to download it and you have to install it. That’s both a slow process and a data-hungry process. If you’re just kicking the tires, if this is an app you’ve never seen before, it’s a little bit too much to ask you to do a multi-megabyte download and then install the app, just to try it out.

So what they’re trying with app streaming is saying, “We can simplify that process. This is an app you’ve not used before. Let’s preview it for you.” So you can use it. You can see it. You can certainly check out the public areas of the app and then install it if it’s useful to you.

The current setup is a little bit of a kludge: they’re running the app in a virtual machine in the cloud and streaming it to the device. It’s all very weird. We think the details are going to change.

Tom: Yeah.

Will: Fundamentally, they’re going to figure out a way to make this streamlined and smooth, and it will become much easier to use apps for the first time, making it possible to expose them in a much broader array of search results. Then there’s all kinds of other things and stuff coming in the future. I mean, Tom’s passionate about the personal assistant.

Tom: Yeah. The intelligent personal assistant thing is really, really exciting to me. By intelligent personal assistant, I mean things like Siri, Cortana, Google Now, and the up-and-coming ones — Facebook M and SoundHound’s Hound app. What’s fascinating about personal assistants is that when you do a search, you do a search for weather in Siri for example, you just get a card about the weather for where you are. You don’t get taken to a list of results and taken elsewhere. You just get a direct answer.
Most of the personal assistants are already able to answer a lot of search queries using this direct answer methodology. But what we think is exciting about apps is that we anticipate a future where you can install an app and it allows the personal assistants to tap into that app’s data to answer queries directly. So you can imagine I could do a search for “are the trains running on time.” Siri taps into my train app, pulls that data, and just lets me know right there. So no longer am I opening the app. What’s important is that the app is actually a gateway through to a data source in the backend. We start to get all this data pulled into a central place.

Will: It’s fascinating. You mentioned a whole bunch of different tools, companies, platforms coming up there. The final thing that we want to point out is that this is a really interesting space because Google’s had a lock on web search for what feels like forever.
App search is a whole new area. Obviously, Google has some advantages, just through the fact that they own Android, they’ve got their apps installed in so many places, and it’s part of people’s habits. But there are certainly opportunities. It’s the first chink in the armor, and it means that maybe there are some upcoming players who will be interesting to watch and interesting for us as marketers to pay attention to.

Thank you for joining us here in Distilled’s London HQ. It’s been great talking to you. Thank you for taking the time. Bye.

Tom: Bye.

Video transcription by Speechpad.com



Building Brand Value & Customer Loyalty: Videos from MozTalk in Philly

Posted by Danielle_Launders

[Estimated read time: 3 minutes]

In April, we hopped on a plane to go visit our friends in the City of Brotherly Love to share an evening of learning, networking, and, of course, eating, for our latest MozTalk. Wait, what’s a MozTalk, you ask? Well, let me tell you! MozTalks are after-work events, featuring two to four speakers, focusing on topics relevant to online marketing. These one-night events are a way to engage and share ideas amongst the community, meet old friends and new (that’s you!), and learn great tips from some brilliant minds. Oh, did I mention there is food and some awesome swag? Yes, let’s not forget the most important parts.

Our most recent MozTalk focused on innovative strategies for building brand value and keeping your customers coming back. Topics ranged from human interaction through customer service to tailoring PPC ads to keep your customers coming back for more. We had a lineup of four outstanding speakers: Adam Melson from Seer Interactive, Erin McCaul from Moz, Purna Virji from Microsoft, and Wil Reynolds from Seer Interactive. Watch the presentations below for the full scoop:

Adam Melson: Branding & Revenue Wins That Ignore Traditional SEO

Top takeaways

  • Fixing broken customer experiences should be a priority; make sure to measure everything. By doing so, you may find new key opportunities to enhance the customer experience and gain loyalty.
  • Look past the basic search results; more and more people are using websites outside of Google to search, including Reddit.

Erin McCaul: Customer Engagement: Why Your Help Team Should Have a Seat at the Table

Top takeaway

  • Excellent customer service is about empathy and should be the foundation of any good brand.

Purna Virji: Clever Ways the World’s Most Valuable Brands Use PPC

Top takeaways

  • Build your brand with PPC by focusing on making the customer’s life easier, showing them that you care, and making it easy to be a loyal customer.
  • With the increase in the use of voice search, be prepared for search results with misspellings. Make sure to account for that as a brand and in PPC.
  • Utilize sitelink extensions by focusing on the stage of interest of a customer or according to the customer’s needs.

Wil Reynolds: A Modern SEO’s View on Authority vs. Trust

Top takeaway

  • Don’t confuse popularity with trust, and remember, rankings don’t equal trust or equal money.

Missed the previous talks?

Now that your brand value is at an all-time high and your customers love you, it’s time to level up even more. Our first MozTalk covers Need-to-Know SEO and Making Your Blog Audience Fall in Love with Rand Fishkin and Geraldine DeRuiter, while our second MozTalk dives into 5 Years of SEO Changes and Better Goal-Setting with Rand Fishkin and Dr. Pete Meyers.

Join us for the next one!

We are excited to announce that we are headed to Denver for our next MozTalk on July 19th to learn all things content. Join us for a night full of learning, networking, and fun. Keep an eye on our Events page for more details to come, and we hope to see you there!



Optimizing for Accessibility + SEO: Site & Page Structure Overlaps

Posted by Laura.Lippay

[Estimated read time: 9 minutes]

(header image photo by H.L.W. from the Blind Photographers Flickr Group.)

Happy Global Accessibility Awareness Day!

39 million people are blind. 285 million are visually impaired. 15% of the world’s population — over a billion people — have some form of disability.

Disabilities can come in many shapes and forms: physical, cognitive, visual, hearing. How does someone who is blind check their email? How does someone who can’t hear decipher any of the tens of millions of videos on YouTube? How do disabled students keep an upper hand when all of their classmates are using the Internet for research?

Many use assistive technologies to help with these tasks. Screen readers are like search engines in that they can’t see the content of a page, and instead rely on signals in the code to navigate the web and understand the content of a page.

In this, and in two follow-up posts, we’ll explore the overlaps between optimizing for search engines and optimizing for screen readers and assistive technologies. But first, let’s get a better understanding of what screen readers are all about.

In the video below, Kyle Woodruff navigates the web without his sight, quadriplegic web user Gordon Richins navigates the web without his hands, and Curtis Radford navigates the web without his hearing. You’ll see that they still face challenges because of the gaps between what assistive technology (AT) can do and what is actually built in a way that can be accessed and navigated by assistive technologies.

Because he’s so entertaining (even has his own Amazon series), I also must introduce you to Tommy Edison, The Blind Film Critic. See how he’s easily using Twitter and YouTube with the assistive technologies built into the iPhone.

Give it a shot

Here’s a simple way to try something like this out for yourself right now:

  1. Open a Chrome browser
  2. Install the screen reader extension ChromeVox and enable it.
  3. For a better experience, utilize some of this ChromeVox help:
    1. ChromeVox tutorial (quick, easy, and highly recommended)
    2. ChromeVox shortcuts reference (print it and tape it to your monitor, if you’re sighted).
  4. Go for it. Navigate.

Optionally, you can also enable these more complex free screen readers on your device:

  1. VoiceOver on Mac OSX, iOS. User Guide. Shortcuts.
  2. NVDA free screen reader for Windows machines. User guide. Shortcuts.

Try any of these on your own website. How painful is it?

Do something about it

Consider going beyond simply being aware of accessibility (A11y) for Global Accessibility Awareness Day by utilizing some of your technical code optimization chops to help millions of disabled fellow humans have a better experience on the web.

Let’s be very clear though: learning web accessibility is no small task. This is a complex industry where assistive technologies go beyond just optimizing a bit of web code for screen readers.

But here’s where it’s simple for you to start: Some of the tasks of optimizing for web accessibility overlap with the things we look at for search engine optimization (SEO). At the most basic level, it’s important not to cannibalize the screen reader experience by over-optimizing for SEO by doing things like keyword stuffing. At a more advanced level, you’re top-notch if you can recognize the overlaps and consider how to optimize for both screen readers (AKA humans, in essence) and for search engines (bots).

In this post we’ll look at some simple structure overlaps. In next week’s post, we’ll dive more into overlaps in formatting and links, including what may be somewhat controversial: hidden text. And in the final post, we’ll look at accessibility and SEO overlaps optimizing video, images and non-text elements, including making infographics accessible.

SEO/A11y Overlaps: Structure

Markup on a page helps both search engines and assistive technologies (AT) like screen readers to understand what elements are on a page. For SEO, different elements like title tags, headings, or some schema markup can have more weight or value than other markup. People using assistive technologies rely on these structural elements to navigate through the page without being able to see it or without being able to use a mouse.

We’ll start with some easy SEO + accessibility overlaps: structural elements that you might deal with in your everyday coding or optimization. Consider the accessibility side of implementing and optimizing these elements to provide a better experience for disabled visitors while you also build for SEO.

Title tags

Title tags (not to be confused with title attributes) are important in search engine optimization for (1) providing context as to what the page is about when Google crawls it and (2) determining how the page appears in the search result display. Over the years, while SEO techniques have come and gone and fluctuated in perceived effectiveness, page titles have continued to be one of the more highly valued on-page tactics.

When considering accessibility, W3C provides these specific benefits of the page title for various disabilities:

  • This criterion benefits all users in allowing users to quickly and easily identify whether the information contained in the web page is relevant to their needs.
  • People with visual disabilities will benefit from being able to differentiate content when multiple web pages are open.
  • People with cognitive disabilities, limited short-term memory, and reading disabilities also benefit from the ability to identify content by its title.
  • This criterion also benefits people with severe mobility impairments whose mode of operation relies on audio when navigating between Web pages.

Here’s a sample of the ChromeVox extension announcing the content of several tabs in a browser window by reading the title tags (and page focus).

Title tag do’s and don’ts

  • Do continue to follow page title best practices for SEO. This should work hand-in-hand with titling for accessibility.
  • Do not keyword stuff. In case keyword stuffing is still your thing, consider how something like this reads (“Buy Soap Dispensers, Pump Soap Dispensers, Sensor Soap Dispensers & more | MyStore”) versus “Soap Dispensers from MyStore.” Keep it simple and relevant (and split content into separate pages if you need to). The sketch below shows the difference in markup.
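For the record (these MyStore titles are, of course, made-up examples):

    <!-- Keyword-stuffed: painful when read aloud, tab after tab -->
    <title>Buy Soap Dispensers, Pump Soap Dispensers, Sensor Soap Dispensers &amp; more | MyStore</title>

    <!-- Simple and relevant: -->
    <title>Soap Dispensers from MyStore</title>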

Headings

In search engine optimization, there’s a lot of focus on the H1 tag, and much less on nested H2–H6 headings. An H1 heading indicates the main topic of a page, while H2–H6 indicate subtopics or page sections. SEOs will sometimes use multiple H1 headings in an attempt to give more emphasis to more of the keywords and content on the page. SEOs will also sometimes tag other text on the page or in the footer as an H1 heading, in an attempt to get more keyword-rich H1 text in front of search engines. This can really mess with screen readers, and here’s why.

Headings allow assistive technologies to quickly navigate a page. Headings define the structure of the page and a screen reader user will oftentimes use these as the first method to move to a particular module or region of content.

Here’s an example of tabbing through headings using ChromeVox on the Santa Cruz Good Times news website:

Compare that to the Lehighton Times News, where only one heading was used, and it’s on the Calendar.

Heading do’s and don’ts

  • Do use headings. It’s important not to skip headings altogether, like in the Times News website example above.
  • Don’t use more than one H1 heading: HTML5 allows for multiple H1s, but this is not well-supported by browsers/assistive technology.
  • Do use headings to define sections of content. For example, use H2 headings for subheader or key sections of the page and H3 headings for content modules.
  • Don’t use headings if there’s no following content. Headings define sections of content, so if there is no content section, it shouldn’t have a heading. A sample outline follows this list.
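Something like this (the topic is arbitrary; the indentation is only for readability, since screen readers navigate by the heading levels themselves):

    <h1>Soap Dispensers</h1>                <!-- one H1: the main topic of the page -->
      <h2>Pump Soap Dispensers</h2>         <!-- key section -->
        <h3>Customer Reviews</h3>           <!-- content module within the section -->
      <h2>Sensor Soap Dispensers</h2>       <!-- key section -->
        <h3>Installation Guide</h3>         <!-- content module within the section -->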

HTML5 elements and schema markup

HTML5 introduced more detailed tagging of page elements, like <article>, <section>, <header>, <footer>, and a whole bunch more. Additionally, schema markup was developed by Google, Yahoo, and Bing to help search engines better understand the elements of a page.

We know that Google uses some schema elements for rich snippets in search results, like review markup to get star ratings to appear in search results. What we don’t know is how many of the 571 schema types the search engines pay attention to when indexing and forming context around a page and its elements, or which HTML5 elements are considered and how they’re weighted.
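For reference, the review markup mentioned above might look something like this (an illustrative schema.org JSON-LD sketch; the product and values are made up):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Review",
      "itemReviewed": { "@type": "Product", "name": "Sensor Soap Dispenser" },
      "reviewRating": { "@type": "Rating", "ratingValue": "4", "bestRating": "5" },
      "author": { "@type": "Person", "name": "A. Reviewer" }
    }
    </script>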

Regardless, this tagging allows various assistive devices via different browsers to better understand and navigate through content. Check out these accessibility scores of HTML 5 elements via different browsers from html5accessibility.com. The page has a lot more interesting detail.

HTML5 Accessibility Report Score for 5 browsers. Their scores are Safari: 62/100, Chrome: 93/100, Firefox: 89/100, Internet Explorer: 35/100, Edge: 40/100

Semantic markup do’s and don’ts

  • Do mark up your content with relevant tags. It’s likely good for everyone.
  • Do not use divs or spans for buttons. Div- and span-based buttons are not accessible. If it looks and acts like a button, use a button (see the sketch below).
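A minimal sketch (save() is a stand-in for whatever your click handler does):

    <!-- Not accessible: not keyboard-focusable, not announced as a button -->
    <div class="btn" onclick="save()">Save</div>

    <!-- Accessible: focusable, announced as a button, works with Enter and Space -->
    <button type="button" onclick="save()">Save</button>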

Page structure & navigation

Some SEO camps might be zealous about putting the body copy or most important content at the very beginning of the source code, in an attempt either to make sure it’s indexed before Google leaves the page or to signal that the content is more important.

Whether you’re of that camp or not, it’s not a great idea for accessibility. The order of the source code content is important for being able to easily tab through content with your keyboard in the correct order. Ideally this looks something like: H1 heading, main navigation, site sections, and then footer. You can make modifications via the CSS, as long as your source code order is logical. It helps to have good semantic markup.

Page structure and navigation do’s and don’ts

  • Do make sure your source code is in order. You can change things around via CSS, but a keyboard will tab through page content as it is listed in the source code. A sketch follows below.
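For example, a logical source order along the lines described above, with the visual arrangement left to CSS:

    <body>
      <h1>Page title</h1>
      <nav><!-- main navigation --></nav>
      <main><!-- site sections and body copy --></main>
      <footer><!-- footer links --></footer>
    </body>

If the design calls for a different visual order, reorder with CSS (for example, the flexbox order property) rather than by shuffling the source.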

On-site sitemaps

This one is pretty straightforward. When search engines and screen readers have trouble discerning complex, convoluted navigation or otherwise hard-to-reach sections and pages of your website, a simple outlined sitemap with links to all the sections and pages can provide a quick and easy route to fuller indexation and to understanding the contents of the website.

Sitemap do’s and don’ts

  • Do provide an on-site sitemap, if it’s possible. Link to all sections of the site, present the same organization as the site presents, and keep the links updated.
  • Don’t keyword stuff. We’re all familiar with those sitemaps that repeat the site’s keyword in every link, like used cars alabama, used cars alaska, used cars arizona, used cars arkansas. You’re likely to drive a screen reader user to drink. Don’t do it.


Next Week: SEO/A11y Overlaps: Formatting & Links



Learn to Love Your Existing Content – 5 Ways to Get More Visibility

Posted by JamesAgate

[Estimated read time: 9 minutes]

For the most part, articles on content promotion focus on launching new content.

Today I want to focus on getting the most out of (and in some cases, breathing new life into) existing content.

We frequently see clients that have a variety of content assets already. Where possible, I always advocate using what’s on hand over indiscriminately pumping out new content.

For the following exercise, we need to start by identifying the content assets; we’ll be looking for unloved and underloved assets.

  • Unloved = content that exists but nobody has even noticed it. It has very few links, social shares, and little to no traffic.
  • Underloved = content that exists, was launched, and did okay, but never reached its full potential. (I can count on one hand the number of times we’ve found a piece that we couldn’t squeeze at least one campaign out of.)

It’s important to note that, in many cases, we’ve been alerted to content that’s unloved because it’s essentially invisible but potentially very valuable. One good example of this would be an internal knowledge base that your sales team maintains.

Identifying pages with potential

Often it’s easier to spot underloved content than it is to find completely unloved content.

Our preferred method is to plug a domain into Ahrefs.com Site Explorer, navigate to the “Top Pages” tab (which in their redesign now seems to be called “Best By Links”), and work your way through the URLs that you find.

A screenshot of Ahrefs Site Explorer with an arrow indicating the Top Pages navigation.

You can also use Ahrefs.com “Best By Shares” feature, which will present all pages in order of their social share count. Again, this can be useful in terms of pointing you towards assets that may perform well with some additional promotion.

I tend to pull together all the URLs that I find so that I can work on and review them in conjunction with other sources.

The other sources being, in this case:

  1. The client (or perhaps colleagues at your company) alerting you to “invisible” content
  2. Google Analytics to identify pages that perhaps get some traffic but have no links or social shares
  3. Sitemap or crawl of your domain

You should now have a file of all of your existing content assets. You’re ready to match these up against content opportunities in your market, whether you identified them previously or as a result of evaluating the assets you’ve found and researching the possible opportunities.

This might include things like:

  • Keywords – You’ve identified keywords around certain topic areas that are worth targeting.
  • Broken link opportunities – Perhaps you’ve identified specific broken resources that you’re looking to target. (Shameless plug: our broken link prospecting tool should be launching later this month.)
  • Rich veins of link opportunities – Perhaps you’ve spotted a niche within your market that’s particularly attractive from a linking standpoint.

Now you can assess whether the content you have fits that opportunity. It probably won’t be a perfect match, but is it close enough to not warrant creating a whole new piece of content?

If a new piece of content is truly needed, set that opportunity aside in favor of the others for the moment. Remember, right now we’re just focusing on priming and promoting existing content.

Priming existing content

I did say we weren’t going to be creating new content, but there is some work involved. Unless you get really lucky, the content assets you discover will probably need a little TLC before they’re ready to be promoted.

Repurpose/Reformat

So… I lied. This does involve creating a new piece of content. But, in my defense, you’re taking the meat from an existing asset and creating something that matches the opportunity you’re looking to target.

In essence, you’ll be extracting ideas from a content asset to produce something that’s worthy of promotion. A good example of this might be taking the key ideas from a webinar and turning that into a cheatsheet; this can be promoted as a resource far more easily than a full-on webinar.

Consolidate

This is, by far, the most common scenario. Clients will come to us from other providers who’ve said that 4 blog posts per month is going to change their business. In isolation, most of these blog posts aren’t worth promoting. When consolidated, however, they can become something more substantial.

Improve

This involves enhancing a piece of content that’s nearly there but is perhaps missing a section or two, or could be updated with the latest industry best practices.

Optimize

This could be improving the formatting of a piece to make it more digestible or — perhaps more crucially — adjusting the page to target specific keywords. For example, we’ve just finished working with a client to update and better optimize their existing blog posts for specific keywords that attract huge search volumes in their market. In one case, this meant a solid blog post that was completely unloved now ranks in the top three results for a term that gets searched around 10,000 times per month. These aren’t commercial keywords, but rather informational queries that have the potential to lead people into the client’s commercial landing pages.

Promoting existing content

#1: Reach out to people who’ve shared similar content

A good place to start when promoting content is some proactive outreach. What better place to start than with people who’ve already linked to similar/related content?

This can be quite a manual process: searching various keywords relating to the content, identifying websites that have said content, plugging each URL into Ahrefs, Majestic, or Open Site Explorer to see who links, sifting through to see who’s worth contacting, and then performing the actual outreach.

To this end, we built our (free) Similar Content Prospecting Tool to take the heavy lifting out of this process. You enter the keywords and it finds the content that ranks highest for them, gathers those that link to that content, sifts through and removes the lower-end stuff, and presents the top links for you to review and export, ready for contact.

You can find people who link to similar content or, with the right keywords, you can find people who link to related content. Both groups of prospects may be interested in linking to you.

For example, say you have a piece of content that looks at keeping children safe on their smartphone. You might want to identify those that link to top-ranking content on “Internet safety,” as there’s likely to be crossover. Those prospects will potentially be interested in your content because it fills a gap that currently exists on their site.

For further reading, see: You Can Get Links from Cold Outreach.

#2: Look for broken link opportunities

I know I’ve plugged it once before, but we’re launching Linkrot.com later this month (all being well) and this will automate the process of finding broken link opportunities. For now, prospecting for opportunities can be a largely manual process (take a look at the additional resources linked to below to get a feel for the process). This can be eased with extensions like LinkMiner from Jon Cooper at PointBlankSEO. And of course there are prospecting tools on the market currently that can help with the search, such as BrokenLinkBuilding.com.

Broken link building is extremely powerful and, in my opinion, still under-utilized. For the uninitiated, at its most basic level it involves a) finding pages that used to exist but are now dead and that people have linked to, b) tailoring your content asset to fit that opportunity, and c) reaching out to those that link, to suggest they update their link to your page.

Take a look at this chart:

Bar graph: Publish rate by outreach reason. Broken links at 6.5%, related information at 2.61%, and related topic at 1.76%.

Source: Do Short Outreach Emails Get You More Links?

As you can see, the publish rate (the percentage of people contacted who end up linking) is considerably higher for broken links than for other reasons for outreach.

As a side note, before you go ignoring the other techniques: the pool of opportunities is significantly smaller for broken link building. So, whilst you might convert more prospects into links, there will be fewer prospects to start with.

One of the quickest ways to find broken links manually is to search for resource pages in your industry and scan them for dead pages.

For further reading, see: Broken Link Building Bible, Creative Broken Link Building Strategies, 53 Broken Link Resources.

#3: Devise a new angle

This applies in particular to underloved content assets. Adjusting the niche you pitch can have a significant impact on publish rate.

This may involve more than just adjusting your prospecting efforts and your email template. It’s likely to involve tweaking your piece of content to better fit who you’re planning to target.

A straightforward example would be targeting a different country. Perhaps you’ve had success reaching out to schools in the US. With some adjustments to the piece and to your approach, you might be able to find schools in the UK or Canada that might also find your content useful and link-worthy.

#4: Consider paid promotion

In the past, I’ve recommended offerings like Outbrain and Taboola. In the early days of both of these platforms we actually saw a really good return, but I’m not ashamed to say that we can’t make them work anymore.

Animated gif of Leonardo DiCaprio crumpling up a piece of paper and throwing it in the waste basket.

I think there are many reasons for this. Consumers are becoming increasingly blind to these “around the web” links; there seems to be limited quality control in terms of advertisers or adverts, so they have become increasingly spammy-looking (which harms clickthrough rates); and finally, due to the surge in popularity, the traffic isn’t all that cheap anymore.

A screenshot of spammy, clickbait-y articles via paid platforms.

One platform that I think is underrated is StumbleUpon Paid Discovery; we find it useful for amplifying content alongside proactive outreach.

I do also like Facebook advertising as a way of reaching very specific audiences. However, we typically only utilize paid media like this where the goals of a campaign go beyond link building because it’s REALLY hard to draw that direct line between your Facebook ad spend and number of referring domains.

#5: Connect your content to a wider story

Yes, I know people say that press releases are dead. Certainly, as a form of link building or the sole method of generating press, they just might be. But for announcing content, they can still be very effective.

We’ve found if you can tap into a developing story and go hyper-focused, then you can A) generate some coverage of your content and B) leverage that coverage for further coverage with some proactive outreach.

You might think this sounds like a technique for a new piece of content, but that’s not so. We’ve recently found this approach useful in campaigns where prospects are indifferent to our standard outreach approach. They feel that the issue we’re talking about either doesn’t matter or doesn’t apply to them. A well-written press release can change all of that.

You’re flipping the issue on its head, making it about the broader story rather than simply a piece of your content. A punchy title, some official stats and a nice quote from the CEO can help generate some initial coverage. You can then take that initial coverage and use it as social proof in your proactive outreach.

Any questions or ways that you squeeze more juice out of your existing content? I’d welcome them in the comments section below.



Sweating the Details – Rethinking Google Keyword Tool Volume

Posted by rjonesx.

[Estimated read time: 13 minutes]

I joined Moz in August of 2015 and fell right into the middle of something great. Rand had brought his broad vision of a refined yet comprehensive SEO keyword tool to a talented team of developers, designers, data scientists and project managers… and now, me.

I was hoping to ease in with a project that was right in my wheelhouse, so when the “Volume” metric in Keyword Explorer was pitched as something I could work on, I jumped right on it. In my mind, I was done the second the work was offered to me. I already had a giant keyword volume database at my disposal and a crawling platform ready to fire up. All I had to do was tie some strings together and, voilà.

Peer pressure

It was subtle at first, and never direct, but I quickly began to see something different about the way Moz looked at problems. I’ve always been a bit of a lazy pragmatist — when I need a hammer, I look around for something hard. It’s a useful skill set for quick approximations, but when you have months set aside to do something right, it’s a bit of a liability instead.

Moz wasn’t looking for something to use instead of a hammer; they were looking for the perfect hammer. They were scrutinizing metrics, buttons, workflows… I remember one particularly surreal discussion around mapping keyboard shortcuts within the web app to mimic those in Excel. So, when, on my first attempt, I turned up in a planning meeting with what was, essentially, a clone of Google Keyword Planner volume, I should have seen it coming. They were polite, but I could feel it — this wasn’t better, and Moz demanded better in its tools. Sometimes peer pressure is a good thing.

If it ain’t broke, don’t fix it.

Rand was, unsurprisingly, the first to question whether or not volume data was accurate. My response had always been that of the lazy pragmatist: “It’s the best we got.” Others then chimed in with equally valid questions — how would users group by this data? How much do we have? Why give customers something they can already get for free?

Tail tucked between my legs, I decided it was time to sweat the details, starting with the question: “What’s broke?” This was the impetus behind the research which led to this post on Keyword Planner’s dirty secrets, outlining the numerous problems with Google Keyword Planner data. I’ll spare you the details here, but if you want some context behind why Rand was right and why we did need to throw a wrench into the conventional thinking on keyword volume metrics, take a look at that post.

Here was just one of the concerns — that Google AdWords search volume puts keywords into volume buckets without telling you the ranges.

Image showing that Google Keyword Planner averages are heavily rounded.

Well, it’s broke. Time to sweat the details!

Once it became clear to me that I couldn’t just regurgitate Google’s numbers anymore and pretend they were the canonical truth of the matter, it was time to start asking the fundamental questions we want answered through a volume metric. In deliberation with the many folks working on Keyword Explorer, we uncovered four distinct characteristics of a good volume metric.

  1. Specificity: The core of a good volume metric is being specific to the actual average search volume. You want the volume number to be as close as possible to reality.
    We want to be as close to the average annual search volume as we possibly can.
  2. Coverage: Volume varies from month to month, so not only do you want it to be specific to the average across all months, you want it to be specific to each individual month. A good volume metric will give you reasonable expectations every month of the year — not just the whole year divided by 12.
    We want the range to capture as many months as possible. [Graph: a highlighted range capturing a spike at the end.]
  3. Fresh: A good volume metric will take into account trends and adjust to statistically significant variations which diverge from the previous 12 months.
    We want to detect trending keywords early on so we can predict volume and track them more closely. [Graph: volume with a spike at the end.]
  4. Relatable: A good volume metric should allow you to relate keywords to one another when they are similar in volume (i.e., grouping).

We can actually apply these four points to Google Keyword Planner and see its weaknesses…

  1. Specificity: Google’s Keyword Volume is a yearly rounded, bucketed average of monthly rounded, bucketed averages
  2. Coverage: For most keywords, the average monthly search is accurate only 33% of the months of the year. Most months, the actual volume will land in a different volume bucket than the average monthly search.
  3. Fresh: Keyword Planner updates once a month, with averages that provide no predictive value. A hot new keyword will look like 1/12th of its actual volume in the average monthly search, and it won’t show up for 30 days.
  4. Relatable: You can group keywords into one of 84 different volume buckets, with no explanation as to how the groups were formed. (They appear to be associated with a simple logarithmic curve.)

You can see why we were concerned. The numbers aren’t that specific, have ranges that are literally wrong most of the time, are updated regularly but infrequently, and aren’t very group-able. Well, we had our work cut out for us, so we began in earnest attacking the problems…

Balancing specificity and coverage

As you can imagine, there’s a direct trade-off between specificity and coverage. The tighter the volume ranges, the higher the specificity and lower the coverage. The broader the ranges, the lower the specificity and higher coverage. If we only had one range that was from zero to a billion, we would have horrible specificity and perfect coverage. If we had millions of ranges, we would have perfect specificity but no coverage. Given our weightings and parameters, we identified the best possible arrangement. I’m pretty sure there’s a mathematical expression of this problem that would have done a quicker job here, but I am not a clever man, so I used my favorite tool of all: brute force. The idea was simple.

  1. We take the maximum and minimum boundaries of the search volume data provided by Google Keyword Planner, let’s say… between 0 and 1 billion.
  2. We then randomly divide it into ranges — testing a reasonable number of ranges (somewhere between 10 and 25). Imagine randomly placing dividers between books on a shelf. We did that, except the books were keyword volume numbers.
  3. We assign a weighting to the importance of specificity (the distance between the average of the range min and max from the keyword’s actual average monthly search). For example, we might say that it’s 80% important that we’re close to the average for the year.
  4. We assign a weighting to the importance of coverage (the likelihood that any given month over the last year falls within the range). For example, we might say it’s 20% important that we’re close to the average each month. (Both weightings are formalized in the sketch after this list.)
  5. We test 100,000 randomly selected keywords and their Google Keyword Planner volume against the randomly selected ranges.
  6. We use the actual average of the last 12 months rather than the rounded average of the last 12 months.
  7. We do this for millions of randomly selected ranges.
  8. We select the winner from among the top performers.
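Stated compactly, each candidate set of ranges is scored along these lines (a plausible formalization using the example 80/20 weightings from steps 3 and 4; the production scoring may differ in its normalization details). For a keyword $k$ assigned the range $[\ell_k, u_k]$, with monthly volumes $v_{k,1},\dots,v_{k,12}$ and true annual average $\bar{v}_k$:

    $$\mathrm{coverage}(k) = \frac{1}{12}\sum_{m=1}^{12} \mathbb{1}\big[\ell_k \le v_{k,m} \le u_k\big], \qquad \mathrm{closeness}(k) = 1 - \frac{\big|\tfrac{\ell_k+u_k}{2} - \bar{v}_k\big|}{\bar{v}_k},$$

    $$\mathrm{score}(R) = \frac{1}{|K|}\sum_{k \in K} \Big( 0.8 \cdot \mathrm{closeness}(k) + 0.2 \cdot \mathrm{coverage}(k) \Big),$$

and the winning set of ranges $R$ is the random candidate that maximizes this score over the 100,000-keyword sample.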

It took a few days to run (the longer we ran it, the more rarely new winners were discovered). Ultimately, we settled on 20 different ranges (a nice, whole number for grouping and displaying purposes) that more than doubled the coverage rate over the preexisting Google Keyword Planner data while minimizing damage to specificity as much as possible. Let me give an example of how this could be useful. Let’s take the keyword “baseball.” It’s fairly seasonal, although it has a long season.

Bar graph showing seasonality of keyword "baseball." The actual search volume falls within the Moz range 10 out of 12 months of the year. The Google search volume only matches 3 out of 12 months. Ranges give us predictability with best and worst case scenarios built in.

In the above example, the Google Average Monthly Search for Baseball is 368,000. The range this covers is between around 330K and 410K. As you can see, this range only covers 3 of the 12 months. The Moz range covers 10 of the 12 months.

Now, imagine that you’re a retailer that’s planning PPC and SEO marketing for the next year. You make your predictions based on the 368,000 number given to you by Google Keyword Planner. You’ll actually under-perform the average 8 months out of the year. That’s a hard pill to swallow. But, with the Moz range, you can use the lower boundary as a “worst-case scenario.” With the Moz range, your traffic under-performs only 2 months out of the year. Why pretend that we can get the exact average when we know the exact average is nearly always wrong?

Improving relatability

This followed naturally from our balancing specificity and coverage. We did end up choosing 20 groupings over some higher-performing groupings that were less clean numbers (like 21 groupings) for aesthetic and usability purposes. But what this means is that it’s easy to group keywords by volume and not in an arbitrary fashion. You could always group by ranges in Excel, if you wanted, but the ranges you came up with off the top of your head wouldn’t have been validated in any way regarding the underlying data.

Let me give an example why this matters. Intuitively, you’d imagine that the ranges would increase in broadness in a similar logarithmic fashion as they get larger. For example, you might think most keywords are 10% volatile, so if a keyword is searched 100 times a month, you might expect some months to be 90 and others 110. Similarly, you would expect a keyword searched 1,000 times a month to vary 10% up or down as well. Thus, you would create ranges like 0–10, 100–200, 1,000–2,000, etc. In fact, this appears to be exactly what Google does. It’s simple and elegant. But is it correct?

Nope. It turns out that keyword data is not congruent. It generally follows these patterns, but not always. For example, in our analysis, we found that while the volume range after 101–200 is 201–500 (a 3x increase in broadness), the very next optimal range is actually 501–850, only a 1/6th increase in broadness.

This is likely due to non-random human search patterns related to certain keywords. There are keywords which people probably search daily, weekly, monthly, quarterly, etc. Imagine keywords like “what is the first Monday of this month” and “what is the last Tuesday of this month.” All of these keywords would be searched a similar number of times each month by a similar population, creating a congruency that is non-random. These patterns create shifts in the volatility of terms that are not congruent with the natural logarithmic scale you would expect if the data were truly random. Our machine-learned volume ranges capture this non-random human behavior efficiently and effectively.

We can actually demonstrate this quite easily in a graph.

Upward-trend line graph of log of keyword planner ranges. Google's range sizes are nearly perfectly linear, meaning they are not optimized at all to accommodate the non-linear, non-random nature of search volume volatility and seasonality.

Notice in this graph that the log of Google’s Keyword Planner volume ranges is nearly linear, except at the tail ends. This would indicate that Google has done very little to try and address patterns in search behavior that make the data non-random. Instead, they apply a simple logarithmic curve to their volume buckets and leave it at that. The R² value shows just how close to 1 (perfect linearity) this relationship is.
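To see why near-perfect log-linearity is the signature of simple logarithmic bucketing: if each bucket boundary $u_i$ is a constant multiple $c$ of the previous one, then

    $$u_i = u_0 \cdot c^{\,i} \quad\Longrightarrow\quad \log u_i = \log u_0 + i \log c,$$

which is a straight line in the bucket index $i$. Ranges genuinely fitted to search behavior should bend away from that line wherever the data is non-random, which is exactly what the next graph shows.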

Upward-trend line graph of log of Moz ranges. Moz's Keyword Explorer volume ranges are far less linear, as they're trained to maximize specificity and coverage, exploiting the non-random variations in human search patterns.

The log of Moz’s keyword volume ranges is far less linear, which indicates that our range-optimization methodologies found anomalies in the search data which do not conform to a perfect logarithmic relationship with search volume volatility. These anomalies are most likely caused by real non-random patterns in human search behavior. Look at positions 11 and 12 in the Moz graph. Our ranges actually contract in breadth at position 12 and then jump back up at 13. There is a real, data-determined anomaly which shows the searches in that range actually have less volatility than the searches in the previous range, despite being searched more often.

Improving freshness

Finally, we improved freshness by using a completely new, third-party anonymized clickstream data set. Yes, we analyze 1-hour-delayed clickstream data to capture new keywords worth including both in our volume data and in our corpus. Of course, this was a whole feat in and of itself; we have to parse and clean hundreds of millions of events daily into usable data. Furthermore, a lot of statistically significant shifts in search volume are actually ephemeral. Google Doodles are notorious for this, causing huge surges in traffic for obscure keywords for just a single day. We subsequently built models to look for keywords that trended upward over a series of days, beyond the expected value. We then used predictive models to map that clickstream search volume to a bottom-quartile range (i.e., we were intentionally conservative in our estimates until we could validate against next month’s Google Keyword Planner data).

Finally, we had to remove inherent biases from the clickstream dataset itself so that we were confident our fresh data was reliable. We accomplished this by…

  1. Creating a naive model that predicts Google Keyword Volume from the clickstream data
  2. Tokenizing the clickstream keywords and discovering words and phrases that correlate with outliers
  3. Building a map of depressing and enhancing weights for these tokens to modify the predictive model based on their inclusion
  4. Applying the map to the naive model to give us better predictions (sketched below)
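In rough terms, the corrected prediction for a query $q$ with clickstream count $c(q)$ looks like this (a plausible formalization of the steps above, not the exact production model):

    $$\hat{V}(q) = f\big(c(q)\big) \cdot \prod_{t \,\in\, \mathrm{tokens}(q)} w_t,$$

where $f$ is the naive clickstream-to-volume model from step 1, and each $w_t$ is a learned weight for token $t$: below 1 for depressing tokens, above 1 for enhancing ones.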

This was a very successful endeavor in that we can take raw clickstream data and, given certain preconditions (4 weeks of steady data), we can predict with 95% accuracy the appropriate volume range.

A single metric

All of the above — the research into why Google Keyword Planner is inadequate, the machine-learned ranges, the daily freshness volume updating, etc. — goes into a single, seemingly simple metric: Volume Ranges. This is probably the least-scrutinized of our metrics because it’s the most straightforward. Keyword Difficulty, Keyword Opportunity, and Keyword Potential went through far more revisions and are far more sophisticated in their approach, analysis, and production.

But we aren’t done. We’re actively looking at improving the volume metric by adding more and better data sources, predicting future traffic, and potentially providing a mean along with the ranges. We appreciate any feedback you might offer, as well, on what the use cases might be for different styles of volume metrics.

However, at the end of the day, I hope what you come away with is this: At Moz, we sweat the details so you don’t have to.

A personal note

This is my first big launch at Moz. While I dearly miss my friends and colleagues at Angular (the consulting firm where I worked for the past 10 years), I can’t say enough about the amazing people I work with here. Most of them will never blog here, won’t tweet, and won’t speak at conferences. But they deserve all the credit. So, here’s a picture of my view from Google Hangouts during a Keyword Explorer meeting. Most of the team was able to make it, but those who didn’t, you know who you are. Thanks for sweating the details.

Google Hangout screenshot of the Moz Keyword Explorer team during a meeting. Russ is connected remotely in the corner.

