Wednesday, 30 September 2015

Video: Expanding your site to more languages

Webmaster Level: Intermediate to Advanced

We filmed a video providing more details about expanding your site to more languages or country-based language variations. The video covers details about rel=”alternate” hreflang and potential implementation on your multilingual and/or multinational site.


Video and slides on expanding your site to more languages

You can watch the entire video or skip to the relevant sections. Additional resources on hreflang are also available. Good luck as you expand your site to more languages!

Monday, 28 September 2015

SEO Starter Guide updated

Webmaster Level: Beginner

Update on October 3, 2010: We have fixed the issue causing the highlighted text to be obscured on Linux PDF readers.

About two years ago we published our first SEO Starter Guide, which we have since translated into 40 languages. Today, we’re very happy to share with you the new version of the guide with more content and examples.

Here’s what’s new:
  • Glossary to define terms throughout the guide
  • More example images to help you understand the content
  • Ways to optimize your site for mobile devices
  • Clearer wording for better readability
You may remember getting to see what Googlebot looks like in our “First date with Googlebot” post. In this version of the SEO Starter Guide, Googlebot is back to provide you with some more SEO tips.

You can download the new version here [PDF]. Entertain and impress your friends by leaving a printed copy on your coffee table.

Googlebot

Fresher query stats

Query stats in webmaster tools provide information about the search queries that most often return your site in the results. You can view this information by a variety of search types (such as web search, mobile search, or image search) and countries. We show you the top search types and locations for your site. You can access these stats by selecting a verified site in your account and then choosing Query stats from the Statistics tab.


If you've checked your site's query stats lately, you may have noticed that they're changing more often than they used to. This is because we recently changed how frequently we calculate them. Previously, we showed data that was averaged over a period of three weeks. Now, we show data that is averaged over a period of one week. This results in fresher stats for you, as well as stats that more accurately reflect the current queries that return your site in the results. We update these stats every week, so if you'd like to keep a history of the top queries for your site week by week, you can simply download the data each week. We generally update this data each Monday.

How we calculate query stats
Some of you have asked how we calculate query stats.

These stats are based on the results that searchers actually see. For instance, say a search for [Britney Spears] brings up your site at position 21, which is on the third page of the results. And say 1000 people searched for [Britney Spears] during the course of a week (in reality, a few more people than that search for her name, but just go with me for this example). 600 of those people only looked at the first page of results and the other 400 browsed to at least the third page. That means that your site was seen by 400 searchers. Even though your site was at position 21 for all 1000 searchers, only those 400 are counted for purposes of this calculation.

Both top search queries and top search query clicks are based on the total number of searches for each query. The stats we show are based on the queries that most often return your site in the results. For instance, going back to that familiar [Britney Spears] query -- 400 searchers saw your site in the results. Now, maybe your site isn't really about Britney Spears -- it's more about Buffy the Vampire Slayer. And say Google received 50 queries for [Buffy the Vampire Slayer] in the same week, and your site was returned in the results at position 2. So, all 50 searchers saw your site in the results. In this example, Britney Spears would show as a top search query above Buffy the Vampire Slayer (because your site was seen by 400 searchers for Britney but 50 searchers for Buffy).

The same is true of top search query clicks. If 100 of the Britney-seekers clicked on your site in the search results and all 50 of the Buffy-searchers clicked on your site in the search results, Britney would still show as a top search query click above Buffy (100 clicks versus 50).

At times, this may cause some of the query stats we show you to seem unusual. If your site is returned for a very high-traffic query, then even if a low percentage of searchers click on your site for that query, the total number of searchers who click on your site may still be higher for the query than for queries for which a much higher percentage of searchers click on your site in the results.

The average top position for top search queries is the position of the page on your site that ranks most highly for the query. The average top position for top search query clicks is the position of the page on your site that searchers clicked on (even if a different page ranked more highly for the query). We show you the average position for this top page across all data centers over the course of the week.

A variety of download options are available. You can:
  • download individual tables of data by clicking the Download this table link.
  • download stats for all subfolders on your site (for all search types and locations) by clicking the Download all query stats for this site (including subfolders) link.
  • download all stats (including query stats) for all verified sites in your account by choosing Tools from the My Sites page, then choosing Download data for all sites and then Download statistics for all sites.

Sunday, 27 September 2015

Improve snippets with a meta description makeover



The quality of your snippet — the short text preview we display for each web result — can have a direct impact on the chances of your site being clicked (i.e. the amount of traffic Google sends your way). We use a number of strategies for selecting snippets, and you can control one of them by writing an informative meta description for each URL.

<META NAME="Description" CONTENT="informative description here">

Why does Google care about meta descriptions?
We want snippets to accurately represent the web result. We frequently prefer to display meta descriptions of pages (when available) because they give users a clear idea of the URL's content. This directs them to good results faster and reduces the click-and-backtrack behavior that frustrates visitors and inflates web traffic metrics. Keep in mind that meta descriptions comprised of long strings of keywords don't achieve this goal and are less likely to be displayed in place of a regular, non-meta-description snippet. And it's worth noting that while accurate meta descriptions can improve clickthrough, they won't affect your ranking within search results.

Snippet showing quality meta description




Snippet showing lower-quality meta description



What are some good meta description strategies?
Differentiate the descriptions for different pages
Using identical or similar descriptions on every page of a site isn't very helpful when individual pages appear in the web results. In these cases we're less likely to display the boilerplate text. Create descriptions that accurately describe each specific page. Use site-level descriptions on the main home page or other aggregation pages, and consider using page-level descriptions everywhere else. You should obviously prioritize parts of your site if you don't have time to create a description for every single page; at the very least, create a description for the critical URLs like your homepage and popular pages.

Include clearly tagged facts in the description
The meta description doesn't just have to be in sentence format; it's also a great place to include structured data about the page. For example, news or blog postings can list the author, date of publication, or byline information. This can give potential visitors very relevant information that might not be displayed in the snippet otherwise. Similarly, product pages might have the key bits of information -- price, age, manufacturer -- scattered throughout a page, making it unlikely that a snippet will capture all of this information. Meta descriptions can bring all this data together. For example, consider the following meta description for the 7th Harry Potter Book, taken from a major product aggregator.

Not as desirable:
<META NAME="Description" CONTENT="[domain name redacted]: Harry Potter and the Deathly Hallows (Book 7): Books: J. K. Rowling,Mary GrandPré by J. K. Rowling,Mary GrandPré">

There are a number of reasons this meta description wouldn't work well as a snippet on our search results page:
  • The title of the book completely duplicates information already in the page title.
  • Information within the description itself is duplicated (J. K. Rowling, Mary GrandPré are each listed twice).
  • None of the information in the description is clearly identified; who is Mary GrandPré?
  • The missing spacing and overuse of colons make the description hard to read.

All of this means that the average person viewing a Google results page -- who might spend under a second scanning any given snippet -- is likely to skip this result. As an alternative, consider the meta description below.

Much nicer:
<META NAME="Description" CONTENT="Author: J. K. Rowling, Illustrator: Mary GrandPré, Category: Books, Price: $17.99, Length: 784 pages">

What's changed? No duplication, more information, and everything is clearly tagged and separated. No real additional work is required to generate something of this quality: the price and length are the only new data, and they are already displayed on the site.

Programmatically generate descriptions
For some sites, like news media sources, generating an accurate and unique description for each page is easy: since each article is hand-written, it takes minimal effort to also add a one-sentence description. For larger database-driven sites, like product aggregators, hand-written descriptions are more difficult. In the latter case, though, programmatic generation of the descriptions can be appropriate and is encouraged -- just make sure that your descriptions are not "spammy." Good descriptions are human-readable and diverse, as we talked about in the first point above. The page-specific data we mentioned in the second point is a good candidate for programmatic generation.
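As a rough sketch of what programmatic generation might look like (purely an illustration, not an official recipe; the product fields below are hypothetical), a database-driven site could assemble clearly tagged descriptions from the structured data it already stores:

import html

def build_meta_description(product):
    # Assemble a clearly tagged, human-readable description from
    # structured product fields (hypothetical field names).
    parts = []
    for label, key in [("Author", "author"), ("Illustrator", "illustrator"),
                       ("Category", "category"), ("Price", "price"),
                       ("Length", "length")]:
        value = product.get(key)
        if value:
            parts.append(f"{label}: {value}")
    description = ", ".join(parts)
    # Escape the text so it stays safe inside an HTML attribute.
    return f'<META NAME="Description" CONTENT="{html.escape(description)}">'

print(build_meta_description({
    "author": "J. K. Rowling",
    "illustrator": "Mary GrandPré",
    "category": "Books",
    "price": "$17.99",
    "length": "784 pages",
}))

However you generate them, spot-check the output: the goal is the kind of readable, fact-rich description shown in the "Much nicer" example above.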

Use quality descriptions
Finally, make sure your descriptions are... descriptive. It's easy to become lax on the quality of the meta descriptions, since they're not directly visible in the UI for your site's visitors. But meta descriptions might be displayed in Google search results -- if the description is high enough quality. A little extra work on your meta descriptions can go a long way towards showing a relevant snippet in search results. That's likely to improve the quality and quantity of your user traffic.

Introducing Google Checkout

For those of you who manage sites that sell online, we'd like to introduce you to one of our newest products, Google Checkout. Google Checkout is a checkout process that you integrate with your site(s), enabling your customers to quickly buy from you by providing only a single username and password. From there, you can use Checkout to charge your customers' credit cards and process their orders.

Users of Google's search advertising program, AdWords, get the added benefit of the Google Checkout badge and free transaction processing. The Google Checkout badge is an icon that appears on your AdWords ads and improves the effectiveness of your advertising by letting searchers know that you accept Checkout. Also, for every $1 you spend on AdWords, you can process $10 of Checkout sales for free. Even if you don't use AdWords, you can still process sales for a low fee of 2% plus $0.20 per transaction. So if you're interested in implementing Google Checkout, we encourage you to learn more.

If you're managing the sites of other sellers, you might want to sign up for our merchant referral program where you can earn cash for helping your sellers get up and running with Google Checkout. You can earn $25 for every merchant you refer that processes at least 3 unique customer transactions and $500 in Checkout sales. And you can earn $5 for every $1,000 of Checkout sales processed by the merchants you refer. If you're interested, apply here.

Tuesday, 22 September 2015

Finding Places on the Web: Rich Snippets for Local Search

Webmaster Level: All
Cross-posted from the Lat Long Blog.

We’re sharing some news today that we hope webmasters will find exciting. As you know, we’re constantly working to organize the world’s information - be it textual, visual, geographic or any other type of useful data. From a local search perspective, part of this effort means looking for all the great web pages that reference a particular place. The Internet is teeming with useful information about local places and points of interest, and we do our best to deliver relevant search results that help shed light on locations all across the globe.

Today, we’re announcing that your use of Rich Snippets can help people find the web pages you’ve created that may reference a specific place or location. By using structured HTML formats like hCard to mark up the business or organization described on your page, you make it easier for search engines like Google to properly classify your site, recognize and understand that its content is about a particular place, and make it discoverable to users on Place pages.
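As a rough illustration, hCard markup for a business or organization might look like the following (the organization name, address, and phone number are placeholders):

<div class="vcard">
  <span class="fn org">Example City Public Library</span>
  <div class="adr">
    <span class="street-address">100 Main Street</span>,
    <span class="locality">Springfield</span>,
    <span class="region">CA</span>
    <span class="postal-code">94000</span>
  </div>
  <span class="tel">(555) 555-0100</span>
</div>

The class names (vcard, fn, org, adr, tel, and so on) are what identify each piece of information, so they need to wrap the actual values shown on your page.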

You can get started by reviewing these tips for using Rich Snippets for Local Search. Whether you’re creating a website for your own business, an article on a newly opened restaurant, or a guide to the best places in town, your precise markup helps associate your site with the search results for that particular place. Though this markup does not guarantee that your site will be shown in search results, we’re excited to expand support for making the web better organized around real world places.

Sunday, 20 September 2015

How to verify Googlebot

Lately I've heard a couple of smart people ask that search engines provide a way to know that a bot is authentic. After all, any spammer could name their bot "Googlebot" and claim to be Google, so which bots do you trust and which do you block?

The common request we hear is to post a list of Googlebot IP addresses in some public place. The problem with that is that if/when the IP ranges of our crawlers change, not everyone will know to check. In fact, the crawl team migrated Googlebot IPs a couple years ago and it was a real hassle alerting webmasters who had hard-coded an IP range. So the crawl folks have provided another way to authenticate Googlebot. Here's an answer from one of the crawl people (quoted with their permission):


Telling webmasters to use DNS to verify on a case-by-case basis seems like the best way to go. I think the recommended technique would be to do a reverse DNS lookup, verify that the name is in the googlebot.com domain, and then do a corresponding forward DNS->IP lookup using that googlebot.com name; eg:

> host 66.249.66.1
1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com.

> host crawl-66-249-66-1.googlebot.com
crawl-66-249-66-1.googlebot.com has address 66.249.66.1

I don't think just doing a reverse DNS lookup is sufficient, because a spoofer could set up reverse DNS to point to crawl-a-b-c-d.googlebot.com.


This answer has also been provided to our help-desk, so I'd consider it an official way to authenticate Googlebot. In order to fetch from the "official" Googlebot IP range, the bot has to respect robots.txt and our internal hostload conventions so that Google doesn't crawl you too hard.
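If you'd like to automate that reverse-then-forward check, here's a minimal sketch using Python's standard socket module (just an illustration of the technique described above, not official Google code):

import socket

def is_googlebot(ip_address):
    # Step 1: reverse DNS lookup -- the resulting name must be in the
    # googlebot.com domain.
    try:
        host = socket.gethostbyaddr(ip_address)[0]
    except socket.herror:
        return False
    if not host.endswith(".googlebot.com"):
        return False
    # Step 2: a forward lookup of that name must map back to the original IP.
    try:
        forward_ips = socket.gethostbyname_ex(host)[2]
    except socket.gaierror:
        return False
    return ip_address in forward_ips

print(is_googlebot("66.249.66.1"))

The same two-step check works from any language or from the command line, as in the host examples above.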

(Thanks to N. and J. for help on this answer from the crawl side of things.)

Structured Data Testing Tool

Webmaster level: All

Today we’re excited to share the launch of a shiny new version of the rich snippet testing tool, now called the structured data testing tool. The major improvements are:
  • We’ve improved how we display rich snippets in the testing tool to better match how they appear in search results.
  • The brand new visual design makes it clearer what structured data we can extract from the page, and how that may be shown in our search results.
  • The tool is now available in languages other than English to help webmasters from around the world build structured-data-enabled websites.
Here’s what it looks like:


The new structured data testing tool works with all supported rich snippets and authorship markup, including applications, products, recipes, reviews, and others.

Try it yourself and, as always, if you have any questions or feedback, please tell us in the Webmaster Help Forum.


Saturday, 19 September 2015

Debugging blocked URLs

Vanessa's been posting a lot lately, and I'm starting to feel left out. So here's my tidbit of wisdom for you: I've noticed a couple of webmasters confused by "blocked by robots.txt" errors, and I wanted to share the steps I take when debugging robots.txt problems:

A handy checklist for debugging a blocked URL

Let's assume you are looking at crawl errors for your website and notice a URL restricted by robots.txt that you weren't intending to block:
http://www.example.com/amanda.html URL restricted by robots.txt Sep 3, 2006

Check the robots.txt analysis tool
The first thing you should do is go to the robots.txt analysis tool for that site. Make sure you are looking at the correct site for that URL, paying attention that you are looking at the right protocol and subdomain. (Subdomains and protocols may have their own robots.txt file, so https://www.example.com/robots.txt may be different from http://example.com/robots.txt and may be different from http://amanda.example.com/robots.txt.) Paste the blocked URL into the "Test URLs against this robots.txt file" box. If the tool reports that it is blocked, you've found your problem. If the tool reports that it's allowed, we need to investigate further.
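Keep in mind that Disallow rules match URL prefixes, so a rule added for some other purpose can catch URLs you never meant to block. For instance, this hypothetical robots.txt would block http://www.example.com/amanda.html along with every other URL whose path starts with /a:

User-agent: *
Disallow: /a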

At the top of the robots.txt analysis tool, take a look at the HTTP status code. If we are reporting anything other than a 200 (Success) or a 404 (Not found) then we may not be able to reach your robots.txt file, which stops our crawling process. (Note that you can see the last time we downloaded your robots.txt file at the top of this tool. If you make changes to your file, check this date and time to see if your changes were made after our last download.)

Check for changes in your robots.txt file
If these look fine, check whether your robots.txt file has changed since the error occurred by looking at the date it was last modified. If it was modified after the date given for the error in the crawl errors, it might be that someone has changed the file so that the new version no longer blocks this URL.

Check for redirects of the URL
If you can be certain that this URL isn't blocked, check to see if the URL redirects to another page. When Googlebot fetches a URL, it checks the robots.txt file to make sure it is allowed to access the URL. If the robots.txt file allows access to the URL, but the URL returns a redirect, Googlebot checks the robots.txt file again to see if the destination URL is accessible. If at any point Googlebot is redirected to a blocked URL, it reports that it could not get the content of the original URL because it was blocked by robots.txt.

Sometimes this behavior is easy to spot because a particular URL always redirects to another one. But sometimes this can be tricky to figure out. For instance:
  • Your site may not have a robots.txt file at all (and therefore, allows access to all pages), but a URL on the site may redirect to a different site, which does have a robots.txt file. In this case, you may see URLs blocked by robots.txt for your site (even though you don't have a robots.txt file).
  • Your site may prompt for registration after a certain number of page views. You may have the registration page blocked by a robots.txt file. In this case, the URL itself may not redirect, but if Googlebot triggers the registration prompt when accessing the URL, it will be redirected to the blocked registration page, and the original URL will be listed in the crawl errors page as blocked by robots.txt.

Ask for help
Finally, if you still can't pinpoint the problem, you might want to post on our forum for help. Be sure to include the URL that is blocked in your message. Sometimes it's easier for other people to notice oversights you may have missed.

Good luck debugging! And by the way -- unrelated to robots.txt -- make sure that you don't have "noindex" meta tags at the top of your web pages; those also result in Google not showing a page in our index.

Friday, 18 September 2015

Quick security checklist for webmasters

Written by Nathan Johns, Search Quality Team

In recent months, there's been a noticeable increase in the number of compromised websites around the web. One explanation is that people are resorting to hacking sites in order to distribute malware or attempt to spam search results. Regardless of the reason, it's a great time for all of us to review helpful webmaster security tips.

Obligatory disclaimer: While we've collected tips and pointers below, and we encourage webmasters to "please try the following at home," this is by no means an exhaustive list for your website's security. We hope it's useful, but we recommend that you conduct more thorough research as well.

  • Check your server configuration.
Apache has some security configuration tips on their site and Microsoft has some tech center resources for IIS on theirs. Some of these tips include information on directory permissions, server side includes, authentication and encryption.
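For example, a couple of commonly recommended Apache directives look like this (a sketch only; review Apache's documentation and check with your hosting setup before changing anything):

# Disable automatic directory listings so open directories aren't browsable
Options -Indexes

# Avoid advertising detailed server version information
ServerTokens Prod
ServerSignature Off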

  • Stay up-to-date with the latest software updates and patches.
A common pitfall for many webmasters is to install a forum or blog on their website and then forget about it. Much like taking your car in for a tune-up, it's important to make sure you have all the latest updates for any software program you have installed. Need some tips? Blogger Mark Blair has a few good ones, including making a list of all the software and plug-ins used for your website and keeping track of the version numbers and updates. He also suggests taking advantage of any update feeds those software sites may provide.

  • Regularly keep an eye on your log files.
Making this a habit has many great benefits, one of which is added security. You might be surprised with what you find.

  • Check your site for common vulnerabilities.
Avoid having directories with open permissions. This is almost like leaving the front door to your home wide open, with a door mat that reads "Come on in and help yourself!" Also check for any XSS (cross-site scripting) and SQL injection vulnerabilities. Finally, choose good passwords. The Gmail support center has some good guidelines to follow, which can be helpful for choosing passwords in general.
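As one concrete illustration for the SQL injection point, use parameterized queries instead of pasting user input into SQL strings. A minimal Python sketch (the table and values are hypothetical):

import sqlite3

# Tiny in-memory example; a real site would use its own database and driver.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice"  # imagine this arrived from a form field
# Unsafe: "SELECT * FROM users WHERE name = '" + user_input + "'"
# Safer: the ? placeholder lets the database driver handle quoting and escaping.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)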

  • Be wary of third-party content providers.
If you're considering installing an application provided by a third party, such as a widget, counter, ad network, or webstat service, be sure to exercise due diligence. While there's lots of great third-party content on the web, it's also possible for providers to use these applications to push exploits, such as dangerous scripts, towards your visitors. Make sure the application is created by a reputable source. Do they have a legitimate website with support and contact information? Have other webmasters used the service?

  • Try a Google site: search to see what's indexed.
This may seem a bit obvious, but it's commonly overlooked. It's always a good idea to do a sanity check and make sure things look normal. If you're not already familiar with the site: search operator, it's a way for you to restrict your search to a specific site. For example, the search site:googleblog.blogspot.com will only return results from the Official Google Blog.

  • Sign up for Webmaster Tools.
Webmaster Tools is free and includes all kinds of good stuff, like a site status wizard and tools for managing how Googlebot crawls your site. Another nice feature is that if Google believes your site has been hacked to host malware, our webmaster console will show more detailed information, such as a sample of harmful URLs. Once you think the malware is removed, you then can request a reevaluation through Webmaster Tools.

  • Use secure protocols.
SSH and SFTP should be used for data transfer, rather than plain text protocols such as telnet or FTP. SSH and SFTP use encryption and are much safer. For this and many other useful tips, check out StopBadware.org's Tips for Cleaning and Securing Your Website.

Here's some great content about online security and safety with pointers to lots of useful resources. It's a good one to add to your Google Reader feeds. :)

  • Contact your hosting company for support.
Most hosting companies have helpful and responsive support groups. If you think something may be wrong, or you simply want to make sure you're in the know, visit their website or give 'em a call.

We hope you find these tips helpful. If you have some of your own tips you'd like to share, feel free to leave a comment below or start a discussion in the Google Webmaster Help group. Practice safe webmastering!

Thursday, 17 September 2015

Video Sitemaps: Is your video part of a gallery?

Webmaster Level: All

Often a website that hosts videos will have a common top-level page, known as a gallery, that groups conceptually related videos together. Such a page may be of interest to a user searching on that subject, and it makes it easier for users to find exactly what they're looking for. In this case, you can use a Sitemap to tell Google the URL of the gallery page on which each video appears.


You can specify the URL of the gallery level page using the optional tag <video:gallery_loc> on a per-video basis. Note that only one gallery_loc is allowed per video.
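For instance, a Video Sitemap entry that points to a gallery page might look roughly like this (all URLs and text below are placeholders; see the Sitemap specifications in our Help Center for the full list of required tags):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
  <url>
    <loc>http://www.example.com/videos/grilling-steaks.html</loc>
    <video:video>
      <video:title>Grilling steaks for summer</video:title>
      <video:description>How to grill the perfect steak.</video:description>
      <video:thumbnail_loc>http://www.example.com/thumbs/steaks.jpg</video:thumbnail_loc>
      <video:content_loc>http://www.example.com/videos/steaks.flv</video:content_loc>
      <video:gallery_loc>http://www.example.com/videos/cooking-gallery</video:gallery_loc>
    </video:video>
  </url>
</urlset>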

For more information on Google Videos, including Sitemap specifications, please visit our Help Center. To post questions and search for answers, check out our Help Forum.

Tuesday, 15 September 2015

For Those Wondering About Public Service Search

Update: The described product or service is no longer available. More information.

We recently learned of a security issue with our Public Service Search service and disabled login functionality temporarily to protect our Public Service Search users while we were working to fix the problem. We are not aware of any malicious exploits of this problem and this service represents an extremely small portion of searches.

We have a temporary fix in place currently that prevents exploitation of this problem and will have a permanent solution in place shortly. Unfortunately, the temporary fix may inconvenience a small number of Public Service Search users in the following ways:

* Public Service Search is currently not open to new signups.
* If you use Public Service Search on your site, you are currently unable to log in to make changes, but rest assured that Public Service Search continues to function properly on your site.
* The template system is currently disabled, so search results will appear in a standard Google search results format, rather than customized to match the look and feel of your site. However, the search results themselves are not being modified.


If you are a Public Service Search user and are having trouble logging in right now, please sit tight. As soon as the permanent solution is in place the service will be back on its feet again. In the meantime, you will still be able to provide site-specific searches on your site as usual.

Google introduced this service several years ago to support universities and non-profit organizations by offering ad-free search capabilities for their sites. Our non-profit and university users are extremely important to us and we apologize for any inconvenience this may cause.

Please post any questions or concerns in our webmaster discussion forum and we'll try our best to answer any questions you may have.

Tips for getting help with your site

Webmaster Level: All

As a search company, we at Google try to develop scalable solutions to problems. In fact, Webmaster Tools was born out of this instinct: rather than fighting the losing battle of trying to respond to questions via email (and in multiple languages!), we developed an automated, scalable product that gives webmasters like you information about your sites and lets you handle many requests yourself. Now you can streamline the crawling of your site, improve your sitelinks, or clean up after a malware attack all on your own.

Of course, our Help Forum still gets hundreds of questions from site owners every week — everything from "Why isn't my site in Google?" to very specific questions about a particular API call or a typo in our documentation. When we see patterns—such as a string of questions about one particular topic—we continue to use that information in scalable ways, such as to help us decide which parts of the product need work, or what new features we should develop. But we also still answer a lot of individual questions in our forum, on our blog, and at industry events. However, we can't answer them all.

So how do we decide which questions to tackle? We have a few guiding principles that help us make the most of the time we spend in places like our forum. We believe that there are many areas in which Google’s interests and site owners’ interests overlap, and we’re most motivated by questions that fall into these areas. We want to improve our search results, and improve the Internet; if we can help you make your site faster, safer, more compelling, or more accessible, that’s good for both of us, and for Internet users at large. We want to help as many people at a time as we can, so we like questions that are relevant to more than just one person, and we like to answer them publicly. We want to add value with the time we spend, so we prefer questions where we can provide more insight than the average person, rather than just regurgitating what’s already written in our Help Center.

The reason I tell you all this is because you can greatly increase your chances of getting an answer if you make it clear how your question helps us meet these goals. Here are some tips for increasing the likelihood that someone will answer your question:
  1. Ask in public.
    If you post your question in our forum, the whole world gets to see the answer. Then when Betty has the same question a week later, she benefits because she can find the answer instantly in our forum, and I benefit because it saves me from having to answer the same question twice (or ten times, or fifty times, or...). We have a very strong preference for answering questions publicly (in a forum, on a blog, at a conference, in a video...) so that many people can benefit from the answer.
  2. Do your homework.
    We put a lot of effort into writing articles, blog posts and FAQs to help people learn about search and site-building, and we strongly encourage you to search our Help Center, blog and/or forum for answers before asking a question. You may find an answer on the spot. If you don’t, when you post your question be sure to indicate what resources you’ve already read and why they didn’t meet your needs: for example, “I read the Help Center article on affiliate websites but I’m still not sure whether this particular affiliate page on my site has enough added value; can I get some feedback?” This shows that you’ve taken the time to try to help yourself, it saves everyone from reiterating the obvious solutions if you’ve already ruled those out, and it will help get you a more specific and relevant answer. It can also help us improve our documentation if something’s missing.
  3. Be specific.
    If you ask a vague question, you’re likely to get a vague answer. The more details and context you can give, the more able someone will be to give you a relevant, personalized answer. For example, “Why was my URL removal request denied?” is likely to get you a link to this article, as removals can be denied for a variety of reasons. However, if you say what type of removal you requested, what denial reason you got, and/or the URL in question, you’re more likely to get personalized advice on what went wrong in your case and what you can do differently.
  4. Make it relevant to others.
    As I said earlier, we like to help as many people at a time as we can. If you make it clear how your question is relevant to more people than just you, we’ll have more incentive to look into it. For example: “How can site owners get their videos into Google Video search? In particular, I’m asking about the videos on www.example.com.”
  5. Let us know if you’ve found a bug.
    As above, the more specific you can be, the better. What happened? What page or URL were you on? If it’s in Webmaster Tools, what site were you managing? Do you have a screenshot? All of these things help us track down the issue sooner. We appreciate your feedback, but if it’s too vague we won’t understand what you’re trying to tell us!
  6. Stay on-topic.
    Have a question about Google Analytics? iGoogle? Google Apps? That’s great; go ask it in the Analytics / iGoogle / Apps forum. Not every Googler is familiar with every product Google offers, so you probably won’t get an answer if you’re asking a Webmaster Central team member about something other than Web Search or Webmaster Tools.
  7. Stay calm.
    Trust me, we’ve heard it all. Making threats, being aggressive or accusatory, YELLING IN ALL CAPS, asking for “heeeeeeeeeeeeeeelp!!!!!1!!,” or claiming Google is involved in a mass conspiracy against you & your associates because your sites aren’t ranked on page one... Rather than making others want to help you, these things are likely to turn people off. The best way to get someone to help is by calmly explaining the situation, giving details, and being clear about what you’re asking for.
  8. Listen, even when it’s not what you wanted to hear.
    The answer to your question may not always be the one you wanted; but that doesn’t mean that answer isn’t correct. There are many areas of SEO and website design that are as much an art as a science, so a conclusive answer isn’t always possible. When in doubt, feel free to ask people to cite their sources, or to explain how/where they learned something. But keep an open mind and remember that most people are just trying to help, even if they don’t agree with you or tell you what you wanted to hear.
Bonus tip: Are you more comfortable communicating in a language other than English? We have Webmaster Help Forums available in 18 other languages; you can find the list here.

Monday, 14 September 2015

Subscriber stats and more

We're rolling out some exciting new features in Webmaster Tools.

First of all, subscriber stats are now available. If you publish feeds, Webmaster Tools now shows you the number of aggregated subscribers you have from Google services such as Google Reader, iGoogle, and Orkut. We hope this will make it easier to track subscriber statistics across multiple feeds, as well as offer an improvement over parsing through server logs for feed information.


To improve the navigation and look and feel, we've also made some changes to the interface, including:
  • No more tabs! Navigate through the new sidebar.
  • Breadcrumbs in the page title for easier product navigation.
  • A sidebar that expands and contracts to show and hide options based on your current goal.
  • New sidebar topics: Overview, Diagnostics, Statistics, Links, Sitemaps, and Tools.
And last but not least, Webmaster Tools is now available in 20 languages! In addition to US English, UK English, French, Italian, Spanish, German, Dutch, Brazilian Portuguese, Traditional Chinese, Simplified Chinese, Korean, Russian, Japanese, Danish, Finnish, Norwegian, Swedish, and Polish, Webmaster Tools are now in Turkish and Romanian.

Sign in to see these changes for yourself. For questions or feedback, please post in the Google Webmaster Tools section of our Webmaster Help Group.

Update: some of the functionality described in this post is no longer available. More information.

Answering the top questions from government webmasters

Webmaster level: Beginner - Intermediate

Government sites, from city to state to federal agencies, are extremely important to Google Search. For one thing, governments have a lot of content — and government websites are often the canonical source of information that’s important to citizens. Around 20 percent of Google searches are for local information, and local governments are experts in their communities.

That’s why I’ve spoken at the National Association of Government Webmasters (NAGW) national conference for the past few years. It’s always interesting speaking to webmasters about search, but the people running government websites have particular concerns and questions. Since some questions come up frequently I thought I’d share this FAQ for government websites.

Question 1: How do I fix an incorrect phone number or address in search results or Google Maps?

Although managing their agency’s site is plenty of work, government webmasters are often called upon to fix problems found elsewhere on the web too. By far the most common question I’ve taken is about fixing addresses and phone numbers in search results. In this case, government site owners really can do it themselves, by claiming their Google+ Local listing. Incorrect or missing phone numbers, addresses, and other information can be fixed by claiming the listing.

Most locations in Google Maps have a Google+ Local listing — businesses, offices, parks, landmarks, etc. I like to use the San Francisco Main Library as an example: it has contact info, detailed information like the hours they’re open, user reviews and fun extras like photos. When we think users are searching for libraries in San Francisco, we may display a map and a listing so they can find the library as quickly as possible.

If you work for a government agency and want to claim a listing, we recommend using a shared Google Account with an email address at your .gov domain if possible. Usually, ownership of the page is confirmed via a phone call or postcard.

Question 2: I’ve claimed the listing for our office, but I have 43 different city parks to claim in Google Maps, and none of them have phones or mailboxes. How do I claim them?

Use the bulk uploader! If you have 10 or more listings / addresses to claim at the same time, you can upload a specially-formatted spreadsheet. Go to www.google.com/places/, click the "Get started now" button, and then look for the "bulk upload" link.

If you run into any issues, use the Verification Troubleshooter.

Question 3: We're moving from a .gov domain to a new .com domain. How should we move the site?

We have a Help Center article with more details, but the basic process involves the following steps:
  • Make sure you have both the old and new domain verified in the same Webmaster Tools account.
  • Use a 301 redirect on all pages to tell search engines your site has moved permanently (see the example configuration after this list).
    • Don't do a single redirect from all pages to your new home page — this gives a bad user experience.
    • A 1:1 match between pages on your old site and your new site is recommended; if there's no equivalent page, try to redirect to a new page with similar content.
    • If you can't do redirects, consider cross-domain canonical links.
  • Make sure to check if the new location is crawlable by Googlebot using the Fetch as Google feature in Webmaster Tools.
  • Use the Change of Address tool in Webmaster Tools to notify Google of your site's move.
  • Have a look at the Links to Your Site in Webmaster Tools and inform the important sites that link to your content about your new location.
  • We recommend not implementing other major changes at the same time, like large-scale content, URL structure, or navigational updates.
  • To help Google pick up new URLs faster, use the Fetch as Google tool to ask Google to crawl your new site, and submit a Sitemap listing the URLs on your new site.
  • To prevent confusion, it's best to retain control of your old site’s domain and keep redirects in place for as long as possible — at least 180 days.
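For the 301 redirects mentioned in the checklist, one possible Apache configuration on the old domain looks like this (a sketch that assumes the new site mirrors the old URL structure and that mod_rewrite is available; adjust for your own server and paths):

# .htaccess on the old domain
RewriteEngine On
# Permanently redirect every request to the same path on the new domain
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]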
What if you’re moving just part of the site? This question came up too — for example, a city might move its "Tourism and Visitor Info" section to its own domain.

In that case, many of the same steps apply: verify both sites in Webmaster Tools, use 301 redirects, clean up old links, etc. In this case you don't need to use the Change of Address form in Webmaster Tools since only part of your site is moving. If for some reason you’ll have some of the same content on both sites, you may want to include a cross-domain canonical link pointing to the preferred domain.

Question 4: We've done a ton of work to create unique titles and descriptions for pages. How do we get Google to pick them up?

First off, that's great! Better titles and descriptions help users decide to click through to get the information they need on your page. The government webmasters I’ve spoken with care a lot about the content and organization of their sites, and work hard to provide informative text for users.

Google's generation of page titles and descriptions (or "snippets") is completely automated and takes into account both the content of a page as well as references to it that appear on the web. Changes are picked up as we recrawl your site. But you can do two things to let us know about URLs that have changed:
  • Submit an updated XML Sitemap so we know about all of the pages on your site.
  • In Webmaster Tools, use the Fetch as Google feature on a URL you’ve updated. Then you can choose to submit it to the index.
    • You can choose to submit all of the linked pages as well — if you’ve updated an entire section of your site, you might want to submit the main page or an index page for that section to let us know about a broad collection of URLs.

Question 5: How do I get into the YouTube government partner program?

For this question, I have bad news, good news, and then even better news. On the one hand, the government partner program has been discontinued. But don’t worry, because most of the features of the program are now available to your regular YouTube account. For example, you can now upload videos longer than 10 minutes.

Did I say I had even better news? YouTube has added a lot of functionality useful for governments in the past year.

I hope this FAQ has been helpful, but I’m sure I haven’t covered everything government webmasters want to know. I highly recommend our Webmaster Academy, where you can learn all about making your site search-engine friendly. If you have a specific question, please feel free to add a question in the comments or visit our really helpful Webmaster Central Forum.

Saturday, 12 September 2015

Better backlink data for site owners

Webmaster level: intermediate

In recent years, our free Webmaster Tools product has provided a sample of roughly 100,000 backlinks when you click the "Download more sample links" button. Until now, we've selected those links primarily in lexicographical order. That meant that for some sites, you didn't get as complete a picture of the site's backlinks because the link data skewed toward the beginning of the alphabet.

Based on feedback from the webmaster community, we're improving how we select these backlinks to give sites a fuller picture of their backlink profile. The most significant improvement you'll see is that most of the links are now sampled uniformly from the full spectrum of backlinks rather than alphabetically. You're also more likely to get example links from different top-level domains (TLDs) as well as from different domain names. The new links you see will still be sorted alphabetically.

Starting soon, when you download your data, you'll notice a much broader, more diverse cross-section of links. Site owners looking for insights into who recommends their content will now have a better overview of those links, and those working on cleaning up any bad linking practices will find it easier to see where to spend their time and effort.

Thanks for the feedback, and we'll keep working to provide helpful data and resources in Webmaster Tools. As always, please ask in our forums if you have any questions.




Setting the preferred domain

Based on your input, we've recently made a few changes to the preferred domain feature of webmaster tools. And since you've had some questions about this feature, we'd like to answer them.

The preferred domain feature enables you to tell us if you'd like URLs from your site crawled and indexed using the www version of the domain (http://www.example.com) or the non-www version of the domain (http://example.com). When we initially launched this, we added the non-preferred version to your account when you specified a preference so that you could see any information associated with the non-preferred version. But many of you found that confusing, so we've made the following changes:
  • When you set the preferred domain, we no longer will add the non-preferred version to your account.
  • If you had previously added the non-preferred version to your account, you'll still see it listed there, but you won't be able to add a Sitemap for the non-preferred version.
  • If you have already set the preferred domain and we had added the non-preferred version to your account, we'll be removing that non-preferred version from your account over the next few days.
Note that if you would like to see any information we have about the non-preferred version, you can always add it to your account.

Here are some questions we've had about this preferred domain feature, and our replies.

Once I've set my preferred domain, how long will it take before I see changes?
The time frame depends on many factors (such as how often your site is crawled and how many pages are indexed with the non-preferred version). You should start to see changes within a few weeks of setting your preferred domain.

Is the preferred domain feature a filter or a redirect? Does it simply cause the search results to display on the URLs that are in the version I prefer?
The preferred domain feature is not a filter. When you set a preference, we:
  • Consider all links that point to the site (whether those links use the www version or the non-www version) to be pointing at the version you prefer. This helps us more accurately determine PageRank for your pages.
  • Once we know that both versions of a URL point to the same page, we try to select the preferred version for future crawls.
  • Index pages of your site using the version you prefer. If some pages of your site are indexed using the www version and other pages are indexed using the non-www version, then over time, you should see a shift to the preference you've set.
If I use a 301 redirect on my site to point the www and non-www versions to the same version, do I still need to use this feature?
You don't have to use it, as we can follow the redirects. However, you still can benefit from using this feature in two ways: we can more easily consolidate links to your site and over time, we'll direct our crawl to the preferred version of your pages.

If I use this feature, should I still use a 301 redirect on my site?
You don't need to use it for Googlebot, but you should still use the 301 redirect, if it's available. This will help visitors and other search engines. Of course, make sure that you point to the same URL with the preferred domain feature and the 301 redirect.
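If you do add such a redirect, a typical Apache sketch for pointing the non-www host at the www host looks like this (assumes mod_rewrite; swap the host names around if you prefer the non-www version):

RewriteEngine On
# Send any request for example.com to the same path on www.example.com
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]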

You can find more about this in our webmaster help center.

Google, duplicate content caused by URL parameters, and you



How can URL parameters, like session IDs or tracking IDs, cause duplicate content?
When user and/or tracking information is stored through URL parameters, duplicate content can arise because the same page is accessible through numerous URLs. It's what Adam Lasnik referred to in "Deftly Dealing with Duplicate Content" as "store items shown (and -- worse yet -- linked) via multiple distinct URLs." In the example below, URL parameters create three URLs which access the same product page.



Why should you care?
When search engines crawl identical content through varied URLs, there may be several negative effects:

1. Having multiple URLs can dilute link popularity. For example, in the diagram above, rather than 50 links to your intended display URL, the 50 links may be divided three ways among the three distinct URLs.

2. Search results may display user-unfriendly URLs (long URLs with tracking IDs or session IDs), which can:
* decrease the chances of a user selecting the listing
* undermine your branding efforts


How we help users and webmasters with duplicate content
We've designed algorithms to help prevent duplicate content from negatively affecting webmasters and the user experience.

1. When we detect duplicate content, such as through variations caused by URL parameters, we group the duplicate URLs into one cluster.

2. We select what we think is the "best" URL to represent the cluster in search results.

3. We then consolidate properties of the URLs in the cluster, such as link popularity, to the representative URL.

Consolidating properties from duplicates into one representative URL often provides users with more accurate search results.


If you find you have duplicate content as mentioned above, can you help search engines understand your site?
First, no worries: many sites on the web utilize URL parameters, and for valid reasons. But yes, you can help reduce potential problems for search engines by:

1. Removing unnecessary URL parameters -- keep the URL as clean as possible.

2. Submitting a Sitemap with the canonical (i.e. representative) version of each URL. While we can't guarantee that our algorithms will display the Sitemap's URL in search results, it's helpful to indicate the canonical preference.


How can you design your site to reduce duplicate content?
Because of the way Google handles duplicate content, webmasters need not be overly concerned with the loss of link popularity or loss of PageRank due to duplication. However, to reduce duplicate content more broadly, we suggest:

1. When tracking visitor information, use 301 redirects to redirect URLs with parameters such as affiliateID, trackingID, etc. to the canonical version.

2. Use a cookie to set the affiliateID and trackingID values.

If you follow this guideline, your webserver logs could appear as:

127.0.0.1 - - [19/Jun/2007:14:40:45 -0700] "GET /product.php?category=gummy-candy&item=swedish-fish&affiliateid=ABCD HTTP/1.1" 301 -

127.0.0.1 - - [19/Jun/2007:14:40:45 -0700] "GET /product.php?item=swedish-fish HTTP/1.1" 200 74

And the session file storing the raw cookie information may look like:

category|s:11:"gummy-candy";affiliateid|s:4:"ABCD";

Please be aware that if your site uses cookies, your content (such as product pages) should remain accessible with cookies disabled.
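To make the first suggestion above more concrete, here's a small sketch (standard library Python, with hypothetical parameter names) of how a server might strip tracking parameters to build the canonical URL it redirects to, keeping the removed values so they can be stored in a cookie instead:

from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PARAMS = {"affiliateid", "trackingid", "sessionid"}  # hypothetical names

def canonicalize(url):
    # Return (canonical_url, tracking_values): the URL with tracking
    # parameters removed, plus the removed values to store in a cookie.
    scheme, netloc, path, query, fragment = urlsplit(url)
    kept, tracking = [], {}
    for key, value in parse_qsl(query):
        if key.lower() in TRACKING_PARAMS:
            tracking[key] = value
        else:
            kept.append((key, value))
    return urlunsplit((scheme, netloc, path, urlencode(kept), fragment)), tracking

print(canonicalize(
    "http://www.example.com/product.php?category=gummy-candy"
    "&item=swedish-fish&affiliateid=ABCD"))
# -> ('http://www.example.com/product.php?category=gummy-candy&item=swedish-fish',
#     {'affiliateid': 'ABCD'})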


How can we better assist you in the future?
We recently published ideas from SMX Advanced on how search engines can help webmasters with duplicate content. If you have an opinion on the topic, please join our conversation in the Webmaster Help Group (we've already started the thread).

Update: for more information, please see our Help Center article on canonicalization.

Wednesday, 9 September 2015

Unifying content under multilingual templates

Webmaster Level: Advanced

Update: This markup can now be used for multilingual and multi-regional content in general. More information.


If you have a global site containing pages where the:
  • template (i.e. side navigation, footer) is machine-translated into various languages,
  • main content remains unchanged, creating largely duplicate pages,
and sometimes search results direct users to the wrong language, we’d like to help you better target your international/multilingual audience through:

<link rel="alternate" hreflang="a-different-language" href="http://url-of-the-different-language-page" />

As you know, when rel=”canonical” or a 301 response code is properly implemented, we become more precise in clustering information from duplicate URLs, such as consolidating their linking properties. Now, when rel=”alternate” hreflang=”x” is included in conjunction with rel=”canonical” or 301s, not only will our indexing and linking properties be more accurate, but we can better serve users the URL of their preferred language.

Sample configuration that’s prime for rel=”alternate” hreflang=”x”

How does this all work? Imagine that you’re the proud owner of example.com, a site called “The Network” where you allow users to create their very own profile. Let’s say Javier Lopez, a Spanish speaker, makes his page at http://es.example.com/javier-lopez:


Because you’re trying to target a multilingual audience, once Javier hits “Publish,” his profile becomes immediately available in other languages with the translated templates. Also, each of the new language versions is served on a separate URL.


Two localized versions, http://en.example.com/javier-lopez in English and http://fr.example.com/javier-lopez in French

Background on the old issue: duplicate content caused by language variations

The configuration above allowed visitors speaking different languages to more easily interpret the content, but for search engines it was slightly problematic: there are three URLs (English, French, and Spanish versions) for the same main content in Javier’s profile. Webmasters wanted to avoid duplicate content issues (such as PageRank dilution) from these multiple versions and still ensure that we would serve the appropriate version to the user.

A new solution for localized templates

First of all, just to be clear, the strategy we’re proposing isn’t appropriate for multilingual sites that completely translate each page’s content. We’re trying to specifically improve the situation where the template is localized but the main content of a page remains duplicate/identical across language/country variants.

Before we get into the specific steps, our prior advice remains applicable:
  • Have one URL associated with one piece of content. We recommend against using the same URL for multiple languages, such as serving both French and English versions on example.com/page.html based on user information (IP address, Accept-Language HTTP header).

  • When multiple languages are at play, it’s best to include the language or country indication in the URL, e.g., example.com/en/welcome.html and example.com/fr/accueil.html (which specify “en” and “fr”) rather than example.com/welcome.html and example.com/accueil.html (which don’t contain an explicit country/language specification). More suggestions can be found in our blog posts about designing localized URLs and multilingual sites.
For the new feature:
Step 1: Select the proper canonical.
The canonical designates the version of your content you’d like indexed and returned to users.
The first step towards making the right content indexable is to pick one canonical URL that best reflects the genuine locale of the page’s main content. In the example above, since Javier is a Spanish-speaking user and he created his profile on es.example.com, http://es.example.com/javier-lopez is the logical canonical. The title and snippet in all locales will be selected from the canonical URL.

Once you have the canonical URL picked out, you can either:
A. 301 (permanent redirect) from the language variants to the canonical

As an example, if a French speaker visits fr.example.com/javier-lopez (not the canonical), have this page set a cookie to remember the user's language preference of French. Then permanently redirect from fr.example.com/javier-lopez to the canonical at es.example.com/javier-lopez. Because of the cookie, es.example.com/javier-lopez will still render its boilerplate in French (even on the es.example.com subdomain!). Similarly, en.example.com/javier-lopez would set the value of this cookie to English and then 301 redirect to es.example.com/javier-lopez.

Including a language selection link is also helpful should a multilingual user prefer a different experience of your site.

B. Use rel=”canonical”

On the other language variants, include a link rel=”canonical” tag pointing to your chosen canonical. In our example, since the canonical for Javier’s profile is the Spanish version, the English and French pages (and optionally even the Spanish page itself) would include <link rel="canonical" href="http://es.example.com/javier-lopez" />.

Cookies are not involved in this setup. Therefore, a French speaker will be served es.example.com/javier-lopez with a Spanish template. Implement step 2 if you want French speakers to be shown the French version, fr.example.com/javier-lopez, in Google search results.
Step 2: On the canonical URL, specify the various language versions via the rel=”alternate” link tag, using its hreflang attribute.

rel=”alternate” URLs can be displayed in search results in accordance with a user’s language preference. The title and snippet, however, remain generated from the canonical URL (as is customary with rel=”canonical”), not from the content of any rel=”alternate”.
You can help Google display the correctly localized variant of your URL to our international users by adding the following tags to http://es.example.com/javier-lopez, the selected canonical:

<link rel="alternate" hreflang="en" href="http://en.example.com/javier-lopez" />

<link rel="alternate" hreflang="fr" href="http://fr.example.com/javier-lopez" />

rel=”alternate” indicates that an alternate version of the page is available at the URL given in the href value. hreflang specifies the language of that alternate URL, as an ISO 639-1 language code.
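
If your pages exist in many locales, you may prefer to generate these tags rather than hand-write them. Below is a small illustrative sketch; the dictionary of language codes to variant URLs and the helper name are assumptions for the example, not a required structure:

# Sketch: build rel="alternate" hreflang link tags for a canonical page.
from html import escape

def hreflang_link_tags(alternates):
    """alternates: dict mapping ISO 639-1 code -> absolute URL of that variant."""
    return "\n".join(
        f'<link rel="alternate" hreflang="{escape(code)}" href="{escape(url)}" />'
        for code, url in sorted(alternates.items())
    )

print(hreflang_link_tags({
    "en": "http://en.example.com/javier-lopez",
    "fr": "http://fr.example.com/javier-lopez",
}))
# <link rel="alternate" hreflang="en" href="http://en.example.com/javier-lopez" />
# <link rel="alternate" hreflang="fr" href="http://fr.example.com/javier-lopez" />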

Please note: If your site supports many languages and you’re worried about the increased file size when declaring numerous rel=”alternate” URLs, please see our Help Center article about configuring rel=”alternate” with file size constraints.
Once the steps are completed, the configuration on “The Network” would look like this:
  • http://en.example.com/javier-lopez
    either 301s with a language cookie or contains <link rel="canonical" href="http://es.example.com/javier-lopez" />
  • http://fr.example.com/javier-lopez
    either 301s with a language cookie or contains <link rel="canonical" href="http://es.example.com/javier-lopez" />
  • http://es.example.com/javier-lopez
    is the canonical and contains
    <link rel="alternate" hreflang="en" href="http://en.example.com/javier-lopez" />
    and
    <link rel="alternate" hreflang="fr" href="http://fr.example.com/javier-lopez" />

Results of the above implementation
  • When your content is returned in search results, users will likely see the URL that corresponds to their language preference, whether or not it’s the canonical. (Good news!) This is because with rel=”canonical” or a 301 redirect we can cluster the language variations with the canonical, and with rel=”alternate” hreflang=”x” we can serve the URL in the most appropriate language at query time: English speakers will be served en.example.com/javier-lopez as the result URL for Javier’s profile, French speakers will see fr.example.com/javier-lopez, and Spanish speakers will see es.example.com/javier-lopez.

  • By implementing step 1, only content from the canonical version will be available for users in search results (i.e. content from the duplicate versions won’t be searchable). Because the Spanish version es.example.com/javier-lopez is the canonical, queries that include template content from this page, e.g. [Javier Lopez familia] -- when using any language preference -- may return his profile (content from the canonical version). On the other hand, queries that include template content of the “duplicate” version, e.g. [Javier Lopez family], are less likely to return his profile page. If you would like the other language versions indexed separately and searchable, avoid using rel=”canonical” and rel=”alternate”.

  • Indexing properties, such as linking information, from the duplicate language variants will be consolidated with the canonical.

To recap (one more time, with feeling!)

For sites that have their template localized but keep their pages’ main content untranslated:

Step 1: Once you have the canonical picked out you can use either rel=”canonical” or a 301 (permanent redirect) from the various localized pages to the canonical URL.

Step 2: On the canonical URL, specify the language-specific duplicated content with different boilerplate via the rel=”alternate” link tag, using its hreflang attribute. This way, Google can show the correctly localized variant of your URLs to our international users.

We realize this can be a little complicated, so if you have questions, please ask in our webmaster forum!

Tuesday, 8 September 2015

Google Instant: Impact on Search queries

Webmaster Level: All

Webmasters, you may notice some changes in your Search queries data due to the launch of Google Instant. With Google Instant, the page updates dynamically to show results for the top completion of what the user has typed, which means people could be seeing and visiting your website much faster than before, often without clicking the search button or hitting “enter.”


While the presentation of the search results may change, our most important advice to webmasters remains the same: Users want to visit pages with compelling content and a great user experience.

With Google Instant, you may notice an increase in impressions because your site will appear in search results as users type.


Impressions are measured in three ways with Google Instant:
  1. Your site is displayed in search results as a response to a user’s completed query (e.g. by pressing “enter” or selecting a term from autocomplete). This is the traditional model.

    With Google Instant, we also measure impressions in these new cases:

  2. The user begins to type a term on Google and clicks on a link on the page, such as a search result, ad, or a related search.

  3. The user stops typing, and the results are displayed for a minimum of 3 seconds.
To give an example, let’s say your site has lots of impressions for [hotels] and [hotels in santa cruz]. Now, because Instant is quickly fetching results as the user types, the user could see your site in the search results for [hotels] after typing only the partial query [hote]. If a user types the partial query [hote] and then clicks on any result on the page for [hotels], that counts as an impression for your site. That impression will appear in Webmaster Tools for the query [hotels]. The term 'hotels' would also be included in the HTTP referrer when the user clicks through to visit your website.
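
If you analyze your own server logs, you can recover that term from the referrer. Here is a rough sketch; it assumes the query arrives in the referrer's q parameter, which was typical for Google results pages at the time but is not guaranteed for every request:

# Sketch: pull the search term out of a Google referrer URL, if present.
from urllib.parse import parse_qs, urlparse

def query_from_referrer(referrer):
    parsed = urlparse(referrer)
    if "google." not in parsed.netloc:
        return None
    terms = parse_qs(parsed.query).get("q")
    return terms[0] if terms else None

print(query_from_referrer("http://www.google.com/search?q=hotels&hl=en"))  # hotels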

It’s likely that your site will still see impressions for queries like [hotels in santa cruz], but because Instant is helping the user find results faster, your site may see an increase in impressions for shorter terms as well.

Let us know if we can help you better understand how these changes impact Webmaster Tools, measured Search queries and impressions, CTR, or anything else. We’d love to hear from you in our blog comments or Webmaster Help Forum.

Monday, 7 September 2015

Information about Sitelinks

You may have noticed that some search results include a set of links below them to pages within the site. We've just updated our help center with information on how we generate these links, called Sitelinks, and why we show them.

Our process for generating Sitelinks is completely automated. We show them when we think they'll be most useful to searchers, saving them time from hunting through web pages to find the information they are looking for. Over time, we may look for ways to incorporate input from webmasters too.

Sunday, 6 September 2015

Webmaster Central gets a new look

Written by David Sha, Webmaster Tools Team

We launched Webmaster Central back in August 2006, with a goal of creating a place for you to learn more about Google's crawling and indexing of websites, and to offer tools for submitting sitemaps and other content. Given all of your requests and recommendations, we've also been busy working behind the scenes to roll out exciting new features for Webmaster Tools, like internal/external links data and the Message Center, over the past year.

And so today, we're unveiling a new look on the Webmaster Central landing page at http://www.google.com/webmasters. You'll still find all of the tools and resources you've come to love like our Webmaster Blog and discussion group -- but now, in addition to these, we've added a few more you might enjoy and find useful. We hope that the new layout will make it easier to discover some additional resources that will help you learn even more about how to improve traffic to your site, submit content to Google, and enhance your site's functionality.

Here's a brief look at some of the new additions:

Analyze your visitors. Google Analytics is a free tool for webmasters to better understand their visitor traffic in order to improve site content. With metrics including the amount of time spent on each page and the percentage of new vs. returning visits to a page, webmasters can tailor their site's content around pages that resonate most with visitors.

Add custom search to your pages. Google Custom Search Engine (CSE) is a great way for webmasters to incorporate search into their site and help their site visitors find what they're looking for. CSE gives webmasters access to an XML API, allowing greater control over the look and feel of the search results, so you can keep visitors on your site focused only on your content.

Leverage Google's Developer Tools. Google Code has tons of Google APIs and developer tools to help webmasters put technologies like Google Maps and AJAX Search on their websites.

Add gadgets to your webpage. Google Gadgets for your Webpage are a quick and easy way for webmasters to enhance their sites with content-rich gadgets, free from the Google Gadget directory. Adding gadgets to your webpage can make your site more interactive and useful to visitors, making sure they keep coming back.

We'd love to get your feedback on the new site. Feel free to comment below, or join our discussion group.

Saturday, 5 September 2015

Better details about when Googlebot last visited a page

Most people know that Googlebot downloads pages from web servers to crawl the web. Not as many people know that if Googlebot accesses a page and gets a 304 (Not Modified) response to an If-Modified-Since qualified request, Googlebot doesn't download the contents of that page. This reduces the bandwidth consumed on your web server.
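
For servers that don't already honor conditional requests, here is a minimal sketch of the idea using Python's standard http.server; the file name and port are illustrative assumptions, and a production web server would normally handle this for you:

# Sketch: answer If-Modified-Since with 304 when the resource hasn't changed.
import os
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = "index.html"  # hypothetical static page served at every path

class ConditionalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        mtime = datetime.fromtimestamp(os.path.getmtime(PAGE), tz=timezone.utc)
        mtime = mtime.replace(microsecond=0)
        header = self.headers.get("If-Modified-Since")
        if header:
            try:
                if mtime <= parsedate_to_datetime(header):
                    # Unchanged since the client's copy: send headers only.
                    self.send_response(304)
                    self.end_headers()
                    return
            except (TypeError, ValueError):
                pass  # unparsable date: fall through and send the full page
        with open(PAGE, "rb") as f:
            body = f.read()
        self.send_response(200)
        self.send_header("Last-Modified", format_datetime(mtime, usegmt=True))
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), ConditionalHandler).serve_forever()

With something like this in place, a repeat Googlebot visit to an unchanged page costs only a short 304 response rather than a full download.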

When you look at Google's cache of a page (for instance, by using the cache: operator or clicking the Cached link under a URL in the search results), you can see the date that Googlebot retrieved that page. Previously, the date we listed for the page's cache was the date that we last successfully fetched the content of the page. This meant that even if we visited a page very recently, the cache date might be quite a bit older if the page hadn't changed since the previous visit. This made it difficult for webmasters to use the cache date we display to determine Googlebot's most recent visit. Consider the following example:
  1. Googlebot crawls a page on April 12, 2006.
  2. Our cached version of that page notes that "This is G o o g l e's cache of http://www.example.com/ as retrieved on April 12, 2006 20:02:06 GMT."
  3. Periodically, Googlebot checks to see if that page has changed, and each time, receives a Not-Modified response. For instance, on August 27, 2006, Googlebot checks the page, receives a Not-Modified response, and therefore, doesn't download the contents of the page.
  4. On August 28, 2006, our cached version of the page still shows the April 12, 2006 date -- the date we last downloaded the page's contents, even though Googlebot last visited the day before.
We've recently changed the date we show for the cached page to reflect when Googlebot last accessed it (whether the page had changed or not). This should make it easier for you to determine the most recent date Googlebot visited the page. For instance, in the above example, the cached version of the page would now say "This is G o o g l e's cache of http://www.example.com/ as retrieved on August 27, 2006 13:13:37 GMT."

Note that this change will be reflected for individual pages as we update those pages in our index.

Thursday, 3 September 2015

New ways to view Webmaster Tools messages

Webmaster Level: All

Now there’s a new way to see just the messages for a specific site. A new Messages feature will appear on all site pages. The feature is just like the Message Center on the home page, except it’ll show only messages for the currently selected site. This gives you more freedom to choose how you want to view your messages: either for all your sites, or for just one site at a time.


Alerts will now be more prominent in the Message Center. These messages tell you about significant changes we’ve noticed related to your site that may indicate serious problems. For instance, alerts may warn you about an increase in crawl errors, an increase in 404 errors, or possible outages. With their newfound prominence comes a new name: what used to be “SiteNotice messages” will now simply be known as “alerts.”

Messages containing alerts will be marked with an icon to make them quickly distinguishable from other messages. Each site’s Dashboard will display a notification whenever the site has unread alerts. The Dashboard notification will lead to the new site Message Center with a filter enabled to show only alerts for the current site.


You can also enable the alerts filter yourself. On the home page, enabling the alerts filter across all your sites is a great way to see alerts you may have missed and may help you find problems common across multiple sites. Even with these changes we recommend you use the email forwarding feature to receive these important alerts without having to visit Webmaster Tools.

We hope these new features make it easier to manage your messages. If you have any questions, please post them in our Webmaster Help Forum or leave your comments below.