Monday, 30 May 2011

Pharmaceuticals Found in Drinking Water

Since the 1990s, scientists have been concerned about minute amounts of drugs in our drinking water. The concern escalated when studies found that fish in the Potomac River and elsewhere were showing both male and female characteristics. Scientists began investigating the effects of oral contraceptives found in sewage water, and the investigation later expanded to other drugs.

A common term used to describe drug pollution found in drinking water is PPCP, for Pharmaceuticals and Personal Care Products. PPCPs refer to any product used by individuals for personal health or cosmetic reasons, or used by agribusiness to improve the growth or health of livestock. PPCPs comprise a diverse collection of thousands of chemical substances, including prescription and over-the-counter therapeutic drugs, veterinary drugs, fragrances, lotions, and cosmetics.

PPCPs can enter the environment in a variety of ways:

· Medication residues pass out of the body and into the wastewater treatment system

· Externally applied drugs and personal care products are washed down the shower drain

· Unused or expired medications are placed in the trash or flushed down the toilet

The kinds of pharmaceuticals discovered in water tests include antibiotics, hormones, antiepileptic drugs, mood stabilizers and other medications. The drugs found in US drinking water are measured in parts per billion or trillion. Even though these are trace amounts, experts from private organizations and the government are concerned about the long-term exposure and health effects of these PPCPs. According to Benjamin Grumbles of the U.S. EPA, "We recognize it is a growing concern and we're taking it very seriously."

While current wastewater treatment systems filter out many contaminants and biosolids, none has been engineered specifically to eliminate PPCPs. Most metropolitan water utilities do not use the technology available to filter PPCPs from their water sources. As a result, these drugs stay in the water supply and are continually recycled into a city's water system.

Until our water utilities update their filtration processes, consumers can take several steps to improve their water situation. First, contact your local water utility and ask what pollutants they test for in the drinking water, and request to see the results. Second, don't flush expired or unused medications down the toilet. Most importantly, acquire an effective household water filter. There is a wide selection of filters on the market using the latest technologies to help remove the drugs in our drinking water.

Wednesday, 18 May 2011

Troubleshooting Instant Previews in Webmaster Tools

Webmaster level: All

In November, we launched Instant Previews to help users better understand if a particular result was relevant to their search query. Since launch, our Instant Previews team has been keeping an eye on common complaints and problems related to how pages are rendered for Instant Previews.

When we see issues with preview images, they are frequently due to:
  • Blocked resources due to a robots.txt entry
  • Cloaking: Erroneous content being served to the Googlebot user-agent
  • Poor alternative content when Flash is unavailable
To help webmasters diagnose these problems, we have a new Instant Preview tool in the Labs section of Webmaster Tools (in English only for now).
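For instance, blocked resources are often caused by a robots.txt rule like the following (the paths here are hypothetical), which prevents Googlebot from fetching the stylesheets and scripts a page needs, so the preview renders broken even though the page looks fine in a browser:

```
User-agent: *
Disallow: /css/
Disallow: /js/
```

Removing such rules for resources that are needed for rendering lets the preview renderer fetch them.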



Here, you can input the URL of any page on your site. We will then fetch the page from your site and try to render it both as it would display in Chrome and through our Instant Preview renderer. Please keep in mind that both of these renders are done using a recent build of WebKit which does not include plugins such as Flash or Silverlight, so it's important to consider the value of providing alternative content for these situations. Alternative content can be helpful to search engines, and visitors to your site without the plugin would benefit as well.

Below the renders, you’ll also see automated feedback on problems our system can detect such as missing or roboted resources. And, in the future, we plan to add more informative and timely feedback to help improve your Instant Previews!

Please direct your questions and feedback to the Webmaster Forum.

Tuesday, 17 May 2011

Deep Impact - We Now Have the Technology to Avoid It

We all want a planet fit in the future for our kids, and as a species we've become conscious over the last few decades of just how fragile the Earth really is. We've realised the devastation that could be caused by a supervolcano, an asteroid impact, earthquakes, coronal mass ejections from our nearby star, or a gamma-ray burst aimed at our planet, as well as our own self-inflicted harm to our environment in the form of climate change, pollution, harm to the marine ecosystem and ozone layer damage caused by chlorofluorocarbons.

A distinctive feature of Homo sapiens amongst the other animals with whom we share the Earth is our capacity not just to adapt to the environment, but actually to change it. Indeed we've changed it - adversely when it comes to ozone depletion, pollution and climate change - though we're now making inroads into reversing our negative impact. Yet all of these issues are 'walks in the park' compared to the other threats, with one major exception - asteroid impacts.

We live in a cosmic shooting gallery. Anyone who disagrees need look no further than Comet Shoemaker-Levy 9, ripped apart by Jupiter's gravity and slamming into the mighty gas giant in July 1994. If that collision had occurred on Earth, none of us would be here now.

The Earth, however, is not immune: collisions with both asteroids and comets have happened in our planet's past, and will happen again. The last time a civilisation-destroying asteroid 10 km across hit our terrestrial bull's-eye was 65 million years ago at Chicxulub on the Yucatán Peninsula. Luckily, there was no civilisation around to destroy, but it was a bad day for T. rex and its dinosaur cousins, along with 80% of the plant and animal species on the planet. The death of the dinosaurs, resulting at least partially from the impact on the Yucatán, had the benefit of allowing tiny mammals - and ultimately us - to exploit the ecological niches vacated by these most successful of animals (in terms of the longevity of their reign).

At the same time, though, it's sobering to think that mass extinctions caused by incoming asteroids occur, on average, every 100 million years or so, and the next one could wipe out mankind. It is not a question of if, but of when it will happen. Unlike T. rex, though, the good news is that we can avoid such an unpleasant visitor from space, even with our present technology. Or rather, we can make sure that such a potential impactor avoids us.

Worldwide, apart from the United States and NASA, most governments have not taken the problem of asteroid impacts seriously enough. Commendably, NASA has surveyed huge parts of the solar system, especially in the vicinity of Earth, for Near Earth Objects (NEOs) and Potentially Hazardous Asteroids (PHAs) - those which directly cross the Earth's orbit around the Sun. Such objects routinely pass between the Earth and the Moon, and even between the Earth and some of our geostationary satellites. Full details of the NEOs and PHAs discovered so far are available online.

So what can we do if an NEO or PHA is detected and has our name on it? If the asteroid is a loose, barely gravitationally bound conglomerate of ice and rock, the nuclear option of blowing it to smithereens with a detonation would replace one problem with a host of them: we would then face the prospect of a number of smaller fragments raining down on the Earth, likely causing as much destruction.

One of the most seriously considered scientific methods of removing the threat, once one of these killer inbound asteroids has been located, is not a Bruce Willis-style detonation at all, but instead makes use of the pressure of sunlight.

Light is composed of particles called photons, and like any particles, photons from the Sun exert pressure - albeit a very small one. Focus the photons into an intense beam using mirrors, find the asteroid early enough, and only a minuscule change in the object's trajectory from photon pressure is required.
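A back-of-the-envelope calculation shows just how small that pressure is. All figures below are illustrative assumptions (solar intensity near 1 AU, a mirror area, an asteroid mass), not mission parameters:

```python
# Rough estimate of the force from fully absorbed sunlight: F = I * A / c
SOLAR_INTENSITY = 1361.0   # W/m^2, roughly the solar constant at 1 AU (assumed)
SPEED_OF_LIGHT = 3.0e8     # m/s
MIRROR_AREA = 100.0        # m^2 of collected sunlight (assumed)

force = SOLAR_INTENSITY * MIRROR_AREA / SPEED_OF_LIGHT
print(f"Force: {force:.2e} N")  # on the order of half a millinewton

# Even a tiny force, applied for years, nudges a trajectory:
ASTEROID_MASS = 1.0e10     # kg, a smallish asteroid (assumed)
SECONDS_PER_YEAR = 3.15e7
delta_v = force / ASTEROID_MASS * 10 * SECONDS_PER_YEAR  # over 10 years
print(f"Delta-v after 10 years: {delta_v:.2e} m/s")
```

This is why early detection matters: a tiny velocity change applied decades in advance translates into a large miss distance at Earth, and the vaporization approach described below amplifies the effect well beyond raw photon pressure.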

The Pasadena, California-based Planetary Society (TPS), the largest public space interest group in the world, is now working with a team at the University of Glasgow in Scotland to study a new technique which uses this concentrated light to gently move an asteroid - a project they call "Mirror Bees." The researchers in Scotland, under the leadership of Massimiliano Vasile, became interested in this approach when they found that Mirror Bees would work more easily and effectively than every other approach apart from nuclear warheads (the difficulties with that option have already been discussed).

This new technique involves several small spacecraft -- each carrying a mirror -- swarming around a hazardous asteroid. The spacecraft could precisely tilt their mirrors to focus sunlight onto a tiny spot on the asteroid, vaporizing the rock and metal, and creating a jet plume of super-heated gases and debris. Alternatively, the satellites could contain powerful lasers pumped by sunlight, and the lasers could be used to vaporize the rock. The asteroid would become the fuel for its own rocket -- and slowly, the asteroid would move into a new trajectory.

Major questions still remain about this technique. For example, will the plume of superheated gases ejected from an asteroid dissipate, or will it block sunlight to the mirrors? Would the debris settle on the satellite mirrors? Can the asteroid's rotation be dealt with efficiently? Will the gas plumes be enough to deflect the asteroid?

TPS is stepping in to fund a series of laboratory experiments to answer these and other questions. Vasile's group is working with Ian Watson and the laser lab of the University of Glasgow's Mechanical Engineering Department to devise some ingenious small-scale experiments. TPS will be funding equipment, supplies, and a graduate student dedicated to working on the experiments. Only through these kinds of studies, as well as further theoretical research, can the details of this technique be worked out and understood. If it pans out, it will be a rapid, effective, and safe option to use against the asteroid that inevitably will come Earth's way.

It is comforting to know that this valuable research is being undertaken into one of the biggest threats to our civilisation. Until we begin seriously colonising space, the human race still has all its eggs in one basket here on Earth. It would be foolish and irresponsible to ignore the risk - to be able to do something about it and yet do nothing. T. rex had an excuse. With our intelligence and technology, we don't.

Easier URL removals for site owners

Webmaster Level: All

We recently made a change to the Remove URL tool in Webmaster Tools to eliminate the requirement that the webpage's URL must first be blocked by a site owner before the page can be removed from Google's search results. Because you've already verified ownership of the site, we can eliminate this requirement to make it easier for you, as the site owner, to remove unwanted pages (e.g. pages accidentally made public) from Google's search results.

Removals persist for at least 90 days
When a page’s URL is requested for removal, the request is temporary and persists for at least 90 days. We may continue to crawl the page during the 90-day period but we will not display it in the search results. You can still revoke the removal request at any time during those 90 days. After the 90-day period, the page can reappear in our search results, assuming you haven’t made any other changes that could impact the page’s availability.

Permanent removal
In order to permanently remove a URL, you must ensure that one of the following page blocking methods is implemented for the URL of the page that you want removed:
This will ensure that the page is permanently removed from Google's search results for as long as the page is blocked. If at any time in the future you remove the previously implemented page blocking method, we may potentially re-crawl and index the page. For immediate and permanent removal, you can request that a page be removed using the Remove URL tool and then permanently block the page’s URL before the 90-day expiration of the removal request.
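As an illustration (not the full list from the post above), one common blocking method is a noindex robots meta tag placed in the head of the page you want removed:

```html
<!-- In the <head> of the page to be permanently removed: -->
<meta name="robots" content="noindex">
```

A robots.txt Disallow rule for the URL is another widely used blocking mechanism; whichever method you choose must stay in place for the removal to remain permanent.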



For more information about URL removals, see our “URL removal explained” blog series covering this topic. If you still have questions about this change or about URL removal requests in general, please post in our Webmaster Help Forum.

Thursday, 12 May 2011

Website Security for Webmasters

Webmaster level: Intermediate to Advanced

Users are taught to protect themselves from malicious programs by installing sophisticated antivirus software, but often they may also entrust their private information to websites like yours, in which case it’s important to protect their data. It’s also very important to protect your own data; if you have an online store, you don’t want to be robbed.

Over the years companies and webmasters have learned—often the hard way—that web application security is not a joke; we’ve seen user passwords leaked due to SQL injection attacks, cookies stolen with XSS, and websites taken over by hackers due to negligent input validation.

Today we’ll show you some examples of how a web application can be exploited so you can learn from them; for this we’ll use Gruyere, an intentionally vulnerable application we use for security training internally, too. Do not probe others’ websites for vulnerabilities without permission as it may be perceived as hacking; but you’re welcome—nay, encouraged—to run tests on Gruyere.


Client state manipulation - What will happen if I alter the URL?

Let’s say you have an image hosting site and you’re using a PHP script to display the images users have uploaded:

http://www.example.com/showimage.php?imgloc=/garyillyes/kitten.jpg

So what will the application do if I alter the URL to something like this and userpasswords.txt is an actual file?

http://www.example.com/showimage.php?imgloc=/../../userpasswords.txt

Will I get the content of userpasswords.txt?
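One common defense against this kind of path traversal is to resolve the requested path and verify it still falls inside the directory you intend to serve from. A minimal sketch (in Python rather than the PHP of the example above; the directory name is an assumption):

```python
import os
from typing import Optional

IMAGE_ROOT = "/var/www/images"  # assumed upload directory

def safe_image_path(imgloc: str) -> Optional[str]:
    """Return an absolute path inside IMAGE_ROOT, or None if the
    request tries to escape it (e.g. via ../ sequences)."""
    # Join and normalize, collapsing any ../ components.
    candidate = os.path.normpath(os.path.join(IMAGE_ROOT, imgloc.lstrip("/")))
    # Reject anything that resolved outside the image root.
    if not candidate.startswith(IMAGE_ROOT + os.sep):
        return None
    return candidate

print(safe_image_path("garyillyes/kitten.jpg"))    # a path inside IMAGE_ROOT
print(safe_image_path("/../../userpasswords.txt")) # None: traversal rejected
```

The key point is to validate the *resolved* path, not the raw parameter, since attackers can encode traversal sequences in many ways.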

Another example of client state manipulation is when form fields are not validated. For instance, let’s say you have this form:



It seems that the username of the submitter is stored in a hidden input field. Well, that’s great! Does that mean that if I change the value of that field to another username, I can submit the form as that user? It may very well happen; the user input is apparently not authenticated with, for example, a token which can be verified on the server.
Imagine the situation if that form were part of your shopping cart and I modified the price of a $1000 item to $1, and then placed the order.

Protecting your application against this kind of attack is not easy; take a look at the third part of Gruyere to learn a few tips about how to defend your app.
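One way to implement the verifiable token mentioned above is to have the server sign any value it places in a hidden field, then verify the signature on submission. A hypothetical sketch (the secret key and field values are placeholders):

```python
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"  # assumed; never sent to the client

def sign_field(value: str) -> str:
    """Produce a token the server can later verify for a hidden field."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

def verify_field(value: str, token: str) -> bool:
    """Reject the form if the hidden field was tampered with."""
    return hmac.compare_digest(sign_field(value), token)

# The form carries both the value and its token; the server checks them:
token = sign_field("JohnDoe")
assert verify_field("JohnDoe", token)       # untouched form: accepted
assert not verify_field("admin", token)     # tampered username: rejected
```

The same idea applies to prices in a shopping cart: anything security-relevant that round-trips through the client needs to be signed or, better, re-derived server-side.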

Cross-site scripting (XSS) - User input can’t be trusted



A simple, harmless URL:
http://google-gruyere.appspot.com/611788451095/%3Cscript%3Ealert('0wn3d')%3C/script%3E
But is it truly harmless? If I decode the percent-encoded characters, I get:
<script>alert('0wn3d')</script>

Gruyere, just like many sites with custom error pages, is designed to include the path component in the HTML page. This can introduce security bugs, like XSS, as it introduces user input directly into the rendered HTML page of the web application. You might say, “It’s just an alert box, so what?” The thing is, if I can inject an alert box, I can most likely inject something else, too, and maybe steal your cookies which I could use to sign in to your site as you.
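The standard fix is to escape user-controlled input before it is placed into the HTML response. A minimal sketch using Python's standard library (assuming a Python app; the page template is hypothetical):

```python
import html

def render_error_page(path: str) -> str:
    """Build a custom error page, escaping the user-controlled path
    before it is interpolated into the HTML response."""
    return f"<p>Sorry, no page at {html.escape(path)}</p>"

payload = "<script>alert('0wn3d')</script>"
print(render_error_page(payload))
# The markup is neutralized into &lt;script&gt;... entities, so the
# browser displays it as literal text instead of executing it.
```

Escaping at output time, ideally via a template system that auto-escapes, is more reliable than trying to strip "dangerous" input on the way in.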

Another example is when the stored user input isn’t sanitized. Let’s say I write a comment on your blog; the comment is simple:
<a href="javascript:alert('0wn3d')">Click here to see a kitten</a>

If other users click on my innocent link, I have their cookies:



You can learn how to find XSS vulnerabilities in your own web app and how to fix them in the second part of Gruyere; or, if you’re an advanced developer, take a look at the automatic escaping features in template systems we blogged about on our Online Security blog.

Cross-site request forgery (XSRF) - Should I trust requests from evil.com?

Oops, a broken picture. It can’t be dangerous--it’s broken, after all--which means that the URL of the image returns a 404 or it’s just malformed. Is that true in all of the cases?

No, it’s not! You can specify any URL as an image source, regardless of its content type. It can be an HTML page, a JavaScript file, or some other potentially malicious resource. In this case the image source was a simple page’s URL:



That page will only work if I’m logged in and I have some cookies set. Since I was actually logged in to the application, when the browser tried to fetch the image by accessing the image source URL, it also deleted my first snippet. This doesn’t sound particularly dangerous, but if I’m a bit familiar with the app, I could also invoke a URL which deletes a user’s profile or lets admins grant permissions for other users.

To protect your app against XSRF you should not allow state changing actions to be called via GET; the POST method was invented for this kind of state-changing request. This change alone may have mitigated the above attack, but usually it's not enough and you need to include an unpredictable value in all state changing requests to prevent XSRF. Please head to Gruyere if you want to learn more about XSRF.
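A sketch of the unpredictable-value idea: issue one random token per session, embed it in every state-changing form, and reject requests that don't echo it back. (Session storage here is a plain dict for illustration.)

```python
import hmac
import secrets

session_tokens = {}  # session id -> XSRF token (illustrative in-memory store)

def issue_token(session_id: str) -> str:
    """Create an unpredictable token and embed it in state-changing forms."""
    token = secrets.token_hex(16)
    session_tokens[session_id] = token
    return token

def check_token(session_id: str, submitted: str) -> bool:
    """A state-changing POST is only honored if the token matches."""
    expected = session_tokens.get(session_id)
    return expected is not None and hmac.compare_digest(expected, submitted)

tok = issue_token("alice-session")
assert check_token("alice-session", tok)          # genuine form post
assert not check_token("alice-session", "guess")  # forged cross-site request
```

Because evil.com cannot read the token out of your page, the forged request fails the check even though it carries the victim's cookies.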

Cross-site script inclusion (XSSI) - All your script are belong to us

Many sites today can dynamically update a page's content via asynchronous JavaScript requests that return JSON data. Sometimes, JSON can contain sensitive data, and if the correct precautions are not in place, it may be possible for an attacker to steal this sensitive information.

Let’s imagine the following scenario: I have created a standard HTML page and send you the link; since you trust me, you visit the link I sent you. The page contains only a few lines:
<script>function _feed(s) {alert("Your private snippet is: " + s['private_snippet']);}</script><script src="http://google-gruyere.appspot.com/611788451095/feed.gtl"></script>


Since you’re signed in to Gruyere and you have a private snippet, you’ll see an alert box on my page informing you about the contents of your snippet. As always, if I managed to fire up an alert box, I can do whatever else I want; in this case it was a simple snippet, but it could have been your biggest secret, too.

It’s not too hard to defend your app against XSSI, but it still requires careful thinking. You can use tokens as explained in the XSRF section, set your script to answer only POST requests, or simply start the JSON response with ‘\n’ to make sure the script is not executable.
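The prefix idea can be sketched as follows. A widely used variant (an illustration, not the Gruyere implementation) is a deliberately unparseable prefix such as `)]}'`, which makes the response a JavaScript syntax error when pulled in via a cross-site script tag, while your own same-site client strips it before parsing:

```python
import json

PREFIX = ")]}'\n"  # anti-XSSI prefix; the exact string is a convention

def serve_feed(data: dict) -> str:
    """Serve JSON that breaks when loaded via a cross-site <script> tag."""
    return PREFIX + json.dumps(data)

def parse_feed(body: str) -> dict:
    """A legitimate same-site client strips the known prefix first."""
    assert body.startswith(PREFIX)
    return json.loads(body[len(PREFIX):])

body = serve_feed({"private_snippet": "secret"})
assert parse_feed(body) == {"private_snippet": "secret"}
```

An attacker's page can still fetch the response, but cannot execute it as script, so the `_feed` callback trick above no longer works.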

SQL Injection - Still think user input is safe?

What will happen if I try to sign in to your app with a username like
JohnDoe’; DROP TABLE members;--

While this specific example won’t expose user data, it can cause great headaches because it has the potential to completely remove the SQL table where your app stores information about members.

Generally, you can protect your app from SQL injection with proactive thinking and input validation. First, are you sure the SQL user needs to have permission to execute “DROP TABLE members”? Wouldn’t it be enough to grant only SELECT rights? By setting the SQL user’s permissions carefully, you can avoid painful experiences and lots of trouble. You might also want to configure error reporting in such a way that the database and its tables’ names aren’t exposed in the case of a failed query.
Second, as we learned in the XSS case, never trust user input: what looks like a login form to you looks like a potential doorway to an attacker. Always sanitize and quote-safe the input that will be stored in a database, and whenever possible use prepared (parameterized) statements, which are available in most database programming interfaces.
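Here is a minimal sketch of a parameterized query using Python's built-in sqlite3 module (the table and data are hypothetical). The driver treats the hostile input purely as data, so the DROP TABLE payload is just an odd-looking username:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (username TEXT, password TEXT)")
conn.execute("INSERT INTO members VALUES ('JohnDoe', 'hunter2')")

hostile = "JohnDoe'; DROP TABLE members;--"

# Parameterized query: the ? placeholder is bound, never interpolated,
# so the input cannot change the structure of the SQL statement.
rows = conn.execute(
    "SELECT * FROM members WHERE username = ?", (hostile,)
).fetchall()
print(rows)  # no matching user, and crucially no DROP was executed

# The members table survived the attempted injection:
count = conn.execute("SELECT COUNT(*) FROM members").fetchone()[0]
print(count)
```

Contrast this with building the query by string concatenation, where the quote in the input would terminate the string literal and let the rest run as SQL.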

Knowing how web applications can be exploited is the first step in understanding how to defend them. In light of this, we encourage you to take the Gruyere course, take other web security courses from the Google Code University and check out skipfish if you're looking for an automated web application security testing tool. If you have more questions please post them in our Webmaster Help Forum.

Wednesday, 11 May 2011

Introducing the Google Webmaster Team

We’re pleased to introduce the Google Webmaster Team as contributors to the Webmaster Central Blog. As the team responsible for tens of thousands of Google’s informational web pages, they’re here to offer tips and advice based on their experiences as hands-on webmasters.
Back in the 1990s, anyone who maintained a website called themselves a “webmaster” regardless of whether they were a designer, developer, author, system administrator, or someone who had just stumbled across GeoCities and created their first web page. As the technologies changed over the years, so did the roles and skills of those managing websites.
Around 20 years after the word was first used, we still refer to ourselves as the Google Webmaster Team because it’s the only term that really covers the wide variety of roles that we have on our team. Although most of us have solid knowledge of HTML, CSS, JavaScript and other web technologies, we also have specialists in design, development, user experience, information architecture, system administration, and project management.
Part of the Google Webmaster Team, Mountain View
In contrast to the Google Webmaster Central Team—which mainly focuses on helping webmasters outside of Google understand web search and how things like crawling and indexing affect their sites—our team is responsible for designing, implementing, optimizing and maintaining Google’s corporate pages, informational product pages, landing pages for marketing campaigns, and our error page. Our team also develops internal tools to increase our productivity and help to maintain the thousands of HTML pages that we own.
We’re working hard to follow, challenge and evolve best practices and web standards to ensure that all our new pages are produced to the highest quality and provide the best user experience, and we’re constantly evaluating and updating our legacy pages to ensure their deprecated HTML isn’t just left to rot.
We want to share our work and experiences with other webmasters, so we recently launched our @GoogleWebTeam account on Twitter to keep our followers updated on the latest news about our projects, web standards, and anything else which may be of interest to other webmasters, web designers and web developers. We’ll be posting here on the Webmaster Central Blog when we want to share anything longer than 140 characters.
Before we share more details about our processes and experiences, please let us know if there’s anything you’d like us to specifically cover by leaving a comment here or by tweeting @GoogleWebTeam.

Page Speed Online has a shiny new API

Andrew
Richard
Webmaster level: intermediate

A few weeks ago, we introduced Page Speed Online, a web-based performance analysis tool that gives developers optimization suggestions. Almost immediately, developers asked us to make an API available to integrate into other tools and their regression testing suites. We were happy to oblige.

Today, as part of Google I/O, we are excited to introduce the Page Speed Online API as part of the Google APIs. With this API, developers now have the ability to integrate performance analysis very simply in their command-line tools and web performance dashboards.

We have provided a getting started guide that helps you to get up and running quickly, understand the API, and start monitoring the performance improvements that you make to your web pages. Not only that, in the request, you’ll be able to specify whether you’d like to see mobile or desktop analysis, and also get Page Speed suggestions in one of the 40 languages that we support, giving API access to the vast majority of developers in their native or preferred language.
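As a sketch of what a call might look like, the snippet below builds a request URL for the API. The endpoint path and parameter names (`url`, `strategy`, `locale`, `key`) are assumptions based on launch-era documentation, so check them against the getting started guide before relying on them:

```python
from urllib.parse import urlencode

# Hypothetical Page Speed Online API request (verify against the docs).
BASE = "https://www.googleapis.com/pagespeedonline/v1/runPagespeed"

params = {
    "url": "http://www.example.com/",
    "strategy": "mobile",   # or "desktop"
    "locale": "fr",         # suggestions in one of the supported languages
    "key": "YOUR_API_KEY",  # placeholder for your Google APIs key
}
request_url = BASE + "?" + urlencode(params)
print(request_url)
```

Fetching that URL returns a JSON document of scores and suggestions that you can feed into a dashboard or regression test suite.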

We’re also pleased to share that the WordPress plugin W3 Total Cache now uses the Page Speed Online API to provide Page Speed suggestions to WordPress users, right in the WordPress dashboard. “The Page Speed tool itself provides extremely pointed and valuable insight into performance pitfalls. Providing that tool via an API has allowed me to directly correlate that feedback with actionable solutions that W3 Total Cache provides.” said Frederick Townes, CTO Mashable and W3 Total Cache author.

Take the Page Speed Online API for a spin and send us feedback on our mailing list. We’d love to hear your experience integrating the new Page Speed Online API.

Andrew Oates is a Software Engineer on the Page Speed Team in Google's Cambridge, Massachusetts office. You can find him in the credits for the Pixar film Up.

Richard Rabbat is the Product Management Lead on the "Make the Web Faster" initiative. He has launched Page Speed, mod_pagespeed and WebP. At Google since 2006, Richard works with engineering teams across the world.

Sunday, 8 May 2011

Sales Tip - Increase Sales Using Both CRM Technology and the Human Touch

Sales tips to improve sales can be found almost every day, from local business columns to national sales magazines to hundreds of books dedicated to thousands of sales suggestions. Yet if these tips were effective, why do we need more?

With Customer Relationship Management (CRM) software systems now capable of tracking the progress of every salesperson, permitting each one to be even more productive as their sales territories expand, why is selling still so hard? We know selling is difficult because of the high demand for excellent salespeople.

As a business coach, I have worked with hundreds of people, all of whom hope to increase sales and look for that magic blue rock or sales tip that will magically catapult them above everyone else. This may be one reason that so many people have bought CRM systems.

Yet increasing sales in today's complex, globally driven economy does demand an efficient and effective system to monitor activity, such as a CRM, but it also demands the personal touch. People buy from those they trust because they believe their needs will be met. Technology is not the best tool - meaning the number one tool - for building trust. That tool comes from within every salesperson.

What is the one thing that consumers look for each day, beyond the morning newspaper or a good morning from a family member? The answer is simply the mail. We have been conditioned since our youngest years to look for the mail. And how happy are we, especially around holidays and our birthdays, when we discover that hand-addressed card?

The personal touch conveyed in that simple gesture of sending a thank-you note, a personal note, or a card acknowledging that special day reaffirmed your trust in that person, no matter how little that trust was. Now fast forward to the present day, where people are still conditioned to receive mail even if we call it snail mail. Do you believe that your clients, prospects and even suspects feel any differently about those hand-addressed notes? I sincerely doubt it.

Marketing research suggests that personal handwritten envelopes are generally opened before business-size envelopes. Why? Because people are creatures of habit or conditioning, but more importantly, people want to connect with those who have taken the time to make that personal connection. Take the time to cultivate the habit of sending business notes and thank-you notes. Remember: your personal touch builds trust, and increased sales come many times from people who trust you.

Friday, 6 May 2011

More guidance on building high-quality sites

Webmaster level: All
In recent months we’ve been especially focused on helping people find high-quality sites in Google’s search results. The “Panda” algorithm change has improved rankings for a large number of high-quality websites, so most of you reading have nothing to be concerned about. However, for the sites that may have been affected by Panda we wanted to provide additional guidance on how Google searches for high-quality sites.
Our advice for publishers continues to be to focus on delivering the best possible user experience on your websites and not to focus too much on what you think are Google’s current ranking algorithms or signals. Some publishers have fixated on our prior Panda algorithm change, but Panda was just one of roughly 500 search improvements we expect to roll out to search this year. In fact, since we launched Panda, we've rolled out over a dozen additional tweaks to our ranking algorithms, and some sites have incorrectly assumed that changes in their rankings were related to Panda. Search is a complicated and evolving art and science, so rather than focusing on specific algorithmic tweaks, we encourage you to focus on delivering the best possible experience for users.

What counts as a high-quality site?

Our site quality algorithms are aimed at helping people find "high-quality" sites by reducing the rankings of low-quality content. The recent "Panda" change tackles the difficult task of algorithmically assessing website quality. Taking a step back, we wanted to explain some of the ideas and research that drive the development of our algorithms.
Below are some questions that one could use to assess the "quality" of a page or an article. These are the kinds of questions we ask ourselves as we write algorithms that attempt to assess site quality. Think of it as our take at encoding what we think our users want.
Of course, we aren't disclosing the actual ranking signals used in our algorithms because we don't want folks to game our search results; but if you want to step into Google's mindset, the questions below provide some guidance on how we've been looking at the issue:
  • Would you trust the information presented in this article?
  • Is this article written by an expert or enthusiast who knows the topic well, or is it more shallow in nature?
  • Does the site have duplicate, overlapping, or redundant articles on the same or similar topics with slightly different keyword variations?
  • Would you be comfortable giving your credit card information to this site?
  • Does this article have spelling, stylistic, or factual errors?
  • Are the topics driven by genuine interests of readers of the site, or does the site generate content by attempting to guess what might rank well in search engines?
  • Does the article provide original content or information, original reporting, original research, or original analysis?
  • Does the page provide substantial value when compared to other pages in search results?
  • How much quality control is done on content?
  • Does the article describe both sides of a story?
  • Is the site a recognized authority on its topic?
  • Is the content mass-produced by or outsourced to a large number of creators, or spread across a large network of sites, so that individual pages or sites don’t get as much attention or care?
  • Was the article edited well, or does it appear sloppy or hastily produced?
  • For a health related query, would you trust information from this site?
  • Would you recognize this site as an authoritative source when mentioned by name?
  • Does this article provide a complete or comprehensive description of the topic?
  • Does this article contain insightful analysis or interesting information that is beyond obvious?
  • Is this the sort of page you’d want to bookmark, share with a friend, or recommend?
  • Does this article have an excessive amount of ads that distract from or interfere with the main content?
  • Would you expect to see this article in a printed magazine, encyclopedia or book?
  • Are the articles short, unsubstantial, or otherwise lacking in helpful specifics?
  • Are the pages produced with great care and attention to detail vs. less attention to detail?
  • Would users complain when they see pages from this site?
Writing an algorithm to assess page or site quality is a much harder task, but we hope the questions above give some insight into how we try to write algorithms that distinguish higher-quality sites from lower-quality sites.

What you can do

We've been hearing from many of you that you want more guidance on what you can do to improve your rankings on Google, particularly if you think you've been impacted by the Panda update. We encourage you to keep questions like the ones above in mind as you focus on developing high-quality content rather than trying to optimize for any particular Google algorithm.
One other specific piece of guidance we've offered is that low-quality content on some parts of a website can impact the whole site’s rankings, and thus removing low quality pages, merging or improving the content of individual shallow pages into more useful pages, or moving low quality pages to a different domain could eventually help the rankings of your higher-quality content.
We're continuing to work on additional algorithmic iterations to help webmasters operating high-quality sites get more traffic from search. As you continue to improve your sites, rather than focusing on one particular algorithmic tweak, we encourage you to ask yourself the same sorts of questions we ask when looking at the big picture. This way your site will be more likely to rank well for the long-term. In the meantime, if you have feedback, please tell us through our Webmaster Forum. We continue to monitor threads on the forum and pass site info on to the search quality team as we work on future iterations of our ranking algorithms.

Flash support in Instant Previews

Webmaster level: All

With Instant Previews, users can see a snapshot of a search result before clicking on it. We’ve made a number of improvements to the feature since its introduction last November, and if you own a site, one of the most relevant changes for you is that Instant Previews now supports Flash.



An Instant Preview with rich content rendered


In most cases, when the preview for a page is generated through our regular crawl, we will now render a snapshot of any Flash components on the page. This will replace the "puzzle piece" icon that previously appeared to indicate Flash components, and should improve the accuracy of the previews.

However, for pages that are fetched on demand by the "Google Web Preview" user-agent, we will generate a preview without Flash in order to minimize latency. In these cases the preview will appear as if the page were visited by someone using a browser without Flash enabled, and "Install Flash" messages may appear in the preview, depending on how your website handles users without Flash.

To improve your previews for these on-demand renders, here are some guidelines for using Flash on your site:
  • Make sure that your site has a reasonable, seamless experience for visitors without Flash. This may involve creating HTML-only equivalents for your Flash-based content that will automatically be shown to visitors who can't view Flash. Providing a good experience for this case will improve your preview and make your visitors happier.

  • If Flash components are rendering but appear as loading screens instead of actual content, try reducing the loading time for the component. This makes it more likely we'll render it properly.

  • If you have Flash videos on your site, consider submitting a Video Sitemap which helps us to generate thumbnails for your videos in Instant Previews.

  • If most of the page is rendering properly but you still see puzzle pieces appearing for some smaller components, these may be fixed in future crawls of your page.
If you have additional questions, please feel free to post them in our Webmaster Help Forum.

As always, we'll keep you updated as we continue to make improvements to Instant Previews.

Monday, 2 May 2011

Do 404s hurt my site?

Webmaster level: Beginner/Intermediate

So there you are, minding your own business, using Webmaster Tools to check out how awesome your site is... but, wait! The Crawl errors page is full of 404 (Not found) errors! Is disaster imminent??


Fear not, my young padawan. Let’s take a look at 404s and how they do (or do not) affect your site:

Q: Do the 404 errors reported in Webmaster Tools affect my site’s ranking?
A:
404s are a perfectly normal part of the web; the Internet is always changing, new content is born, old content dies, and when it dies it (ideally) returns a 404 HTTP response code. Search engines are aware of this; we have 404 errors on our own sites, as you can see above, and we find them all over the web. In fact, we actually prefer that, when you get rid of a page on your site, you make sure that it returns a proper 404 or 410 response code (rather than a “soft 404”). Keep in mind that in order for our crawler to see the HTTP response code of a URL, it has to be able to crawl that URL—if the URL is blocked by your robots.txt file we won’t be able to crawl it and see its response code. The fact that some URLs on your site no longer exist / return 404s does not affect how your site’s other URLs (the ones that return 200 (Successful)) perform in our search results.

Q: So 404s don’t hurt my website at all?
A:
If some URLs on your site 404, this fact alone does not hurt you or count against you in Google’s search results. However, there may be other reasons that you’d want to address certain types of 404s. For example, if some of the pages that 404 are pages you actually care about, you should look into why we’re seeing 404s when we crawl them! If you see a misspelling of a legitimate URL (www.example.com/awsome instead of www.example.com/awesome), it’s likely that someone intended to link to you and simply made a typo. Instead of returning a 404, you could 301 redirect the misspelled URL to the correct URL and capture the intended traffic from that link. You can also make sure that, when users do land on a 404 page on your site, you help them find what they were looking for rather than just saying “404 Not found.”
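To make the two suggestions above concrete, here is a framework-free sketch (the paths and page contents are hypothetical, not any real server's configuration): the known misspelling gets a 301 to the real URL, and unknown paths get a helpful body served with a genuine 404 status.

```python
# Hypothetical sketch: one known misspelling is 301-redirected to the
# real URL; unknown paths get a helpful body with a genuine 404 status.
REDIRECTS = {"/awsome": "/awesome"}   # typo seen in inbound links
PAGES = {"/awesome": "Welcome to the awesome page!"}

def handle_request(path):
    """Return (status_code, headers, body) for a requested path."""
    if path in PAGES:
        return 200, {}, PAGES[path]
    if path in REDIRECTS:
        # A permanent redirect captures the traffic from the mistyped link.
        return 301, {"Location": REDIRECTS[path]}, ""
    # Helpful content and a 404 status code are not mutually exclusive.
    return 404, {}, "Not found. Try searching, or start at the homepage."
```

The same mapping idea applies whatever your stack is; the point is only that the misspelled URL answers 301 while genuinely missing URLs answer 404.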

Q: Tell me more about “soft 404s.”
A:
A soft 404 is when a web server returns a response code other than 404 (or 410) for a URL that doesn’t exist. A common example is when a site owner wants to return a pretty 404 page with helpful information for his users, and thinks that in order to serve content to users he has to return a 200 response code. Not so! You can return a 404 response code while serving whatever content you want. Another example is when a site redirects any unknown URLs to their homepage instead of returning 404s. Both of these cases can have negative effects on our understanding and indexing of your site, so we recommend making sure your server returns the proper response codes for nonexistent content. Keep in mind that just because a page says “404 Not Found,” doesn’t mean it’s actually returning a 404 HTTP response code—use the Fetch as Googlebot feature in Webmaster Tools to double-check. If you don’t know how to configure your server to return the right response codes, check out your web host’s help documentation.
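To show that a "pretty" error page and a real 404 status can coexist, here is a minimal illustrative sketch using Python's standard http.server module (any server or framework can do the equivalent; the HTML body here is made up):

```python
from http.server import BaseHTTPRequestHandler

FRIENDLY_404 = (b"<h1>Sorry, that page doesn't exist.</h1>"
                b"<p>Try our <a href='/'>homepage</a> or site search.</p>")

class NotFoundHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A helpful error page does NOT require a 200: send the real
        # 404 status first, then whatever friendly HTML you like.
        self.send_response(404)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(FRIENDLY_404)))
        self.end_headers()
        self.wfile.write(FRIENDLY_404)

    def log_message(self, *args):  # keep the demo quiet
        pass
```

A crawler (or Fetch as Googlebot) fetching any URL from this handler sees the 404 status code, while a human visitor still gets the helpful page.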

Q: How do I know whether a URL should 404, or 301, or 410?
A:
When you remove a page from your site, think about whether that content is moving somewhere else, or whether you no longer plan to have that type of content on your site. If you’re moving that content to a new URL, you should 301 redirect the old URL to the new URL—that way when users come to the old URL looking for that content, they’ll be automatically redirected to something relevant to what they were looking for. If you’re getting rid of that content entirely and don’t have anything on your site that would fill the same user need, then the old URL should return a 404 or 410. Currently Google treats 410s (Gone) the same as 404s (Not found), so it’s immaterial to us whether you return one or the other.

Q: Most of my 404s are for bizarro URLs that never existed on my site. What’s up with that? Where did they come from?
A:
If Google finds a link somewhere on the web that points to a URL on your domain, it may try to crawl that link, whether any content actually exists there or not; and when it does, your server should return a 404 if there’s nothing there to find. These links could be caused by someone making a typo when linking to you, some type of misconfiguration (if the links are automatically generated, e.g. by a CMS), or by Google’s increased efforts to recognize and crawl links embedded in JavaScript or other embedded content; or they may be part of a quick check from our side to see how your server handles unknown URLs, to name just a few. If you see 404s reported in Webmaster Tools for URLs that don’t exist on your site, you can safely ignore them. We don’t know which URLs are important to you vs. which are supposed to 404, so we show you all the 404s we found on your site and let you decide which, if any, require your attention.

Q: Someone has scraped my site and caused a bunch of 404s in the process. They’re all “real” URLs with other code tacked on, like http://www.example.com/images/kittens.jpg" width="100" height="300" alt="kittens"/></a... Will this hurt my site?
A:
Generally you don’t need to worry about “broken links” like this hurting your site. We understand that site owners have little to no control over people who scrape their site, or who link to them in strange ways. If you’re a whiz with the regex, you could consider redirecting these URLs as described here, but generally it’s not worth worrying about. Remember that you can also file a takedown request when you believe someone is stealing original content from your website.
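If you do decide these scraped links are worth capturing, one regex approach (a hypothetical cleanup rule sketched here, not the exact one from the linked post) is to strip everything from the first stray quote or angle bracket onward, then 301-redirect to the cleaned path if it actually exists on your site:

```python
import re

# Scraped links often carry trailing HTML attributes glued onto a real
# path, e.g.  /images/kittens.jpg" width="100" height="300" ...
# Strip everything from the first stray quote or angle bracket onward;
# the cleaned path is a candidate target for a 301 redirect.
def clean_scraped_path(path):
    return re.sub(r'["<].*$', '', path)
```

Paths without any stray characters pass through unchanged, so the rule is safe to apply broadly.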

Q: Last week I fixed all the 404s that Webmaster Tools reported, but they’re still listed in my account. Does this mean I didn’t fix them correctly? How long will it take for them to disappear?
A:
Take a look at the ‘Detected’ column on the Crawl errors page—this is the most recent date on which we detected each error. If the date(s) in that column are from before the time you fixed the errors, that means we haven’t encountered these errors since that date. If the dates are more recent, it means we’re continuing to see these 404s when we crawl.

After implementing a fix, you can check whether our crawler is seeing the new response code by using Fetch as Googlebot. Test a few URLs and, if they look good, these errors should soon start to disappear from your list of Crawl errors.

Q: Can I use Google’s URL removal tool to make 404 errors disappear from my account faster?
A:
No; the URL removal tool removes URLs from Google’s search results, not from your Webmaster Tools account. It’s designed for urgent removal requests only, and using it isn’t necessary when a URL already returns a 404, as such a URL will drop out of our search results naturally over time. See the bottom half of this blog post for more details on what the URL removal tool can and can’t do for you.

Still want to know more about 404s? Check out 404 week from our blog, or drop by our Webmaster Help Forum.

Sunday, 1 May 2011

Video for Business Risk and Comfort

-- Physical Risk Protection --

Although we want to bring as much revenue as possible through the front door, we must close the back door so that the revenue stays in the organization. Sadly, the back door is not the only place where risk exists. Risk exists at the front door, the side doors, the roof, and inside the building as well. Video is a good means of addressing these risks, offering a deterrent effect as well as evidence of behaviours or events.

We have been using a mix of access control, intrusion alarms, and video surveillance as the collective protection mix. Historically, each member of the mix has been implemented independently of the others. The cost of installing, operating, and maintaining them is high. In addition, we pay for a separate monitoring and patrol service, as well as insurance premiums. The total cost of protecting a small business is considerable.

According to industry sources, roughly 1 in every 10 people in New Zealand has been the victim of a reported break-in or property vandalism case, and only half of those cases were resolved. This high rate of loss and low rate of resolution provides strong incentive to investigate the subject.

Technology comes into the equation at this point. How do we deploy it to lower risks and costs, both today and tomorrow?

-- Physical Environmental Comfort --

A modern building has ventilation, air conditioning, hot water, lifts, and lighting to provide a desirable working environment. These services do not happen naturally; they are the result of investment and installation. Like security systems, they were previously separate but have begun to become more integrated lately. Can we integrate these comfort systems with risk-prevention systems?

The short answer is yes: the principle of integration described in this article applies.

-- From Yesterday to Today --

Members of the protection mix are independent of each other. Think of access control with door locks, intruder alarms with passive infrared detectors, fire and smoke alarms with dedicated sensors, and video surveillance with cameras. Most likely, an organization has installed four brands of equipment side by side.

We are pleased to see today that some vendors have managed to collect signals from all members of the protection mix and present status views to business owners or managers on a regular PC screen. Most events, such as a door locking or an intruder detection, are binary in nature; each event is best described by words such as ON or OFF, OPEN or CLOSED, and YES or NO. This detail is vital but lacks a visual connection.

In comes video, and the whole scenario springs to life. The building manager sits in the comfort of the control room. He can see who has just come in the door, the time of day, the name of the individual, and what he or she is wearing. He can track where that person has navigated inside a campus that may be quite large, with restricted-access areas. Yes, there is a privacy issue, and it has to be declared beforehand. The declaration itself provides a deterrent effect should anyone contemplate unacceptable behaviour.

Video is expensive to install, operate, and maintain. For this reason it has attracted the greatest share of technology advances. Yesterday we had analogue cameras; today we have IP cameras, where IP stands for Internet Protocol. IP cameras are currently more expensive than analogue cameras, but they deliver results in areas not economically achievable with analogue. The most obvious advantage is image clarity: it is typical for an IP camera to have four times the pixel resolution of an analogue camera. The price may be five times higher, but wait; it has several more desirable capabilities.

On the cost side, IP cameras can be installed further from the monitoring or recording centre using standard, low-cost computer (Ethernet) cable, as opposed to the more expensive coaxial video cable that analogue cameras require. The same cable can carry electricity to power the camera, which is not possible with analogue cameras. These factors translate into lower installation and labour costs, higher-quality video, and simpler operation and maintenance.

-- From Today to Tomorrow --

IP is an open standard, and it is safe to say that the Internet is becoming more indispensable as time goes on. In the near future, IP cameras will be installed in one corner of the world and their video viewed in another. It is simply a matter of Internet connection bandwidth.

Signal transmission media fall into two tiers: optical fibre for high bandwidth over long distances, and copper wire for lower bandwidth over shorter distances. Let us assume (rightly) that someone else, such as the government or Internet operators, will look after the optical fibre outside our premises, while we look after the copper wire inside them. For simplicity, let us assume that a 1.3 MP (megapixel) IP camera produces 3 Mbps and that a dedicated copper cable can handle 20 cameras comfortably.
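Under those assumed figures (3 Mbps per camera, 20 cameras per run), the arithmetic is easy to check:

```python
# Back-of-envelope check of the assumptions above: 3 Mbps per 1.3 MP
# camera, 20 cameras sharing one dedicated copper (Ethernet) run.
CAMERA_MBPS = 3
CAMERAS_PER_CABLE = 20

total_mbps = CAMERA_MBPS * CAMERAS_PER_CABLE
print(total_mbps)  # 60 Mbps: within a 100 Mbps Fast Ethernet link
```

Sixty megabits per second leaves headroom on the cable, which is why the low-bandwidth alarm and access-control signals mentioned next can ride along on the same run.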

Access control, intruder alarms, smoke and fire alarms, and the like do not use much transmission bandwidth. These systems can easily be carried on the same cabling as the IP cameras.

The current state of technology shows there is no technical obstacle to all members of the mix becoming IP-based over time. The door lock controller of the future will be based on an ultra-simple PC, and managing door locks will be far easier than managing a desktop PC. That future is not far away.

Physical security will become a consumer of cyber security. The two matters are separate, and one does not replace the other.

-- Video Surveillance Cameras --

There are various grades of analogue camera, and even more grades of IP camera, because IP cameras incorporate many functions and variations that analogue cameras are not capable of.

The most fundamental example is image resolution. There are off-the-shelf models with 5 MP (megapixel) sensors offering a 180-degree panoramic view, for example. One such IP camera captures ten times the detail of the finest analogue camera. There are numerous resolution variations across the IP range.

Another example is dual streaming. A typical IP camera can produce two video streams at different resolutions and frame rates: one goes to the control centre for recording, and one goes to a mobile phone for real-time viewing. Alternatively, an IP camera can generate a single video stream and let the control centre scale down the resolution and frame rate for client viewing. Analogue cameras cannot offer this.

-- Video Surveillance Recording --

Because the PC business is highly competitive, the standard PC storage device, the hard disk drive (HDD), has become the de facto storage device for security systems. It makes even more sense to use the standard PC as the recording, computation, and communication device as well as the display device. This will propel the security industry further toward merging with PC technology, which is developing rapidly.

Network Video Recorder (NVR) is the name given to a fully PC-based recording device. However, some manufacturers have produced NVRs without full PC flexibility in order to suit the plug-and-play habits of system installers transitioning from analogue to IP cameras. While modular, these devices are proprietary and lock buyers in, losing some of the advantages of an open platform, such as ease of repair and flexibility of expansion.

-- Beyond Risk and Comfort --

An open-platform NVR can take on more advanced functions such as intelligent security analysis or object counting.

We can apply further intelligence, such as counting the number of objects entering a boundary or passing through a virtual trip wire. The object can be a person or a vehicle. We can convert images into numbers, such as vehicle licence plate numbers, or into meaningful patterns, such as the shape of a gun or a jacket hood pulled over the head. In fact, there is no limit to intelligent applications.
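To illustrate the trip-wire counting idea (a toy sketch, not any vendor's analytics engine; the tracking data is assumed to come from elsewhere), counting crossings of a virtual line from tracked positions can be as simple as:

```python
# Toy sketch: count left-to-right crossings of a virtual trip wire at
# x = wire_x, given per-frame x positions for each tracked object.
def count_crossings(tracks, wire_x):
    """tracks: {object_id: [x at frame 0, x at frame 1, ...]}"""
    crossings = 0
    for positions in tracks.values():
        for prev, cur in zip(positions, positions[1:]):
            if prev < wire_x <= cur:   # stepped over the wire this frame
                crossings += 1
    return crossings
```

Real video analytics must first detect and track the objects in the image; the counting step on top of that tracking is essentially this loop.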

This subject is referred to as video analytics, and more offerings are appearing on the horizon over time.

-- Guidance --

There are two standard criteria for asset purchase decisions: fitness for purpose and total cost of ownership. They would be sufficient if the options stayed still over time, but options do not stay still where video surveillance technology is concerned. The transition trend has become the third criterion for decision making. Consider the options on the market and how the organization can use them in the near future to grow either its top line or its bottom line.

Pay attention to developments in open-platform technology.

END