Many people assumed that Google had rolled out its rumored algorithm update punishing “over-optimized” sites when webmasters began complaining that their rankings in the SERPs had suddenly plummeted. We were not exempt from this assumption. However, a recent article in Search Engine Land shows Google offering an alternative explanation for the lost rankings.
According to Matt Cutts, head of Google’s spam team, the drop in SERP rankings was caused not by the much-discussed “over-optimization penalty,” but rather—and this is where things get weird—by a flaw in the company’s own algorithm.
Why is that weird? Quite simply, because it is incredibly rare for Google to admit that a mistake has been made at any level of the company. When you think about it, this makes sense: Google’s brand is built on technical savvy, reliability, and a commitment to perfection. It’s strange, then, that this mistake seems so basic.
In late 2011, Google added a component to its algorithm that is charged with determining whether or not a site is a parked domain—that is, a domain that does not yet have a website built on it. According to Matt Cutts, it is this component that is responsible for the error. “I apologize for this,” Cutts said of the incident; “it looks like the issue is fixed now, and we’ll look into how to prevent this from happening again.” We’re proud to say that none of our clients were affected by this algorithmic glitch, thanks to our adherence to clean, SEO-based web design principles.
So, how could Google mistake a live website for a parked domain? It’s easier than you might think for a computer to make this mistake. It is possible (even likely) that a given website has, somewhere on its servers, some .html files with absolutely nothing in them. This is exactly what a parked domain looks like: there is server space, and there is a file associated with that domain and server space, but the file itself is empty. It is feasible, then, for an imperfect algorithm to happen across such a file and assume the entire website is empty. The result would be Google marking the site as a parked domain and punishing its search rankings accordingly.
The idea behind the check is completely understandable: nobody wants to be directed to an empty website. However, glitches like this one are a major setback for the affected websites, and their owners are understandably upset. If flaws in the parked-domain algorithm really are the cause of these ranking drops, then the best thing a site owner can do is ensure that the site has a 404 page that lets crawlers get back to the homepage, and that there are no empty files on the server. Do these two things, and the next time GoogleBot visits your website, your search rank should be completely or mostly restored.
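As a rough illustration of the second step, here is a minimal Python sketch that walks a site’s document root and flags zero-byte HTML files—the kind of file a crawler could mistake for a parked domain. The `WEB_ROOT` path is a placeholder; adjust it to your own server’s document root.

```python
import os

# Hypothetical document root; change this to match your server.
WEB_ROOT = "/var/www/html"

def find_empty_html_files(root):
    """Walk the web root and collect any zero-byte .html/.htm files,
    which an imperfect crawler could read as an empty (parked) site."""
    empty = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith((".html", ".htm")):
                path = os.path.join(dirpath, name)
                if os.path.getsize(path) == 0:
                    empty.append(path)
    return empty

if __name__ == "__main__":
    for path in find_empty_html_files(WEB_ROOT):
        print("Empty file:", path)
```

Any file the script reports should either be deleted or given real content before the next crawl.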
Need more help with Search?