
Even Our Confusion is Duplicated

by Nathan S

If you have spent any length of time with an SEO, then you know that nothing kills a website’s Google standing faster than duplicate content. In the post-Panda era, duplication of content is perhaps one of the surest ways to bomb your entire website’s chances of landing on the first page of results. So, you just have to make sure that you aren’t copying and pasting web pages and giving them different URLs on your site, right? Only the least scrupulous of SEOs would have duplicate content on their website!

Actually, it turns out that keeping duplicate content off your site is, to put it mildly, harder than it seems. A Google web crawler can very easily find URLs to your site that you didn’t even know existed, and, if the correct steps are not taken, it will index each and every one of them.

Each of these will then be counted as duplicate content, because every one of those URLs serves exactly the same page. So if Google indexes your homepage at www.example.com, at example.com, and at www.example.com/, it will likely flag you for duplicate content! That is what makes preventing duplicate content more than simply not plagiarizing: it is just as much about making sure Google knows that www.example.com and www.example.com/ are the same website.
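To see how easily this happens, here is a minimal sketch in Python (using the well-known requests library; example.com and the listed variants are stand-ins, not a real audit) that fetches several spellings of the same homepage and compares a hash of what each one returns:

```python
import hashlib

import requests

# Hypothetical URL variants a crawler might discover for one homepage.
VARIANTS = [
    "http://example.com",
    "http://www.example.com",
    "http://www.example.com/",
    "https://www.example.com",
]


def body_fingerprint(url: str) -> str:
    """Fetch a URL and return a hash of the HTML it serves."""
    response = requests.get(url, timeout=10)
    return hashlib.sha256(response.content).hexdigest()


if __name__ == "__main__":
    fingerprints = {url: body_fingerprint(url) for url in VARIANTS}
    if len(set(fingerprints.values())) == 1:
        print("Every variant serves identical content: a duplicate-content risk.")
    for url, digest in fingerprints.items():
        print(f"{digest[:12]}  {url}")
```

If all four fingerprints match and nothing tells Google which URL is the "real" one, a crawler has no reason to treat them as a single page.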

While Google tends to catch this specific example most of the time, there are dozens of examples like it that Google will not catch: regional pages, search results pages, and the like can all lead your site to get Panda-slapped if not properly managed. Preventing duplicate content, therefore, is as much about ensuring that a site is properly indexed as it is about ensuring that each page is sufficiently unique. One common fix is sketched below.
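One common way to consolidate those variants is to 301-redirect every other spelling to a single canonical URL, so Google only ever sees one version. This is a rough sketch, assuming a Python/Flask site and assuming www.example.com over HTTPS is the chosen canonical address:

```python
from urllib.parse import urlsplit, urlunsplit

from flask import Flask, redirect, request

app = Flask(__name__)

CANONICAL_HOST = "www.example.com"  # assumed canonical hostname for this example


@app.before_request
def redirect_to_canonical():
    """Permanently redirect any non-canonical host or scheme to the canonical URL."""
    parts = urlsplit(request.url)
    if parts.netloc != CANONICAL_HOST or parts.scheme != "https":
        canonical = urlunsplit(("https", CANONICAL_HOST, parts.path, parts.query, ""))
        return redirect(canonical, code=301)
```

Where redirects are not practical, a rel="canonical" link tag in the page head serves a similar purpose, telling Google which URL should be treated as the original.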

