12-22-2015, 10:48 AM
This might be an isolated case, but here's a tip for everyone who thinks Copyscape sees plagiarism the same way Google does: Google is far more advanced at tracing where an article came from. In this quick case study I didn't receive any penalty from Google, but there's a strong possibility that Google can track what Copyscape can't.
I downloaded Expired Article Hunter for the first time the other day. The app scraped articles in the dog niche, and with my Copyscape API key connected it returned no matches (which means the text was either 100% unique or the source site had already been de-indexed).
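For anyone curious what that API check actually looks like under the hood, here's a rough Python sketch of the kind of text search such a tool presumably runs against the Copyscape Premium API. The endpoint and parameter names (u, k, o=csearch, e, t) are from my reading of Copyscape's API docs, and the credentials and filename are placeholders, so verify everything against the official documentation before relying on it.

```python
import requests  # third-party: pip install requests

# Placeholder credentials -- swap in your own Copyscape Premium account details.
COPYSCAPE_USER = "your_username"
COPYSCAPE_KEY = "your_api_key"

def copyscape_text_search(text):
    """POST article text to the Copyscape Premium API and return the raw XML.

    Parameter names are taken from my reading of the Copyscape API docs:
    o=csearch requests an Internet search, e sets the text encoding,
    t carries the text itself. Double-check these against the docs.
    """
    resp = requests.post(
        "https://www.copyscape.com/api/",
        data={
            "u": COPYSCAPE_USER,
            "k": COPYSCAPE_KEY,
            "o": "csearch",
            "e": "UTF-8",
            "t": text,
        },
        timeout=30,
    )
    resp.raise_for_status()
    # An XML response with zero <result> elements is what the tool reports
    # as "no results", i.e. Copyscape found no indexed copies.
    return resp.text

# Hypothetical filename for one of the scraped dog-niche articles.
article = open("scraped_dog_article.txt", encoding="utf-8").read()
print(copyscape_text_search(article))
```

The point of showing this is the limitation it exposes: the check can only find copies that Copyscape still has indexed, which is exactly why a de-indexed expired domain comes back "clean".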
I wanted to test this out before using it for pillar content (yes, pillar). I cleaned up the article manually in Grammarly and made some improvements so it would read well and add value. The idea was to publish it on HubPages and see if I could get past their strict editorial review.
1. I made a well-structured and attractive hub out of the article.
2. I ran another plagiarism check with Grammarly and Copyscape, this time on the Copyscape website itself just to be sure. 100% unique, no duplicate results. I submitted my hub.
3. Within 24 hours I received an email from them saying they had unpublished my article because of detected duplication, and advising me to make changes.
4. I replied to their email, argued that the content was unique, and said I believed the duplication report was mistaken. I suggested the duplicated parts might just be snippets from my research (I raised this because I wanted to confirm the suspected parts weren't merely snippets).
I requested the source URL. They replied after 24 hours with two URLs.
In case there were factors explaining why Copyscape hadn't caught it, I ran a Copyscape copied-text comparison instead of a URL comparison. It turned out 60% of the article was a duplicate of the link they provided. Even giving it the most charitable reading, the paragraphs were blatantly duplicated.
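If you want to sanity-check a flagged article yourself without burning Copyscape credits, a crude word-shingle overlap gives a rough duplicate percentage. This is only an illustrative sketch (the filenames are placeholders and the number won't match Copyscape's figure exactly), but it makes blatant paragraph lifting obvious.

```python
import re

def shingles(text, n=5):
    """Return the set of word n-grams ('shingles') found in the text."""
    words = re.findall(r"\w+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def duplicate_ratio(article, source, n=5):
    """Fraction of the article's shingles that also appear in the source text."""
    a, s = shingles(article, n), shingles(source, n)
    return len(a & s) / len(a) if a else 0.0

# Hypothetical files: your hub text and the page HubPages flagged as the source.
article_text = open("my_hub.txt", encoding="utf-8").read()
source_text = open("flagged_source.txt", encoding="utf-8").read()
ratio = duplicate_ratio(article_text, source_text)
print(f"{ratio:.0%} of the article's 5-word phrases appear in the flagged source")
```

Copied paragraphs share long runs of identical 5-word phrases, so anything in the 50-60% range here means whole passages were lifted, not just a stray snippet.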
== CONCLUSION ==
Copyscape, reputable (and perhaps the most trusted) plagiarism checker that it is, can't always detect duplicates. I don't know what system HubPages uses for checking duplicates, but I'm fairly sure Google can top it.
We don't know what Google does with its database of cached pages. I also suspect Big G keeps a low profile in the duplicate-detection game, but it has an even more advanced algorithm for dealing with this... or will have one in the future.
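For what it's worth, Google has published research on near-duplicate detection at crawl scale using SimHash fingerprints, where near-identical pages end up only a few bits apart. Here's a minimal Python sketch of that idea, just to show how cheap it is to spot mostly-copied text once you have the whole web cached; it's an illustration of the technique, not a claim about what Google runs today.

```python
import hashlib
import re

def simhash(text, hashbits=64):
    """Compute a SimHash fingerprint from word-level features.

    Near-duplicate documents produce fingerprints with a small Hamming distance.
    """
    v = [0] * hashbits
    for token in re.findall(r"\w+", text.lower()):
        # Hash each token to a 64-bit integer.
        h = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16) & ((1 << hashbits) - 1)
        for i in range(hashbits):
            v[i] += 1 if (h >> i) & 1 else -1
    fingerprint = 0
    for i in range(hashbits):
        if v[i] > 0:
            fingerprint |= 1 << i
    return fingerprint

def hamming_distance(a, b):
    """Number of bits where two fingerprints differ."""
    return bin(a ^ b).count("1")

# Two lightly edited versions of the same passage land within a few bits.
a = simhash("The quick brown fox jumps over the lazy dog near the river bank.")
b = simhash("The quick brown fox jumped over the lazy dog near the river bank.")
print(hamming_distance(a, b))  # small distance => likely near-duplicates
```

A rewrite that only swaps a few words barely moves the fingerprint, which is why light editing of a scraped article is unlikely to fool a search engine that fingerprints everything it has ever crawled.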
Expired Article Hunter is a powerful tool for scraping articles, but those articles aren't foolproof, not against Big G at least. If you want to use it, combine the scraped content and spin it for something other than your pillar pages. Who knows, Google may run the same kind of process Expired Article Hunter does, digging related content out of history and checking whether yours is a duplicate. I don't see this as a sandbox issue at the moment, since anyone can legitimately drop an old domain and take their content with them. Still, I recommend using expired and unoriginal articles in moderation.
BE SAFE.
May the Force be With You.
