08-13-2011, 01:50 AM
If you’re on the lookout for high page rank and auto approve blogs, finding them is easy, but finding pages that haven’t been spammed to death is not.
One of the biggest problems with finding auto approve blogs is that one or more people have usually found the blog before you. This usually means the blog post has already been hit hundreds or even thousands of times by other spammers.
The problem with that is that the more outbound links a page has, the less “link juice” gets passed on to your page. This is where a couple of search operators, combined with a basic footprint, will help you find posts that have not yet been spammed to death.
So if we were to take a look at this blog: http://10newsblogs.com/halfyeardawn/2009...ful-sleep/
You can see the blog is auto approve, but there is tons of spam on that page. What we want to do is find a footprint on this page that we can use to find other pages on this blog with comments open. The first obvious footprint would be “Leave a reply”. We could also use the “Name”, “Email”, and “Website” fields, because they are all common across the pages with comments open.
Our next step would be to use a search operator to find all the indexed pages of the website. The common footprint for this would be site:http://www.thedomain.com
So if we go to Scrapebox, check “custom footprint”, then put in site:http://10newsblogs.com (make sure you use the root of the domain, i.e. instead of http://10newsblogs.com/blog/category/post/ it would be just http://10newsblogs.com).
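Reducing a deep post URL to the root of the domain can be scripted if you have a long list of harvested URLs. Here is a minimal Python sketch of that step; the function name `site_query` is my own, not part of Scrapebox:

```python
from urllib.parse import urlparse

def site_query(url):
    """Reduce any page URL to a site: operator on the root of the domain."""
    parsed = urlparse(url)
    # Keep only scheme + host, dropping the /blog/category/post/ path.
    return f"site:{parsed.scheme}://{parsed.netloc}"

# A deep post URL collapses to a query covering the whole domain.
print(site_query("http://10newsblogs.com/blog/category/post/"))
# site:http://10newsblogs.com
```

Run over a harvested list, this gives you one site: query per domain ready to paste into the custom footprint box.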
Then we click “start harvesting”. You can see Scrapebox found 200 other pages indexed by search engines.
The problem is that it also found pages pointing to .pdf guides and other pages we cannot comment on. So what we need to do is combine the search operator we just used with the footprint we found on the blog.
Let’s try the “leave a reply” footprint we found on the comment page.
The footprint we enter in Scrapebox will now be site:http://10newsblogs.com/ + “Leave a Reply”
You can see now that we have 157 pages scraped from that website that all contain comment fields.
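If you want to prepare these combined queries in bulk (the “+” above is just Scrapebox notation; the actual query joins the operator and the quoted footprint with a space), a short sketch like this works — `footprint_queries` is a hypothetical helper name, and which footprints narrow results best is an assumption you should test per platform:

```python
def footprint_queries(domain, footprints):
    """Combine a site: restriction with each quoted comment-page footprint."""
    return [f'site:{domain} "{fp}"' for fp in footprints]

queries = footprint_queries("http://10newsblogs.com/",
                            ["Leave a Reply", "Name", "Email", "Website"])
for q in queries:
    print(q)
# e.g. site:http://10newsblogs.com/ "Leave a Reply"
```

Each line can then be loaded into Scrapebox as a separate custom footprint.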
All you have to do now is run a PR (PageRank) check to find out which of these pages has the highest PageRank. Then pick the ones with the lowest outbound links and begin posting your comments. That’s all there is to it!
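The PR check itself happens inside Scrapebox, but the outbound-link filter can be approximated yourself. This is a rough sketch using only the standard library — counting off-domain `<a href>` links as a proxy for outbound links; the class name and the sample HTML are mine, not from any real page:

```python
from html.parser import HTMLParser

class OutboundLinkCounter(HTMLParser):
    """Count <a href> links pointing off-domain (a rough OBL proxy)."""
    def __init__(self, domain):
        super().__init__()
        self.domain = domain
        self.outbound = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            # Absolute links that don't mention our domain count as outbound.
            if href.startswith("http") and self.domain not in href:
                self.outbound += 1

# Hypothetical page fragment: one internal link, two spam links.
sample = ('<a href="http://10newsblogs.com/about">in</a>'
          '<a href="http://spam1.example">out</a>'
          '<a href="http://spam2.example">out</a>')
counter = OutboundLinkCounter("10newsblogs.com")
counter.feed(sample)
print(counter.outbound)  # 2
```

Sorting your scraped list by this count ascending surfaces the least-spammed pages first.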
Do you have any other methods for finding high PR auto approve blogs? Share by leaving a comment below!