Best Blackhat Forum

Full Version: [Best Offer - $13] DDominator, An Awesome Expired Domain Crawler! Buy DD License NOW!
Dear Dev Warrior:

Yes, I am certain that having a great list of seed URLs and keywords will greatly increase the power of any expired domain scraper. This applies equally to DDominator and Scrapebox.

Perhaps my keywords or URL list were inadequate in some way. That is always a possibility, and I am open to that point. However, none of the DDominator training helps with finding great seed URLs or keywords.

The user is left to his own devices to find these crucial seed URLs.

However, Scrapebox does have a video showing how to use "wiki" URLs to scrape for "low hanging fruit" expired local domains [see the YouTube video link posted in my prior post], which I find useful.
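To make the "wiki" approach concrete, here is a minimal Python sketch of the general idea, assuming the requests and beautifulsoup4 packages are installed: fetch a seed page, pull its outbound links, and keep the unique external domains as candidates for a later availability check. The function name and example seed URL are my own illustration, not anything from Scrapebox or DDominator.

Code:
# A minimal sketch of the "wiki seed URL" idea, not actual Scrapebox or DDominator
# code: fetch a seed page, pull its outbound links, and collect the unique external
# domains as candidates for a later availability check.
# Assumes the third-party packages `requests` and `beautifulsoup4` are installed.
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

def external_domains(seed_url: str) -> set[str]:
    """Return the set of external domains linked from seed_url."""
    seed_host = urlparse(seed_url).netloc.lower()
    html = requests.get(seed_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    domains = set()
    for a in soup.find_all("a", href=True):
        host = urlparse(a["href"]).netloc.lower()
        # Keep only outbound hosts, i.e. anything other than the seed's own host.
        if host and host != seed_host and not host.endswith("." + seed_host):
            domains.add(host.removeprefix("www."))
    return domains

if __name__ == "__main__":
    # Hypothetical seed page; any wiki "list of ..." page works the same way.
    for domain in sorted(external_domains("https://en.wikipedia.org/wiki/Example")):
        print(domain)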

Yes, it is possible that using DNS alone to check for unregistered domains could lead to more false positives among the supposedly expired domains. However, running the skimmed-down list through a bulk registrar availability checker should solve that problem.
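For illustration, here is a minimal sketch of that two-step idea, assuming the third-party packages dnspython and python-whois are installed. The helper names are my own, and a registrar's bulk availability checker would still be the authoritative final word.

Code:
# Sketch of the two-step check: a cheap DNS pass to skim the list, then a WHOIS
# confirmation, since a DNS lookup alone over-reports availability. These helpers
# are illustrative only; a bulk registrar checker remains the final authority.
import dns.exception
import dns.resolver
import whois  # provided by the python-whois package

def dns_says_unregistered(domain: str) -> bool:
    """First pass: no NS records is a hint, not proof, that the domain is free."""
    try:
        dns.resolver.resolve(domain, "NS")
        return False   # NS records exist, so the domain is almost certainly taken
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
        return True    # looks free; confirm with WHOIS below
    except dns.exception.Timeout:
        return False   # inconclusive, so keep it out of the candidate list

def whois_confirms_available(domain: str) -> bool:
    """Second pass: an empty WHOIS record suggests the domain really is available."""
    try:
        return not whois.whois(domain).domain_name
    except Exception:
        return False   # WHOIS lookup failed, so treat availability as unconfirmed

seeds = ["example.com", "some-dead-blog.net"]             # illustrative list
skimmed = [d for d in seeds if dns_says_unregistered(d)]  # cheap DNS skim
confirmed = [d for d in skimmed if whois_confirms_available(d)]
print(confirmed)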

Scrapebox does a marvelous job of checking for expired web 2.0 properties, and does so without requiring any premium plugins. See their Vanity Name Checker for more on that process.

Yes, Majestic vs Moz metrics is now a hot topic, and many people are leaning toward Majestic as the better source of metrics.

However, that issue is moot here, as we're looking for results on thousands of scraped domains, and a slight bias in favor of Majestic should have little or no impact on how we find high-value domains.

If you want to analyze the metrics of your money site or competitors' sites, then yes, Majestic would be better for that purpose. But it is overkill here during these initial scraping passes to locate expired domains.

In the end, after experimenting with numerous sample seed URLs, I was still unable to get any high-value domains using DDominator. That could be my problem, or it could be a built-in bias of DDominator... I don't know at this point. I gave it a try, and it didn't work out.

However, all things considered, the absolute reliability and power of Scrapebox [as a scraping tool] over the past decade is virtually unmatched by any other software in this industry.

This new premium plugin for Scrapebox is part of the Scrapebox family of products. I trust Scrapebox to get this right.

The one-time fee of $47 for the Scrapebox plugin [versus the $47 monthly fee for DDominator, plus its "points" system, which can run down to zero with heavy use, meaning you'd need to buy more points for another $47 even though you've already paid for the month on the "ultimate" package] is a no-brainer.

Sincerely,

Badcoffee


(07-04-2016 04:19 AM)♕ Dev Warrior ♕ Wrote: @badcoffee,
Thanks for being a DDominator user and for your reviews!
I understand your points here and would like you to understand that finding high-quality expired domains depends on multiple factors, like niche keywords and the footprints you use (which ultimately generate the seed URLs to crawl and search for expired domains). And this will be the case with any crawler available! So if you are using Scrapebox with the expired domain plugin, you still need good seed URLs, which you need to generate using keywords + footprints. FYI, there are many happy DDominator users who find very high-metrics domains daily, and that simply means they have a very good collection of seed URLs or keywords + footprints!
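As a rough illustration of what "keywords + footprints" means in practice (my own sketch, not DDominator's internal logic): each keyword is paired with each footprint to form search queries, and the result URLs from those searches become the seed URLs fed to the crawler.

Code:
# Every keyword is combined with every footprint to build search queries; the URLs
# those queries return would then serve as seed URLs for the crawler. The keywords
# and footprints below are generic examples, not DDominator's built-in lists.
from itertools import product

keywords = ["fly fishing", "organic gardening"]
footprints = ['"powered by wordpress"', 'inurl:links', 'intitle:"resources"']

queries = [f"{keyword} {footprint}" for keyword, footprint in product(keywords, footprints)]
for query in queries:
    print(query)   # each query goes to a search-engine scraper to harvest seed URLs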

Also, you might have noticed or not (you might find this a bit of a comparison of DDominator and the Scrapebox expired domain crawler):

1. A lot of manual intervention is required to work with Scrapebox, whereas DDominator is real 1-click software!

2. The Scrapebox plugin requires separate Moz account access, but DDominator doesn't require Moz API access.

3. The Scrapebox plugin doesn't support Majestic account access, but DDominator does support Majestic API metrics.

4. <Very important> The Scrapebox plugin checks domain availability based on a DNS check alone, which will give a LOT of false availability results, whereas DDominator checks domains in two steps: a. a DNS check, and b. queries to the various domain registries to verify the actual availability status.

5. DDominator provides a unique domain resurrection feature built into the software out of the box, which is also a 1-click function!

6. DDominator gives you the ability to find expired web 2.0 websites as well.

7. Using DDominator, one can work on the current result set of expired domains (filter, export, etc.) while the crawler keeps working in the background, unlike other crawlers; see the sketch after this list.
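As a generic illustration of point 7 (my own sketch, not DDominator's internals), the usual design is a background crawler thread pushing found domains onto a queue while the foreground filters or exports whatever has been collected so far:

Code:
# Background crawler feeds a queue; the foreground works on the partial result set.
import queue
import threading
import time

found: "queue.Queue[str]" = queue.Queue()

def crawler() -> None:
    # Stand-in for the real crawl loop: pretend a domain is discovered every second.
    for i in range(5):
        time.sleep(1)
        found.put(f"expired-domain-{i}.com")

threading.Thread(target=crawler, daemon=True).start()

results: list[str] = []
for _ in range(5):
    results.append(found.get())   # drain discoveries as they arrive
    # Foreground work on the results collected so far, e.g. a simple filter:
    print("domains so far:", [d for d in results if d.endswith(".com")])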

By writing this, I am here to say that DDominator is specialized software for finding high-metrics expired domains and resurrecting domains from the Wayback Machine, unlike Scrapebox, which is really great software for multiple tasks but not specialized in one! Hope you get the point.

As for the current development phase of DDominator, we have added Ahrefs metrics support, reverse crawl, Bing and Yahoo search engine support, etc.

And to end this message, I will again point out that finding expired domains is not tool-dependent; it depends on many factors, including your keywords + footprints and seed URLs. Software simply helps you perform these tasks easily and efficiently!
@badcoffee,
Thanks for your reply. Go ahead with your choice of software!

I understand that there are NOT many tutorials available as of now; however, we and some of our software affiliates/reviewers are working on video tutorials for DDominator! Most of Scrapebox's tutorials come from its users, and that's one of the good things about any software! And I am sure you will be seeing a lot of tutorials and reviews of DDominator very soon.

I just ran DDominator on the same Wikipedia URL from that video, and in very little time, 10 minutes (compared to that video), DDominator found 30 expired domains with backlinks from Wikipedia!

I will again insist that finding expired domains depends on multiple factors, as I described in my last post!

[Image: xF4T55s.png]

Also, you are welcome to use DDominator anytime if you feel like it.
Thanks again.

Thanks
To your credit, I like the way you're taking the high road in defending your product, and using concrete examples to illustrate your product features.

You have avoided any name calling or false comparisons.

This tactic will save your product, even if it's not as capable as Scrapebox, which accomplishes the same thing at far less expense.

I do hope your product is successful, and you continue with developing new features.

PS: I am running the same seed Wikipedia URL -- https://en.wikipedia.org/wiki/List_of_la...population -- through your DDominator software to compare results... I will post my results, with an image, once the software has run approximately 10 minutes, as you did.

Sincerely, Badcoffee
Here are my results from running for approximately 14 minutes... vs. 10 minutes on your end.

Found 16 expired domains... nothing over 5 TF (Trust Flow).

[Image: 6vqbbNY]

http://imgur.com/6vqbbNY
(07-04-2016 06:40 AM)badcoffee Wrote: Here are my results from running for approximately 14 minutes... vs. 10 minutes on your end.

Found 16 expired domains... nothing over 5 TF (Trust Flow).

[Image: 6vqbbNY]

http://imgur.com/6vqbbNY

Can you run the same scan again with these settings?

Settings --> Crawler Tab

[Image: OUijBkA.png]

Settings --> Metrics Tab

[Image: dYGnODw.png]

OK, running DDominator with your exact seed URL and the settings pictured above for 15 minutes to evaluate the new settings... I will post results soon...

OK, I tried your settings, and DDominator only crawled the single Wikipedia page, because your instructions selected the option to not allow scraping of external links... no domains were found, and the run lasted only 4 seconds...

I will change the setting to allow external crawling, to see if that helps...
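For what it's worth, here is a generic Python sketch of how an "allow external links" toggle and a crawl-depth limit interact in a breadth-first crawler. This is my own reading of the trade-off, not DDominator's actual crawl logic, whose exact semantics for this setting are not documented in the thread; it assumes the requests and beautifulsoup4 packages are installed.

Code:
# Breadth-first crawl bounded by depth; the allow_external flag decides whether
# links pointing off the seed's own host are followed at all. Illustrative only.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl(seed_url: str, max_depth: int, allow_external: bool) -> set[str]:
    seed_host = urlparse(seed_url).netloc
    seen, frontier = {seed_url}, deque([(seed_url, 0)])
    while frontier:
        url, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])
            parsed = urlparse(link)
            internal = parsed.netloc == seed_host
            # With allow_external=False only same-host links are followed, so the
            # crawl stays on the seed site; a low max_depth keeps it near the seed.
            if (parsed.scheme in ("http", "https") and link not in seen
                    and (internal or allow_external)):
                seen.add(link)
                frontier.append((link, depth + 1))
    return seen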
(07-04-2016 07:04 AM)badcoffee Wrote: OK, running DDominator with your exact seed URL and the settings pictured above for 15 minutes to evaluate the new settings... I will post results soon...

OK, I tried your settings, and DDominator only crawled the single Wikipedia page, because your instructions selected the option to not allow scraping of external links... no domains were found, and the run lasted only 4 seconds...

I will change the setting to allow external crawling, to see if that helps...

You should NOT allow external crawling! Come to Skype (ddominator_support) and I might help you achieve these results with DDominator easily!
Another result for the same URL within 15 minutes: 40+ expired domains. Tested on another VPS, though!

[Image: 8TguuM6.png]
Update on the campaign - DDominator found 100+ domains in 30 minutes!

[Image: 2qNozXB.png]
After speaking with a DDominator customer service rep via Skype, I was given some modified settings and an example of another seed URL.

I will test these out over the next two days and get back with any new results. Perhaps this will help resolve my poor results with my old settings. I am hoping that I can retract my prior findings and give a positive review of the software.

PS: This is the problem with this software... the settings are very elusive to get right [for instance: the crawl level, 1 to 10, which is it, 1 or 3 or 5 or 10? Scrape external links, yes or no? Use a single seed URL from a high-value seed site, or use many? Or use keywords instead of seed URLs?... and so on... who knows?], and DDominator doesn't provide training on getting any of these settings right in the first place.

They really need to step up and make a video showing the ideal settings and showing lucrative seed URLs to crawl to get results.

PS: You'll notice that the seed URL used in all of the prior postings [just above] by Dev Warrior is taken from the Scrapebox video training example that I posted on page 24 of this thread, i.e. the Wikipedia "largest cities in California" seed URL. Had I not posted that Scrapebox video, there would be no results for Dev Warrior to show you in the above posts. This is what I mean: a lack of training examples can result in disgruntled customers.