Can scrapers use other search engines?

Torontopaul

New member
Oct 11, 2017
I don't know anything about programming, but I have a few thoughts to help.

Since Google has blocked, censored, blacklisted, or had over 900 sites shut down, can't developers use other search engines that are safer, like Startpage, Dogpile, DuckDuckGo, or some other private search engine?

I want to help.
How can I help?
Learn Python?
Start a GoFundMe?

DE5T1NY

New member
Nov 28, 2017
Belfast
Hello Torontopaul

I don't think using another search engine on a blocked site will work, as it will only lead you to the said site's address.

It is possible to use proxy sites on search engines to bypass a blocked site; maybe a scraper could get around this by using code like this:

Example:

<url spoof="search engine">url address/find?s=tt;q=\1</url>

The "url address" catches the full path of the url, adding a code into a "scraper" to remove the sites address and a few /s and it should capture their goals.

You can use a web-wide crawler instead of a proxy to find a link to a working magnet torrent from a blocked site, so I see no reason why it shouldn't be added to the scraper code as well.
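If a crawler does fetch pages from mirrors, the scraper side could pull magnet links out of the HTML with a regular expression, since a magnet link carries the torrent's info-hash and works no matter which site it was found on. A minimal sketch, where the sample HTML and info-hash are made up:

```python
import re

# magnet: links embed a 40-hex-character info-hash (urn:btih),
# optionally followed by extra parameters such as a display name.
MAGNET_RE = re.compile(r"magnet:\?xt=urn:btih:[0-9A-Fa-f]{40}[^\s\"'<]*")

def find_magnets(html):
    """Pull magnet links out of a fetched page's HTML."""
    return MAGNET_RE.findall(html)

# Hypothetical page fragment a crawler might have fetched.
sample = ('<a href="magnet:?xt=urn:btih:'
          '0123456789abcdef0123456789abcdef01234567&dn=example">get</a>')
print(find_magnets(sample))
```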

So the next question is: "What is best, or even possible?" A proxy, a cache, or a web-wide crawler?

Love, Destiny.