Scrapers use other search engines | Kodi Forums
  1. #1

    Scrapers use other search engines

    I don't know anything about programming, but I have a few thoughts that might help.
    Since Google has blocked, censored, blacklisted, or had over 900 sites shut down,

    can't developers use other search engines that are safer, like Startpage, Dogpile, DuckDuckGo, or some other private search engine?

    I want to help.
    How can I help?
    Learn Python?
    Start a GoFundMe?


  2. #2
    Member DE5T1NY's Avatar
    Join Date
    Nov 2017
    Location
    Belfast
    Posts
    34
    Hello Torontopaul

    I don't think using another search engine on a blocked site will work, as it will only lead you to the said site's address.

    It is possible to use proxy sites on search engines to bypass a blocked site; maybe a scraper could get around this by using code like this:

    Example:

    <url spoof="search engine">url address/find?s=tt;q=\1</url>

    The "url address" part captures the full path of the URL; adding code to the scraper to strip the site's address and a few slashes should achieve the goal.
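    The stripping step described above can be sketched in Python. This is only an illustration, not actual scraper code: the redirect layout (a `q` query parameter holding the target URL) and the host `search.example` are assumptions for the example.

    ```python
    from urllib.parse import urlparse, parse_qs

    def extract_target(result_url):
        """Pull the real destination out of a search-engine redirect URL.

        Assumes (hypothetically) that the engine wraps result links as
        https://search.example/find?q=<encoded target URL>.
        """
        qs = parse_qs(urlparse(result_url).query)
        return qs.get("q", [""])[0]  # parse_qs already percent-decodes

    def strip_site(url):
        """Drop the scheme and host, keeping only the path the scraper needs."""
        return urlparse(url).path.lstrip("/")
    ```

    For example, `extract_target("https://search.example/find?q=https%3A%2F%2Fblocked.example%2Ftitle%2Ftt123")` yields the blocked site's full URL, and `strip_site` then reduces it to `title/tt123`.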

    You could use a web-wide crawler instead of a proxy to find a link to a working magnet torrent from a blocked site, so I see no reason why it couldn't be added to the scraper code as well.
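    A crawler of that kind would fetch candidate pages and scan each one for magnet links. The fetching itself is omitted here; this minimal sketch covers only the scanning step, using the standard `magnet:?xt=urn:btih:` URI form with a 40-character hex info-hash.

    ```python
    import re

    # Matches a BitTorrent magnet URI: the btih info-hash (40 hex chars)
    # plus any trailing parameters such as &dn=<display name>.
    MAGNET_RE = re.compile(r'magnet:\?xt=urn:btih:[0-9A-Fa-f]{40}[^\s<>"]*')

    def find_magnets(html):
        """Return every magnet link found in a page's HTML source."""
        return MAGNET_RE.findall(html)
    ```

    A crawler would call `find_magnets` on each fetched page and stop once a non-empty list comes back.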

    So the next question is: what is best, or even possible? A proxy, a cache, or a web-wide crawler?

    Love Destiny.
