
What is FyberSpider?

FyberSpider is the FyberSearch web crawler. There are two active versions of FyberSpider: 1.2 and 1.3. FyberSpider 1.2 is the crawler used for our online services, such as URL Breakdown and URL Inclusion Status. FyberSpider 1.3 is the crawler that catalogs web pages for later indexing.

FyberSpider's user agent/browser string should be one of the following two lines:
User-Agent: FyberSpider/1.2 (http://www.fybersearch.com/fyberspider.php)
User-Agent: FyberSpider/1.3 (http://www.fybersearch.com/fyberspider.php)
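
If you would like to spot FyberSpider requests in your own scripts or log analysis, a minimal sketch in Python, assuming the User-Agent header is available to you as a plain string, might look like this:

def fyberspider_version(user_agent):
    # Return "1.2" or "1.3" when the request came from FyberSpider, else None.
    # Assumes user_agent holds the raw User-Agent header value as a string.
    prefix = "FyberSpider/"
    if user_agent.startswith(prefix):
        # The version number is the token right after the slash.
        return user_agent[len(prefix):].split(" ")[0]
    return None

# Example: prints "1.3"
print(fyberspider_version("FyberSpider/1.3 (http://www.fybersearch.com/fyberspider.php)"))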


Prevent Content from being Crawled

FyberSpider is the only web crawler we are aware of that allows you to keep it from crawling specific portions of a web page. Just place the <fybersearch_ignore> opening tag before the content you would like FyberSpider to ignore and the </fybersearch_ignore> closing tag after it, as shown in the example below.
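
Here is a minimal sketch of the tags in use (the surrounding paragraphs are only illustrative):

<p>This paragraph will be crawled and indexed as usual.</p>
<fybersearch_ignore>
<p>FyberSpider will skip everything between the tags, including this paragraph.</p>
</fybersearch_ignore>
<p>Crawling resumes after the closing tag.</p>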

Not sure how (or if) this works? First, take a look at the example file, then break the URL down with our URL Breakdown tool. You will see that only the content outside the <fybersearch_ignore></fybersearch_ignore> tags is displayed. If you still do not understand, try looking at the source code of the example file.


Web Crawling Frequency

It depends on how good your web pages taste, but they should never be eaten too quickly. If FyberSpider eats too much, please contact us. In general, at least 3 seconds should pass between requests for pages from the same domain name; the sketch below illustrates that pacing.
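
As a rough illustration of that pacing rule (not FyberSpider's actual code), a polite crawler could remember when it last requested a page from each domain and sleep until at least 3 seconds have passed:

import time
from urllib.parse import urlparse

MIN_DELAY = 3.0    # seconds to wait between hits on the same domain
last_request = {}  # domain name -> time of the most recent request

def wait_politely(url):
    # Sleep just long enough so that at least MIN_DELAY seconds pass
    # between two consecutive requests to the same domain.
    domain = urlparse(url).netloc
    elapsed = time.monotonic() - last_request.get(domain, 0.0)
    if elapsed < MIN_DELAY:
        time.sleep(MIN_DELAY - elapsed)
    last_request[domain] = time.monotonic()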


Prevent Web Pages from being Crawled

FyberSpider will obey the Robots Exclusion Protocol.
FyberSpider will not obey the robots META tag at this time.

If you would like to use a robots.txt file to tell FyberSpider (or other robots) not to crawl pages of your website, the first step is to create a file named "robots.txt", following the examples found on this page. To block only FyberSpider instead of all robots from a page, enter "fyberspider" on the user agent line instead of "*". Then upload the file to your server so that it can be viewed from your domain name. Example: www.yourwebsiteaddress.com/robots.txt.
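
For example, a robots.txt file that blocks FyberSpider from your entire site could look like the following (the Disallow path is just an illustration; adjust it to the pages you want blocked):

User-agent: fyberspider
Disallow: /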

Please note that blocking FyberSpider via the robots.txt file will block all versions of FyberSpider.


URLs FyberSpider Will Not Crawl

It is not our goal to catalog every web page in existence. Our goal is to limit our index to only the most valuable web pages for each word or combination of words and filter out the rest.

Therefore, we apply many strict rules, which other search engines do not share, when choosing which URLs to catalog and index. To help prevent competitors and spammers from attempting to use these rules to our disadvantage, we will not be publishing the details.