Skipfish Notes



Skipfish is an active web application security reconnaissance tool. It prepares an interactive sitemap for the targeted site by carrying out a recursive crawl and dictionary-based probes. The resulting map is then annotated with the output from a number of active (but hopefully non-disruptive) security checks. The final report generated by the tool is meant to serve as a foundation for professional web application security assessments.

https://github.com/spinkham/skipfish

http://code.google.com/p/skipfish/
 

Install

Install the required libraries:

sudo apt-get install libssl0.9.8
sudo apt-get install libssl-dev
sudo apt-get install openssl
sudo apt-get install libidn11-dev
 

Install skipfish:

wget http://skipfish.googlecode.com/files/skipfish-1.69b.tgz
tar zxvf skipfish-1.69b.tgz
mv skipfish-1.69b skipfish
cd skipfish
make
# after compilation, the skipfish executable is created in this directory
cp dictionaries/default.wl skipfish.wl
# copy one of the bundled dictionaries to use for the scan
./skipfish -o data http://mall.midea.com/detail/index
# data is the output directory; when the scan finishes, open data/index.html to view the results

Some Parameters

skipfish web application scanner - version 2.10b
Usage: /home/admin/workspace/skipfish/skipfish [ options ... ] -W wordlist -o output_dir start_url [ start_url2 ... ]

Authentication and access options:

  -A user:pass      - use specified HTTP authentication credentials
  -F host=IP        - pretend that 'host' resolves to 'IP'
  -C name=val       - append a custom cookie to all requests
  -H name=val       - append a custom HTTP header to all requests
  -b (i|f|p)        - use headers consistent with MSIE / Firefox / iPhone
  -N                - do not accept any new cookies
  --auth-form url   - form authentication URL
  --auth-user user  - form authentication user
  --auth-pass pass  - form authentication password
  --auth-verify-url - URL for in-session detection

Crawl scope options:

  -d max_depth      - maximum crawl tree depth (16)
  -c max_child      - maximum children to index per node (512)
  -x max_desc       - maximum descendants to index per branch (8192)
  -r r_limit        - max total number of requests to send (100000000)
  -p crawl%         - node and link crawl probability (100%)
  -q hex            - repeat probabilistic scan with given seed
  -I string         - only follow URLs matching 'string'
  -X string         - exclude URLs matching 'string'
  -K string         - do not fuzz parameters named 'string'
  -D domain         - crawl cross-site links to another domain
  -B domain         - trust, but do not crawl, another domain
  -Z                - do not descend into 5xx locations
  -O                - do not submit any forms
  -P                - do not parse HTML, etc, to find new links

Reporting options:

  -o dir            - write output to specified directory (required)
  -M                - log warnings about mixed content / non-SSL passwords
  -E                - log all HTTP/1.0 / HTTP/1.1 caching intent mismatches
  -U                - log all external URLs and e-mails seen
  -Q                - completely suppress duplicate nodes in reports
  -u                - be quiet, disable realtime progress stats
  -v                - enable runtime logging (to stderr)

Dictionary management options:

  -W wordlist       - use a specified read-write wordlist (required)
  -S wordlist       - load a supplemental read-only wordlist
  -L                - do not auto-learn new keywords for the site
  -Y                - do not fuzz extensions in directory brute-force
  -R age            - purge words hit more than 'age' scans ago
  -T name=val       - add new form auto-fill rule
  -G max_guess      - maximum number of keyword guesses to keep (256)
  -z sigfile        - load signatures from this file

Performance settings:

  -g max_conn       - max simultaneous TCP connections, global (40)
  -m host_conn      - max simultaneous connections, per target IP (10)
  -f max_fail       - max number of consecutive HTTP errors (100)
  -t req_tmout      - total request response timeout (20 s)
  -w rw_tmout       - individual network I/O timeout (10 s)
  -i idle_tmout     - timeout on idle HTTP connections (10 s)
  -s s_limit        - response size limit (400000 B)
  -e                - do not keep binary responses for reporting

Other settings:

  -l max_req        - max requests per second (0.000000)
  -k duration       - stop scanning after the given duration h:m:s
  --config file     - load the specified configuration file
 

How to run the scanner?

To compile it, simply unpack the archive and try make. Chances are, you will need to install libidn (http://ftp.gnu.org/gnu/libidn/libidn-1.18.tar.gz) or libpcre3 (http://www.pcre.org/) first.
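As a rough sketch on a Debian-based system (the package names and the 2.10b tarball name are assumptions; adjust them for your distribution and the version you downloaded), the build might look like this:

$ sudo apt-get install libidn11-dev libpcre3-dev libssl-dev zlib1g-dev
$ tar zxvf skipfish-2.10b.tgz
$ cd skipfish-2.10b
$ make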

Next, you need to read the instructions provided in doc/dictionaries.txt to select the right dictionary file and configure it correctly. This step has a profound impact on the quality of scan results later on, so don’t skip it.

Once you have the dictionary selected, you can use -S to load that dictionary, and -W to specify an initially empty file for any newly learned site-specific keywords (which will come in handy in future assessments):

$ touch new_dict.wl

$ ./skipfish -o output_dir -S existing_dictionary.wl -W new_dict.wl \
    http://www.example.com/some/starting/path.txt

You can use -W- if you don’t want to store auto-learned keywords anywhere.

Note that you can provide more than one starting URL if so desired; all of them will be crawled. You can also read a list of URLs from a file using this syntax:

$ ./skipfish …other options… -o output_dir @/path/to/url_list.txt

The tool will display some helpful stats while the scan is in progress. You can also switch to a list of in-flight HTTP requests by pressing return.

In the example above, skipfish will scan the entire www.example.com (including services on other ports, if linked to from the main page), and write a report to output_dir/index.html. You can then view this report with your favorite browser (JavaScript must be enabled; and because of recent file:/// security improvements in certain browsers, you might need to access results over HTTP). The index.html file is static; actual results are stored as a hierarchy of JSON files, suitable for machine processing or different presentation frontends if needs be. A text-based list of all the visited URLs, plus some useful metadata, is stored to a file named pivots.txt, too.
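If your browser refuses to load the report via file:///, one simple workaround (a sketch, not part of skipfish itself) is to serve the output directory over HTTP with Python's built-in web server and then browse to http://localhost:8000/:

$ cd output_dir
$ python3 -m http.server 8000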

A simple companion script, sfscandiff, can be used to compute a delta for two scans executed against the same target with the same flags. The newer report will be non-destructively annotated by adding red background to all new or changed nodes; and blue background to all new or changed issues found.
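Invocation is straightforward; assuming the directory names below, the older scan comes first and the newer scan second (the newer report is the one that gets annotated). Run the script without arguments to confirm the exact usage for your version:

$ ./sfscandiff old_output_dir new_output_dir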

Some sites may require authentication; for simple HTTP credentials, you can try:

$ ./skipfish -A user:pass …other parameters…

Alternatively, if the site relies on HTTP cookies instead, log in in your browser or using a simple curl script, and then provide skipfish with a session cookie:

$ ./skipfish -C name=val …other parameters…

Other session cookies may be passed the same way, one per each -C option.
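As a rough sketch (the login URL, form field names, and cookie name below are hypothetical), you could capture a session cookie with curl and then hand its value to skipfish, adding -N so the scanner does not discard the session (see the next paragraph):

$ curl -s -o /dev/null -c cookies.txt -d 'user=alice&pass=secret' http://www.example.com/login
$ grep -i session cookies.txt
$ ./skipfish -C "SESSIONID=0123456789abcdef" -N -o output_dir http://www.example.com/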

Certain URLs on the site may log out your session; you can combat this in two ways: by using the -N option, which causes the scanner to reject attempts to set or delete cookies; or with the -X parameter, which prevents matching URLs from being fetched:

$ ./skipfish -X /logout/logout.aspx …other parameters…

The -X option is also useful for speeding up your scans by excluding /icons/, /doc/, /manuals/, and other standard, mundane locations along these lines. In general, you can use -X and -I (only spider URLs matching a substring) to limit the scope of a scan any way you like - including restricting it only to a specific protocol and port:

$ ./skipfish -I http://example.com:1234/ …other parameters…

A related function, -K, allows you to specify parameter names not to fuzz (useful for applications that put session IDs in the URL, to minimize noise).
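For example (the parameter name here is hypothetical), an application that carries its session token in the query string could be scanned with:

$ ./skipfish -K jsessionid …other parameters…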

Another useful scoping option is -D - allowing you to specify additional hosts or domains to consider in-scope for the test. By default, all hosts appearing in the command-line URLs are added to the list - but you can use -D to broaden these rules, for example:

$ ./skipfish -D test2.example.com -o output-dir http://test1.example.com/

…or, for a domain wildcard match, use:

$ ./skipfish -D .example.com -o output-dir http://test1.example.com/

In some cases, you do not want to actually crawl a third-party domain, but you trust the owner of that domain enough not to worry about cross-domain content inclusion from that location. To suppress warnings, you can use the -B option, for example:

$ ./skipfish -B .google-analytics.com -B .googleapis.com …other parameters…

By default, skipfish sends minimalistic HTTP headers to reduce the amount of data exchanged over the wire; some sites examine User-Agent strings or header ordering to reject unsupported clients, however. In such a case, you can use -b ie or -b ffox to mimic one of the two popular browsers; and -b phone to mimic iPhone.

When it comes to customizing your HTTP requests, you can also use the -H option to insert any additional, non-standard headers (including an arbitrary User-Agent value); or -F to define a custom mapping between a host and an IP (bypassing the resolver). The latter feature is particularly useful for not-yet-launched or legacy services.
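Combining these options (the header value, hostname, and IP below are placeholders), a scan that impersonates Firefox, adds a custom header, and pins a not-yet-launched virtual host to a staging IP might look like:

$ ./skipfish -b ffox -H Accept-Language=en-US -F staging.example.com=10.0.0.5 …other parameters…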

Some sites may be too big to scan in a reasonable timeframe. If the site features well-defined tarpits - for example, 100,000 nearly identical user profiles as a part of a social network - these specific locations can be excluded with -X or -S. In other cases, you may need to resort to other settings: -d limits crawl depth to a specified number of subdirectories; -c limits the number of children per directory, -x limits the total number of descendants per crawl tree branch; and -r limits the total number of requests to send in a scan.
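A sketch of such a trimmed-down scan (the numbers are arbitrary and should be tuned to the site):

$ ./skipfish -d 5 -c 64 -x 1024 -r 200000 …other parameters…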

An interesting option is available for repeated assessments: -p. By specifying a percentage between 1 and 100%, it is possible to tell the crawler to follow fewer than 100% of all links, and try fewer than 100% of all dictionary entries. This - naturally - limits the completeness of a scan, but unlike most other settings, it does so in a balanced, non-deterministic manner. It is extremely useful when you are setting up time-bound, but periodic assessments of your infrastructure. Another related option is -q, which sets the initial random seed for the crawler to a specified value. This can be used to exactly reproduce a previous scan to compare results. Randomness is relied upon most heavily in the -p mode, but also for making a couple of other scan management decisions elsewhere.
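For example, a 20% probabilistic crawl that can be reproduced later by reusing the same seed (both values chosen arbitrarily here):

$ ./skipfish -p 20 -q 0x10ab3c -o output_dir http://www.example.com/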

Some particularly complex (or broken) services may involve a very high number of identical or nearly identical pages. Although these occurrences are by default grayed out in the report, they still use up some screen estate and take a while to process on JavaScript level. In such extreme cases, you may use the -Q option to suppress reporting of duplicate nodes altogether, before the report is written. This may give you a less comprehensive understanding of how the site is organized, but has no impact on test coverage.

In certain quick assessments, you might also have no interest in paying any particular attention to the desired functionality of the site - hoping to explore non-linked secrets only. In such a case, you may specify -P to inhibit all HTML parsing. This limits the coverage and takes away the ability for the scanner to learn new keywords by looking at the HTML, but speeds up the test dramatically. Another similarly crippling option that reduces the risk of persistent effects of a scan is -O, which inhibits all form parsing and submission steps.

Some sites that handle sensitive user data care about SSL - and about getting it right. Skipfish may optionally assist you in figuring out problematic mixed content or password submission scenarios - use the -M option to enable this. The scanner will complain about situations such as http:// scripts being loaded on https:// pages - but will disregard non-risk scenarios such as images.

Likewise, certain pedantic sites may care about cases where caching is restricted on the HTTP/1.1 level, but no explicit HTTP/1.0 caching directive is given. Specifying -E on the command line causes skipfish to log all such cases carefully.

On some occasions, you may want to limit the requests per second, either to reduce the load on the target server or to avoid triggering DoS protection. The -l flag can be used to set this limit; the value given is the maximum number of requests per second you want skipfish to perform.

Scans typically should not take weeks. In many cases, you probably want to limit the scan duration so that it fits within a certain time window. This can be done with the -k flag, which allows the hours, minutes, and seconds to be specified in H:M:S format. Use of this flag can affect the scan coverage if the scan timeout occurs before testing all pages.
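For instance, to cap the scan at 20 requests per second and stop it after two and a half hours (both values are illustrative):

$ ./skipfish -l 20 -k 2:30:00 -o output_dir http://www.example.com/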

Lastly, in some assessments that involve self-contained sites without extensive user content, the auditor may care about any external e-mails or HTTP links seen, even if they have no immediate security impact. Use the -U option to have these logged.

Dictionary management is a special topic, and - as mentioned - is covered in more detail in dictionaries/README-FIRST. Please read that file before proceeding. Some of the relevant options include -S and -W (covered earlier), -L to suppress auto-learning, -G to limit the keyword guess jar size, -R to drop old dictionary entries, and -Y to inhibit expensive keyword.extension fuzzing.

Skipfish also features a form auto-completion mechanism in order to maximize scan coverage. The values should be non-malicious, as they are not meant to implement security checks - but rather, to get past input validation logic. You can define additional rules, or override existing ones, with the -T option (-T form_field_name=field_value, e.g. -T login=test123 -T password=test321 - although note that -C and -A are a much better method of logging in).

There is also a handful of performance-related options. Use -g to set the maximum number of connections to maintain, globally, to all targets (it is sensible to keep this under 50 or so to avoid overwhelming the TCP/IP stack on your system or on the nearby NAT / firewall devices); and -m to set the per-IP limit (experiment a bit: 2-4 is usually good for localhost, 4-8 for local networks, 10-20 for external targets, 30+ for really lagged or non-keep-alive hosts). You can also use -w to set the I/O timeout (i.e., skipfish will wait only so long for an individual read or write), and -t to set the total request timeout, to account for really slow or really fast sites.

Lastly, -f controls the maximum number of consecutive HTTP errors you are willing to see before aborting the scan; and -s sets the maximum length of a response to fetch and parse (longer responses will be truncated).

When scanning large, multimedia-heavy sites, you may also want to specify -e. This prevents binary documents from being kept in memory for reporting purposes, freeing up a lot of RAM.
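Putting these performance settings together, a possible tuning for a slow external target (the numbers are only a starting point, per the guidance above) might be:

$ ./skipfish -g 50 -m 10 -w 10 -t 30 -f 50 -s 200000 -e …other parameters…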

Further rate-limiting is available through third-party user mode tools such as trickle (http://monkey.org/~marius/trickle/), or kernel-level traffic shaping.
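As an illustration (the trickle flag values here are guesses; consult its man page), you could cap skipfish's bandwidth to roughly 64 KB/s in each direction:

$ trickle -s -d 64 -u 64 ./skipfish -o output_dir http://www.example.com/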

Oh, and real-time scan statistics can be suppressed with -u.

But seriously, how to run it?

A standard, authenticated scan of a well-designed and self-contained site (warns about all external links, e-mails, mixed content, and caching header issues), including gentle brute-force:

$ touch new_dict.wl

$ ./skipfish -MEU -S dictionaries/minimal.wl -W new_dict.wl \
    -C "AuthCookie=value" -X /logout.aspx -o output_dir \
    http://www.example.com/

Five-connection crawl, but no brute-force; pretending to be MSIE and caring less about ambiguous MIME or character set mismatches, and trusting example.com links:

$ ./skipfish -m 5 -L -W- -o output_dir -b ie -B example.com http://www.example.com/

Heavy brute force only (no HTML link extraction), limited to a single directory and timing out after 5 seconds:

$ touch new_dict.wl

$ ./skipfish -S dictionaries/complete.wl -W new_dict.wl -P -I http://www.example.com/dir1/ \
    -o output_dir -t 5 http://www.example.com/dir1/

For a short list of all command-line options, try ./skipfish -h. A quick primer on some of the particularly useful options is also given here: http://lcamtuf.blogspot.com/2010/11/understanding-and-using-skipfish.html

