

c0dejump edited this page Jan 27, 2022 · 7 revisions

Options

> General:
    -u URL                URL to scan [required]
    -f FILE_URL           file with multiple URLs to scan
    -t THREAD             Number of threads to use for URL Fuzzing. Default: 30
    --exclude EXCLUDE [EXCLUDE ...] Exclude a page, response code, or response size. (Example: --exclude 500,337b)
    --auto                Automatically adjust the number of threads based on the website's responses. Max: 30
    --update              Update the tool automatically
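
The `--exclude` option accepts a mixed, comma-separated list of response codes, byte sizes (with a `b` suffix), and page URLs. A minimal sketch of how such a list could be split into filter categories (the function name and categories are illustrative assumptions, not HawkScan's internals):

```python
# Hypothetical parser for mixed --exclude values such as "500,337b" or a URL.
# The three categories below are assumptions for illustration only.
def parse_excludes(raw):
    codes, sizes, urls = set(), set(), set()
    for item in raw.split(","):
        item = item.strip()
        if item.lower().endswith("b") and item[:-1].isdigit():
            sizes.add(int(item[:-1]))   # e.g. "337b" -> response-size filter
        elif item.isdigit():
            codes.add(int(item))        # e.g. "403"  -> status-code filter
        else:
            urls.add(item)              # e.g. a reference page URL
    return codes, sizes, urls
```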

> Wordlist Settings:
    -w WORDLIST           Wordlist used for fuzzing the desired website. Default: dichawk.txt
    -b                    Add prefix/suffix backup extensions during the scan. (Examples: exemple.com/~ex/, exemple.com/ex.php.bak...) Beware, this makes the scan take longer.
    -p PREFIX             Add a prefix to each wordlist entry before scanning

> Request Settings:             
    -H HEADER_            Modify a header. (Example: -H "cookie: test")
    -a USER_AGENT         Choose the user-agent. Default: Random
    --redirect            Follow redirect responses (301/302) during the scan
    --auth AUTH           HTTP authentication. (Example: --auth admin:admin)
    --timesleep TS        Define a sleep time/rate limit if the application is unstable during the scan
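
The request options map onto standard HTTP mechanics. A stdlib sketch of what `-H`, `--auth`, and `--timesleep` correspond to (the helper name is illustrative; HawkScan's own implementation may differ):

```python
import base64
import time
import urllib.request

# Illustrative only: build a request with a custom header (-H) and
# HTTP basic auth (--auth), then pause between requests (--timesleep).
def build_request(url, header=None, auth=None):
    req = urllib.request.Request(url)
    if header:                                  # e.g. -H "cookie: test"
        name, _, value = header.partition(":")
        req.add_header(name.strip(), value.strip())
    if auth:                                    # e.g. --auth admin:admin
        token = base64.b64encode(auth.encode()).decode()
        req.add_header("Authorization", "Basic " + token)
    return req

req = build_request("https://toto.com/", header="cookie: test", auth="admin:admin")
time.sleep(0.1)  # rate-limit pause before the next request
```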

> Tips:            
    -r                    Recursively scan directories/files
    -s SUBDOMAINS         Subdomain tester
    --js                  Try to find keys or tokens in JavaScript pages
    --nfs                 Skip the first step of the scan on the first run (WAF, vhosts, wayback, etc.)
    --ffs                 Force the first step of the scan on the first run (WAF, vhosts, wayback, etc.)
    --notify              Receive a notification when the scan finishes (only works on Linux)

> Export Settings:                    
    -o OUTPUT             Write output to site_scan.txt (by default in the website directory)
    -of OUTPUT_TYPE       Output file format. Available formats: json, csv, txt 
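
A sketch of what the three `-of` formats could look like for a list of results (the result fields and `export` helper are assumptions for illustration, not HawkScan's actual output schema):

```python
import csv
import io
import json

# Hypothetical result records; HawkScan's real fields may differ.
results = [{"url": "https://toto.com/admin/", "status": 200, "size": 1337}]

def export(results, fmt):
    if fmt == "json":
        return json.dumps(results, indent=2)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=["url", "status", "size"])
        writer.writeheader()
        writer.writerows(results)
        return buf.getvalue()
    # txt: one human-readable line per result
    return "\n".join(f"{r['status']} {r['size']}b {r['url']}" for r in results)
```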

Without options

python3 hawkscan.py -u https://toto.com/


With Javascript analysis

Check whether any discovered pages reference a JavaScript file, and search those files for API keys or other endpoints

python3 hawkscan.py -u https://toto.com/ --js
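
The idea behind `--js` can be sketched as grepping fetched JavaScript for key-like tokens and endpoint URLs. The patterns and function below are illustrative assumptions, not HawkScan's actual rules:

```python
import re

# Illustrative patterns only; a real scanner would use many more.
KEY_PATTERNS = [
    re.compile(r"""["']?api[_-]?key["']?\s*[:=]\s*["']([^"']+)["']""", re.I),
    re.compile(r"""https?://[^\s"']+/api/[^\s"']*"""),
]

def find_candidates(js_source):
    """Return every substring of the JS source matching a pattern above."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(js_source))
    return hits
```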


With Backup extensions

Scan with backup extensions (see config.py for the full list of backup extensions)

python3 hawkscan.py -u https://toto.com/ -b
Scan with all extensions in the config file: ['.db', '.swp', '.yml', '.xsd', '.xml', '.wml', '.bkp', '.rar', '.zip', '.7z', '.bak', '.bac', '.BAK', '.NEW', '.old', '.bkf', '.bok', '.cgi', '.dat', '.ini', '.log', '.key', '.conf', '.env', '_bak', '_old', '.bak1', '.json', '.lock', '.save', '.atom', '.action', '_backup', '.backup', '.config', '?stats=1', '/authorize/', '.md', '.gz', '.txt', '%01', '(1)', '.sql.gz']

python3 hawkscan.py -u https://toto.com/ -b min
Scan with a minimal set of backup extensions: ['.bkp', '.bak', '.bac', '.BAK', '.NEW', '.old', '_bak', '_old', '.bak1', '_backup', '.backup']
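
The `-b` option works by deriving candidate names from each discovered path. A minimal sketch of such derivation, using a subset of the "min" extension list above (the helper is hypothetical, not HawkScan's code):

```python
# Illustrative: derive backup-file candidates from a path, covering both the
# suffix form (ex.php.bak) and the ~prefix form (~ex/) mentioned for -b.
MIN_EXTS = [".bkp", ".bak", ".old", "_bak", "_old", ".backup"]

def backup_candidates(path):
    name = path.rsplit("/", 1)[-1]                 # e.g. "ex.php"
    base = path.rsplit("/", 1)[0] + "/"            # e.g. "exemple.com/"
    candidates = [path + ext for ext in MIN_EXTS]  # ex.php.bak, ex.php.old, ...
    candidates.append(base + "~" + name.split(".")[0] + "/")  # ~ex/
    return candidates
```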


With exclude option

Scan with exclusions; the available exclusion types are:

python3 hawkscan.py -u https://toto.com/ --exclude 403
Exclude all pages with a 403 response code

python3 hawkscan.py -u https://toto.com/ --exclude 1337b
Exclude all pages whose response size equals 1337 bytes

python3 hawkscan.py -u https://toto.com/ --exclude https://toto.com/toto.php
Computes a resemblance percentage between the "toto.php" page and the other pages, and excludes pages that closely resemble it
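
A sketch of how such a resemblance percentage could be computed with the standard library (difflib here is an assumption for illustration; HawkScan may use a different similarity metric):

```python
import difflib

# Illustrative: compare a reference page's body against another response body
# and report a resemblance percentage, as the URL form of --exclude describes.
def resemblance(reference_body, other_body):
    ratio = difflib.SequenceMatcher(None, reference_body, other_body).ratio()
    return round(ratio * 100, 1)

# A page nearly identical to the reference scores close to 100
# and could then be dropped from the results.
```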

python3 hawkscan.py -u https://toto.com/ --exclude 403,1337b
Exclude all pages with a 403 response code and all pages of 1337 bytes

