ParamSpider: Mining URLs from dark corners of Web Archives for bug hunting, fuzzing, and further probing.
ParamSpider is designed for security researchers and bug bounty hunters: it discovers hidden or forgotten URLs for a target domain by mining web archives such as the Wayback Machine, and filters out irrelevant links so users can focus on the URLs worth scanning, fuzzing, or probing further.
Ensure Python and pip are installed before installing ParamSpider. A proxy can be used to route requests around network restrictions or to anonymize them. Built-in filtering reduces noise, but discovered URLs should still be checked for relevance. Contributions are welcome.
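As a quick preflight, the prerequisites above can be checked from the shell (a minimal sketch; exact version numbers will vary by system):

```shell
# Verify that Python 3 and pip are available before installing ParamSpider.
python3 --version
python3 -m pip --version
```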
git clone https://github.com/devanshbatham/paramspider
cd paramspider
pip install .
Install ParamSpider from its GitHub repository
paramspider -d example.com
Discover URLs for a single domain
paramspider -l domains.txt
Discover URLs for multiple domains listed in a file
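The input file for -l is a plain list of target domains, one per line (the domains below are placeholders):

```shell
# Create a one-domain-per-line input file for the -l option.
printf '%s\n' example.com example.org example.net > domains.txt
cat domains.txt
```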
paramspider -d example.com -s
Stream discovered URLs directly to the terminal
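Streaming makes it easy to chain ParamSpider with standard Unix tools. The pipeline below is a sketch that keeps only URLs containing a redirect parameter and de-duplicates them; the printf sample stands in for real `paramspider -d example.com -s` output:

```shell
# Simulated stream of discovered URLs (stand-in for `paramspider -d example.com -s`).
printf '%s\n' \
  'https://example.com/page?id=FUZZ' \
  'https://example.com/login?redirect=FUZZ' \
  'https://example.com/login?redirect=FUZZ' |
  grep 'redirect=' | sort -u
# -> https://example.com/login?redirect=FUZZ
```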
paramspider -d example.com --proxy '127.0.0.1:7890'
Set up a proxy for web requests
paramspider -d example.com -p '"><h1>reflection</h1>'
Add a custom placeholder for URL parameter values instead of the default 'FUZZ'
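Whichever placeholder is used, it can later be swapped for a concrete payload with sed before feeding the URLs to another tool (a sketch on a sample URL; FUZZ is ParamSpider's default placeholder):

```shell
# Replace the FUZZ placeholder with a test payload in each URL.
printf '%s\n' 'https://example.com/search?q=FUZZ&page=FUZZ' |
  sed 's/FUZZ/test123/g'
# -> https://example.com/search?q=test123&page=test123
```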