A blazing-fast Rust CLI tool that scans codebases to detect AI/LLM usage, audit security vulnerabilities, and enforce cost budgets.
aiscan helps developers and security teams inventory AI and large language model (LLM) calls across a codebase, identify security risks aligned with the OWASP Top 10 for LLM Applications, and enforce usage cost limits. It is built to slot AI security checks into CI/CD pipelines so that AI implementations stay safe and cost-effective.
Set the configuration limits (tokens, requests, spend) appropriately to avoid budget overruns, and integrate aiscan into CI/CD pipelines to automate AI security checks. Pre-built binaries are planned; until they ship, building from source requires Rust and Cargo. Parsing is handled by tree-sitter grammars, giving broad multi-language coverage.
git clone https://github.com/haasonsaas/aiscan.git
cd aiscan
cargo install --path .
For pre-built binaries (coming soon), download the appropriate binary for your OS from the releases page
Make the binary executable (e.g., chmod +x aiscan on macOS/Linux)
aiscan init
Initialize a default configuration file (.aiscan.toml) in your project
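For illustration only, the budget limits in .aiscan.toml might look something like the sketch below; the section and key names here are assumptions rather than the tool's documented schema, so check the file that aiscan init actually generates.

# Illustrative sketch of budget limits; key names are assumed, not documented
[budget]
max_tokens_per_request = 4096    # cap tokens per LLM call
max_requests_per_day = 1000      # cap request volume
max_monthly_spend_usd = 100.0    # cap estimated spend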
aiscan scan .
Scan the current directory to inventory AI/LLM usage
aiscan scan src/
Scan a specific directory for AI/LLM calls
aiscan scan . --output inventory.json
Scan and save the AI usage inventory results to a JSON file
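The exact shape of the exported inventory is defined by aiscan itself; purely as a hypothetical illustration of what one recorded AI call could contain (every field name below is invented for this example, not taken from the tool):

{
  "file": "src/chat.rs",
  "line": 42,
  "provider": "openai",
  "model": "gpt-4o",
  "call": "chat.completions.create"
}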
aiscan audit .
Run a comprehensive security audit on the current directory
aiscan audit . --output report.json
Run a security audit and save the detailed report as JSON
aiscan audit . --json
Run a security audit and output results in JSON format
aiscan ci . --json
Run the AI security scan in CI mode, with machine-readable JSON output and an exit code that reflects pass/fail status
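A minimal shell sketch of a CI step, assuming that --json writes the report to stdout and that a nonzero exit code signals findings; both are assumptions inferred from the descriptions above, not confirmed behavior:

# Fail the pipeline when aiscan reports issues (assumes nonzero exit = findings)
if ! aiscan ci . --json > aiscan-report.json; then
  echo "aiscan flagged AI security or budget issues; see aiscan-report.json"
  exit 1
fi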