A command-line tool that gives developers instant code reviews from state-of-the-art Large Language Models (LLMs). Supports both local models (Ollama) and cloud providers (OpenAI, Hugging Face, Azure). Improve code quality, catch bugs, and receive AI-driven feedback effortlessly.
CRLLM is designed for developers who want to automate and enhance their code review process by leveraging advanced LLMs. It helps catch bugs, improve readability, and suggest best practices early in the development workflow, reducing manual review effort and accelerating DevSecOps pipelines.
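To make the idea above concrete, here is a minimal, hypothetical sketch of what an LLM-driven review step looks like: read a source file, wrap it in a review instruction, and hand it to a model. The prompt wording and the `call_model` stub are illustrative assumptions, not crllm's actual implementation, which would dispatch to Ollama or a cloud provider.

```python
# Minimal sketch of an LLM-driven review step (illustrative, not crllm internals).
from pathlib import Path


def build_review_prompt(path: str) -> str:
    """Wrap a source file in a code-review instruction for an LLM."""
    code = Path(path).read_text()
    return (
        "You are a senior reviewer. Point out bugs, readability issues, "
        f"and best-practice violations in the following file ({path}):\n\n"
        f"{code}"
    )


def call_model(prompt: str) -> str:
    # Placeholder: a real tool would send the prompt to a local Ollama
    # model or a cloud provider (OpenAI, Hugging Face, Azure) here.
    return "Review: looks fine, minor comments only."


if __name__ == "__main__":
    sample = Path("example.py")
    sample.write_text("def add(a, b):\n    return a + b\n")
    print(call_model(build_review_prompt("example.py")))
```

The value of a tool like crllm is that this loop, plus provider selection and configuration, is handled for you behind a single command.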
To use local models, Ollama must be installed; otherwise, API keys for cloud providers are required. Enabling RAG (Retrieval-Augmented Generation) support enhances reviews with additional source context but requires extra configuration. Use the .crllm_ignore file to exclude files or directories from reviews, similar to .gitignore.
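Since .crllm_ignore works like .gitignore, a project might use an ignore file along these lines; the specific patterns below are illustrative, and the exact pattern features supported may differ:

```
# Exclude third-party and generated code from reviews
venv/
build/
*.min.js
# Skip documentation
docs/
```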
Ensure Python 3.8+ is installed
Install pipx by following the instructions at https://pipx.pypa.io/stable/installation/
Install Ollama from https://ollama.com/download if running models locally
Install crllm from GitHub using: pipx install git+https://github.com/lukasrump/crllm.git
Alternatively, install crllm from PyPI using: pipx install crllm
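The installation steps above, collected into one shell session (pick only one of the two crllm sources):

```shell
# Prerequisites: Python 3.8+ and pipx must already be installed (see links above).
# Install crllm in an isolated environment, either from GitHub...
pipx install git+https://github.com/lukasrump/crllm.git
# ...or from PyPI:
pipx install crllm
```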
Perform a code review on a specific file or Git repository:
crllm path/to/your/codefile.py
Initialize project configuration interactively in the current directory:
crllm -i .
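Putting the two documented commands together, a first session in a project might look like this (the file path is a placeholder for one of your own source files):

```shell
crllm -i .                       # set up the project configuration interactively
crllm path/to/your/codefile.py   # request a review of a single file
```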