NFGuard Documentation

Version 0.1.0 (Pre-Release) — February 17, 2026

🚧 Pre-Release Notice
This is an early pre-release version. NFGuard is functional but still evolving — you may encounter bugs or incomplete features. We are actively improving the tool with every update.

Found a bug? Have a suggestion? Please open an issue on GitHub — your feedback helps us build a better tool for the entire community.

This project is made with ❤️ for the cybersecurity community and enthusiasts worldwide.

Overview

NFGuard is an AI-powered security CLI that uses a multi-agent architecture to orchestrate 34+ security tools. You describe what you want in natural language, and the AI orchestrator delegates tasks to specialized agents (Recon, Web Testing, Vulnerability Scanning, Reporting), each with access to the right tools for the job.

All tools are bundled as pre-compiled binaries — no manual installation of individual tools required. NFGuard works with any OpenAI-compatible LLM provider, including local models.

Requirements

  • Operating System: Linux x86_64 (amd64) — native Linux or WSL (Windows Subsystem for Linux)
  • Root access: Required for installation (sudo)
  • LLM Provider: Any OpenAI-compatible API endpoint (local or cloud)
  • Disk space: ~650MB (bundled security binaries + Python runtime)

Platform Compatibility
This release (v0.1.0) supports Linux x86_64 and WSL. WSL runs native Linux binaries, so NFGuard works identically on WSL and on a regular Linux system. macOS and Linux ARM64 builds are planned for future releases.

Installation

Quick Install (one command)

Copy and paste this into your terminal:

curl -sL https://raw.githubusercontent.com/dolutech/nfguard-cli/main/install.sh | sudo bash

Then configure your provider and launch:

nano ~/.nfguard/providers.yaml
nfguard

Manual Installation

If you prefer to download and inspect the script first:

# 1. Download the installer
curl -sL https://raw.githubusercontent.com/dolutech/nfguard-cli/main/install.sh -o install.sh

# 2. Inspect it (optional)
less install.sh

# 3. Run the installer
sudo bash install.sh

# 4. Configure and launch
nano ~/.nfguard/providers.yaml
nfguard

What the installer does

  • Downloads NFGuard v0.1.0 from GitHub Releases
  • Extracts NFGuard to /opt/nfguard/
  • Creates a symlink at /usr/local/bin/nfguard (available system-wide)
  • Creates a config directory at ~/.nfguard/ with default configuration templates
  • Sets secure file permissions (600) on config files

First Run

On first launch, if you haven't configured a provider yet, NFGuard will run an interactive setup wizard that guides you through:

  1. Entering your provider's base URL
  2. Entering your API key
  3. Selecting a model from the provider's available list

The wizard saves everything to ~/.nfguard/ automatically.

Configure a Provider

NFGuard works with any LLM provider that exposes an OpenAI-compatible API endpoint. Configuration is done in ~/.nfguard/providers.yaml:

providers:
  my-provider:
    base_url: https://api.example.com/v1
    api_key: your-api-key-here
    default_model: model-name

Security Note
The providers.yaml file contains your API keys and has restricted permissions (chmod 600). Never share this file or commit it to version control.
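If you want to confirm those permissions (for example after restoring a backup), a minimal check looks like this. It uses a temporary stand-in file so it can run anywhere; on a real install, point CONFIG at ~/.nfguard/providers.yaml instead:

```shell
# Quick permission check, shown against a temporary stand-in file.
CONFIG="$(mktemp)"        # stand-in for ~/.nfguard/providers.yaml
chmod 600 "$CONFIG"       # owner read/write only, what the installer sets
stat -c '%a' "$CONFIG"    # prints the octal mode: 600
rm -f "$CONFIG"
```

If stat prints anything other than 600, re-run chmod 600 on the file before using it.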

Local LLM (Recommended)

We strongly recommend using a local LLM. Running a local model gives you maximum privacy: your security data never leaves your machine. It also eliminates API costs and latency to external servers.

NFGuard works great with local LLM servers that expose an OpenAI-compatible endpoint:

Ollama

# Install Ollama and pull a model
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.1:70b

# Configure in ~/.nfguard/providers.yaml
providers:
  ollama:
    base_url: http://localhost:11434/v1
    api_key: ollama
    default_model: llama3.1:70b

LM Studio

providers:
  lmstudio:
    base_url: http://localhost:1234/v1
    api_key: lm-studio
    default_model: your-loaded-model

Recommended local models

| Model | Notes |
|---|---|
| GPT-OSS 120B | Strong open-source model for tool-use and reasoning |
| Minimax M2.5 | Excellent performance for multi-step security workflows |
| Qwen 3.5 397B-A17B | MoE architecture: high capability with efficient inference |
| GLM-4.7-Flash | Fast and lightweight; good for machines with limited VRAM |

Cloud Providers

If you prefer cloud-based models, any OpenAI-compatible provider works.

Chutes.ai (Recommended)

We recommend Chutes.ai as a cloud provider. It offers a large catalog of open-weight models, decentralized infrastructure, and competitive pricing.

providers:
  chutes:
    base_url: https://llm.chutes.ai/v1
    api_key: your-chutes-api-key
    default_model: openai/gpt-oss-120b-TEE

Browse available models at chutes.ai and use the model ID in your config (e.g., openai/gpt-oss-120b-TEE).

Disclosure
Our recommendation of Chutes.ai is not sponsored. We recommend it based on its open-weight model catalog, decentralized architecture, and cost-effectiveness. You are free to use any OpenAI-compatible provider.

OpenRouter

providers:
  openrouter:
    base_url: https://openrouter.ai/api/v1
    api_key: sk-or-xxxxxxxxxxxxxxxxxxxx
    default_model: z-ai/glm-5

Anthropic

providers:
  anthropic:
    base_url: https://api.anthropic.com/v1
    api_key: sk-ant-xxxxxxxxxxxxxxxxxxxxxxxx
    default_model: claude-sonnet-4-5-20250929

OpenAI

providers:
  openai:
    base_url: https://api.openai.com/v1
    api_key: sk-xxxxxxxxxxxxxxxxxxxxxxxx
    default_model: gpt-5.2

Other Providers

Any service with an OpenAI-compatible /v1/chat/completions endpoint will work. Examples: Together AI, Groq, Fireworks AI, DeepInfra, etc.

Important: Model Guardrails
Some proprietary models (e.g., GPT, Claude, Gemini) have built-in safety guardrails that may refuse to execute certain security testing tasks, even when you have explicit authorization to test the target. For this reason, we strongly recommend open-weight models (GPT-OSS, Qwen, GLM, Minimax, etc.), which give you full control over model behavior. If you need a model perfectly tailored to your security workflow, consider fine-tuning an open-weight model for your specific use case.

Change the Model

In config file

Edit ~/.nfguard/config.yaml:

# Default model used by the orchestrator
default_model: anthropic/claude-sonnet-4-20250514

At runtime (in the REPL)

# List available models from your provider
/models

# Switch to a different model
/model gpt-4o

# Switch provider
/provider openai

Config Files

All configuration is stored in ~/.nfguard/:

| File | Purpose |
|---|---|
| config.yaml | General settings: default provider, model, log level |
| providers.yaml | LLM provider credentials (API keys, base URLs) |
| mcp.yaml | MCP server connections (optional) |
| skills/ | Custom YAML skill definitions |
| agents/specialists/ | Custom specialist agent configs |

Basic Usage

Start NFGuard

# Launch the interactive REPL
nfguard

# Check version
nfguard --version

# Show help
nfguard --help

Natural language commands

Just describe what you want. The AI orchestrator will figure out which tools and agents to use:

# Reconnaissance
nfguard> Run a full recon on target.com
nfguard> Find all subdomains of example.com
nfguard> What ports are open on 192.168.1.0/24?

# Vulnerability scanning
nfguard> Scan target.com for vulnerabilities
nfguard> Check if this Apache 2.4.49 version has known CVEs
nfguard> Test example.com/login for SQL injection

# Web testing
nfguard> Fuzz directories on https://app.example.com
nfguard> Find hidden parameters on this endpoint
nfguard> Test for XSS on the search page

# Reporting
nfguard> Generate a PDF report of our findings
nfguard> Create an executive summary of this assessment

Slash Commands

Quick commands available in the REPL:

| Command | Description |
|---|---|
| /help | Show available commands |
| /exit | Exit NFGuard |
| /clear | Clear conversation history |
| /compact | Manually compact context (summarize conversation) |
| /export [format] [file] | Export conversation (markdown, json, html) |
| /providers | List configured providers |
| /provider <name> | Switch active provider |
| /models | List available models |
| /model <name> | Switch active model |
| /tools | List available security tools |
| /agents | List specialist agents |
| /skills | List available skills/workflows |
| /mcp | Show MCP server connections |
| /context [tier\|number] | Show or set context window size (64k/131k/200k/400k/1m) |
| /settings | Configure tool API keys, default timeout, and max retries |
| /damage-control on\|off | Toggle bash guardrails |
| /create-agents | Create a custom specialist agent |

Built-in Skills

Skills are pre-built workflows that chain multiple tools together with a single command:

/full-recon <target>

Run complete reconnaissance on a target:

nfguard> /full-recon example.com

# Executes: WHOIS → DNS records (A, AAAA, MX, NS, TXT) → Port scan (top 1000)

/vuln-check <target>

Check a target for known vulnerabilities:

nfguard> /vuln-check Apache 2.4.49
nfguard> /vuln-check target.com

# Executes: Shodan lookup → Nuclei scan (high/critical severity)

/web-audit <url>

Run a web application security audit:

nfguard> /web-audit https://app.example.com

# Executes: Nuclei full scan → Gobuster directory enumeration

Custom Skills

The AI can create new skills during a conversation and save them as YAML files in ~/.nfguard/skills/, making them immediately available as slash commands.
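As an illustration only (NFGuard's skill schema is not documented in this guide, so every field name below is hypothetical), a saved skill file might look roughly like:

```yaml
# ~/.nfguard/skills/quick-headers.yaml
# Hypothetical example: field names are illustrative, not NFGuard's documented schema.
name: quick-headers
description: Probe a host and flag common HTTP misconfigurations
steps:
  - tool: httpx
    args: "-title -tech-detect {{target}}"
  - tool: nuclei
    args: "-t http/misconfiguration/ -u {{target}}"
```

Once a file like this exists in ~/.nfguard/skills/, it would appear as a /quick-headers slash command.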

/settings — Configuration

The /settings command opens an interactive menu to configure:

  • Tool API Keys — Set API keys for tools that require them (Shodan, ProjectDiscovery PDCP). Keys are stored securely in ~/.nfguard/tools/ with restricted permissions (0600).
  • Default Tool Timeout — Set the maximum time a tool can run before being killed. Options: 30s, 60s, 120s (default), 300s.
  • Max Tool Retries — Set how many times a failed tool is automatically retried. Options: 1x to 5x (default: 2x).

nfguard> /settings

# Opens interactive menu:
#  1  Tool API Keys
#  2  Default Tool Timeout (120s)
#  3  Max Tool Retries (2)
#  0  Back

/context — Context Window

The /context command shows or sets the context window size used for conversation history management:

nfguard> /context
# Shows current context window and available tiers

nfguard> /context 200k
# Set to 200,000 tokens

nfguard> /context 1m
# Set to 1,048,576 tokens (for large-context models)

Available tiers: 64k, 131k, 200k, 400k, 1m. You can also pass a custom integer value.

Agent Delegation

The orchestrator automatically decides which specialist agent to use based on your request. You can also explicitly mention an agent:

# Automatic delegation (orchestrator decides)
nfguard> Scan target.com for vulnerabilities

# Explicit agent reference
nfguard> @recon enumerate all subdomains of example.com
nfguard> @web-testing test the login form for SQL injection
nfguard> @reporting generate a PDF report of our findings

| Agent | Focus | Tools |
|---|---|---|
| ReconAgent | Reconnaissance, DNS, OSINT | 18 tools |
| WebTestingAgent | Web app security testing | 12 tools |
| VulnScanningAgent | CVE scanning, severity analysis | 3 tools |
| ReportingAgent | PDF/DOCX report generation | 1 tool |

MCP Server Mode

NFGuard can run as a Model Context Protocol (MCP) server, exposing all 34+ security tools for use by any MCP-compatible client (e.g., Claude Desktop):

nfguard serve

This starts a JSON-RPC server on stdin/stdout. Configure it in your MCP client as a local command server.
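As a sketch, registering NFGuard in Claude Desktop's claude_desktop_config.json would use the standard mcpServers block (the entry name "nfguard" is arbitrary; the command assumes the /usr/local/bin/nfguard symlink is on your PATH):

```json
{
  "mcpServers": {
    "nfguard": {
      "command": "nfguard",
      "args": ["serve"]
    }
  }
}
```

After restarting the client, NFGuard's tools should appear alongside the client's other MCP servers.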

Session Export

Export your conversation and findings for documentation:

# Export as Markdown (default)
/export

# Export as JSON
/export json findings.json

# Export as HTML
/export html report.html

All Tools Reference

Use /tools in the REPL to see which tools are installed and ready. Full list:

| Tool | Category | Description |
|---|---|---|
| subfinder | Recon | Passive subdomain discovery |
| amass | Recon | Attack surface mapping (OWASP) |
| theharvester | Recon | OSINT gathering (emails, hosts) |
| shodan | Recon | Internet-wide device search |
| uncover | Recon | Multi-engine search (Shodan, Censys...) |
| alterx | Recon | Subdomain wordlist permutation |
| asnmap | Recon | ASN to CIDR range mapping |
| cdncheck | Recon | CDN/WAF/cloud detection |
| subzy | Recon | Subdomain takeover detection |
| whois | Recon | Domain registration lookup |
| dnsx | DNS | Fast DNS resolution (all types) |
| doggo | DNS | Modern DNS query with JSON |
| naabu | Network | Fast port scanner |
| tlsx | Network | TLS/SSL certificate scanner |
| mapcidr | Network | CIDR range manipulation |
| katana | Web | Web crawler (headless browser) |
| gau | Web | Known URLs from archives |
| waybackurls | Web | Historical URLs (Wayback Machine) |
| unfurl | Web | URL component extraction |
| anew | Utility | Line deduplication |
| httpx | Web | HTTP probing & tech detection |
| webfetch | Web | In-process HTTP client (SSRF-safe) |
| gobuster | Fuzzing | Directory/file brute-forcing |
| ffuf | Fuzzing | Fast web fuzzer |
| feroxbuster | Fuzzing | Recursive content discovery |
| nuclei | VulnScan | Template-based vulnerability scanner |
| dalfox | VulnScan | XSS scanner |
| crlfuzz | VulnScan | CRLF injection scanner |
| sqlmap | VulnScan | SQL injection detection |
| arjun | VulnScan | Hidden parameter discovery |
| interactsh | VulnScan | Out-of-band interaction (blind vulns) |
| reportgen | Reporting | PDF/DOCX report generator |
| notify | Reporting | Slack/Discord/Telegram notifications |

Uninstall

To completely remove NFGuard from your system:

# Remove the installation (keeps your configs)
curl -sL https://raw.githubusercontent.com/dolutech/nfguard-cli/main/install.sh | sudo bash -s -- --uninstall

# If you also want to remove your configuration
rm -rf ~/.nfguard/

The uninstaller removes /opt/nfguard/ and the /usr/local/bin/nfguard symlink but preserves your ~/.nfguard/ configuration directory.

License

NFGuard is released under the MIT License. It is a community project — currently distributed as a compiled binary.

Use responsibly and only on systems you have explicit authorization to test. Unauthorized security testing is illegal in most jurisdictions.

Contact & Feedback

For questions, bug reports, or feature requests, please open an issue on the GitHub repository.

NFGuard is a community project made with ❤️ for cybersecurity professionals and enthusiasts. Every issue you open and every suggestion you share helps make this tool better for everyone.