The last post walked through a 5-script workflow that turned raw show command output into AI-generated network documentation. It worked. It also involved pasting a prompt into a browser chat window, waiting, and copy-pasting the result back out.
Fine for a one-off. Not fine when your manager asks for docs on a site, or when you want this to run on a schedule, or when a teammate needs to run it without you standing over their shoulder. So this post is the sequel. One script. One command. SSH to finished runbook with no human in the middle.
The Problem With The Manual Workflow
The 5-script workflow had three friction points that kept it from being a real tool:
- Copy-pasting into a browser chat does not scale past a single run
- A scheduled job cannot paste anything into a browser
- Every manual handoff is a place for something to go wrong or be forgotten
The fix is obvious in hindsight: collapse the whole thing into one script that calls the provider API directly and writes the finished runbook to disk.
The Pipeline
One script, six stages. You run it with a single command.
python api-doc-pipeline.py
Under the hood it does this:
- Collect – Netmiko SSHes to every device in your inventory and runs role-based show commands
- Redact – strips credentials, IPs, hostnames, MACs, serials, usernames, certs, timestamps, versions, interface descriptions, and VLAN names before anything leaves your network
- Assemble – builds the full prompt with a token estimate and context window check
- Call the API – hits Claude, OpenAI, or Gemini depending on the --provider flag
- Restore – swaps placeholders back to real values in the returned runbook
- Diagram (optional) – generates a draw.io topology from CDP neighbor data, fully local, no extra API call
Output lands at output/<timestamp>/runbook.md with real hostnames, real IPs, structured sections for inventory, interfaces, routing, and severity-tagged findings.
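The six stages above can be sketched as a single function. Everything here is a stand-in: the collect, redact, and API-call logic is stubbed, and none of these names are the actual script's internals.

```python
# Hypothetical sketch of the pipeline flow; all function bodies are stubs.
from datetime import datetime
from pathlib import Path


def run_pipeline(inventory, provider="claude"):
    # Collect: in the real script, Netmiko SSHes to each device
    raw = {d["name"]: f"show output from {d['name']}" for d in inventory}
    # Redact: sensitive values become placeholders before leaving the network
    redacted = {k: v.replace("core-", "HOST_") for k, v in raw.items()}
    # Assemble: one prompt from all device output
    prompt = "\n\n".join(redacted.values())
    # Call the API (stubbed here; the real script hits the provider SDK)
    draft = f"# Runbook ({provider})\n{prompt}"
    # Restore: placeholders swapped back to real values
    runbook = draft.replace("HOST_", "core-")
    out_path = Path("output") / datetime.now().strftime("%Y%m%d-%H%M%S")
    return out_path / "runbook.md", runbook
```

The point is the shape: each stage is a pure handoff to the next, which is what makes the whole thing runnable unattended.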
The Inventory File
Devices live in a YAML file, not hardcoded in Python:
devices:
  - name: core-rtr-01
    host: 192.168.1.250
    device_type: cisco_ios
    role: router
role: router runs router commands. role: switch runs switch commands. device_type is any Netmiko platform string. If you have never touched YAML, it is worth 10 minutes of your time. Ansible, Docker Compose, and Kubernetes all use the same format.
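For illustration, here is roughly how one inventory entry maps onto Netmiko connection parameters. The function name is hypothetical, but the field names match the YAML above and the keyword arguments match Netmiko's ConnectHandler.

```python
# Illustrative mapping from an inventory entry to Netmiko kwargs;
# credentials come from the NETDEV_* environment variables.
import os


def to_netmiko_params(entry):
    return {
        "device_type": entry["device_type"],  # any Netmiko platform string
        "host": entry["host"],
        "username": os.environ.get("NETDEV_USER", ""),
        "password": os.environ.get("NETDEV_PASS", ""),
    }


entry = {"name": "core-rtr-01", "host": "192.168.1.250",
         "device_type": "cisco_ios", "role": "router"}
params = to_netmiko_params(entry)
# In the real script: ConnectHandler(**params)
```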
The API Key
One important thing to know before you create an account: API access is a separate paid developer account, distinct from Claude Pro, ChatGPT Plus, or Gemini Advanced. Consumer chat subscriptions do not include API access. For the lab-scale runs in this post you are looking at cents per run, but you need to know that going in.
| Provider | Console | Env var |
|---|---|---|
| Claude (default) | https://console.anthropic.com/ | ANTHROPIC_API_KEY |
| OpenAI | https://platform.openai.com/api-keys | OPENAI_API_KEY |
| Gemini | https://aistudio.google.com/apikey | GEMINI_API_KEY |
Set the key in ~/.bashrc so it persists across terminal sessions:
echo 'export ANTHROPIC_API_KEY="sk-ant-..."' >> ~/.bashrc
source ~/.bashrc
Same concept as a Windows environment variable. Set it once, it is always there.
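A script like this should fail fast when the key is missing rather than die mid-run. A sketch of that check (the function and dict names are illustrative; the env var names are the ones from the table above):

```python
# Fail fast if the provider's API key env var is not set.
import os
import sys

PROVIDER_KEYS = {
    "claude": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "gemini": "GEMINI_API_KEY",
}


def require_key(provider):
    var = PROVIDER_KEYS[provider]
    key = os.environ.get(var)
    if not key:
        sys.exit(f"{var} is not set; export it in ~/.bashrc first")
    return key
```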
Running It Unattended
The point of automating this was scheduled runs. Three environment variables let the script run without any interactive prompts:
| Env var | Purpose |
|---|---|
| NETDEV_USER | SSH username |
| NETDEV_PASS | SSH password |
| NETDEV_ENABLE | Enable secret |
One gotcha worth calling out. If your environment does not use enable secrets, set export NETDEV_ENABLE="". An empty string skips the enable step. An unset variable triggers the interactive prompt, which breaks an unattended run.
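The empty-versus-unset distinction maps directly onto `os.environ.get`. A minimal sketch (the function name is hypothetical, not the script's actual internals):

```python
# Unset NETDEV_ENABLE -> interactive prompt; empty string -> skip enable.
import getpass
import os


def get_enable_secret():
    val = os.environ.get("NETDEV_ENABLE")
    if val is None:
        # Variable not set at all: prompt interactively,
        # which is exactly what breaks an unattended run.
        return getpass.getpass("Enable secret: ")
    # Set but empty ("") means no enable step; return None to skip it.
    return val or None
```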
Linux or WSL2 cron job, every Monday at 6 AM:
0 6 * * 1 cd ~/netops-toolkit/scripts/netmiko/api-automation-pipeline && ./venv/bin/python api-doc-pipeline.py >> ~/pipeline.log 2>&1
Windows Task Scheduler works too. Point a Basic Task at wsl.exe with arguments like:
-d Ubuntu -- bash -lc "cd /home/you/netops-toolkit/scripts/netmiko/api-automation-pipeline && ./venv/bin/python api-doc-pipeline.py"
Set the env vars in ~/.bashrc and the WSL shell picks them up.
Picking a Model
Defaults work out of the box. Use --model if you want to upgrade or cut cost.
| Provider | Default | Upgrade | Budget |
|---|---|---|---|
| Claude | claude-sonnet-4-6 | claude-opus-4-6 | – |
| OpenAI | gpt-5.4-mini | gpt-5.4 | gpt-5.4-nano |
| Gemini | gemini-3-flash | gemini-3.1-pro-preview | – |
On a 3-device lab, Sonnet 4.6 produced the best runbook in testing. Severity-tagged findings, the most detailed management services section, and it caught a failed SSH auth buried in syslog. Opus 4.6 was close. Sonnet 4.2 was noticeably weaker.
Adapting For Your Network
Three things change when you move this off a lab.
Vendor coverage. The built-in command sets assume Cisco IOS and IOS-XE output. Netmiko supports dozens of platforms, so the connection layer is fine. The command set is what needs work. Use the --commands flag to point the script at a YAML file with role-specific commands for your gear.
Credential handling. ~/.bashrc env vars are plain text on disk. That is fine for a personal home lab. In a team environment or a CI/CD pipeline, use a proper secrets manager. AWS Secrets Manager, HashiCorp Vault, Azure Key Vault, all fine choices. This applies to both the API key and the device credentials.
Redaction defaults. Full redaction is on by default. Everything leaving your network is stripped to placeholders before the API call, then restored from a map.json file that never leaves your machine. Power users can disable specific categories with --no-redact-* flags. --skip-redaction disables everything, which is fine for a lab and a bad idea anywhere else.
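A minimal version of the redact/restore round trip, shown for IP addresses only. The function names and the `<IP_n>` placeholder format are assumptions; the real script covers many more categories and persists the mapping to map.json.

```python
# Sketch: replace IPs with stable placeholders, then restore them later.
import re

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")


def redact_ips(text):
    seen = {}     # real IP -> placeholder (same IP always gets same token)
    mapping = {}  # placeholder -> real IP (this is what map.json would hold)

    def repl(match):
        ip = match.group(0)
        if ip not in seen:
            placeholder = f"<IP_{len(seen) + 1}>"
            seen[ip] = placeholder
            mapping[placeholder] = ip
        return seen[ip]

    return IP_RE.sub(repl, text), mapping


def restore(text, mapping):
    for placeholder, real in mapping.items():
        text = text.replace(placeholder, real)
    return text
```

Only the redacted text and the prompt go to the API; the mapping stays on disk, so the round trip is lossless from the model's point of view without the model ever seeing a real address.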
Honest Limitations
This is a real tool, not a magic box. A few things to know before you point it at production.
Context window limits. The script assembles every device’s output into a single prompt. For a 3-device lab you are at about 12K tokens, well within any modern model. For 200 devices you are way over. The script warns you at 80% of the model’s context limit. The honest answer is batch it. Split inventory by site, by function, or by closet and run the pipeline per shard. You also get better runbooks from smaller focused runs.
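Both ideas, the rough token estimate and inventory sharding, fit in a few lines. The ~4-characters-per-token heuristic is a common rule of thumb, not the script's exact tokenizer, and both function names are illustrative:

```python
# Rough token budgeting and inventory sharding for batched runs.


def estimate_tokens(text):
    # Heuristic: roughly 4 characters per token for English plus CLI output.
    return len(text) // 4


def shard_inventory(devices, max_per_shard):
    # Split the device list into chunks; run the pipeline once per shard.
    return [devices[i:i + max_per_shard]
            for i in range(0, len(devices), max_per_shard)]
```

Sharding by site or function (rather than a flat count) usually gives the model a more coherent slice of the network, which is part of why smaller focused runs produce better runbooks.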
The runbook is a strong first draft. Real value still comes from a human reviewing it and adding context the AI cannot see. Business-critical paths, planned changes, historical incidents. The script gets you from blank page to 80%. The last 20% is yours.
IOS-centric out of the box. Multi-vendor support is a future video. For now, if you run this against non-Cisco gear, supply your own --commands YAML.
Get The Script
Everything from this post is open source and on GitHub:
The README has every flag documented, the full cron and Task Scheduler setup, and the CML lab YAML so you can replicate the demo environment exactly. Clone it, drop your inventory in, and run it.