
Automated Vulnerability Scanning for Your Homelab Containers (with AI Context)


I run 14+ Docker containers across my homelab. Traefik, Authentik, Pi-hole, CrowdSec, Plex, Home Assistant — the usual suspects. Renovate keeps the images up to date automatically, which is great, but there’s a gap nobody talks about: between updates, what vulnerabilities are actually sitting in those running images?

For a while, my answer was “I don’t know.” I wasn’t scanning anything. Renovate would bump versions when upstream released them, but I had zero visibility into what CVEs existed in my stack right now. That’s the “set it and forget it” problem — everything looks fine until it isn’t, and you wouldn’t know either way.

So I built a tool that scans every container image in my homelab weekly, feeds the results to an AI that actually understands my infrastructure, and drops a prioritized report into a GitHub Issue. I used Claude as a coding assistant throughout the build, and the final pipeline also uses Claude CLI for the analysis step. Here’s how the whole thing works.

What This Tool Does

At a high level, the homelab vulnerability scanner handles the full pipeline from discovery to report:

  • Automatically discovers all container repos in your GitHub org (no hardcoded list to maintain)
  • Extracts Docker image references from docker-compose files, including Jinja2 templates
  • Scans every image with Trivy for HIGH and CRITICAL vulnerabilities
  • Feeds results to Claude CLI along with your environment context document
  • AI categorizes findings into three buckets: Needs Attention, Informational, and Clean
  • Creates a GitHub Issue with prioritized, actionable recommendations
  • Runs weekly on a cron schedule, plus manual dispatch whenever you want
  • Auto-closes previous reports so you only ever have one open issue to track

The key differentiator here isn’t the scanning — Trivy does that perfectly well on its own. It’s the context. The AI knows which of my services are internet-facing, which are behind SSO, and which ones Renovate will auto-update. A HIGH severity CVE in Traefik (handling all inbound traffic) is a very different conversation than the same CVE in a LAN-only photo manager behind Authentik.

How It Works

Here’s the full pipeline:

GitHub API ──> docker-compose files ──> image list
                                            │
                                            ▼
                                    Trivy scans (HIGH/CRITICAL)
                                            │
                                            ▼
                              Environment context + findings
                                            │
                                            ▼
                                  Claude CLI analysis
                                            │
                                            ▼
                              GitHub Issue (auto-closes previous)

Stage 1 — Discovery: The GitHub API dynamically finds all non-archived repos matching a naming pattern (in my case, *-Containers). For each repo, it fetches docker-compose files from the git tree and extracts image: references.

Stage 2 — Scanning: Each unique image gets pulled and scanned by Trivy with --severity HIGH,CRITICAL. Results are captured as JSON for structured processing.

Stage 3 — Analysis: The raw findings get combined with an environment context file that describes your infrastructure — what’s internet-facing, what’s LAN-only, what Renovate manages. This assembled prompt goes to Claude CLI.

Stage 4 — Reporting: The AI output becomes a GitHub Issue. Any previous open report gets closed with a “Superseded” comment. Raw scan data is preserved as a workflow artifact.

Prerequisites

Before setting this up, you’ll need:

  • A GitHub org with container repos that use docker-compose
  • A self-hosted GitHub Actions runner with Docker access (for pulling and scanning images)
  • Trivy — installed automatically by the workflow, no pre-setup needed
  • Claude CLI with an OAuth token (requires a Claude Max subscription)
  • A GitHub PAT with repo, workflow, and write:packages scopes (for API access and GHCR pulls)

Tip: Keep your PAT scopes minimal. You need repo for API access to private repos, workflow for Actions, and write:packages only if you host private images on GHCR. If all your images are public (Docker Hub, etc.), you can skip the packages scope.
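If you want to double-check what a token can actually do before wiring it in, classic PATs report their granted scopes in the `X-OAuth-Scopes` response header of any authenticated API call. Here's a small helper (my own, hypothetical, for illustration) that pulls that header out:

```shell
# Extract the granted scopes from GitHub API response headers.
# Classic PATs report them in the X-OAuth-Scopes header.
pat_scopes() {
  grep -i '^x-oauth-scopes:' | cut -d' ' -f2- | tr -d '\r'
}

# Live usage (requires network and a classic PAT in $PAT):
#   curl -sI -H "Authorization: token $PAT" https://api.github.com/user | pat_scopes
```

Note that this only applies to classic tokens; fine-grained PATs report their permissions differently.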

Step 1 — The Workflow File

Create .github/workflows/weekly-audit.yml in your security audit repo:

name: Weekly Vulnerability Audit

on:
  schedule:
    - cron: '0 6 * * 0'
  workflow_dispatch:

permissions:
  contents: read
  issues: write

jobs:
  audit:
    name: Scan Container Images
    runs-on: self-hosted
    steps:
      - name: Checkout repository
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6

      - name: Install Trivy
        run: |
          if ! command -v trivy &>/dev/null; then
            curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b $HOME/.local/bin v0.69.3
            echo "$HOME/.local/bin" >> $GITHUB_PATH
          fi

      - name: Verify Trivy
        run: trivy --version

      - name: Login to GHCR
        run: echo "${{ secrets.PAT_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin

      - name: Collect images from all container repos
        env:
          GITHUB_TOKEN: ${{ secrets.PAT_TOKEN }}
        run: |
          ./scripts/collect-images.sh > images.txt
          echo "## Images to scan" >> $GITHUB_STEP_SUMMARY
          echo '```' >> $GITHUB_STEP_SUMMARY
          cat images.txt >> $GITHUB_STEP_SUMMARY
          echo '```' >> $GITHUB_STEP_SUMMARY
          echo "Found $(wc -l < images.txt | tr -d ' ') unique images"

      - name: Run Trivy scans and generate AI report
        env:
          CLAUDE_CODE_OAUTH_TOKEN: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
        run: |
          ./scripts/generate-report.sh images.txt > report.md

      - name: Create GitHub Issue with report
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          DATE=$(date +%Y-%m-%d)
          TITLE="Weekly Vulnerability Report — $DATE"

          PREV_ISSUE=$(gh issue list --state open --label "vulnerability-report" --json number --jq '.[0].number // empty' 2>/dev/null || true)
          if [ -n "$PREV_ISSUE" ]; then
            gh issue close "$PREV_ISSUE" --comment "Superseded by new weekly report."
          fi

          gh issue create \
            --title "$TITLE" \
            --body-file report.md \
            --label "vulnerability-report"

      - name: Upload raw scan data
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
        if: always()
        with:
          name: trivy-scan-results
          path: |
            images.txt
            report.md

A few things worth calling out here:

Schedule: Sunday at 6am UTC, plus workflow_dispatch for manual runs. Weekly felt right — daily is noisy for a homelab, monthly is too slow. Your mileage may vary.

Separate install and verify steps: This tripped me up early on. $GITHUB_PATH updates only take effect in the next step, not the current one. If you install Trivy and try to verify it in the same step, the binary won’t be on the PATH yet. Split them.
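The timing is easy to demonstrate outside of Actions with a stand-in file (`/opt/fake-trivy-bin` is just a placeholder directory for this sketch):

```shell
# Appending to the $GITHUB_PATH file does not touch the current shell's
# PATH; the runner only reads the file when it starts the *next* step.
GITHUB_PATH=$(mktemp)                   # stand-in for the runner-provided file
echo "/opt/fake-trivy-bin" >> "$GITHUB_PATH"

case ":$PATH:" in
  *":/opt/fake-trivy-bin:"*) echo "on PATH" ;;
  *) echo "not yet on PATH" ;;          # what the same step sees
esac
```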

GHCR login: If you host any private container images on GitHub Container Registry, you need to authenticate before Trivy can pull and scan them. The PAT_TOKEN handles this.

Step summary: The image list and scan stats get written to $GITHUB_STEP_SUMMARY, so you can see exactly what was scanned directly in the Actions UI without digging into logs.

Issue lifecycle: The workflow queries for any open issue with the vulnerability-report label, closes it with a comment, then creates a fresh one. You always have exactly one open vulnerability report. Clean and easy to track.

Artifact upload: Raw scan data and the final report are uploaded as workflow artifacts with if: always(). Even if a later step fails, you still have the data for debugging.

Warning: Your PAT_TOKEN needs repo scope for the GitHub API calls in collect-images.sh, and write:packages if you’re scanning private GHCR images. Without repo, the API calls will return empty results with no error — just silence.

Step 2 — Discovering Your Images

Create scripts/collect-images.sh:

#!/usr/bin/env bash
# Collects all unique Docker image references from container repos.
# Uses GitHub API to fetch docker-compose templates and extract image: lines.
# Output: one image reference per line
set -euo pipefail

ORG="your-org"

# Dynamically discover all non-archived *-Containers repos
REPOS=$(gh api "orgs/$ORG/repos" --paginate --jq '
  .[] | select(.archived == false) | select(.name | endswith("-Containers")) | .name
' 2>/dev/null | sort)

if [ -z "$REPOS" ]; then
  echo "ERROR: No container repos found. Check GITHUB_TOKEN permissions." >&2
  exit 1
fi

echo "Discovered repos:" >&2
echo "$REPOS" >&2
echo "" >&2

TMPDIR=$(mktemp -d)
trap 'rm -rf "$TMPDIR"' EXIT

IMAGES_FILE="$TMPDIR/all-images.txt"
touch "$IMAGES_FILE"

while IFS= read -r REPO; do
  [ -z "$REPO" ] && continue
  echo "Scanning $REPO..." >&2

  FILES=$(gh api "repos/$ORG/$REPO/git/trees/main?recursive=1" \
    --jq '.tree[].path | select(test("docker-compose.*\\.(yml|yaml|j2)$"))' 2>/dev/null || true)

  for FILE in $FILES; do
    CONTENT=$(gh api "repos/$ORG/$REPO/contents/$FILE" --jq '.content' 2>/dev/null | base64 -d 2>/dev/null || true)
    echo "$CONTENT" | grep -E '^\s*image:' | sed 's/.*image:\s*//' | sed 's/["'"'"']//g' | tr -d ' ' \
      | sed 's/#.*//' | grep -v '{{' >> "$IMAGES_FILE" || true
  done
done <<< "$REPOS"

sort -u "$IMAGES_FILE" | grep -v '^$'

Here’s what’s happening in each part:

Dynamic repo discovery: The script uses the GitHub API to list all non-archived repos ending in -Containers. If you add a new container repo tomorrow, it gets picked up automatically on the next scan. No hardcoded lists to maintain.

Git tree traversal: For each repo, it fetches the file tree recursively and filters for docker-compose files. This catches docker-compose.yml, docker-compose.yaml, and .j2 Jinja2 templates (I use Jinja2 templates for CI/CD-generated compose files).

Image extraction and cleanup: The grep/sed pipeline pulls out image: lines and strips quotes, whitespace, and Renovate comments (#renovate:depName=...). That last part is important — Trivy chokes on image refs that have trailing comments. The grep -v '{{' filter skips Jinja2 template variables that aren’t real image references.
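To see the cleanup in action, here is the same pipeline wrapped in a throwaway function (the function name is mine, not from the repo) and run against a sample compose snippet containing both a Renovate comment and a Jinja2 variable:

```shell
# Same extraction pipeline as collect-images.sh, against canned input.
extract_images() {
  grep -E '^\s*image:' | sed 's/.*image:\s*//' | sed 's/["'"'"']//g' | tr -d ' ' \
    | sed 's/#.*//' | grep -v '{{'
}

cat <<'EOF' | extract_images
services:
  proxy:
    image: traefik:v3.3  # renovate: depName=traefik
  app:
    image: "ghcr.io/your-org/app:{{ app_version }}"
EOF
# Prints only: traefik:v3.3
```

The Renovate comment is stripped, and the templated image line is dropped entirely because the `{{ app_version }}` variable is not a real image reference.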

Sorted unique output: The final sort -u deduplicates everything. If three repos all use traefik:v3.3, it only gets scanned once.

Tip: If your repos don’t follow the *-Containers naming convention, adjust the jq filter. You could match on topics, naming patterns, or even a custom property. The GitHub API gives you plenty to work with.
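A topic-based variant might look like this; the `containers` topic name is my own invention, and the same `--jq` filter that works against the real paginated API call is demonstrated here against a canned response:

```shell
# Hypothetical variant: select repos by a "containers" topic instead of
# a name suffix. The repo-list API includes each repo's topics array.
FILTER='.[] | select(.archived == false) | select(.topics | index("containers")) | .name'

# Real usage:  gh api "orgs/$ORG/repos" --paginate --jq "$FILTER"
# Demo against a canned API response:
echo '[
  {"name": "Media-Containers", "archived": false, "topics": ["containers"]},
  {"name": "old-stack",        "archived": true,  "topics": ["containers"]},
  {"name": "dotfiles",         "archived": false, "topics": []}
]' | jq -r "$FILTER"
# Prints only: Media-Containers
```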

Step 3 — Scanning and AI Analysis

This is where it gets interesting. Create scripts/generate-report.sh:

#!/usr/bin/env bash
# Runs Trivy image scan on each image and generates a markdown report.
# Feeds results + environment context to Claude CLI for AI analysis.
# Output: markdown report suitable for a GitHub Issue.
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CONTEXT_FILE="$SCRIPT_DIR/../context/environment.md"
IMAGES_FILE="${1:-/dev/stdin}"
DATE=$(date +%Y-%m-%d)

TMPDIR=$(mktemp -d)
trap 'rm -rf "$TMPDIR"' EXIT

TRIVY_RESULTS="$TMPDIR/trivy-results"
mkdir -p "$TRIVY_RESULTS"

echo "## Trivy Image Scan Results" > "$TMPDIR/raw-findings.md"
echo "" >> "$TMPDIR/raw-findings.md"

SCAN_COUNT=0
VULN_COUNT=0

while IFS= read -r IMAGE; do
  [ -z "$IMAGE" ] && continue
  SCAN_COUNT=$((SCAN_COUNT + 1))

  SAFE_NAME=$(echo "$IMAGE" | tr '/:@' '___')
  RESULT_FILE="$TRIVY_RESULTS/$SAFE_NAME.json"

  echo "Scanning: $IMAGE" >&2
  if trivy image --severity HIGH,CRITICAL --format json --quiet "$IMAGE" > "$RESULT_FILE" 2>/dev/null; then
    COUNT=$(jq '[.Results[]?.Vulnerabilities // [] | length] | add // 0' "$RESULT_FILE")
    VULN_COUNT=$((VULN_COUNT + COUNT))

    echo "### $IMAGE" >> "$TMPDIR/raw-findings.md"
    if [ "$COUNT" -eq 0 ]; then
      echo "No HIGH/CRITICAL vulnerabilities found." >> "$TMPDIR/raw-findings.md"
    else
      echo "$COUNT HIGH/CRITICAL vulnerabilities:" >> "$TMPDIR/raw-findings.md"
      echo '```json' >> "$TMPDIR/raw-findings.md"
      jq '[.Results[]?.Vulnerabilities[]? | {VulnerabilityID, Severity, PkgName, InstalledVersion, FixedVersion, Title}]' "$RESULT_FILE" >> "$TMPDIR/raw-findings.md"
      echo '```' >> "$TMPDIR/raw-findings.md"
    fi
    echo "" >> "$TMPDIR/raw-findings.md"
  else
    echo "### $IMAGE" >> "$TMPDIR/raw-findings.md"
    echo "Failed to scan (image may not be publicly accessible)." >> "$TMPDIR/raw-findings.md"
    echo "" >> "$TMPDIR/raw-findings.md"
  fi
done < "$IMAGES_FILE"

echo "Scanned $SCAN_COUNT images, found $VULN_COUNT HIGH/CRITICAL vulnerabilities" >&2

if [ -n "${GITHUB_STEP_SUMMARY:-}" ]; then
  echo "## Scan Summary" >> "$GITHUB_STEP_SUMMARY"
  echo "- Scanned **$SCAN_COUNT** images" >> "$GITHUB_STEP_SUMMARY"
  echo "- Found **$VULN_COUNT** HIGH/CRITICAL vulnerabilities" >> "$GITHUB_STEP_SUMMARY"
fi

PROMPT=$(cat <<'PROMPT_EOF'
You are a security analyst reviewing container image vulnerabilities for a homelab.

Review the Trivy scan results below and produce a weekly vulnerability report.

Categorize findings into:
1. **Needs Attention** — vulnerabilities in internet-facing services OR critical severity with no fix, that require awareness or action
2. **Informational** — vulnerabilities in LAN-only services behind SSO, or findings where a fix exists and Renovate will auto-update
3. **Clean** — images with no findings (list briefly)

For each finding in "Needs Attention" or "Informational":
- State the image name and which service it belongs to
- Describe the CVE in plain language (what it does, how it's exploited)
- Whether a fix exists and what version
- Your assessment of the risk given the environment context
- A specific recommended action (or "no action needed — Renovate will update")

Be concise. Focus on what's actionable. Don't repeat the raw data — interpret it.
PROMPT_EOF
)

FULL_PROMPT="$TMPDIR/full-prompt.md"
echo "$PROMPT" > "$FULL_PROMPT"
echo "" >> "$FULL_PROMPT"
echo "# Environment Context" >> "$FULL_PROMPT"
cat "$CONTEXT_FILE" >> "$FULL_PROMPT"
echo "" >> "$FULL_PROMPT"
echo "# Scan Results ($DATE)" >> "$FULL_PROMPT"
echo "Scanned $SCAN_COUNT images total." >> "$FULL_PROMPT"
cat "$TMPDIR/raw-findings.md" >> "$FULL_PROMPT"

PROMPT_SIZE=$(wc -c < "$FULL_PROMPT")
echo "Running AI analysis (prompt size: ${PROMPT_SIZE} bytes)..." >&2

AI_OUTPUT=""
if ! AI_OUTPUT=$(claude -p --max-turns 1 < "$FULL_PROMPT" 2>&1); then
  echo "::warning::Claude CLI failed. Stderr: $AI_OUTPUT" >&2
  echo "::warning::Retrying without full vulnerability JSON..." >&2

  SUMMARY_PROMPT="$TMPDIR/summary-prompt.md"
  echo "$PROMPT" > "$SUMMARY_PROMPT"
  echo "" >> "$SUMMARY_PROMPT"
  echo "# Environment Context" >> "$SUMMARY_PROMPT"
  cat "$CONTEXT_FILE" >> "$SUMMARY_PROMPT"
  echo "" >> "$SUMMARY_PROMPT"
  echo "# Scan Results ($DATE)" >> "$SUMMARY_PROMPT"
  echo "Scanned $SCAN_COUNT images total." >> "$SUMMARY_PROMPT"
  echo "" >> "$SUMMARY_PROMPT"

  while IFS= read -r IMAGE; do
    [ -z "$IMAGE" ] && continue
    SAFE_NAME=$(echo "$IMAGE" | tr '/:@' '___')
    RESULT_FILE="$TRIVY_RESULTS/$SAFE_NAME.json"
    if [ -f "$RESULT_FILE" ]; then
      COUNT=$(jq '[.Results[]?.Vulnerabilities // [] | length] | add // 0' "$RESULT_FILE")
      if [ "$COUNT" -eq 0 ]; then
        echo "### $IMAGE" >> "$SUMMARY_PROMPT"
        echo "No HIGH/CRITICAL vulnerabilities found." >> "$SUMMARY_PROMPT"
      else
        echo "### $IMAGE" >> "$SUMMARY_PROMPT"
        echo "$COUNT HIGH/CRITICAL vulnerabilities:" >> "$SUMMARY_PROMPT"
        jq -r '[.Results[]?.Vulnerabilities[]? | "\(.Severity) \(.VulnerabilityID) in \(.PkgName) \(.InstalledVersion) (fix: \(.FixedVersion // "none"))"] | unique[]' "$RESULT_FILE" >> "$SUMMARY_PROMPT"
      fi
      echo "" >> "$SUMMARY_PROMPT"
    else
      echo "### $IMAGE" >> "$SUMMARY_PROMPT"
      echo "Failed to scan." >> "$SUMMARY_PROMPT"
      echo "" >> "$SUMMARY_PROMPT"
    fi
  done < "$IMAGES_FILE"

  SUMMARY_SIZE=$(wc -c < "$SUMMARY_PROMPT")
  echo "Retrying with summary prompt (${SUMMARY_SIZE} bytes)..." >&2

  if ! AI_OUTPUT=$(claude -p --max-turns 1 < "$SUMMARY_PROMPT" 2>&1); then
    echo "::error::Claude CLI failed on retry: $AI_OUTPUT"
    if [ -n "${GITHUB_STEP_SUMMARY:-}" ]; then
      echo "## AI Analysis" >> "$GITHUB_STEP_SUMMARY"
      echo "**FAILED** — Claude CLI error: \`${AI_OUTPUT:0:200}\`" >> "$GITHUB_STEP_SUMMARY"
    fi
    exit 1
  fi
fi

if [ -z "$AI_OUTPUT" ]; then
  echo "::error::Claude CLI returned empty output. Check CLAUDE_CODE_OAUTH_TOKEN secret."
  if [ -n "${GITHUB_STEP_SUMMARY:-}" ]; then
    echo "## AI Analysis" >> "$GITHUB_STEP_SUMMARY"
    echo "**FAILED** — Claude CLI returned empty output" >> "$GITHUB_STEP_SUMMARY"
  fi
  exit 1
fi

if [ -n "${GITHUB_STEP_SUMMARY:-}" ]; then
  echo "## AI Analysis" >> "$GITHUB_STEP_SUMMARY"
  echo "AI analysis complete." >> "$GITHUB_STEP_SUMMARY"
fi

cat <<EOF
# Weekly Homelab Vulnerability Report — $DATE

$AI_OUTPUT

---

*Generated by security-audit workflow. Scanned $SCAN_COUNT images, $VULN_COUNT HIGH/CRITICAL findings.*
*Raw scan data available in workflow artifacts.*
EOF

This script is the heart of the whole tool, so let me break down the key parts.

The Trivy Scanning Loop

Each image goes through a trivy image scan filtered to HIGH and CRITICAL severities. I skip LOW and MEDIUM — they’re too noisy for a weekly report, and most of them are informational for a homelab context anyway. The --format json flag gives structured output that’s easy to process with jq, and --quiet suppresses Trivy’s progress output so it doesn’t pollute the report.

For each image, the script extracts a summary of findings (CVE ID, severity, package name, installed version, fixed version, and title) into a markdown document. Clean images get a simple “no vulnerabilities found” note.
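The per-image count comes from a single jq expression. Here it is against a stripped-down, canned Trivy result (real output carries many more fields), including a result entry with no `Vulnerabilities` key at all, which the `// []` fallback handles:

```shell
# Count findings across all Results entries, tolerating entries
# (e.g. secret or config checks) that have no Vulnerabilities key.
echo '{"Results": [
  {"Vulnerabilities": [{"VulnerabilityID": "CVE-2026-0001"},
                       {"VulnerabilityID": "CVE-2026-0002"}]},
  {"Target": "secrets-check"}
]}' | jq '[.Results[]?.Vulnerabilities // [] | length] | add // 0'
# Prints: 2
```

The trailing `// 0` also covers the fully empty case, so a clean image produces `0` instead of `null`.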

The AI Prompt

The PROMPT heredoc is where the magic happens. Rather than just dumping raw CVE data into an issue, the prompt instructs Claude to act as a security analyst and interpret the findings:

  • Needs Attention: Internet-facing services or critical vulnerabilities with no available fix. These are the ones you actually need to think about.
  • Informational: LAN-only services behind SSO, or vulnerabilities where a fix exists and Renovate will handle the update automatically. Awareness, but no action needed.
  • Clean: Images with zero findings. Listed briefly so you know they were scanned.

The AI isn’t just summarizing — it’s making a risk assessment based on your specific environment.

Environment Context Injection

The prompt gets assembled from three pieces: the system prompt (categorization instructions), your environment.md context file (infrastructure details), and the raw Trivy findings. This is the key insight of the whole project. Without the context file, the AI would treat every CVE the same. With it, the AI knows that a vulnerability in Traefik is far more urgent than the same vulnerability in a service that’s only reachable from your LAN and sits behind SSO.

The Fallback Mechanism

This took some iteration to get right. When you’re scanning 14+ container images, the full JSON output from Trivy can be large. If the assembled prompt exceeds Claude’s context window, the first attempt fails. The fallback retries with a compact format — one line per CVE instead of the full JSON blob. In practice, this cuts the prompt size by 80-90% while preserving all the information the AI needs for categorization.

Info: The fallback mechanism uses if ! AI_OUTPUT=$(claude -p ...) instead of set -e to capture the failure gracefully. Under set -e, a failed command causes immediate script exit before you can handle the error. The if ! pattern lets you catch the failure and retry.
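A stripped-down sketch of the pattern, with a stand-in function in place of the real `claude -p` call:

```shell
set -e

fake_claude() { return 1; }   # stand-in for a failing claude -p invocation

# A bare `fake_claude` call here would abort the script under set -e.
# Wrapping the assignment in `if !` keeps control for the retry path.
if ! OUT=$(fake_claude); then
  echo "first attempt failed, retrying with smaller prompt..." >&2
  OUT="retry result"
fi
echo "$OUT"
```

Commands tested by an `if` condition are exempt from `set -e`, which is exactly what makes the retry branch reachable.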

Danger: Your environment.md file effectively documents your attack surface — which services are internet-facing, what’s behind SSO, what’s LAN-only. Keep it in a private repo. You don’t want that information public.

Step 4 — Environment Context

Create context/environment.md with details about your infrastructure. Here’s a genericized example:

# Homelab Environment Context

## Network Topology
- All services run on Proxmox VMs/LXCs on a private LAN
- Traefik reverse proxy handles all inbound HTTPS traffic
- CrowdSec provides brute-force protection on public endpoints
- Pi-hole handles internal DNS (split-horizon with Cloudflare for public)
- UniFi firewall segments IoT, server, and management VLANs

## Internet-Facing Services
- **Traefik** — reverse proxy, terminates TLS, handles all public traffic
- **Authentik** — SSO provider, login portal is publicly accessible
- **CrowdSec** — security engine, API exposed for bouncer registration

## LAN-Only Services (behind SSO)
- Home Assistant, Plex, Immich, Dozzle, Semaphore
- All require Authentik SSO authentication
- Not reachable from the internet without VPN

## LAN-Only Services (no SSO)
- Pi-hole admin panels (management VLAN only)
- Proxmox web UI (management VLAN only)

## Update Policy
- Renovate Bot auto-merges patch and minor version bumps
- Major version bumps require manual review
- Docker image digest updates auto-merge without PR

Each section gives the AI the information it needs to differentiate risk levels:

  • Network topology tells the AI about your defense layers (firewall, WAF, IDS)
  • Internet-facing services are the high-priority targets — vulnerabilities here get flagged as “Needs Attention”
  • LAN-only behind SSO are lower risk — an attacker needs both network access and valid credentials
  • Update policy tells the AI whether Renovate will handle a fix automatically or if manual intervention is needed

The context file doesn’t need to be exhaustive. A few paragraphs covering exposure levels and update policies is enough for the AI to make meaningful risk assessments.

Tip: Even if you’re not using this tool, documenting your service exposure levels is a valuable security exercise. Knowing which services are internet-facing vs. LAN-only helps you prioritize hardening efforts and incident response.

Step 5 — Issue Lifecycle

The workflow manages GitHub Issues so you don’t have to think about it:

  1. Before creating a new report, it queries for any open issue with the vulnerability-report label
  2. If one exists, it closes it with a “Superseded by new weekly report” comment
  3. A new issue is created with the AI-generated report as the body
  4. The vulnerability-report label makes it easy to filter in your issue tracker

This means you only ever have one open vulnerability report at any time. No pile-up of stale issues. If you want historical data, the closed issues are still there, and the raw Trivy JSON is preserved as a workflow artifact on each run.

The one-issue-at-a-time model keeps things simple. Check your issues on Monday, see if anything in “Needs Attention” actually needs attention, and move on. If it’s all “Informational” and “Clean,” you just got a free confirmation that your stack is in good shape.

What the Output Looks Like

Here’s a representative example of what the AI report looks like in a GitHub Issue:

## Needs Attention

### traefik:v3.3 (Reverse Proxy — Internet-Facing)
- **CVE-2026-XXXX** — Remote code execution in HTTP/2 handler
- Fix available: v3.3.1
- Risk: HIGH — internet-facing, handles all inbound traffic
- Action: Monitor for Renovate update, consider manual bump if delayed

## Informational

### homeassistant/home-assistant:2026.3 (Home Automation — LAN Only)
- **CVE-2026-YYYY** — Path traversal in REST API
- Fix available: 2026.3.1
- Risk: LOW — LAN-only, behind SSO, requires authenticated access
- Action: No action needed — Renovate will auto-update

### ghcr.io/your-org/custom-app:latest (Internal Tool — LAN Only)
- **CVE-2026-ZZZZ** — Denial of service in OpenSSL
- Fix available: upstream pending
- Risk: LOW — LAN-only, not internet-reachable
- Action: Monitor upstream for fix

## Clean
- dozzle/dozzle:v8.12
- crowdsecurity/crowdsec:v1.6
- pihole/pihole:2026.03
- ghcr.io/your-org/terrascan:latest

Notice how the same severity level (HIGH) gets different risk assessments depending on exposure. The Traefik CVE is flagged as genuinely concerning because it’s internet-facing. The Home Assistant CVE is informational because it’s LAN-only and behind SSO. This is the entire value proposition — context-aware triage instead of a wall of undifferentiated CVE data.

The “Clean” section is short but important. It confirms those images were actually scanned and came back clear, rather than being silently skipped.

Setting It Up For Your Homelab

The full source is available on GitHub: github.com/SpaceTerran/homelab-vulnerability-scanner

Here’s the quick start:

  1. Fork the repo (or clone it into your own org)
  2. Customize context/environment.md with your infrastructure details — what’s internet-facing, what’s behind SSO, what your update policy looks like
  3. Update ORG="your-org" in scripts/collect-images.sh to match your GitHub org name
  4. Add repository secrets:
    • PAT_TOKEN — GitHub PAT with repo + workflow + write:packages scopes
    • CLAUDE_CODE_OAUTH_TOKEN — Claude CLI OAuth token from your Claude Max subscription
  5. Create the vulnerability-report label in your repo’s issue settings
  6. Enable the workflow and trigger it manually to test — hit the “Run workflow” button in the Actions tab

The first run will take a few minutes as Docker pulls all the images for scanning. Subsequent runs are faster if the images are cached on your runner.

Info: If you’re not using a *-Containers naming convention, you’ll need to adjust the repo discovery filter in collect-images.sh. The jq filter on the GitHub API response is the only place that hardcodes the naming pattern.

Wrapping Up

This started as an "I should probably be scanning my containers" thought and turned into something I genuinely look forward to checking each week. The combination of automated scanning and context-aware AI triage means I spend about two minutes reviewing the report instead of twenty minutes manually parsing CVE databases.

The real win is the prioritization. Before this, every vulnerability felt equally urgent (or equally ignorable). Now I know instantly whether something needs my attention or whether Renovate is going to handle it on the next update cycle. For a homelab where I’d rather spend time building things than auditing things, that trade-off is exactly right.

I’d love to hear how you’re handling container security in your homelab. Are you scanning regularly, or is it more of a “hope for the best” situation? Drop a comment below and let’s compare notes.

The full project is on GitHub: github.com/SpaceTerran/homelab-vulnerability-scanner



Note: Trivy is a trademark of Aqua Security. Claude is a trademark of Anthropic. GitHub and GitHub Actions are trademarks of GitHub, Inc. This project is not affiliated with or endorsed by any of these companies.

This post is licensed under CC BY 4.0 by the author.

© Isaac Blum. Some rights reserved. Note: Some links may contain affiliate codes. Thank you for the support.
