9 changes: 9 additions & 0 deletions ARCHITECTURE.md
@@ -430,10 +430,19 @@ func OrchestratorWorkflow(ctx workflow.Context, input WorkflowInput) (*WorkflowO
// Stage 2: STORE - Create S3 snapshot
workflow.ExecuteActivity(ctx, CreateSnapshotActivity, ...)

// Notify the optional out-of-process emitter webhook.
// Skipped when input.EmitterWebhookURL is empty. Failures are
// non-fatal because the snapshot is already durable in S3.
if input.EmitterWebhookURL != "" {
workflow.ExecuteActivity(ctx, NotifyEmitterActivity, ...)
}

return output, nil
}
```

**Optional emitter webhook:** When `EMITTER_WEBHOOK_URL` is set, the orchestrator POSTs `{"snapshot_id": "<id>"}` to `<url>/trigger-act` so a downstream service can pick up the snapshot immediately instead of polling. Most users either skip the webhook entirely or implement an in-process emitter (`pkg/emitters`) — see the README's "Extending Version Guard" section.

**Scheduling:**
- Run on a schedule (e.g., every 6 hours)
- Or trigger manually via Temporal CLI/API
129 changes: 126 additions & 3 deletions Makefile
@@ -147,10 +147,15 @@ temporal: ## Start local Temporal dev server and open Web UI
--dynamic-config-value limit.blobSize.warn=15000000

.PHONY: dev
dev: ## Run the service locally with auto-reload on code changes
dev: ## Run the service locally (auto-reload if `entr` is installed)
@if [ -f .env ]; then set -a; . ./.env; set +a; fi; \
echo "🚀 Starting Version Guard with auto-reload (Ctrl+C to stop)..."; \
find . -name '*.go' -not -path './vendor/*' | entr -r go run ./cmd/server
if command -v entr >/dev/null 2>&1; then \
echo "🚀 Starting Version Guard with auto-reload via entr (Ctrl+C to stop)..."; \
find . -name '*.go' -not -path './vendor/*' | entr -r go run ./cmd/server; \
else \
echo "🚀 Starting Version Guard (no auto-reload — install entr for that). Ctrl+C to stop..."; \
go run ./cmd/server; \
fi

.PHONY: run-locally
run-locally: build ## Run the service locally (connects to local Temporal)
@@ -167,6 +172,124 @@ run-server: build ## Run server locally
@echo "🚀 Starting server locally..."
@CONFIG_ENV=development bin/$(BINARY_NAME) --mode=server

# ── Webhook E2E (detector → emitter) ──────────────────────────────────────────
# Everything below runs in Docker, so no local `temporal` or `curl` install is required.
# Pre-reqs (run in separate terminals before invoking these targets):
# 1. make temporal-docker (Temporal dev server in Docker)
# 2. (in version-guard-emitter) make dev (emitter worker + HTTP on host :8082, via .env)
# 3. EMITTER_WEBHOOK_URL=http://localhost:8082 make dev (detector worker + admin HTTP on host :8081)
# Resource value must be a config ID (the `id:` field in pkg/config/defaults
# resources.yaml: aurora-postgresql, aurora-mysql, eks, elasticache-redis,
# elasticache-valkey, elasticache-memcached, opensearch, rds-mysql,
# rds-postgresql, lambda) — NOT a type constant like "AURORA". The detector's
# inventory map is keyed by config ID so multiple configs of the same type
# (e.g. two aurora flavors) can have independent inventory sources.
WEBHOOK_E2E_RESOURCE := aurora-postgresql
TEMPORAL_DOCKER_IMAGE := temporalio/admin-tools:latest
CURL_DOCKER_IMAGE := curlimages/curl:latest
# Inside containers we reach host-side processes via host.docker.internal (Docker Desktop on macOS/Windows).
HOST_FROM_DOCKER := host.docker.internal
# Host ports
DETECTOR_ADMIN_PORT := 8081
EMITTER_ADMIN_PORT := 8082

.PHONY: temporal-docker
temporal-docker: ## Start Temporal dev server in Docker (alternative to `make temporal`)
@echo "🕰️ Starting Temporal dev server in Docker (namespace: $(TEMPORAL_NAMESPACE))..."
@echo " Frontend: localhost:7233 Web UI: http://localhost:8233"
@open http://localhost:8233 &
@docker run --rm \
--name version-guard-temporal-dev \
-p 7233:7233 -p 8233:8233 \
$(TEMPORAL_DOCKER_IMAGE) \
temporal server start-dev \
--ip 0.0.0.0 \
--namespace $(TEMPORAL_NAMESPACE) \
--dynamic-config-value limit.blobSize.error=20000000 \
--dynamic-config-value limit.blobSize.warn=15000000

.PHONY: webhook-e2e
webhook-e2e: ## Trigger an end-to-end run via the detector's POST /scan (in Docker)
@command -v docker >/dev/null 2>&1 || { echo "❌ docker not found"; exit 1; }
@echo "🚀 POST /scan to detector at :$(DETECTOR_ADMIN_PORT) (resource=$(WEBHOOK_E2E_RESOURCE))..."
@echo " Watch: http://localhost:8233/namespaces/$(TEMPORAL_NAMESPACE)/workflows"
@docker run --rm \
--add-host=$(HOST_FROM_DOCKER):host-gateway \
$(CURL_DOCKER_IMAGE) \
-fsSi -X POST http://$(HOST_FROM_DOCKER):$(DETECTOR_ADMIN_PORT)/scan \
-H 'Content-Type: application/json' \
-d '{"resource_types":["$(WEBHOOK_E2E_RESOURCE)"]}'
@echo ""
@echo "✅ Detector orchestrator workflow started; expect a matching version-guard-act-<snapshotID> ActWorkflow on the emitter."

.PHONY: webhook-e2e-smoke
webhook-e2e-smoke: ## Hit the emitter /trigger-act webhook directly (no detector) via Docker
@command -v docker >/dev/null 2>&1 || { echo "❌ docker not found"; exit 1; }
@SID="smoke-$$(date +%s)"; \
echo "🔎 POST /trigger-act to emitter at :$(EMITTER_ADMIN_PORT) with snapshot_id=$$SID..."; \
docker run --rm \
--add-host=$(HOST_FROM_DOCKER):host-gateway \
$(CURL_DOCKER_IMAGE) \
-fsSi -X POST http://$(HOST_FROM_DOCKER):$(EMITTER_ADMIN_PORT)/trigger-act \
-H 'Content-Type: application/json' \
-d "{\"snapshot_id\":\"$$SID\"}"

# ── Docker Compose (full stack) ───────────────────────────────────────────────
# `make compose-*` targets bring up Temporal + MinIO + endoflife + detector,
# and (when EMITTER_PATH points at a real directory) the emitter alongside via
# Compose's `with-emitter` profile. EMITTER_PATH defaults to a sibling checkout
# at ../version-guard-emitter; override if yours lives elsewhere:
# make compose-up EMITTER_PATH=/Users/me/code/my-emitter
# Open-source users without an emitter checkout get a detector-only stack
# automatically — same `make compose-up` / `make compose-e2e` commands.
EMITTER_PATH ?= ../version-guard-emitter
COMPOSE_PROJECT := version-guard
COMPOSE_BASE := EMITTER_PATH=$(EMITTER_PATH) docker compose -p $(COMPOSE_PROJECT)
EMITTER_AVAILABLE := $(wildcard $(EMITTER_PATH))
COMPOSE_PROFILE := $(if $(EMITTER_AVAILABLE),--profile with-emitter,)

.PHONY: compose-up
compose-up: ## Bring up the stack (auto-includes emitter if EMITTER_PATH exists)
@command -v docker >/dev/null 2>&1 || { echo "❌ docker not found"; exit 1; }
@if [ -n "$(EMITTER_AVAILABLE)" ]; then \
echo "🐳 Bringing up full stack (detector + emitter + Temporal + MinIO + endoflife)..."; \
else \
echo "🐳 Bringing up detector-only stack (EMITTER_PATH=$(EMITTER_PATH) not found — set it to also exercise the /trigger-act webhook)..."; \
fi
@$(COMPOSE_BASE) $(COMPOSE_PROFILE) up --build -d
@if [ -n "$(EMITTER_AVAILABLE)" ]; then \
echo "✅ Stack up. Detector :$(DETECTOR_ADMIN_PORT), emitter :8083 (host) → :8080 (container), Temporal UI http://localhost:8233"; \
else \
echo "✅ Stack up. Detector :$(DETECTOR_ADMIN_PORT), Temporal UI http://localhost:8233. The emitter webhook will log a non-fatal failure — snapshots still land in MinIO."; \
fi

.PHONY: compose-down
compose-down: ## Tear down the compose stack and remove volumes
@command -v docker >/dev/null 2>&1 || { echo "❌ docker not found"; exit 1; }
@$(COMPOSE_BASE) --profile with-emitter down -v --remove-orphans
@echo "✅ Stack torn down."

.PHONY: compose-logs
compose-logs: ## Tail logs from all compose services
@$(COMPOSE_BASE) $(COMPOSE_PROFILE) logs -f --tail=200

.PHONY: compose-e2e
compose-e2e: compose-up ## E2E: bring up the stack, fire /scan, tail logs (Ctrl+C to stop)
@echo "⏳ Waiting 10s for services to register workflows..."
@sleep 10
@echo "🚀 POST /scan (resource=$(WEBHOOK_E2E_RESOURCE))..."
@docker run --rm --network $(COMPOSE_PROJECT)_default $(CURL_DOCKER_IMAGE) \
-fsSi -X POST http://version-guard:8081/scan \
-H 'Content-Type: application/json' \
-d '{"resource_types":["$(WEBHOOK_E2E_RESOURCE)"]}' || true
@echo ""
@if [ -n "$(EMITTER_AVAILABLE)" ]; then \
echo "✅ Scan triggered. Detector → /trigger-act webhook → emitter ActWorkflow. Tailing logs (Ctrl+C to stop; then \`make compose-down\` to clean up)."; \
else \
echo "✅ Scan triggered (detector-only). Snapshot will land in MinIO; /trigger-act webhook will log a non-fatal failure. Tailing logs (Ctrl+C to stop; then \`make compose-down\` to clean up)."; \
fi
@$(COMPOSE_BASE) $(COMPOSE_PROFILE) logs -f

# ── Docker ────────────────────────────────────────────────────────────────────

.PHONY: docker-build
42 changes: 40 additions & 2 deletions README.md
@@ -176,6 +176,22 @@ Temporal SDK metrics are enabled by default and exposed at
http://localhost:9090/metrics. Set `TEMPORAL_METRICS_ENABLED=false` to disable
them, or set `TEMPORAL_METRICS_LISTEN_ADDRESS` to use a different address.

#### End-to-end with `make compose-*`

The same commands work for everyone — they auto-detect whether a webhook-style emitter is present and adjust accordingly:

```bash
make compose-e2e # build → up → POST /scan → tail logs
make compose-down # tear everything down
```

- **Open-source users (no emitter):** detector + Temporal + MinIO + endoflife come up. The orchestrator still posts to `EMITTER_WEBHOOK_URL`; with no listener it logs a single non-fatal failure and the snapshot still lands in MinIO. Use this to verify the DETECT → STORE pipeline.
- **Block (or anyone with a webhook emitter):** drop a sibling checkout at `../version-guard-emitter`, or set `EMITTER_PATH=/path/to/your/emitter`, and the same `make compose-e2e` brings up the emitter alongside via Compose's [`with-emitter` profile](https://docs.docker.com/compose/profiles/) and exercises the full DETECT → STORE → ACT flow.

##### Emitter integration model

Block runs an internal companion service that consumes snapshots and posts findings to its security tooling (private repo, not publicly available). The orchestrator's optional emitter webhook (`EMITTER_WEBHOOK_URL`) is the link between detector and that service. **Most open-source users don't need it** — implement an in-process emitter against the `pkg/emitters` interfaces instead (see [Extending Version Guard](#-extending-version-guard)). The webhook path is for users who prefer to keep their emitter in a separate process or repository.

### Run Locally (manual)

If you prefer running components individually:
@@ -317,6 +333,9 @@ Version Guard is configured via environment variables or CLI flags:
| `SCHEDULE_CRON` | Cron expression for scan schedule | `0 6 * * *` (daily 06:00 UTC) |
| `SCHEDULE_ID` | Temporal schedule ID (stable across restarts) | `version-guard-scan` |
| `SCHEDULE_JITTER` | Random jitter to prevent thundering herd | `5m` |
| `SNAPSHOT_STORE` | Snapshot backend: `s3` or `memory` (in-process; for laptop dev / CI smoke tests) | `s3` |
| `INVENTORY_FALLBACK` | When Wiz creds are missing: empty (skip resource and fail-fast) or `mock` (synthesize 1 fake resource per config — dev only, never set in production) | _(empty)_ |
| `EMITTER_WEBHOOK_URL` | Optional. Base URL of an out-of-process emitter that exposes `POST /trigger-act`. When set, the orchestrator workflow notifies it after each snapshot is persisted. Empty disables the webhook — Version Guard still ships findings via in-process emitters and S3. See [Extending Version Guard](#-extending-version-guard) below. | _(empty)_ |
| `--verbose` / `-v` | Enable debug-level logging | `false` |

**Custom Resource Catalog:**
@@ -450,7 +469,26 @@ type DashboardEmitter interface {
- `pkg/emitters/examples/logging_emitter.go` - Logs findings to stdout (included)
- **Your custom emitter** - Send findings to Jira, ServiceNow, Slack, PagerDuty, etc.

### 2. Consuming S3 Snapshots
### 2. Out-of-process Emitter via Webhook (Optional)

For users who already run a separate service that consumes snapshots (e.g. a long-running worker that writes to a different system), Version Guard can **notify** that service every time a snapshot is persisted, instead of (or in addition to) calling in-process emitters. Set `EMITTER_WEBHOOK_URL=https://your-emitter.example.com` and the orchestrator workflow will:

1. POST `{"snapshot_id": "<id>"}` to `<EMITTER_WEBHOOK_URL>/trigger-act`.
2. Expect a `2xx` response (the body is logged but not required to follow any schema).
3. Treat any failure as **non-fatal** — the snapshot is already durable in your snapshot store, and Temporal's retry policy will handle transient errors.

**You build the receiver.** Any HTTP server that handles `POST /trigger-act` works. Block runs an internal companion service for this (private repo, not publicly available) — for OSS, a 30-line Go/Python/Node handler that starts your own workflow / job is enough. Replying with `2xx` is the only contract.
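Under those same assumptions (only `POST /trigger-act` and a `2xx` reply matter), a receiver sketch in Go could look like this; `parseSnapshotID` is our own illustrative helper, not part of Version Guard:

```go
package main

import (
	"encoding/json"
	"errors"
	"io"
	"log"
	"net/http"
)

// parseSnapshotID extracts snapshot_id from the webhook body
// {"snapshot_id": "<id>"}.
func parseSnapshotID(body []byte) (string, error) {
	var payload struct {
		SnapshotID string `json:"snapshot_id"`
	}
	if err := json.Unmarshal(body, &payload); err != nil {
		return "", err
	}
	if payload.SnapshotID == "" {
		return "", errors.New("missing snapshot_id")
	}
	return payload.SnapshotID, nil
}

func main() {
	http.HandleFunc("/trigger-act", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPost {
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
			return
		}
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		id, err := parseSnapshotID(body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		log.Printf("snapshot %s ready; start your own workflow/job here", id)
		w.WriteHeader(http.StatusAccepted) // any 2xx satisfies the contract
	})
	log.Fatal(http.ListenAndServe(":8082", nil))
}
```

Listening on `:8082` matches the emitter port used by the `webhook-e2e-smoke` Makefile target, so `make webhook-e2e-smoke` can exercise this receiver directly.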

**When to choose this vs. in-process emitters:**

| You want… | Use |
|---|---|
| A pluggable callback inside the detector pod (logging, Slack, Jira, simple webhooks) | In-process emitter via `pkg/emitters` (see §1 above) |
| A separate long-running service with its own deployment cadence, scaling, or runtime | Out-of-process webhook emitter |
| Both | Set `EMITTER_WEBHOOK_URL` AND register an in-process emitter — they run independently |
| Neither (just consume snapshots out-of-band) | Skip both; read the JSON from S3 (see §3 below) |

### 3. Consuming S3 Snapshots

Snapshots are stored as JSON in S3:
```
@@ -486,7 +524,7 @@ s3://your-bucket/snapshots/latest.json
**Consume snapshots with:**
- AWS Lambda triggered on S3 events
- Scheduled cron job reading `latest.json`
- Custom Temporal workflow (implement `Stage 3: ACT`)
- Custom Temporal workflow (implement your own follow-up workflow)

## 📖 Documentation

2 changes: 2 additions & 0 deletions cmd/cli/main.go
@@ -212,6 +212,8 @@ func (c *ScanStartCmd) Run(ctx *Context) error {
// --resource-type explicitly. An empty list propagates to the
// orchestrator, which rejects it with ErrNoResourceTypes so the
// caller gets an immediate, descriptive failure.
// CLI-triggered runs do not chain to the emitter webhook — operators
// using the CLI typically just want to verify the detector path.
trigger := scan.NewTrigger(temporalClient, ctx.TemporalTaskQueue, nil)
res, err := trigger.Run(context.Background(), scan.Input{
ScanID: c.ScanID,