AWS
- CloudFront: CDN
- S3: static origin
- ACM: certificate
- CloudFront Function: lockdown (header check)
Spare-time tinkering shop. The same primitives I touch professionally, exercised at personal scale on my own dollars: multi-cloud, GitOps, IaC, secret discipline, content-as-code, on-prem fallback. Not enterprise-grade; breadth, not production scale.
Keep my hands on every primitive I use professionally, on a working personal site that ships every change through the same pipeline.
Four origins behind one Cloudflare Load Balancer: AWS S3 + CloudFront, GCP backend bucket, GCP Cloud Run (us-central1 and us-west1 sharing one backend service via two regional NEGs), and a fourth path on the home k3s cluster that serves the staging domain. Weighted random across the three cloud pools at ~33% each. The on-prem path is documented separately at /lab/homelab. For continuity behavior across all four, see /lab/failover.
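The weighted-random split described above can be sketched in provider config. This is a minimal illustration, assuming the Cloudflare Terraform/OpenTofu provider's `cloudflare_load_balancer` resource; the zone variable and pool resource names are hypothetical, not the actual contents of infra/cloudflare/:

```hcl
# Hypothetical sketch: three cloud pools behind one load balancer,
# random steering with equal weights (~33% each).
resource "cloudflare_load_balancer" "site" {
  zone_id          = var.zone_id
  name             = "example.com"
  fallback_pool_id = cloudflare_load_balancer_pool.aws.id

  default_pool_ids = [
    cloudflare_load_balancer_pool.aws.id,        # S3 + CloudFront
    cloudflare_load_balancer_pool.gcp_bucket.id, # GCP backend bucket
    cloudflare_load_balancer_pool.gcp_run.id,    # Cloud Run via two regional NEGs
  ]

  # "random" with equal pool weights yields roughly even traffic.
  steering_policy = "random"
}
```

With no per-pool weight overrides, random steering distributes requests evenly across the listed pools, which is where the ~33% figure comes from.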
- .github/workflows/ci.yml
- .github/workflows/deploy.yml
- .github/workflows/deploy-preview.yml
- .github/workflows/sync-preview.yml
- .github/workflows/infra.yml
- .github/workflows/infracost.yml
- scripts/build-tui-data.mjs
- scripts/build-rss.mjs
- scripts/merge-cost-breakdown.mjs
- scripts/build-pdf.mjs
- package.json (npm run build)

The cost pipeline opens its own pull requests with refreshed line items. Walked through at /lab/finops.
- infra/bootstrap/ · ~1 resource
- infra/aws/ · ~13 resources
- infra/gcp/ · ~35 resources
- infra/cloudflare/ · ~11 resources
- infra/secrets/ · ~14 resources

OpenTofu (not Terraform). Remote state in Cloudflare R2 with native locking, no DynamoDB. The deploy role manages the data plane (S3, CloudFront, ACM, GCS, LB) but is intentionally not granted IAM-mutation permissions; trust-policy edits stay a deliberate local-apply action so CI cannot rewrite its own boundary.
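The R2-backed state with native locking can be sketched with the S3-compatible backend. This assumes OpenTofu's `use_lockfile` option (lockfile-based locking, no DynamoDB table); the bucket name and account endpoint are hypothetical placeholders:

```hcl
terraform {
  backend "s3" {
    bucket = "tofu-state"               # hypothetical bucket name
    key    = "site/terraform.tfstate"
    region = "auto"                     # R2 uses the "auto" region

    endpoints = {
      s3 = "https://<account-id>.r2.cloudflarestorage.com"  # placeholder
    }

    # Native locking via a .tflock object next to the state,
    # instead of a DynamoDB lock table.
    use_lockfile = true

    # R2 is S3-compatible but not AWS, so skip the AWS-specific checks.
    skip_credentials_validation = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
    skip_metadata_api_check     = true
    skip_s3_checksum            = true
  }
}
```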
No long-lived AWS or GCP credentials in GitHub. Rotation means editing the 1Password item and re-running infra/secrets/; never editing GitHub secrets by hand.
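Keyless CI auth of this shape is usually done with GitHub's OIDC token exchange. A minimal workflow sketch, assuming GitHub's OIDC provider is registered in the AWS account; the role ARN and job name are hypothetical:

```yaml
permissions:
  id-token: write   # allow the job to mint an OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          # Hypothetical role; trust policy pins this repo and branch.
          role-to-assume: arn:aws:iam::123456789012:role/site-deploy
          aws-region: us-east-1
      # Subsequent steps get short-lived STS credentials from the
      # exchange above; nothing long-lived is stored in GitHub.
```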
Every content surface on this site is a curated file in the repo. Profile, experience, speaking, labs metadata, architecture, hand-estimated cost overlay, project case studies, and the writing essays all live as YAML or Markdown under content/ and are loaded at build time. Derivative artifacts (RSS feed, sitemap, TUI command outputs, the dual-mode CV PDFs) are emitted by the build-time pipelines above, not authored separately. No CMS, no runtime; every content change is a pull request.
The same out/ bundle that ships to AWS and GCP also ships to a small k3s cluster at home. On a push to the preview branch, the workflow builds a multi-arch container and pushes it to private GHCR with a numeric :run-N tag; Flux Image Automation in the home-ops repo watches the tag, commits a HelmRelease bump, and rolls the deployment. Push-to-live in roughly 5 to 8 minutes, served from a Mac Mini and two Raspberry Pis.
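The tag-watching side of that loop can be sketched with a Flux `ImagePolicy` that extracts the run number and always selects the highest one. A hedged example, assuming Flux image automation is installed and an `ImageRepository` named `site` scans the GHCR image; the names are hypothetical:

```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: site
spec:
  imageRepositoryRef:
    name: site                        # hypothetical ImageRepository for GHCR
  filterTags:
    pattern: '^run-(?P<n>[0-9]+)$'    # only numeric :run-N tags
    extract: '$n'
  policy:
    numerical:
      order: asc                      # highest run number wins
```

An `ImageUpdateAutomation` pointed at the home-ops repo then commits the HelmRelease bump whenever the policy selects a newer tag.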
The full tour of the home cluster (three-layer architecture, ~38 HelmReleases, hybrid SOPS plus 1Password secret discipline, two-tier R2 plus NFS backups) lives at /lab/homelab.