add new spec

This commit is contained in:
Seth Call 2026-03-01 17:19:33 -06:00
parent 281543d383
commit 983c690451
1 changed files with 714 additions and 0 deletions

This specification defines a Local-First, High-Performance Build Pipeline for a Rails 8 application. We are moving away from the "CI Server as a black box" model toward a Local-to-K8s direct path using Dagger (TypeScript) and Nix.
1. Core Architecture: The "Nix-to-Dagger" Stack
This stack bypasses the traditional docker build (which is iterative and layer-based) in favor of a declarative binary closure (Nix) orchestrated by a programmable build engine (Dagger).
Build Engine: Dagger (TypeScript SDK).
Image Builder: Nix (via nix2container). It builds OCI images by collecting the exact binary dependencies of your app without running a single apt-get command.
Execution Environment: BuildKit. Dagger runs a local BuildKit instance (integrated into Docker/OrbStack) that handles the actual "heavy lifting" and caching.
Version Scheme: CalVer.Time.Hash (e.g., 2026.03.01.1651.a1b2c3d).
2. The Local Experience: "I hit Save, now what?"
In this modern spec, Dagger does not sit idle. You don't wait for a git push.
Explicit vs. Inferred: You typically run dagger call test or dagger call deploy manually. However, for a "save-to-action" experience, you wrap the Dagger CLI in a lightweight watcher like Chokidar or Nodemon.
The BuildKit "Magic": When you hit save, Dagger sends the changed file to BuildKit. Because BuildKit is a content-addressable graph, it realizes that only one small node (your app/models/user.rb) has changed. It re-uses the cached gem-bundle, the cached OS layers, and the cached assets.
Latency: Re-running RSpec for a single file change via Dagger often takes < 2 seconds because the environment is already "warm" in the local Dagger Engine.
3. The Implementation: ci/main.ts
This snippet demonstrates multi-platform support (M1/x64), Nix-based image building, and automated versioning.
TypeScript
```
import { dag, Container, Directory, object, func, Platform } from "@dagger.io/dagger";

@object()
export class RailsPipeline {
  /**
   * Run local RSpec tests in a container
   */
  @func()
  async test(source: Directory): Promise<string> {
    return await this.buildBase(source)
      .withExec(["bundle", "exec", "rspec"])
      .stdout();
  }

  /**
   * Build and Deploy with CalVer + Hash
   */
  @func()
  async deploy(source: Directory, registry: string): Promise<string> {
    const platforms: Platform[] = ["linux/amd64", "linux/arm64"];
    const version = this.generateVersion();

    // Build for both M1 and Intel/Linux simultaneously.
    // buildProduction (not shown) builds the flake's production image target.
    const builds = platforms.map(async (platform) => {
      const img = await this.buildProduction(source, platform);
      return img.publish(`${registry}/rails-app:${version}-${platform.replace("/", "-")}`);
    });

    await Promise.all(builds);
    return `Deployed version: ${version}`;
  }

  private buildBase(source: Directory, platform?: Platform): Container {
    return dag.container({ platform })
      .from("nixos/nix:latest") // Using Nix as the "low-level" builder
      .withMountedCache("/nix", dag.cacheVolume("nix-store"))
      .withDirectory("/src", source)
      .withWorkdir("/src")
      // Rails 8 + YJIT: enabled by default in our Nix flake
      .withEnvVariable("RUBY_YJIT_ENABLE", "1")
      .withExec(["nix", "build", ".#app"]);
  }

  private generateVersion(): string {
    const now = new Date();
    const datePart = now.toISOString().split("T")[0].replace(/-/g, ".");
    const timePart =
      now.getHours().toString().padStart(2, "0") +
      now.getMinutes().toString().padStart(2, "0");
    // In a real scenario, fetch the git hash via dag.host().directory(".").file(".git/...")
    return `${datePart}.${timePart}.deadbeef`;
  }
}
```
4. Deep Dive: Getting the most from BuildKit
To achieve "state of the art" speed, we use Cache Volumes. This is how Dagger makes your local Mac feel like a supercomputer.
A. Gem Caching (The Persistent /bundle)
Instead of COPY Gemfile, we mount a persistent volume that survives between Dagger runs.
TypeScript
```
.withMountedCache("/usr/local/bundle", dag.cacheVolume("ruby-gems-v1"))
.withExec(["bundle", "install"])
```
If only your Ruby source changes (and Gemfile.lock is untouched), bundle install finds every gem already present in the cache volume and returns almost instantly.
B. Assets (Propshaft) Parallelism
Rails 8 uses Propshaft. In Dagger, you can precompile assets in a separate branch of the graph while your tests are running. BuildKit only merges them at the final "Image Assembly" stage.
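The branch-and-merge behavior can be sketched in plain TypeScript, with Promises standing in for Dagger graph nodes (the function bodies are illustrative stand-ins, not real pipeline calls):

TypeScript
```
// Two independent branches of the build graph start concurrently;
// the final assembly step awaits both, mirroring BuildKit's merge node.
async function runTests(): Promise<string> {
  return "specs: green"; // stand-in for `bundle exec rspec`
}

async function precompileAssets(): Promise<string> {
  return "assets: compiled"; // stand-in for Propshaft precompilation
}

async function assembleImage(): Promise<string[]> {
  // Fan-out: both branches run in parallel; fan-in: merge at assembly time.
  return Promise.all([runTests(), precompileAssets()]);
}
```

In the real pipeline each branch would be a Container chain, and the merge would copy the compiled assets into the final image.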
C. Multi-Platform Strategy
Since you are on an M1 (arm64) but your Linux boxes are x64, Dagger handles the translation.
Local Tests: Run natively on arm64 (blazing fast).
Production Build: Dagger triggers a cross-build or uses remote builders to create the amd64 slice for your K8s cluster.
5. Deployment & K8s "Poke"
To avoid managing versions manually, your Dagger script should interact with your cluster directly:
Tag: Use the YYYY.MM.DD.HHMM.hash generated above.
Patch: Use the Dagger dag.k8s() module (or simply a kubectl container) to run:
kubectl set image deployment/rails-app rails-app=my-reg/rails-app:${version}.
Environment Test: Once the "poke" is successful, Dagger spins up a new container locally that runs a curl or playwright suite against your staging URL.
Next Steps
Would you like me to create a sample flake.nix that defines your Rails 8 environment with Ruby 3.4/YJIT and nix2container to complement this TypeScript code?
The "Low-Level" Blueprint: Rails 8 + Nix + Dagger
To achieve a "lower than Docker" build, we use Nix to define the exact binary closure of your Rails app. This avoids the overhead of traditional OS layers and ensures that your M1 Mac produces the exact same bits as your Intel Linux nodes.
1. The flake.nix (The Nix Definition)
This file defines your "Environment as Code." It specifies Ruby 3.4 (with YJIT), your system dependencies (Postgres, LibYAML, etc.), and uses nix2container (a faster, more modern alternative to dockerTools) to assemble the image.
Nix
```
{
  description = "SOTA Rails 8 Native OCI Image";

  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    nix2container.url = "github:nlewo/nix2container";
  };

  outputs = { self, nixpkgs, nix2container }:
    let
      supportedSystems = [ "x86_64-linux" "aarch64-linux" "aarch64-darwin" ];
      forAllSystems = nixpkgs.lib.genAttrs supportedSystems;
    in {
      packages = forAllSystems (system:
        let
          pkgs = import nixpkgs { inherit system; };
          n2c = nix2container.packages.${system};

          # 1. Define the Ruby Environment
          rubyEnv = pkgs.ruby_3_4.withPackages (ps: with ps; [
            # Add specific gems here if they require native extensions
          ]);

          # 2. The Application Runtime Dependencies
          runtimeDeps = with pkgs; [
            rubyEnv
            postgresql_16.lib
            libyaml
            openssl
            jemalloc # Highly recommended for Rails memory perf
            vips     # For Active Storage / image processing
          ];
        in {
          # This creates a 'layered' OCI image without a Dockerfile
          appImage = n2c.buildImage {
            name = "rails-app";
            config = {
              Cmd = [ "${rubyEnv}/bin/rails" "server" ];
              Env = [
                "RUBY_YJIT_ENABLE=1"
                "LD_PRELOAD=${pkgs.jemalloc}/lib/libjemalloc.so"
                "RAILS_ENV=production"
                "RAILS_SERVE_STATIC_FILES=true"
              ];
              ExposedPorts = { "3000/tcp" = {}; };
            };
            # Nix automatically finds all dependencies (the closure)
            contents = runtimeDeps;
            layers = [
              (n2c.buildLayer { deps = runtimeDeps; })
            ];
          };
        }
      );
    };
}
```
2. The ci/main.ts (The Dagger Orchestrator)
This TypeScript code acts as your "Local CI." It handles the logic of when to run tests, how to version the build, and how to "poke" Kubernetes.
TypeScript
```
import { dag, Container, Directory, object, func, Platform } from "@dagger.io/dagger";

@object()
export class RailsSota {
  /**
   * Main entry point: Build, Test, and Deploy
   */
  @func()
  async shipIt(source: Directory, registry: string): Promise<string> {
    const version = this.generateCalVer();

    // 1. Run tests locally (fastest on M1)
    await this.test(source);

    // 2. Multi-platform build (M1 + Intel)
    const platforms: Platform[] = ["linux/amd64", "linux/arm64"];
    const pushAddress = `${registry}/rails-app`;
    const publications = platforms.map(async (platform) => {
      const image = await this.buildWithNix(source, platform);
      return image.publish(`${pushAddress}:${version}-${platform.replace("/", "-")}`);
    });
    await Promise.all(publications);

    // 3. Poke K8s
    await this.updateK8s(version, pushAddress);

    // 4. Run staging smoke test
    return await this.smokeTest("https://staging.yourstartup.io");
  }

  @func()
  async test(source: Directory): Promise<string> {
    return await dag.container()
      .from("nixos/nix")
      .withMountedCache("/nix", dag.cacheVolume("nix-store"))
      .withDirectory("/src", source)
      .withWorkdir("/src")
      // Dagger leverages BuildKit to parallelize rspec
      .withExec(["nix", "develop", "--command", "bundle", "exec", "rspec"])
      .stdout();
  }

  private async buildWithNix(source: Directory, platform: Platform): Promise<Container> {
    // Builds the 'appImage' (nix2container) output from our flake.
    // withExec already returns a Container, so no conversion step is needed.
    return dag.container({ platform })
      .from("nixos/nix")
      .withMountedCache("/nix", dag.cacheVolume("nix-store"))
      .withDirectory("/src", source)
      .withWorkdir("/src")
      .withExec(["nix", "build", ".#appImage"]);
  }

  private generateCalVer(): string {
    const d = new Date();
    const pad = (n: number) => n.toString().padStart(2, "0");
    const hash = "abc123"; // In reality: derive the short SHA from the source checkout
    return `${d.getFullYear()}.${pad(d.getMonth() + 1)}.${pad(d.getDate())}.${pad(d.getHours())}${pad(d.getMinutes())}-${hash}`;
  }

  private async updateK8s(version: string, imagePath: string): Promise<void> {
    // Direct 'poke' to the cluster using a lightweight kubectl container
    await dag.container()
      .from("bitnami/kubectl")
      .withEnvVariable("KUBECONFIG_DATA", process.env.KUBECONFIG_BASE64 || "")
      .withExec(["kubectl", "set", "image", "deployment/rails", `web=${imagePath}:${version}`])
      .sync(); // force the lazy pipeline to actually execute
  }

  private async smokeTest(url: string): Promise<string> {
    // Runs from your local machine, but inside a Dagger container for isolation
    return await dag.container()
      .from("curlimages/curl")
      .withExec(["curl", "-f", url])
      .stdout();
  }
}
```
3. How this leverages BuildKit for Rails 8
The "State of the Art" speed comes from how BuildKit (orchestrated by Dagger) handles your files:
Direct Source Mounting: When you hit save on user.rb, Dagger doesn't re-upload your whole project. It sends only the diff to the local BuildKit daemon.
The Nix Store Cache: By mounting /nix as a cacheVolume, the "build" step for your Ruby environment becomes a simple check. If the flake.lock hasn't changed, Nix returns the path to the existing binary in microseconds.
Laziness by Design: BuildKit won't actually execute the "Build" step if you only requested the "Test" step and they don't share dependencies. It builds only what is strictly necessary for the current command.
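This laziness can be illustrated with a toy dependency graph in plain TypeScript (not the Dagger API): a node executes only when it is requested directly or needed by the requested target.

TypeScript
```
// A minimal lazy build graph: requesting "test" runs its dependencies
// but never touches "build", just as BuildKit skips unrequested nodes.
type BuildNode = { deps: string[]; run: () => void };

function request(graph: Record<string, BuildNode>, target: string, ran: Set<string>): void {
  if (ran.has(target)) return; // content-addressed: a node never runs twice
  for (const dep of graph[target].deps) request(graph, dep, ran);
  graph[target].run();
  ran.add(target);
}

const ran = new Set<string>();
const graph: Record<string, BuildNode> = {
  gems:  { deps: [],       run: () => {} }, // bundle install
  build: { deps: ["gems"], run: () => {} }, // image assembly
  test:  { deps: ["gems"], run: () => {} }, // rspec
};

request(graph, "test", ran); // 'build' is never executed
```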
Summary of the "Developer Spec"
Version Scheme: YYYY.MM.DD.HHMM-hash. High granularity, no manual tagging.
M1 Support: Tests run natively on ARM; Dagger cross-compiles the OCI image for Linux/x64 production nodes using the same Nix definition.
Infrastructure: Redis and RabbitMQ remain as external services; the Nix image includes the necessary client libraries (libpq, libyaml) but stays "distroless" (no shell, no bloat).
Local Watcher Details and Tips:
To complete your local-first pipeline, you need a "Trigger Layer." Since you're using Dagger with TypeScript, the most idiomatic way to handle "Save-to-Action" is a lightweight Node.js watcher that invokes the Dagger CLI.
This replaces the "Build Server" with a Reactive Development Loop.
1. The Watcher Spec: ci/watch.ts
This script uses chokidar to monitor your Rails directory. It's smart enough to distinguish between a "Fast Test" (for Ruby changes) and a "Full Deploy" (for config/infra changes).
TypeScript
```
import chokidar from "chokidar";
import { execSync } from "child_process";

// Configuration
const WATCH_PATHS = ["app/**/*.rb", "spec/**/*.rb", "config/**/*.rb", "db/schema.rb"];
const DAGGER_CLI = "dagger call";

console.log("🚀 SOTA Rails Watcher Started. Monitoring for changes...");

const watcher = chokidar.watch(WATCH_PATHS, {
  ignored: /(^|[\/\\])\../, // ignore dotfiles
  persistent: true,
});

watcher.on("change", (path) => {
  console.log(`\n📄 File changed: ${path}`);

  if (path.endsWith("_spec.rb") || path.startsWith("app/")) {
    console.log("🧪 Running RSpec via Dagger...");
    try {
      // The whole suite re-runs, but BuildKit's cache makes unchanged work free
      execSync(`${DAGGER_CLI} test --source=.`, { stdio: "inherit" });
    } catch (e) {
      console.error("❌ Test failed.");
    }
  }

  if (path.startsWith("config/deploy")) {
    console.log("🚢 Infrastructure change detected. Building & Pushing...");
    execSync(`${DAGGER_CLI} ship-it --source=. --registry=your-registry.io`, { stdio: "inherit" });
  }
});
```
2. Why this is "State of the Art" (BuildKit Internals)
When you run that dagger call from the watcher, BuildKit performs a Content-Addressable Differential Transfer.
No "Full Upload": On your M1, BuildKit only sends the actual bytes of the changed file.
Layer Re-use: Because you are using Nix, the "Environment" layer (Ruby + Gems + YJIT) has a fixed hash. BuildKit sees that hash and completely skips the environment setup, jumping straight to the rspec execution.
The "Zero-Compile" Win: Since Ruby isn't compiled, the bottleneck is usually "boot time." Because your Dagger pipeline uses the Nix store, your bundle exec happens in a pre-warmed container environment where the gems are already linked.
3. The Developer Workflow (Daily Usage)
Morning: You run nix develop to get your local shell in sync with the container.
Coding: You start the watcher: bun run ci/watch.ts.
The Save: You hit Cmd+S on app/models/user.rb.
The Feedback: Within 1.5 seconds, your M1's BuildKit daemon has:
Received the user.rb diff.
Invalidated only the "App Source" node in the graph.
Re-run the spec associated with that model.
The Push: When you're ready, you run dagger call ship-it. It builds the linux/amd64 and linux/arm64 images in parallel, tags them with the CalVer.Hash scheme, and updates your K8s deployment.
4. Final Recommendation: Versioning & K8s
For your versioning scheme, I recommend the "Atomic Deploy" pattern:
CalVer: 2026.03.01.1659 (Year.Month.Day.HourMinute)
Hash: The Short Git SHA.
The Benefit: If you need to roll back, you don't look for "v1.2.3." You look for the timestamp of exactly when you were working. It makes your K8s logs perfectly chronological.
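The chronological property depends on zero-padding every field. A small TypeScript sketch (the helper name calverTag is illustrative) shows why padded tags sort correctly as plain strings:

TypeScript
```
// Build a zero-padded CalVer tag: YYYY.MM.DD.HHMM-hash.
// Without padding, "2026.3.1" would sort *after* "2026.10.1" as a string.
function calverTag(d: Date, shortSha: string): string {
  const pad = (n: number) => n.toString().padStart(2, "0");
  const date = `${d.getFullYear()}.${pad(d.getMonth() + 1)}.${pad(d.getDate())}`;
  const time = `${pad(d.getHours())}${pad(d.getMinutes())}`;
  return `${date}.${time}-${shortSha}`;
}

// Fixed-width fields make lexicographic order equal chronological order,
// so sorting your registry tags yields a ready-made rollback timeline.
const tags = [
  calverTag(new Date(2026, 2, 1, 16, 59), "a1b2c3d"), // Mar 1, 16:59
  calverTag(new Date(2026, 1, 28, 9, 5), "deadbee"),  // Feb 28, 09:05
].sort();
```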
K8s Tips and Tricks:
This Kubernetes specification is optimized for a Rails 8 application running with YJIT and jemalloc. Since you are using a "distroless" style Nix image, we need to ensure the environment is tuned for high performance while keeping the footprint small.
The following YAML includes the specific resource allocations and environment variables required to leverage your SOTA build.
1. The Optimized K8s Deployment (k8s/rails-app.yaml)
This manifest assumes you are using the CalVer tagging scheme (2026.03.01.1700-abc123) generated by your Dagger pipeline.
YAML
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails-app
  labels:
    app: rails-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rails-app
  template:
    metadata:
      labels:
        app: rails-app
    spec:
      containers:
        - name: web
          image: your-registry.io/rails-app:2026.03.01.1700-abc123 # Managed by Dagger
          ports:
            - containerPort: 3000
          env:
            # Rails 8 Performance & Config
            - name: RAILS_ENV
              value: "production"
            - name: RUBY_YJIT_ENABLE
              value: "1" # Explicitly enable YJIT for Rails 8 speed
            - name: LD_PRELOAD
              value: "/nix/store/.../lib/libjemalloc.so" # Path from your Nix flake
            - name: MALLOC_CONF
              value: "dirty_decay_ms:1000,muzzy_decay_ms:1000" # jemalloc tuning for K8s
            # Infrastructure (sticking with your existing tech)
            - name: REDIS_URL
              value: "redis://redis-service:6379/1"
            - name: RABBITMQ_URL
              value: "amqp://user:pass@rabbitmq-service:5672"
          resources:
            requests:
              cpu: "500m"
              memory: "512Mi"
            limits:
              cpu: "2"      # Allow YJIT to burst during warm-up
              memory: "1Gi" # jemalloc helps keep this stable
          livenessProbe:
            httpGet:
              path: /up # Rails 8 default health check
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: rails-service
spec:
  selector:
    app: rails-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: ClusterIP
```
2. Why these specific settings?
YJIT & CPU Bursts
Rails 8 with YJIT (Yet Another Just-In-Time compiler) performs best when it has room to "warm up." In the first few minutes of a process, YJIT is compiling hot paths into machine code. If you throttle the CPU too strictly (e.g., limit: 500m), the warm-up period takes much longer. Setting a higher limit (e.g., 2) allows the process to optimize quickly and then settle into a lower steady-state CPU usage.
jemalloc & Memory Stability
Standard malloc in Linux can lead to fragmentation in long-running Ruby processes. jemalloc (which we included in your Nix flake) handles memory more efficiently. The MALLOC_CONF environment variable ensures that unused memory is returned to the OS quickly, preventing the "memory creep" common in Rails applications.
3. The "State of the Art" Summary
By combining these pieces, your new pipeline technology looks like this:
Definition: flake.nix defines a bit-for-bit reproducible, distroless environment.
Orchestration: ci/main.ts (Dagger) handles multi-platform builds (M1/x64) and versioning.
Trigger: ci/watch.ts gives you sub-second feedback on your Mac as you code.
Runtime: K8s runs the image with YJIT and jemalloc, providing a 20%+ performance boost over traditional Rails 7/Docker setups.
Final Pro-Tip: The "Zero-Downtime" Poke
In your Dagger updateK8s function, you can add kubectl rollout status deployment/rails-app right after the set image command. This allows Dagger to wait for the new containers to be healthy before it triggers your "Environment Tests," ensuring you never run tests against a half-booted staging environment.
Would you like me to show you how to set up the "Environment Test" suite in Dagger to run Playwright or Capybara against this specific K8s deployment?
Testing Tips and Tricks:
To finalize your Local-First, SOTA Rails 8 Pipeline, we need the "Validation Layer." This is where Dagger truly shines: it can wait for your Kubernetes deployment to finish, then immediately spin up a transient container to run System Tests (Playwright or Capybara) against your live staging URL, all from your Mac, without a CI server.
1. The Deployment Orchestration (with Rollout Check)
In your ci/main.ts, we add a deployAndVerify step. This ensures we don't start testing until the Rails 8 pods are actually "Ready."
TypeScript
```
/**
 * Pushes the image, updates K8s, and waits for the rollout to finish.
 */
@func()
async deployAndVerify(source: Directory, registry: string): Promise<string> {
  const version = this.generateCalVer();
  const pushAddress = `${registry}/rails-app`;

  // 1. Build & Push
  const img = await this.buildWithNix(source, "linux/amd64");
  await img.publish(`${pushAddress}:${version}`);

  // 2. Update K8s & Wait
  const kubectl = dag.container()
    .from("bitnami/kubectl")
    .withEnvVariable("KUBECONFIG_DATA", process.env.KUBECONFIG_BASE64 || "")
    .withExec(["kubectl", "set", "image", "deployment/rails-app", `web=${pushAddress}:${version}`])
    // THE POKE: wait for the rollout to complete (timeout 2m)
    .withExec(["kubectl", "rollout", "status", "deployment/rails-app", "--timeout=120s"]);

  await kubectl.sync(); // Force execution and wait for exit code 0
  return version;
}
```
2. The "Environment Test" (Playwright vs. Capybara)
In 2026, Playwright is the industry standard for speed and reliability, but Rails 8 still has world-class support for Capybara. Dagger allows you to run either as an ephemeral step.
Option A: Playwright (The "Clean" Choice)
Since your pipeline is already in TypeScript, you can run Playwright natively in Dagger. This is incredibly fast because it doesn't need to load the entire Rails environment, just the browser.
TypeScript
```
@func()
async stagingSmokeTest(url: string): Promise<string> {
  return await dag.container()
    .from("mcr.microsoft.com/playwright:v1.49.0-jammy")
    // Pass the staging URL to the test suite
    .withEnvVariable("BASE_URL", url)
    .withDirectory("/tests", dag.host().directory("./e2e"))
    .withWorkdir("/tests")
    .withExec(["npx", "playwright", "test"])
    .stdout();
}
```
Option B: Capybara (The "Rails-Native" Choice)
If you prefer RSpec/Capybara, Dagger spins up a Ruby container, installs the browser drivers via Nix, and hits your URL.
TypeScript
```
@func()
async runCapybara(source: Directory, stagingUrl: string): Promise<string> {
  return await dag.container()
    .from("nixos/nix")
    .withMountedCache("/nix", dag.cacheVolume("nix-store"))
    .withDirectory("/src", source)
    .withWorkdir("/src")
    .withEnvVariable("CAPYBARA_APP_HOST", stagingUrl)
    // Run only system tests tagged as 'smoke'
    .withExec(["nix", "develop", "--command", "bundle", "exec", "rspec", "spec/system", "--tag", "smoke"])
    .stdout();
}
```
3. The Integrated Developer Workflow
With the Watcher (ci/watch.ts) running in the background on your M1, your "Save" triggers this sequence in BuildKit:
Fast Path: If you change a controller/model → Dagger runs test locally (seconds).
Infrastructure Path: If you change configuration/infrastructure → Dagger runs shipIt:
Build: Nix creates a tiny OCI image.
Push: Pushed directly from your Mac to the registry.
Poke: kubectl updates the cluster.
Verify: rollout status blocks the script until the pods are up.
Test: Playwright hits the new pods to ensure no 500 errors.
Summary of your "New Tech Stack"
Orchestrator: Dagger (TypeScript). Programmable, local-first, parallelized by BuildKit.
Builder: Nix (nix2container). Bit-for-bit reproducible, smaller than Dockerfiles.
Runtime: Rails 8 + YJIT + jemalloc. 20% speed boost, optimized memory management.
Versioning: CalVer + Hash. Explicit, chronological, and machine-readable.
Deployment: Direct Local-to-K8s. Eliminates CI server latency and "YAML hell."
docker-compose Tips & Tricks:
To tie this all together, we need a local "backing services" layer. Since you are using Dagger on an M1 Mac, you want these services to be available both for your manual local development (bin/dev) and for the Dagger engine when it runs your test suite.
The "State of the Art" way to do this is to keep your dependencies in a lightweight docker-compose.yml that Dagger can also "hook into" as a Service Binding.
1. The Backing Services Layer (docker-compose.yml)
This file provides the Redis and RabbitMQ infrastructure you need, using Alpine-based images to keep the footprint minimal on your Mac.
YAML
```
services:
  db:
    image: postgres:16-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: password
    ports:
      - "5432:5432"
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
  rabbitmq:
    image: rabbitmq:3-management-alpine
    ports:
      - "5672:5672"   # AMQP protocol
      - "15672:15672" # Management UI
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
volumes:
  postgres_data:
```
2. The Dagger "Service Binding" Strategy
One of Dagger's most powerful features is Service Aliasing. When you run your RSpec tests in Dagger, you don't want to point them at localhost. You want to point them at a transient "Service Container" that Dagger manages.
Here is how you update your ci/main.ts to include these dependencies:
TypeScript
```
/**
 * Run RSpec with Redis and RabbitMQ attached
 */
@func()
async testWithDeps(source: Directory): Promise<string> {
  // 1. Define the services (Dagger manages these lifecycles)
  const redis = dag.container().from("redis:7-alpine").withExposedPort(6379).asService();
  const rabbit = dag.container().from("rabbitmq:3-alpine").withExposedPort(5672).asService();
  const postgres = dag.container().from("postgres:16-alpine")
    .withEnvVariable("POSTGRES_PASSWORD", "password")
    .withExposedPort(5432)
    .asService();

  // 2. Run the test container with bindings
  return await dag.container()
    .from("nixos/nix")
    .withServiceBinding("redis", redis)     // App sees 'redis:6379'
    .withServiceBinding("rabbitmq", rabbit) // App sees 'rabbitmq:5672'
    .withServiceBinding("db", postgres)     // App sees 'db:5432'
    .withEnvVariable("REDIS_URL", "redis://redis:6379/1")
    .withEnvVariable("DATABASE_URL", "postgres://postgres:password@db:5432/test")
    .withDirectory("/src", source)
    .withWorkdir("/src")
    .withExec(["nix", "develop", "--command", "bundle", "exec", "rspec"])
    .stdout();
}
```
3. The "Spec" Recap: Why this is the 2026 Meta
By implementing this, you have solved the three biggest headaches in Rails DevOps:
Environmental Drift: Your flake.nix ensures that if it runs on your Mac, it runs in K8s. Period. No "it worked on my machine" because the nix closure is identical.
Pipeline Latency: Because BuildKit and Dagger live on your M1, your "Save-to-Test" loop is nearly instantaneous. You are no longer waiting for a GitHub Action runner to pick up a job.
Deployment Complexity: Your CalVer.Hash scheme and direct kubectl "pokes" remove the need for complex release management software. You are deploying specific points in time of your work.
Final Project Structure
Plaintext
```
.
├── .dagger/             # Dagger engine config
├── ci/
│   ├── main.ts          # The "ShipIt" logic (Dagger)
│   └── watch.ts         # The "Save-to-Action" watcher
├── k8s/
│   └── rails-app.yaml   # YJIT & jemalloc tuned manifest
├── docker-compose.yml   # Local dev backing services
├── flake.nix            # The "low-level" image definition
└── Gemfile              # Rails 8 (Propshaft, no Redis/Sidekiq needed)
```
Next Step
Since you are a co-founder and efficiency is likely your top priority, would you like me to generate the Makefile or justfile aliases so that you and your team can just type just ship to trigger this entire flow? (This makes onboarding new devs to this "SOTA" stack take about 5 minutes).
justfile Tips & Tricks:
To wrap this all up, we'll use a justfile (the modern, ergonomic alternative to make). This is the "User Interface" for your pipeline. By defining these aliases, a new developer on your team only needs to install nix and just, and they are immediately productive.
1. The "Command Center" (justfile)
This file maps your human intent ("I want to ship this") to the high-performance Dagger engine.
Code snippet
```
set shell := ["bash", "-c"]

# The "One-Command Onboarding" - sets up the Mac for SOTA Rails 8
setup:
    @echo "🔧 Installing Nix, Dagger, and Just..."
    sh <(curl -L https://nixos.org/nix/install) --daemon
    curl -L https://dl.dagger.io/dagger/install.sh | sh
    brew install just orbstack # OrbStack is the fastest Docker engine for M1
    docker-compose up -d
    @echo "✅ Environment ready. Run 'just dev' to start coding."

# Start the 'Save-to-Action' reactive loop
dev:
    bun run ci/watch.ts

# Explicitly run the RSpec suite in Dagger (with Redis/Rabbit bindings)
test:
    dagger call test-with-deps --source=.

# The 'Big Red Button' - Build, Tag (CalVer), Push, Poke K8s, and Smoke Test
ship:
    @echo "🚀 Shipping to Production..."
    dagger call ship-it --source=. --registry="your-registry.io"

# Tail the logs of the new Rails 8 pods
logs:
    kubectl logs -f -l app=rails-app --tail=100

# Print the Rails 8 YJIT runtime stats inside a running pod
stats:
    kubectl exec -it deployment/rails-app -- rails runner "puts RubyVM::YJIT.runtime_stats"
```
2. Why this is the "Co-Founder's Choice"
As a startup co-founder, you need velocity without technical debt. This "SOTA" stack provides exactly that:
Zero "CI Wait" Time: You no longer pay for (or wait for) GitHub Actions minutes for standard builds. Your M1 does the work in the background while you're still typing.
Binary Parity: Because the Nix store is the source of truth, "Production" is literally just a copy of the binaries you tested locally.
Observability: Rails 8's /up endpoint and YJIT stats (accessible via just stats) give you immediate feedback on whether your $1k/mo K8s nodes are actually performing.
3. The "State of the Art" Spec Summary
Local DX: just + chokidar. Command aliases and reactive "save-to-test" loops.
Core Build: Dagger (TS). Programmable, multi-platform (M1/x64) graph builder.
Low-Level: Nix (n2c). "Lower than Docker" binary closures with zero OS bloat.
Rails Perf: YJIT + jemalloc. Optimized Ruby 3.4 runtime with stable memory.
Versioning: CalVer + Hash. No manual versioning; chronological and traceable.
Infra: K8s + Redis/RMQ. Clean separation of compute and state.
Secret Management Tips & Tricks:
To secure a "local-first" pipeline, you need to bridge the gap between your Mac's secure storage (Keychain, 1Password, or environment) and the Dagger Engine without ever leaking those strings into your build logs or image layers.
In 2026, the Dagger Secret API is the state-of-the-art way to handle this. It treats secrets as "opaque handles" that are never written to disk or the BuildKit cache.
1. The Secrets Spec (ci/main.ts)
Update your Dagger class to accept Secret types instead of strings. Dagger will then mask these in all output logs automatically.
TypeScript
```
import { dag, Secret, Directory, object, func } from "@dagger.io/dagger";

@object()
export class RailsSota {
  /**
   * Securely Ship to K8s using Mac-local credentials
   * @param kubeconfig The Kubeconfig file as a Dagger Secret
   * @param registryToken The Registry Password/Token as a Dagger Secret
   */
  @func()
  async shipSecure(
    source: Directory,
    kubeconfig: Secret,
    registryToken: Secret,
    registryUser: string = "co-founder"
  ): Promise<string> {
    const pushAddress = "your-registry.io/rails-app";
    const version = this.generateCalVer();

    // 1. Build and Publish (registry auth uses the masked secret;
    //    the token never appears in logs or image layers)
    const img = await this.buildWithNix(source, "linux/amd64");
    await img
      .withRegistryAuth("your-registry.io", registryUser, registryToken)
      .publish(`${pushAddress}:${version}`);

    // 2. K8s Poke (mounting the kubeconfig secret as an in-memory file)
    await dag.container()
      .from("bitnami/kubectl")
      .withMountedSecret("/root/.kube/config", kubeconfig)
      .withExec(["kubectl", "set", "image", "deployment/rails-app", `web=${pushAddress}:${version}`])
      .sync();

    return `Successfully shipped ${version} securely.`;
  }
}
```
2. Invoking Secrets from your Mac (justfile)
You don't want to pass these as environment variables (which can be leaked via ps or shell history). Instead, use the Dagger CLI's secret provider to pull directly from your local environment or files.
Update your justfile:
Code snippet
```
# Run the secure ship command
ship:
    @echo "🔐 Fetching local credentials..."
    dagger call ship-secure \
        --source=. \
        --kubeconfig=file:$KUBECONFIG \
        --registry-token=env:REGISTRY_AUTH_TOKEN \
        --registry-user="edwards-co"
```
3. Why this works for a Startup Co-Founder
Zero-Downtime Credential Rotation: If you rotate your registry token on your Mac, the next just ship simply works. You don't have to update a "Secret Manager" in a CI Web UI.
Security by Default: Dagger's Secret type is a first-class object. If you accidentally console.log(registryToken), Dagger will print *** instead of your actual password.
Onboarding Simplicity: When you hire your first engineer, you just give them access to the K8s cluster and the Registry. Their local Mac becomes the "Build Server," and the justfile handles the secure handshake.
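The masking guarantee can be mimicked in plain TypeScript to see why accidental interpolation is safe. This is a sketch of the idea only, not Dagger's actual Secret implementation:

TypeScript
```
// An opaque handle: every implicit string conversion yields a mask,
// so the raw value only escapes through an explicit, auditable call.
class OpaqueSecret {
  #value: string;
  constructor(value: string) { this.#value = value; }
  toString(): string { return "***"; } // template literals, string concat
  toJSON(): string { return "***"; }   // JSON.stringify
  reveal(): string { return this.#value; }
}

const registryToken = new OpaqueSecret("hunter2");
const logLine = `auth with token ${registryToken}`; // "auth with token ***"
```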
Final Architecture Review
You now have a complete, professional-grade DevOps stack for Rails 8:
Nix (flake.nix): Defines the low-level, high-perf binary environment (YJIT/jemalloc).
Dagger (ci/main.ts): Orchestrates the multi-platform build and secure deployment.
Just (justfile): Provides the simple CLI interface for your team.
K8s (k8s/rails-app.yaml): A tuned production environment optimized for Rails 8.
Secrets: A local-first, zero-leak credential system.