
Advanced Features

Shipwright includes several advanced features for production-grade autonomous agent operations. These features support resilience, distributed execution, and operational visibility.

Heartbeats

Agent heartbeats provide liveness monitoring for running pipeline jobs. The pipeline writes periodic heartbeat files that can be checked by the daemon, dashboard, and doctor to detect stale or crashed jobs.

How It Works

  1. When a pipeline starts, it begins writing heartbeat files every 60 seconds
  2. Each heartbeat file contains the job ID, timestamp, current stage, and process info
  3. The daemon checks heartbeats to detect stale jobs (no update in 5+ minutes)
  4. The doctor includes heartbeat health in its diagnostic output
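The write side of this loop can be sketched as follows. This is a minimal illustration, not shipwright's implementation: the field names match the Storage section of this page, but the function name and `base_dir` parameter are ours.

```python
import json
import os
from datetime import datetime, timezone

HEARTBEAT_DIR = os.path.expanduser("~/.shipwright/heartbeats")

def write_heartbeat(job_id, stage, issue, base_dir=HEARTBEAT_DIR):
    """Write one heartbeat file for a running job.

    In the real pipeline this runs on a 60-second timer; here we write
    a single beat with the documented fields.
    """
    os.makedirs(base_dir, exist_ok=True)
    path = os.path.join(base_dir, f"{job_id}.json")
    beat = {
        "job_id": job_id,
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "stage": stage,
        "pid": os.getpid(),
        "issue": issue,
    }
    with open(path, "w") as f:
        json.dump(beat, f)
    return path
```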

Storage

Heartbeat files are stored at ~/.shipwright/heartbeats/<job-id>.json:

{
  "job_id": "pipeline-42-1707500000",
  "timestamp": "2026-02-09T12:00:00Z",
  "stage": "build",
  "pid": 12345,
  "issue": 42
}
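On the read side, the daemon's stale-job check reduces to comparing the stored timestamp against the 5-minute threshold. A minimal sketch — the function name and `now` parameter are ours, not shipwright's:

```python
import json
from datetime import datetime, timezone

STALE_AFTER_SECONDS = 5 * 60  # matches the daemon's 5-minute threshold

def is_stale(heartbeat_path, now=None):
    """Return True if the heartbeat has not been updated within the threshold."""
    with open(heartbeat_path) as f:
        beat = json.load(f)
    ts = datetime.strptime(beat["timestamp"], "%Y-%m-%dT%H:%M:%SZ")
    ts = ts.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (now - ts).total_seconds() > STALE_AFTER_SECONDS
```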

Commands

shipwright heartbeat list # Show all active heartbeats
shipwright heartbeat check <job-id> # Check specific job
shipwright heartbeat clear # Remove stale heartbeats

Integration Points

Component   How It Uses Heartbeats
---------   ----------------------
Pipeline    Writes heartbeats during execution
Daemon      Checks for stale jobs, triggers cleanup
Dashboard   Shows live agent status
Doctor      Reports heartbeat health in diagnostics
Status      Displays agent heartbeat section

Checkpoints

Pipeline checkpoints save the complete state of a pipeline at a point in time, enabling recovery from failures and the ability to experiment with different approaches from a known-good state.

Saving Checkpoints

# Save current pipeline state with auto-generated name
shipwright checkpoint save
# Save with a specific name
shipwright checkpoint save "before-refactor"

Restoring Checkpoints

# List available checkpoints
shipwright checkpoint list
# Restore from a checkpoint
shipwright checkpoint restore "before-refactor"

Storage

Checkpoints are stored in .claude/pipeline-artifacts/checkpoints/ and include:

  • Pipeline state file (stage statuses, timings, configuration)
  • Current branch and commit reference
  • Artifact references (plan, design, review files)
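The three items above can be bundled into a single manifest per checkpoint. The sketch below assumes a `manifest.json` layout of our own invention; shipwright's actual on-disk format may differ.

```python
import json
import os
import time

CHECKPOINT_ROOT = ".claude/pipeline-artifacts/checkpoints"

def save_checkpoint(branch, commit, state, artifacts, name=None, root=CHECKPOINT_ROOT):
    """Snapshot pipeline state plus the current git ref (illustrative layout)."""
    name = name or f"checkpoint-{int(time.time())}"
    dest = os.path.join(root, name)
    os.makedirs(dest, exist_ok=True)
    manifest = {
        "name": name,
        "branch": branch,        # current branch reference
        "commit": commit,        # commit the checkpoint points back to
        "state": state,          # stage statuses, timings, configuration
        "artifacts": artifacts,  # plan, design, review file references
    }
    with open(os.path.join(dest, "manifest.json"), "w") as f:
        json.dump(manifest, f, indent=2)
    return dest
```

Restore is the mirror image: read the manifest, check out the recorded commit, and reload the state file.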

Use Cases

  • Recovery — Restore after a failed stage without re-running earlier stages
  • Experimentation — Save state, try an approach, restore if it doesn’t work
  • CI Resume — The auto-retry workflow uses checkpoint data to resume from the last successful stage

Remote Machines

Remote machine management enables distributed pipeline execution across multiple hosts. Register worker machines, monitor their health, and let the daemon distribute work across the fleet.

Registering Machines

# Add a worker machine
shipwright remote add builder-1 \
  --host 192.168.1.100 \
  --path /opt/shipwright \
  --user deploy \
  --max-workers 4

# Add another machine
shipwright remote add builder-2 \
  --host 192.168.1.101 \
  --path /opt/shipwright \
  --max-workers 8

Machine Registry

The registry is stored at ~/.shipwright/machines.json:

{
  "machines": [
    {
      "name": "builder-1",
      "host": "192.168.1.100",
      "path": "/opt/shipwright",
      "user": "deploy",
      "role": "worker",
      "max_workers": 4
    }
  ]
}
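Given this registry plus a view of current load, a scheduler can pick the machine with the most free worker slots. This is a hypothetical selection rule for illustration, not shipwright's documented algorithm:

```python
def pick_machine(registry, active_jobs):
    """Choose the worker with the most free slots.

    registry    -- parsed machines.json ({"machines": [...]})
    active_jobs -- mapping of machine name -> currently running job count
    """
    best, best_free = None, 0
    for m in registry["machines"]:
        free = m["max_workers"] - active_jobs.get(m["name"], 0)
        if free > best_free:
            best, best_free = m, free
    return best  # None if every machine is saturated
```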

Health Checks

# Check all registered machines
shipwright remote status
# View registered machines
shipwright remote list
# Remove a machine
shipwright remote remove builder-1

Health checks verify:

  • SSH connectivity
  • Shipwright installation at the specified path
  • System resources (CPU, memory, load average)
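Combining those three probes into a verdict might look like the sketch below. The function name and thresholds (for example, treating a 1-minute load above twice the CPU count as overloaded) are our assumptions, not shipwright's.

```python
def assess_machine(ssh_ok, install_ok, load_1m, cpu_count, load_factor=2.0):
    """Combine health-probe results into a (status, problems) verdict.

    A machine passes if SSH works, shipwright is installed at the expected
    path, and the 1-minute load average stays under load_factor * cpu_count.
    """
    problems = []
    if not ssh_ok:
        problems.append("ssh unreachable")
    if not install_ok:
        problems.append("shipwright not found at configured path")
    if load_1m > load_factor * cpu_count:
        problems.append(f"overloaded (load {load_1m:.2f} on {cpu_count} CPUs)")
    return ("healthy" if not problems else "unhealthy", problems)
```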

Integration with Daemon

When remote machines are registered, the daemon can distribute pipeline jobs across them. The fleet rebalancer monitors load and reassigns work to underutilized machines.

Integration with Doctor

The shipwright doctor command includes a “REMOTE MACHINES” health check section that verifies all registered machines are reachable and properly configured.

Intelligence Modules

Shipwright includes frontier AI capabilities that run as optional plugins. Intelligence defaults to auto (enabled when Claude CLI is available); configure via the intelligence section in .claude/daemon-config.json.
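Assuming the per-module keys named in the sections below, the intelligence section might look like the fragment here. The `"enabled": "auto"` key is our guess at how the auto default is spelled; the `*_enabled` flags are the ones documented on this page.

```json
{
  "intelligence": {
    "enabled": "auto",
    "adversarial_enabled": true,
    "simulation_enabled": true,
    "architecture_enabled": true,
    "optimization_enabled": false,
    "prediction_enabled": true,
    "composer_enabled": false
  }
}
```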

Adversarial Code Review

After the primary agent writes code, a second adversarial agent attempts to find bugs, security vulnerabilities, race conditions, and edge cases. The agents iterate — the primary agent fixes findings, the adversary re-reviews — converging until no critical issues remain or the maximum round count (default: 3) is reached.

Adversarial review integrates as an optional pipeline stage after review, before compound_quality. Enable with intelligence.adversarial_enabled in daemon-config.json.
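The converge-or-cap loop can be sketched with stand-in callables for the two agents (in shipwright both are Claude-backed; everything here is illustrative except the default round limit of 3):

```python
def adversarial_review(write_fix, find_issues, max_rounds=3):
    """Iterate primary-fixes / adversary-re-reviews until convergence.

    write_fix(issues) -- primary agent applies fixes for the given findings
    find_issues()     -- adversarial agent returns a list of critical findings
    Returns (rounds_used, remaining_issues).
    """
    for round_no in range(1, max_rounds + 1):
        issues = find_issues()
        if not issues:
            return round_no, []       # converged: nothing critical remains
        write_fix(issues)
    return max_rounds, find_issues()  # out of rounds; report what's left
```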

# Manual adversarial review
shipwright adversarial review "$(git diff HEAD~1)" "context about the change"

Developer Simulation

Before PR submission, Shipwright simulates an internal code review with multiple reviewer personas — security, performance, and maintainability reviewers each raise objections specific to their domain. The implementation agent addresses objections before the PR is created, reducing real PR review cycles.

Enable with intelligence.simulation_enabled in daemon-config.json.

# Manual developer simulation
shipwright developer-simulation review "$(git diff main)"

Architecture Enforcer

Maintains a living architectural model of your codebase — layers, patterns, conventions, and dependencies. On subsequent pipelines, changes are validated against this model. Violations are flagged before PR creation. When legitimate architectural evolution is detected, the model updates automatically.

The model is stored per-repo at ~/.shipwright/memory/<repo-hash>/architecture.json. Enable with intelligence.architecture_enabled in daemon-config.json.
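One way such validation could work, assuming the model records which layers may depend on which (the model shape, function name, and rule below are our illustration, not the actual architecture.json schema):

```python
def validate_dependencies(model, changed_imports):
    """Flag dependency edges introduced by a diff that violate the layer rules.

    model           -- e.g. {"layers": {"api": ["service"], "service": ["db"], "db": []}}
                       mapping each layer to the layers it may depend on
    changed_imports -- list of (from_layer, to_layer) edges found in the diff
    """
    violations = []
    for src, dst in changed_imports:
        allowed = model["layers"].get(src, [])
        if dst != src and dst not in allowed:
            violations.append(f"{src} may not depend on {dst}")
    return violations
```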

# Build the architectural model
shipwright architecture build
# Validate changes
shipwright architecture validate "$(git diff main)"

Self-Optimization

The self-optimization module learns from every pipeline run and tunes system parameters:

  • Outcome analysis — extracts what worked, what failed, and why after each pipeline
  • Template tuning — adjusts template selection weights based on success/failure rates per issue type
  • Model routing — A/B tests cheaper models on 20% of stages; if success rate holds, makes them the default
  • Iteration estimation — builds prediction models for how many iterations each complexity level needs
  • Memory evolution — prunes stale patterns, strengthens confirmed ones, promotes cross-repo learnings

Data is stored at ~/.shipwright/optimization/. Enable with intelligence.optimization_enabled in daemon-config.json.
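The model-routing idea, for instance, can be sketched as a sampled A/B rule: send a fraction of stages to the cheaper model, and promote it once it has enough trials at a matching success rate. The 20% sample rate comes from the list above; the promotion threshold and function shape are our assumptions.

```python
import random

def route_model(default_model, cheap_model, success_rates,
                sample_rate=0.20, min_trials=20, rng=random):
    """Route one stage run to the default or the cheaper model.

    success_rates -- {model: (successes, trials)} accumulated over past runs
    """
    def rate(model):
        ok, n = success_rates.get(model, (0, 0))
        return ((ok / n) if n else 0.0), n

    cheap_rate, cheap_n = rate(cheap_model)
    default_rate, _ = rate(default_model)
    if cheap_n >= min_trials and cheap_rate >= default_rate:
        return cheap_model  # promoted: success rate held up over enough trials
    if rng.random() < sample_rate:
        return cheap_model  # exploration traffic (20% of stages)
    return default_model
```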

Predictive Analytics

Predicts failures before they happen and takes preventative action:

  • Risk assessment — before any pipeline starts, estimates overall risk and identifies high-risk stages
  • Anomaly detection — during pipeline execution, compares metrics against baselines and alerts on deviations
  • AI patrol — enhances grep-based patrol with Claude analysis for holistic codebase review
  • Failure prevention — injects contextual warnings from memory into stages where similar issues have previously failed

Enable with intelligence.prediction_enabled in daemon-config.json.
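The anomaly-detection step reduces to comparing each run metric against its historical baseline and alerting past a tolerance. A minimal sketch — metric names and the 50% tolerance are illustrative:

```python
def detect_anomalies(metrics, baselines, tolerance=0.5):
    """Return metrics deviating from baseline by more than `tolerance` (relative).

    metrics   -- current run, e.g. {"build_seconds": 240, "test_failures": 3}
    baselines -- historical means for the same metric names
    """
    alerts = {}
    for name, value in metrics.items():
        base = baselines.get(name)
        if not base:
            continue  # no usable history yet for this metric
        deviation = abs(value - base) / base
        if deviation > tolerance:
            alerts[name] = deviation
    return alerts
```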

Pipeline Composer

Generates custom pipeline configurations by adjusting stage timeouts, iteration counts, and model routing based on codebase analysis. Replaces static template selection with dynamic composition tailored to your specific code patterns.

Enable with intelligence.composer_enabled in daemon-config.json.
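Dynamic composition amounts to deriving per-stage settings from codebase analysis rather than copying a static template. The heuristics below (doubling the build timeout for large codebases, adding iterations when test coverage is thin) are invented for illustration:

```python
def compose_pipeline(analysis, base):
    """Derive a pipeline config from codebase analysis and template defaults.

    analysis -- e.g. {"loc": 120_000, "test_ratio": 0.8}
    base     -- template defaults, e.g. {"build_timeout": 600, "max_iterations": 3}
    """
    config = dict(base)
    # Larger codebases get proportionally longer build timeouts.
    if analysis["loc"] > 100_000:
        config["build_timeout"] = base["build_timeout"] * 2
    # Weak test coverage earns extra fix-test iterations.
    if analysis["test_ratio"] < 0.5:
        config["max_iterations"] = base["max_iterations"] + 2
    return config
```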


For a comprehensive overview of all intelligence capabilities, configuration options, and best practices, see the Intelligence guide.