mhost Documentation

Everything you need to deploy, monitor, and scale processes in production.


Installation


mhost ships as a single static binary with zero runtime dependencies. Pick your preferred method:

Homebrew (macOS / Linux)

$ brew install maqalaqil/tap/mhost

curl installer (Linux / macOS)

$ curl -fsSL https://mhostai.com/install.sh | sh

Cargo (from source)

$ cargo install mhost

GitHub Releases (manual binary)

$ wget https://github.com/maqalaqil/mhost/releases/latest/download/mhost-linux-x86_64.tar.gz
$ tar xzf mhost-linux-x86_64.tar.gz && mv mhost /usr/local/bin/

Docker

$ docker run --rm -v /var/run/mhost:/var/run/mhost ghcr.io/maqalaqil/mhost:latest

Nix

$ nix run github:maqalaqil/mhost
Verify the install: mhost --version. The daemon starts automatically on first use.

Quick Start


Get your first process running in under 30 seconds.

# 1. Start a process
$ mhost start app.js --name my-app

# 2. Check status
$ mhost status
NAME     PID    STATUS  CPU   MEM   RESTARTS
my-app   12345  online  0.3%  48MB  0

# 3. Tail logs
$ mhost logs my-app --follow

Your First mhost.toml


Create mhost.toml in your project root, then run mhost start — mhost will auto-discover the file.

toml
# mhost.toml
[process.api]
command = "node server.js"
instances = 2
max_memory = "256MB"

[process.api.health.http]
url = "http://localhost:3000/health"
interval = "15s"
timeout = "3s"

How mhost Works


mhost has two components: a persistent daemon and a CLI. The daemon spawns and supervises processes, stores logs in an embedded SQLite database, and exposes a JSON-RPC socket. The CLI communicates with the daemon over a Unix socket at ~/.mhost/mhost.sock.

┌─────────────────────────────────────────────────────────────┐
│            mhost daemon (~/.mhost/mhost.sock)               │
│                                                             │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐     │
│  │ Process  │  │ Health   │  │ Brain    │  │ Agent    │     │
│  │ Manager  │  │ Probes   │  │ Engine   │  │ Loop     │     │
│  └──────────┘  └──────────┘  └──────────┘  └──────────┘     │
│                                                             │
│  SQLite (logs + state)           Prometheus endpoint        │
└─────────────────────────────────────────────────────────────┘
        ▲                                ▲
        │ JSON-RPC / IPC                 │ HTTP /metrics
    mhost CLI                   Grafana / VictoriaMetrics

The daemon starts automatically the first time you run any mhost command. To start it explicitly: mhost daemon start. To stop: mhost daemon stop.

Process Lifecycle


Every process moves through these states:

stopped → starting → online → stopping → stopped
online → errored → starting (auto-restart)

State     Description
stopped   Not running. Intentionally stopped or never started.
starting  Daemon is spawning the process. Health probes not yet checked.
online    Running and passing all health probes.
stopping  SIGTERM sent; waiting for graceful shutdown within grace period.
errored   Exited with non-zero code or failed health probe. Will auto-restart.
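The table above can be read as a small state machine. A minimal sketch (the transition set is inferred from the table, not taken from mhost's implementation):

```python
# Illustrative lifecycle state machine inferred from the state table
# above; not mhost's actual code.
VALID_TRANSITIONS = {
    "stopped":  {"starting"},
    "starting": {"online", "errored"},
    "online":   {"stopping", "errored"},
    "stopping": {"stopped"},
    "errored":  {"starting", "stopped"},  # auto-restart, or manual stop
}

def transition(current: str, target: str) -> str:
    """Return the new state, or raise if the move is not allowed."""
    if target not in VALID_TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target
```

For example, `transition("stopped", "starting")` succeeds, while `transition("stopped", "online")` raises, because a process must pass through `starting` first.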

State Directory (~/.mhost/)

~/.mhost/
├── mhost.sock      # IPC socket
├── mhost.pid       # Daemon PID
├── db.sqlite       # Logs + state (FTS5-indexed)
├── config.toml     # Daemon global config
├── processes/      # Per-process state snapshots
├── logs/           # Raw log files (symlinked)
├── deploy/         # Deploy artefacts & rollback history
└── brain/
    ├── playbooks/  # TOML remediation playbooks
    └── history.db  # Healing event log
Override the directory with the environment variable MHOST_HOME.

Starting Processes

$ mhost start <file|command> [flags]
  • --name <name>         Assign a name (default: filename without extension).
  • --instances <N>       Number of instances to spawn (cluster mode).
  • --max-memory <MB>     Auto-restart when RSS exceeds this limit.
  • --env <KEY=VAL>       Inject environment variables (repeatable).
  • --env-file <path>     Load variables from a .env file.
  • --cwd <path>          Working directory for the process.
  • --interpreter <bin>   Override the auto-detected interpreter.
  • --grace-period <dur>  Time to wait for graceful shutdown (default: 10s).
  • --no-autorestart      Disable automatic restart on crash.
  • --watch               Watch files and restart on change (dev mode).

Auto-detect Interpreter

mhost infers the interpreter from the file extension so you can pass a filename directly:

Extension          Interpreter  Override flag
.js / .mjs / .cjs  node         --interpreter bun
.ts                ts-node      --interpreter deno
.py                python3      --interpreter pypy3
.rb                ruby
.sh / .bash        bash
.php               php
(binary)           direct exec

Stop, Restart, Delete

$ mhost stop my-app          # graceful SIGTERM
$ mhost stop my-app --force  # SIGKILL immediately
$ mhost restart my-app       # stop + start
$ mhost reload my-app        # zero-downtime reload (SIGUSR2)
$ mhost delete my-app        # stop + remove from registry
$ mhost stop all             # stop every process

Zero-Downtime Reload

Send SIGUSR2 to trigger a rolling restart with no dropped requests. mhost brings up new instances before tearing down the old ones:

$ mhost reload my-app         # rolling reload — zero dropped requests
$ mhost reload my-app --wait  # wait for health probe to pass before proceeding
Your application must listen for SIGUSR2 and finish in-flight requests before exiting. Most Node.js frameworks support this out of the box.

Index-based Process Targeting

Every process in the registry has a numeric index (shown in mhost list). You can use the index instead of the name in any command:

$ mhost stop 0     # stop the first process by index
$ mhost restart 1  # restart second process
$ mhost logs 2     # tail logs for third process
$ mhost delete 0   # delete process at index 0
Indices are stable within a session but may shift if processes are deleted. Prefer names in scripts for reliability.

Scaling

Scale a named process to N instances at runtime with no downtime — mhost adds or removes workers gracefully:

$ mhost scale my-app 4   # scale to 4 instances
$ mhost scale my-app +2  # add 2 more instances
$ mhost scale my-app -1  # remove 1 instance

Process Groups & Dependencies

Groups let you start, stop, and restart related processes together. Declare them in your config:

toml
[groups.backend]
processes = ["api", "worker", "scheduler"]
start_order = ["api", "worker", "scheduler"]

[process.worker]
depends_on = ["api"]  # waits for api to be online
$ mhost start backend    # start entire group
$ mhost restart backend  # rolling restart respecting order

Save & Resurrect

Persist the current process list to disk so mhost can restore it after a system reboot:

$ mhost save       # snapshot current processes to ~/.mhost/dump.toml
$ mhost resurrect  # restore from snapshot (runs at boot via systemd unit)
$ mhost startup    # generate and install systemd/launchd unit for auto-resurrect

mhost.toml Format


Full configuration reference for a single process:

toml
[process.api]
command = "node server.js"  # required
interpreter = "node"        # auto-detected if omitted
instances = 2               # default 1
cwd = "/srv/app"            # default: process CWD
env_file = ".env"
max_memory = "512MB"
max_restarts = 10           # 0 = unlimited
grace_period = "10s"
autorestart = true
watch = false
log_date_format = "YYYY-MM-DD HH:mm:ss"

[process.api.env]
NODE_ENV = "production"
PORT = "${PORT}"

[process.api.health.http]
url = "http://localhost:${PORT}/health"
interval = "10s"
timeout = "3s"
retries = 3

[process.api.notify]
on_crash = true
channels = ["telegram", "slack"]
Key           Type             Default   Description
command       string           required  Shell command or path to script.
instances     integer          1         Parallel worker count.
max_memory    string           (none)    e.g. "512MB", "2GB". Auto-restart on breach.
max_restarts  integer          0         Maximum restart attempts before marking errored.
grace_period  duration         "10s"     Graceful shutdown window before SIGKILL.
autorestart   bool             true      Restart on non-zero exit.
cron_restart  string           (none)    Cron expression for scheduled restarts.
watch         bool / [string]  false     true = watch all source files, or list of globs.

YAML & JSON Alternatives

mhost auto-detects format by file extension. Use --config to specify a path:

$ mhost start --config mhost.yaml
$ mhost start --config mhost.json

All three formats support the same keys. TOML is recommended for production due to its comment support.

Environment Variable Expansion

Any string value in config can reference environment variables using ${VAR} syntax. Variables are resolved at daemon startup:

toml
command = "node server.js"
env_file = "${APP_ROOT}/.env.production"

[process.api.env]
DATABASE_URL = "${DATABASE_URL}"
PORT = "${PORT:-3000}"  # default fallback
Undefined variables without a default will cause mhost to print a warning and leave the value empty. Use ${VAR:-default} to supply fallbacks.
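The resolution rules above can be sketched in a few lines. This is an illustrative re-implementation, not mhost's actual resolver:

```python
import os
import re

# ${VAR} expands from the environment; ${VAR:-default} falls back;
# an undefined variable with no default expands to the empty string,
# matching the behaviour described above.
_PATTERN = re.compile(r"\$\{(\w+)(?::-([^}]*))?\}")

def expand(value: str, env=os.environ) -> str:
    def sub(m):
        name, default = m.group(1), m.group(2)
        if name in env:
            return env[name]
        return default if default is not None else ""
    return _PATTERN.sub(sub, value)
```

For example, `expand("${PORT:-3000}", {})` yields `"3000"`, and `expand("${PORT:-3000}", {"PORT": "8080"})` yields `"8080"`.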

Health Probes

Three probe types are supported. A process is considered errored when the probe fails retries times consecutively.

HTTP Probe

toml
[process.api.health.http]
url = "http://localhost:3000/health"
interval = "10s"
timeout = "3s"
retries = 3
expected_code = 200

TCP Probe

toml
[process.db.health.tcp]
host = "localhost"
port = 5432
interval = "15s"
timeout = "2s"

Script Probe

toml
[process.worker.health.script]
command = "./scripts/health-check.sh"
interval = "30s"
timeout = "5s"

Cron Restarts

Schedule automatic restarts using standard cron syntax (UTC by default):

toml
[process.report-gen]
command = "python3 generate.py"
cron_restart = "0 3 * * *"  # restart daily at 03:00 in the configured timezone
timezone = "America/New_York"

Memory Limits

When a process exceeds max_memory, mhost restarts it and fires a notification on configured channels. Supported suffixes: MB, GB.

toml
[process.api]
max_memory = "512MB"
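Parsing these limit strings is straightforward. A minimal sketch, assuming binary units (1 MB = 1024 × 1024 bytes; mhost's actual convention is not specified here):

```python
# Illustrative parser for "512MB" / "2GB" limit strings; not mhost's code.
_UNITS = {"GB": 1024 ** 3, "MB": 1024 ** 2}

def parse_memory(limit: str) -> int:
    """Convert a limit string like '512MB' or '2GB' to bytes."""
    for suffix, factor in _UNITS.items():
        if limit.upper().endswith(suffix):
            return int(float(limit[: -len(suffix)]) * factor)
    raise ValueError(f"unsupported memory limit: {limit!r}")
```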

Logs

$ mhost logs         # all processes, last 100 lines
$ mhost logs my-app  # specific process
$ mhost logs my-app --lines 500
$ mhost logs my-app --since 1h
$ mhost logs my-app --since "2026-03-28 00:00"

Live Streaming

$ mhost logs --follow              # all processes, live
$ mhost logs my-app --follow       # single process, live
$ mhost logs my-app -f --no-color  # pipe-friendly

Log Format

Each line is timestamped and prefixed with the process name and instance ID:

2026-03-28T14:32:01Z [api-0] Server listening on :3000
2026-03-28T14:32:01Z [api-1] Server listening on :3001
2026-03-28T14:32:05Z [worker] {"level":"info","msg":"job complete","id":42}

JSON lines are automatically pretty-printed in the terminal. Pass --raw to disable pretty-printing.
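If you post-process raw logs yourself, the line format shown above parses with a single regex. An illustrative sketch (the field names are assumptions, not an mhost API):

```python
import json
import re

# Parse "<timestamp> [<name>-<instance>] <message>" lines as shown above.
_LINE = re.compile(r"^(\S+) \[([^\]]+)\] (.*)$")

def parse_line(line: str) -> dict:
    ts, source, msg = _LINE.match(line).groups()
    entry = {"timestamp": ts, "source": source, "message": msg}
    if msg.startswith("{"):
        try:
            entry["json"] = json.loads(msg)  # structured JSON log line
        except json.JSONDecodeError:
            pass
    return entry
```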

External Log Sinks

Ship logs to external systems by configuring sinks in mhost.toml:

toml
[sinks.graylog]
type = "gelf"
host = "logs.example.com"
port = 12201

[sinks.loki]
type = "loki"
url = "http://localhost:3100"

[sinks.elastic]
type = "elasticsearch"
url = "http://localhost:9200"
index = "mhost-logs"

[sinks.syslog]
type = "syslog"
facility = "local0"

TUI Dashboard

$ mhost monit

Opens a real-time terminal UI showing CPU, memory, restart count, uptime, and log tail for every process. Use arrow keys to navigate, r to restart, s to stop, q to quit.

Web Dashboard

$ mhost dashboard
Dashboard running at http://localhost:9615

Configure port and basic auth in ~/.mhost/config.toml:

toml
[dashboard]
port = 9615
username = "admin"
password = "${DASHBOARD_PASSWORD}"

Health Checks

View current health status at a glance:

$ mhost health
NAME      PROBE   STATUS  LAST CHECK  LATENCY
api       http    pass    2s ago      12ms
worker    script  pass    28s ago     —
db-proxy  tcp     fail    1s ago      timeout

Metrics & Prometheus

mhost exposes a Prometheus-compatible /metrics endpoint. Enable it in ~/.mhost/config.toml:

toml
[metrics]
enabled = true
port = 9161
path = "/metrics"

Exported metrics include: mhost_process_cpu_percent, mhost_process_memory_bytes, mhost_process_restarts_total, mhost_process_status, mhost_health_probe_duration_ms.
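These metrics arrive in the standard Prometheus text exposition format. A minimal parsing sketch; the sample line and its `name` label are assumptions for illustration, so check the actual endpoint output before relying on the exact shape:

```python
import re

# Parse a simple Prometheus sample like:
#   mhost_process_restarts_total{name="api"} 3
_SAMPLE = re.compile(r'^(\w+)\{name="([^"]+)"\}\s+(\S+)$')

def parse_sample(line: str):
    metric, process, value = _SAMPLE.match(line).groups()
    return metric, process, float(value)
```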


Notification Setup


Run the interactive wizard to configure any channel:

$ mhost notify setup
? Which channel? › Telegram
? Bot token: › ***
? Chat ID: › -100123456789
? Test notification? › Yes
✓ Telegram configured successfully

All Channels

  • Telegram (Bot API)
  • Slack (Incoming Webhook)
  • Email (SMTP)
  • SMS (Twilio)
  • Discord (Webhook)
  • PagerDuty (Events API v2)
  • Webhook (Custom HTTP POST)
  • Opsgenie (Alert API)

Telegram Example

toml
[notifications.telegram]
bot_token = "${TELEGRAM_TOKEN}"
chat_id = "${TELEGRAM_CHAT_ID}"
events = ["crash", "errored", "memory_exceeded"]
throttle = "5m"

Slack Example

toml
[notifications.slack]
webhook_url = "${SLACK_WEBHOOK}"
events = ["crash", "health_fail", "deploy_done"]
channel = "#ops-alerts"

Event Types

Event            Description
crash            Process exited unexpectedly.
errored          Process entered errored state.
health_fail      Health probe failed.
health_recover   Health probe recovered.
memory_exceeded  RSS exceeded max_memory limit.
restart          Process was restarted (manual or auto).
deploy_start     Deployment started.
deploy_done      Deployment completed.
deploy_failed    Deployment failed.
brain_heal       Brain applied an auto-remediation.

Throttling & Escalation

Prevent alert storms with per-channel throttling and escalation policies:

toml
[notifications.telegram]
throttle = "5m"                  # max 1 alert per 5 minutes per process
escalation_after = "15m"         # escalate if unresolved after 15m
escalation_target = "pagerduty"  # channel to escalate to
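The throttle rule (at most one alert per window per process) reduces to a small amount of bookkeeping. An illustrative sketch, not mhost's implementation:

```python
# Per-process alert throttling: allow an alert only if the window has
# elapsed since that process's last alert.
class Throttle:
    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.last_sent = {}  # process name -> last alert time

    def allow(self, process: str, now: float) -> bool:
        """True if an alert for `process` may be sent at time `now`."""
        last = self.last_sent.get(process)
        if last is not None and now - last < self.window:
            return False
        self.last_sent[process] = now
        return True
```

With `throttle = "5m"` (300 seconds), a second alert for the same process inside the window is dropped, while alerts for other processes pass through.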

AI Setup

$ mhost ai setup
? Provider › OpenAI
? API key: › sk-***
? Model: › gpt-4o
✓ AI configured. Run `mhost ai diagnose` to test.
toml
[ai]
provider = "openai"  # openai | anthropic | ollama | groq | gemini
api_key = "${OPENAI_API_KEY}"
model = "gpt-4o"

AI Commands

Command                                  Description
mhost ai diagnose <app>                  Analyze recent crashes and suggest root cause.
mhost ai explain <app>                   Explain what a process does based on its logs and command.
mhost ai fix <app>                       Suggest configuration fixes for an errored process.
mhost ai logs <app>                      Summarize recent log activity in plain English.
mhost ai predict                         Predict which processes are likely to fail in the next hour.
mhost ai optimize <app>                  Suggest memory and concurrency optimizations.
mhost ai anomaly                         Detect anomalous behaviour across all processes.
mhost ai compare <app> <before> <after>  Compare log patterns before/after a deploy.
mhost ai report                          Generate a daily ops report for all processes.
mhost ai ask "<question>"                Free-form question about your running stack.

Prompt Customization

Override any built-in system prompt by placing a file in ~/.mhost/prompts/:

~/.mhost/prompts/
├── diagnose.txt  # override for mhost ai diagnose
├── report.txt    # override for mhost ai report
└── system.txt    # global system prompt prefix

Autonomous Agent Setup

$ mhost agent setup
$ mhost agent start   # runs in background
$ mhost agent status
$ mhost agent stop
toml
[agent]
enabled = true
autonomy = "supervised"  # supervised | semi | autonomous
poll_interval = "30s"
telegram_chat = "${TELEGRAM_CHAT_ID}"
max_actions_per_hour = 10

Observe → Think → Act Loop

The agent runs a continuous loop every poll_interval:

1. Observe: collect metrics, logs, and health probe results
2. Think:   send context to the AI model for analysis
3. Plan:    the AI returns a ranked list of recommended actions
4. Confirm: in supervised mode, notify the operator and wait for approval
5. Act:     execute approved actions (restart, scale, rollback, etc.)
6. Learn:   record the outcome; update brain playbooks if successful

Autonomy Levels

Level       Behaviour
supervised  Always asks for approval via Telegram before acting.
semi        Acts autonomously for low-risk actions (restart); asks for high-risk ones (rollback, scale-up).
autonomous  Full autonomy within configured bounds. Reports actions after the fact.

Telegram Conversation Examples

Agent: api-server has crashed 3 times in 10 min.
       Likely cause: OOM (RSS 498MB / limit 512MB).
       Proposed action: increase max_memory to 768MB and restart.
       Reply YES to approve or NO to skip.
You:   yes
Agent: ✓ Done. api-server restarted (pid 18234). Memory limit updated.

You can also chat with the agent directly: mhost agent ask "why is worker slow today?"


Self-Healing Brain

$ mhost brain status         # current health scores
$ mhost brain history        # past healing events
$ mhost brain playbooks      # list loaded playbooks
$ mhost brain explain crash  # explain last crash remediation

Built-in Playbooks

Playbook         Trigger                            Action
oom-restart      RSS > max_memory                   Restart process, log incident.
rapid-crash      >3 crashes in 5 min                Stop, notify, wait for manual intervention.
health-recover   Probe fails 3x, then passes        Clear error state, update health score.
cpu-spike        CPU > 90% for 60s                  Scale out by +1 instance.
zombie-process   PID alive, health probe dead 5min  SIGKILL, restart.
deploy-rollback  Error rate > 10% post-deploy       Auto-rollback to previous revision.

Add custom playbooks in ~/.mhost/brain/playbooks/my-playbook.toml.

Health Scores

Each process gets a rolling health score (0–100) computed from crash frequency, probe success rate, memory trend, and CPU stability. Scores below 40 trigger notifications; below 20 trigger the brain.
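A score like this is typically a weighted combination of the named signals. The sketch below is purely illustrative; the weights and input scaling are assumptions, not mhost's actual formula:

```python
# Illustrative 0-100 health score from the four signals named above.
# All weights are assumptions.
def health_score(probe_success_rate: float,  # 0.0..1.0
                 crashes_per_hour: float,
                 memory_headroom: float,     # 0.0..1.0, 1.0 = far below limit
                 cpu_stability: float) -> float:  # 0.0..1.0
    crash_penalty = max(0.0, 1.0 - crashes_per_hour / 5.0)
    raw = (0.4 * probe_success_rate +
           0.3 * crash_penalty +
           0.2 * memory_headroom +
           0.1 * cpu_stability)
    return round(100 * raw, 1)
```

A perfectly healthy process scores 100; a frequently crashing, memory-pressured one falls below the 40-point notification threshold.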

Auto-Learning

When the brain successfully remediates a novel incident, it serializes the diagnosis → action → outcome chain into a new playbook. On the next identical incident the playbook runs locally — no AI API call needed, zero latency, zero cost.


Dev Mode

$ mhost dev               # start all processes from mhost.toml in dev mode
$ mhost dev --watch src/  # restart on file change in src/
$ mhost dev --env-file .env.dev

File Watching

In dev mode, mhost watches files using OS-native events (inotify on Linux, FSEvents on macOS). Configure per-process watch paths in mhost.toml:

toml
[process.api]
watch = ["src/**/*.js", "config/*.toml"]
ignore = ["node_modules", "dist", "*.log"]

Env File Loading

mhost merges env files in this priority order (later overrides earlier):

.env → .env.local → .env.${NODE_ENV} → process [env] table → OS env
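The merge itself is a simple left-to-right override. An illustrative sketch of the chain above, not mhost's actual loader:

```python
# Later sources override earlier ones, mirroring the priority chain above.
def merge_env(*sources: dict) -> dict:
    merged = {}
    for source in sources:  # earliest (lowest priority) first
        merged.update(source)
    return merged

# e.g. .env, then .env.local, then the [env] table
result = merge_env({"PORT": "3000"}, {"PORT": "4000"}, {"DEBUG": "1"})
```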

Cloud Fleet Commands

$ mhost cloud add <host> --name prod-1  # add SSH host
$ mhost cloud import aws                # auto-import EC2 instances
$ mhost cloud import digitalocean       # auto-import Droplets
$ mhost cloud list                      # list registered hosts
$ mhost cloud deploy prod-1             # push config & restart
$ mhost cloud exec prod-1 -- mhost status

SSH Configuration

toml
[cloud.hosts.prod-1]
host = "10.0.1.5"
port = 22
user = "ubuntu"
key_file = "~/.ssh/prod.pem"
tags = ["production", "api"]

mhost uses your existing ~/.ssh/config by default. The key_file field is optional if the SSH agent is running.

Cloud Providers

Provider      Import command                   Auth
AWS EC2       mhost cloud import aws           AWS_ACCESS_KEY_ID / IAM role
Azure VMs     mhost cloud import azure         AZURE_SUBSCRIPTION_ID + az login
DigitalOcean  mhost cloud import digitalocean  DIGITALOCEAN_TOKEN
Railway       mhost cloud import railway       RAILWAY_TOKEN

AI Cloud Operations

$ mhost ai cloud diagnose prod-1  # diagnose remote server
$ mhost ai cloud plan-migration   # plan migration between hosts
$ mhost ai cloud provision        # AI-guided infrastructure setup

Cloud-Native Providers

Direct API integration with 10 cloud providers — no SSH required. Provision, deploy, scale, and manage services via provider APIs.

Provider      Services                  Auth Command
AWS           ECS, EKS, EC2, Lambda     mhost cloud auth aws
GCP           Cloud Run, GKE, GCE       mhost cloud auth gcp
Azure         AKS, ACI, VMs             mhost cloud auth azure
Railway       Web, Worker, Cron         mhost cloud auth railway
Fly.io        Machines, Apps            mhost cloud auth fly
Vercel        Serverless, Edge          mhost cloud auth vercel
DigitalOcean  App Platform, Droplets    mhost cloud auth digitalocean
Cloudflare    Workers, Pages            mhost cloud auth cloudflare
Netlify       Sites, Functions          mhost cloud auth netlify
Supabase      Database, Edge Functions  mhost cloud auth supabase

$ mhost cloud provision --provider railway --name api --type web --image node:20 --port 3000
$ mhost cloud services     # list all services
$ mhost cloud service api  # show service details

Cloud Secrets Management

Manage environment variables and secrets on cloud services without logging into provider dashboards.

$ mhost cloud secrets set api DATABASE_URL "postgres://..."
$ mhost cloud secrets list api
$ mhost cloud secrets remove api OLD_KEY

Cloud Cost Tracking

Unified cost view across all configured cloud providers. See spending per service and per provider.

$ mhost cloud cost                     # all providers
$ mhost cloud cost --provider railway  # single provider

Configuration Drift Detection

Compare your local desired state against live cloud configuration. Optionally auto-fix detected drift.

$ mhost cloud drift        # detect only
$ mhost cloud drift --fix  # detect and fix

Infrastructure Export

Export your cloud services as infrastructure-as-code for version control, migration, or disaster recovery.

Format          Command                            Output
Terraform       mhost cloud export terraform       .tf files
Docker Compose  mhost cloud export docker-compose  docker-compose.yml
Kubernetes      mhost cloud export kubernetes      Deployment + Service YAML

Chat Bot Setup

$ mhost bot setup  # configure Telegram or Slack bot
$ mhost bot start
$ mhost bot status

Chat Commands

Command           Description
/status           List all processes and their states.
/status <app>     Detailed status for a single process.
/restart <app>    Restart a process.
/stop <app>       Stop a process.
/scale <app> <N>  Scale to N instances.
/logs <app>       Last 20 log lines.
/health           Health probe summary.
/deploy <app>     Trigger a deployment.
/rollback <app>   Rollback to previous revision.
/ask <question>   Ask the AI agent anything.

Permission System

Role      Allowed actions
admin     All commands including stop, delete, deploy, rollback, scale.
operator  restart, logs, status, health, scale.
viewer    status, logs, health (read-only).

toml
[bot.users]
"123456789" = "admin"
"987654321" = "operator"

Deploy & Rollback

$ mhost deploy            # deploy using mhost.toml [deploy] config
$ mhost deploy --env prod
$ mhost rollback          # revert to previous revision
$ mhost rollback --rev 3  # revert to specific revision
$ mhost deploy history

Deploy Config

toml
[deploy]
repo = "git@github.com:you/app.git"
branch = "main"
path = "/srv/app"
keep_releases = 5
strategy = "rolling"  # rolling | blue-green | canary

Hook Execution

toml
[deploy.hooks]
pre_deploy = ["npm ci", "npm run build"]
post_deploy = ["npm run migrate"]
post_restart = ["./scripts/smoke-test.sh"]

All hooks run in the deploy path. A non-zero exit aborts the deployment and triggers rollback.


Reverse Proxy Config

toml
[proxy.api]
listen = "0.0.0.0:80"
upstream = "http://localhost:3000"
strategy = "round-robin"
tls = true
domain = "api.example.com"

[proxy.api.headers]
X-Forwarded-For = "$remote_addr"

Load Balancing Strategies

Strategy     Description
round-robin  Distribute requests evenly across all upstreams.
least-conn   Route to the upstream with fewest active connections.
ip-hash      Consistent routing based on client IP.
random       Random selection.
weighted     Weighted distribution (set weight per upstream).

TLS / ACME

mhost integrates with Let's Encrypt via the ACME protocol. Set tls = true and provide a domain — certificates are auto-provisioned and renewed.

toml
[proxy.api]
tls = true
domain = "api.example.com"
acme_email = "ops@example.com"
acme_dir = "~/.mhost/certs"

Zero-Downtime Reload

Start new instances, wait for health checks to pass, then kill old ones. No dropped requests.

mhost reload api-server

If no health check is configured, falls back to a regular restart. If the new instances fail health checks, they are killed and the old instances are preserved.

Load Testing

Built-in HTTP load testing with configurable concurrency and duration.

# Default: 10 seconds, 10 concurrent workers
mhost bench https://api.example.com

# Custom settings
mhost bench https://api.example.com --duration 30 --concurrency 50

Reports total requests, requests/second, average and p99 latency, and error rate.
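The p99 figure is a percentile over the collected latency samples. A minimal nearest-rank sketch of how such a number can be computed (not mhost's implementation):

```python
# Nearest-rank percentile: the smallest sample such that at least
# pct% of samples are less than or equal to it.
def percentile(samples, pct):
    ordered = sorted(samples)
    rank = -(-len(ordered) * pct // 100)  # ceil(n * pct / 100)
    return ordered[max(0, int(rank) - 1)]
```

For 100 latency samples, `percentile(samples, 99)` returns the 99th-smallest value.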

Canary Deployments

Scale up a canary instance, monitor for errors over a duration, then auto-promote or rollback.

# Default: 10% traffic, 300s monitoring window
mhost canary api-server

# Custom canary settings
mhost canary api-server --percent 20 --duration 600

Snapshots

Capture and restore full process state. Perfect for safe rollbacks before risky changes.

mhost snapshot create before-deploy    # Save current state
mhost snapshot list                    # List all snapshots
mhost snapshot restore before-deploy   # Restore a snapshot

SSL Certificate Monitoring

Check SSL certificate expiry for one or more URLs. Warns when certificates expire within 30 days.

mhost certs --url https://api.example.com --url https://app.example.com

SLA Uptime Reports

Calculate uptime from brain incident data, compare against your SLA target.

mhost sla api-server                   # Default target: 99.9%
mhost sla api-server --target 99.99    # Four-nines target

Shows uptime percentage, allowed vs actual downtime, incidents count, and budget remaining.
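The downtime budget behind these reports is simple arithmetic. A worked sketch over a 30-day window (the window length is an assumption; mhost's actual reporting window is not specified here):

```python
# Allowed downtime for an SLA target over a reporting window.
def allowed_downtime_minutes(target_pct: float, window_days: int = 30) -> float:
    total = window_days * 24 * 60  # minutes in the window
    return total * (100.0 - target_pct) / 100.0
```

A 99.9% target over 30 days allows about 43.2 minutes of downtime; 99.99% allows about 4.3 minutes.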

Incident Replay

Replay an incident timeline for a process, showing events, log lines, and health score changes.

mhost replay api-server                # Full timeline
mhost replay api-server --time "3:47am" # Filter around a time

Cloud Cost Estimation

Maps running process memory usage to EC2-equivalent instance types and estimates monthly costs.

mhost cost

Environment Diff

Compare two fleet environments or server configs side-by-side.

mhost diff staging production

Tunnel Sharing

Expose a local process to the internet via tunneling services (ngrok, cloudflared, bore, serveo).

mhost share api-server                 # Auto-detect port
mhost share api-server --port 8080     # Override port

Recipe Runner

Execute a sequence of mhost commands from a text file. One command per line.

mhost run deploy-recipe.txt

PM2 Migration

Auto-convert PM2 configuration to mhost format.

mhost migrate --from pm2    # Reads ~/.pm2/dump.pm2 → generates mhost.toml

REST API

mhost exposes 25 REST endpoints on port 19516 with JSON request/response format.

# Start the API server
mhost api start --port 19516

# List processes
curl -H "Authorization: Bearer mhost_tok_..." http://localhost:19516/api/v1/processes

# Start a new process
curl -X POST -H "Authorization: Bearer mhost_tok_..." \
  -H "Content-Type: application/json" \
  -d '{"script": "server.js", "name": "api"}' \
  http://localhost:19516/api/v1/processes
Category   Endpoints                            Auth
Processes  GET/POST/DELETE /processes/*         viewer/operator
Logs       GET /logs/:name, /logs/:name/search  viewer
Health     GET /health, /health/:name           public/viewer
Metrics    GET /metrics, /metrics/:name         viewer
System     POST /save, /resurrect, /kill        operator/admin
Tokens     GET/POST/DELETE /tokens/*            admin
Webhooks   GET/POST/DELETE /webhooks/*          admin

Authentication

Bearer token authentication with role-based access control. Tokens are hashed with Argon2 before storage.

# Create a token with a specific role
mhost api token create --name my-app --role operator

# List active tokens
mhost api token list

# Revoke a token
mhost api token revoke <id>

Four roles with increasing permissions: viewer (read-only), operator (start/stop/restart), admin (token and webhook management), super_admin (full access).

WebSocket Streaming

Real-time event, log, and metrics streaming over WebSocket at /api/v1/ws.

# Connect with token
wscat -c "ws://localhost:19516/api/v1/ws?token=mhost_tok_..."

# Subscribe to channels
{"type": "subscribe", "channel": "events"}
{"type": "subscribe", "channel": "logs", "process": "api-server"}
{"type": "subscribe", "channel": "metrics"}

# Unsubscribe
{"type": "unsubscribe", "channel": "events"}

Channels: events (process lifecycle), logs (stdout/stderr per process), metrics (CPU/memory snapshots).

Outbound Webhooks

Register HTTP endpoints to receive process events. Payloads are signed with HMAC-SHA256.

# Register a webhook
mhost api webhook add --url https://myapp.com/hook --events crash,restart --secret my-key

# List registered webhooks
mhost api webhook list

# Test a webhook
mhost api webhook test <id>

# View failed deliveries
mhost api webhook failures

Features: configurable retry with exponential backoff, dead letter logging for failed deliveries, per-webhook secret for payload verification.
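On the receiving end, you verify the HMAC-SHA256 signature against the per-webhook secret. A minimal sketch; the hex encoding of the signature and how it is transported (e.g. a header) are assumptions to check against the actual delivery format:

```python
import hashlib
import hmac

# Verify an HMAC-SHA256 payload signature using the webhook's secret.
# Use a constant-time comparison to avoid timing attacks.
def verify_signature(payload: bytes, secret: str, signature_hex: str) -> bool:
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```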


CLI Reference

Command                                          Description
mhost start <app>                                Start a process or config file.
mhost stop <app|all>                             Gracefully stop.
mhost restart <app|all>                          Restart.
mhost reload <app>                               Zero-downtime reload.
mhost delete <app>                               Stop and remove from registry.
mhost status [app]                               Process status table.
mhost logs [app]                                 Show logs.
mhost monit                                      TUI dashboard.
mhost dashboard                                  Web UI.
mhost scale <app> <N>                            Scale to N instances.
mhost save                                       Persist process list.
mhost resurrect                                  Restore from snapshot.
mhost startup                                    Install system init script.
mhost deploy                                     Run deployment.
mhost rollback                                   Rollback deployment.
mhost health                                     Health probe summary.
mhost notify setup                               Configure notifications.
mhost ai <cmd>                                   AI commands.
mhost agent <cmd>                                Autonomous agent.
mhost brain <cmd>                                Self-healing brain.
mhost bot <cmd>                                  Chat bot control.
mhost cloud <cmd>                                Cloud fleet management.
mhost dev                                        Dev mode with file watching.
mhost bench <url>                                HTTP load testing.
mhost canary <app>                               Canary deployment.
mhost snapshot create|list|restore               State snapshots.
mhost replay <process>                           Incident timeline replay.
mhost link                                       Dependency graph.
mhost cost                                       Cloud cost estimation.
mhost certs [--url]                              SSL certificate monitoring.
mhost sla <app>                                  SLA uptime report.
mhost diff <a> <b>                               Environment comparison.
mhost share <app>                                Tunnel exposure.
mhost run <file>                                 Recipe runner.
mhost migrate --from <pm>                        PM2 migration.
mhost playground                                 Interactive tutorial.
mhost api start [--port]                         Start API server.
mhost api stop                                   Stop API server.
mhost api status                                 API server status.
mhost api token create|list|revoke               Token management.
mhost api webhook add|list|remove|test|failures  Webhook management.
mhost daemon start|stop|status                   Daemon lifecycle.

Exit Codes

Code  Meaning
0     Success.
1     General error.
2     Configuration error.
3     Process not found.
4     Daemon not running.
5     IPC communication error.
6     Deployment failed.

API Reference


The daemon exposes a JSON-RPC 2.0 API over a Unix socket at ~/.mhost/mhost.sock. You can also enable an HTTP JSON-RPC endpoint.

Socket Location

~/.mhost/mhost.sock        # Unix socket (default)
http://localhost:9616/rpc  # HTTP endpoint (optional)

Example Request

json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "process.status",
  "params": { "name": "api" }
}

RPC Methods

Method            Description
process.list      List all processes.
process.status    Get status for one process.
process.start     Start a process.
process.stop      Stop a process.
process.restart   Restart a process.
process.delete    Delete a process.
process.scale     Scale a process.
logs.query        Query logs with filters.
logs.stream       Subscribe to live log stream (WebSocket).
health.status     Get health probe results.
metrics.snapshot  Get current CPU/memory metrics.
deploy.trigger    Trigger a deployment.
deploy.rollback   Roll back to a previous revision.
brain.status      Get brain health scores.
daemon.version    Get daemon version info.
Enable the HTTP endpoint by setting [rpc] http = true in ~/.mhost/config.toml. The socket endpoint is always active when the daemon is running.
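A minimal client sketch for the Unix-socket endpoint, assuming newline-delimited framing (verify against the daemon's actual protocol before relying on this):

```python
import json
import os
import socket

# Build a JSON-RPC 2.0 request like the example above.
def build_request(method: str, params: dict, req_id: int = 1) -> bytes:
    body = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    return json.dumps(body).encode() + b"\n"

# Send the request over the daemon's Unix socket and read one reply.
# Newline-delimited framing is an assumption.
def call(method: str, params: dict, sock_path: str = "~/.mhost/mhost.sock"):
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(os.path.expanduser(sock_path))
        s.sendall(build_request(method, params))
        return json.loads(s.makefile().readline())
```

With a running daemon, `call("process.status", {"name": "api"})` would return the parsed JSON-RPC response.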