Automation & Agent Integration

Build automated workflows and integrate AI agents with Sparbz Cloud.

Overview

Sparbz Cloud is designed for automation-first workflows. The CLI and API provide features specifically optimized for scripts, CI/CD pipelines, and AI agents like Claude, ChatGPT, and Cursor.

Agent-Friendly Features

The szc CLI includes several features that make it ideal for automation:

Idempotent Operations

Use --if-not-exists to make operations safe to run multiple times:

# Safe to run repeatedly - creates only if the resource doesn't exist
szc database create my-db --engine postgres --if-not-exists
szc namespace create my-app --if-not-exists
szc kafka create my-cluster --if-not-exists
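Not every command necessarily supports --if-not-exists. Where it is missing, a check-then-create fallback keyed on exit code 2 (resource not found) approximates it. The sketch below uses hypothetical stub functions in place of real szc calls so it runs anywhere; note the fallback is not atomic, so prefer the flag when available:

```shell
# check_then_create mimics --if-not-exists for commands that lack it.
# get_stub/create_stub are hypothetical stand-ins for `szc database get/create`.
created=""
get_stub() { return 2; }                 # simulate exit code 2: resource not found
create_stub() { created="yes"; }

check_then_create() {
  get_stub "$1" >/dev/null 2>&1
  if [ $? -eq 2 ]; then
    create_stub "$1"
  fi
}

check_then_create my-db
```

In a real script, replace the stubs with `szc database get`/`szc database create`; any other nonzero exit code (auth, permissions) is deliberately not treated as "create it".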

Dry Run Mode

Preview what would happen without making changes:

# Preview creation
szc database create my-db --engine postgres --dry-run

# Preview deletion
szc database delete my-db --dry-run

# Preview stack changes
szc stack apply my-stack --dry-run

Structured JSON Output

Get machine-readable output for parsing:

# Full JSON output
szc database list --json

# Select specific fields
szc database get my-db --json --fields id,name,status

# Parse with jq
szc database list --json | jq '.[] | select(.status == "active")'
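jq filters compose well for quick reporting. The sketch below renders a name/status table; the array shape is an assumption based on the fields used on this page, and in practice the input would come from `szc database list --json`:

```shell
# Turn a database list into a tab-separated name/status table.
# The inline JSON stands in for `szc database list --json` output.
json='[{"name":"db1","status":"active"},{"name":"db2","status":"provisioning"}]'
echo "$json" | jq -r '.[] | [.name, .status] | @tsv'
```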

Wait for Completion

Block until resources are ready:

# Wait for database to be ready
szc database create my-db --engine postgres --wait

# Wait with custom timeout
szc database create my-db --wait --timeout 600

# Watch deployment progress
szc namespace apply my-ns manifest.yaml --watch

Structured Exit Codes

Exit codes indicate specific failure modes:

Code  Meaning
0     Success
1     General error
2     Resource not found
3     Resource already exists
4     Validation error
5     Authentication error
6     Permission denied
7     Rate limited
8     Timeout

szc database get my-db
if [ $? -eq 2 ]; then
  echo "Database not found, creating..."
  szc database create my-db --engine postgres
fi
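For richer handling, dispatch on the exit code with a case statement. In this sketch, szc_stub is a hypothetical stand-in for a real call such as `szc database get my-db`, so the pattern runs anywhere:

```shell
# Map szc-style exit codes to actions.
# szc_stub stands in for a real szc command; swap it out in practice.
szc_stub() { return 2; }   # simulate "resource not found"

handle() {
  "$@"
  case $? in
    0) echo "success" ;;
    2) echo "not found" ;;
    3) echo "already exists" ;;
    7) echo "rate limited" ;;
    8) echo "timed out" ;;
    *) echo "error" ;;
  esac
}

handle szc_stub
```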

Batch Operations

Process multiple resources at once:

# Create multiple databases from file
cat databases.txt | szc database create --batch

# Delete multiple resources
echo -e "db1\ndb2\ndb3" | szc database delete --batch

# Batch with JSON input
szc database create --batch --batch-format json < databases.json

CI/CD Integration

GitHub Actions

name: Deploy Infrastructure

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install szc CLI
        run: |
          curl -sSL https://cli.sparbz.cloud/install.sh | bash
          echo "$HOME/.szc/bin" >> $GITHUB_PATH

      - name: Authenticate
        env:
          SPARBZ_API_KEY: ${{ secrets.SPARBZ_API_KEY }}
        run: |
          szc auth login --api-key "$SPARBZ_API_KEY"

      - name: Create infrastructure
        run: |
          # Idempotent operations - safe to re-run
          szc database create prod-db --engine postgres --tier pro --if-not-exists --wait
          szc namespace create prod-app --tier pro --if-not-exists

      - name: Deploy application
        run: |
          szc namespace apply prod-app ./k8s/ --watch

      - name: Verify deployment
        run: |
          szc namespace get prod-app --json --fields status | jq -e '.status == "active"'

GitLab CI

stages:
  - infrastructure
  - deploy

variables:
  SPARBZ_API_KEY: $SPARBZ_API_KEY

.szc-setup: &szc-setup
  before_script:
    - curl -sSL https://cli.sparbz.cloud/install.sh | bash
    - export PATH="$HOME/.szc/bin:$PATH"
    - szc auth login --api-key "$SPARBZ_API_KEY"

infrastructure:
  stage: infrastructure
  <<: *szc-setup
  script:
    - szc database create prod-db --engine postgres --if-not-exists --wait
    - szc namespace create prod-app --if-not-exists

deploy:
  stage: deploy
  <<: *szc-setup
  script:
    - szc namespace apply prod-app ./k8s/ --watch
  only:
    - main

Jenkins Pipeline

pipeline {
    agent any

    environment {
        SPARBZ_API_KEY = credentials('sparbz-api-key')
    }

    stages {
        stage('Setup') {
            steps {
                sh '''
                    curl -sSL https://cli.sparbz.cloud/install.sh | bash
                    export PATH="$HOME/.szc/bin:$PATH"
                    szc auth login --api-key "$SPARBZ_API_KEY"
                '''
            }
        }

        stage('Infrastructure') {
            steps {
                sh '''
                    szc database create prod-db --engine postgres --if-not-exists --wait
                    szc namespace create prod-app --if-not-exists
                '''
            }
        }

        stage('Deploy') {
            steps {
                sh 'szc namespace apply prod-app ./k8s/ --watch'
            }
        }
    }
}

AI Agent Integration

Claude Code Integration

When using Claude Code (Anthropic's AI coding assistant), you can leverage the CLI directly:

# Claude can run these commands to manage infrastructure
szc database list --json
szc database create my-db --engine postgres --if-not-exists --wait
szc database get my-db --json --fields connection_string

Best practices for AI agents:

  • Use --json for all read operations
  • Use --if-not-exists for create operations
  • Use --dry-run to preview changes before applying
  • Use --wait to ensure operations complete

MCP Server Integration

Sparbz Cloud can be integrated as an MCP (Model Context Protocol) server:

{
  "mcpServers": {
    "sparbz-cloud": {
      "command": "szc",
      "args": ["mcp", "serve"],
      "env": {
        "SPARBZ_API_KEY": "szc_prod_..."
      }
    }
  }
}

Available MCP tools:

  • database_list - List all databases
  • database_create - Create a new database
  • database_get - Get database details
  • namespace_list - List namespaces
  • namespace_apply - Apply Kubernetes manifest
  • storage_create - Create storage bucket
  • vault_read - Read secret from Vault
  • vault_write - Write secret to Vault

SDK for AI Agents

For programmatic integration, use the SDKs:

# Python example (coming soon)
from sparbz import SparbzCloud

client = SparbzCloud(api_key="szc_prod_...")

# Create database
db = client.databases.create(
    name="my-db",
    engine="postgres",
    if_not_exists=True,
)

# Wait for ready
db = client.databases.wait_ready(db.id)

# Get connection string
creds = client.databases.get_credentials(db.id)
print(f"Connection: {creds.connection_string}")

Scripting Patterns

Environment Setup Script

#!/bin/bash
set -e

ENV=${1:-staging}
echo "Setting up $ENV environment..."

# Create database
echo "Creating database..."
szc database create "${ENV}-db" \
  --engine postgres \
  --tier starter \
  --if-not-exists \
  --wait

# Get connection string and store in Vault
echo "Storing credentials in Vault..."
CONNECTION=$(szc database get "${ENV}-db" --json --fields connection_string | jq -r '.connection_string')
szc vault write "secret/data/${ENV}/database" connection_string="$CONNECTION"

# Create namespace
echo "Creating namespace..."
szc namespace create "${ENV}-app" \
  --tier starter \
  --if-not-exists

# Apply secrets
echo "Applying secrets..."
szc namespace apply "${ENV}-app" - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: database-credentials
type: Opaque
stringData:
  DATABASE_URL: "$CONNECTION"
EOF

echo "Environment $ENV is ready!"

Cleanup Script

#!/bin/bash
set -e

ENV=${1:-staging}
FORCE=${2:-false}

echo "Cleaning up $ENV environment..."

if [ "$FORCE" != "--force" ]; then
  echo "This will delete all resources in $ENV. Add --force to confirm."
  exit 1
fi

# Delete namespace (and all resources in it)
szc namespace delete "${ENV}-app" --force 2>/dev/null || true

# Delete database
szc database delete "${ENV}-db" --force 2>/dev/null || true

# Delete secrets
szc vault delete "secret/data/${ENV}" 2>/dev/null || true

echo "Cleanup complete!"

Health Check Script

#!/bin/bash

# Check all infrastructure components
check_status() {
  local resource=$1
  local name=$2
  local status
  status=$(szc "$resource" get "$name" --json --fields status 2>/dev/null | jq -r '.status')

  if [ "$status" == "active" ]; then
    echo "OK: $resource/$name is active"
    return 0
  else
    echo "FAIL: $resource/$name is $status"
    return 1
  fi
}

FAILED=0

check_status database prod-db || FAILED=1
check_status namespace prod-app || FAILED=1
check_status kafka prod-kafka || FAILED=1

exit $FAILED

Terraform Integration

Use the Sparbz Cloud Terraform provider for infrastructure as code:

terraform {
  required_providers {
    sparbz = {
      source  = "sparbz-cloud/sparbz"
      version = "~> 1.0"
    }
  }
}

provider "sparbz" {
  api_key = var.sparbz_api_key
}

resource "sparbz_database" "main" {
  name   = "prod-db"
  engine = "postgres"
  tier   = "pro"
}

resource "sparbz_namespace" "app" {
  name = "prod-app"
  tier = "pro"
}

resource "sparbz_vault_secret" "db_creds" {
  path = "secret/data/prod/database"
  data = {
    connection_string = sparbz_database.main.connection_string
  }
}

Pulumi Integration

import * as sparbz from "@pulumi/sparbz";

const database = new sparbz.Database("prod-db", {
  name: "prod-db",
  engine: "postgres",
  tier: "pro",
});

const namespace = new sparbz.Namespace("prod-app", {
  name: "prod-app",
  tier: "pro",
});

export const connectionString = database.connectionString;

Webhooks

Configure webhooks to trigger automation on resource events:

# Create webhook
szc webhook create my-hook \
  --url https://api.example.com/webhooks/sparbz \
  --events database.created,database.deleted,namespace.updated \
  --secret my-webhook-secret

Webhook payload:

{
  "event": "database.created",
  "timestamp": "2024-01-15T10:30:00Z",
  "resource": {
    "id": "db_abc123",
    "type": "database",
    "name": "prod-db"
  },
  "organization": {
    "id": "org_xyz789",
    "name": "Acme Corp"
  }
}
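With a --secret configured, receivers should verify each delivery before trusting it. The exact signature header and scheme are not documented on this page, so treat the hex HMAC-SHA256 sketch below as an assumption to adjust against the real webhook reference:

```shell
# Verify a webhook body against the shared secret with HMAC-SHA256.
# The signing scheme here is an assumed convention, not a documented one.
sign_payload() {
  # $1 = raw request body, $2 = webhook secret; prints a 64-char hex digest
  printf '%s' "$1" | openssl dgst -sha256 -hmac "$2" | awk '{print $NF}'
}

body='{"event":"database.created"}'
expected_sig=$(sign_payload "$body" "my-webhook-secret")
received_sig=$expected_sig   # in a real handler, read this from the request header

if [ "$received_sig" = "$expected_sig" ]; then
  echo "signature ok"
fi
```

In a real handler, read the received signature from the request header and prefer a constant-time comparison over a plain string test.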

Best Practices

1. Use Idempotent Operations

Always use --if-not-exists for create operations in automation:

# Good - safe to re-run
szc database create my-db --if-not-exists

# Bad - will fail if already exists
szc database create my-db

2. Parse JSON Output

Use --json and jq for reliable parsing:

# Good - machine parseable
STATUS=$(szc database get my-db --json | jq -r '.status')

# Bad - fragile text parsing
STATUS=$(szc database get my-db | grep "Status:" | awk '{print $2}')

3. Handle Errors

Check exit codes and handle failures gracefully:

if ! szc database create my-db --if-not-exists --wait; then
  echo "Failed to create database"
  exit 1
fi
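Transient failures such as exit code 7 (rate limited) are worth retrying with backoff rather than failing outright. A minimal wrapper; the szc command in the trailing comment is illustrative:

```shell
# retry ATTEMPTS DELAY CMD... - re-run CMD with exponential backoff.
retry() {
  local attempts=$1 delay=$2
  shift 2
  local i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    if [ "$i" -lt "$attempts" ]; then
      sleep "$delay"
      delay=$((delay * 2))
    fi
    i=$((i + 1))
  done
  return 1
}

# Example (hypothetical): retry 5 2 szc database create my-db --if-not-exists
```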

4. Use Wait Flags

Don't poll manually when --wait is available:

# Good - blocks until ready
szc database create my-db --wait

# Bad - manual polling
szc database create my-db
while [ "$(szc database get my-db --json | jq -r '.status')" != "active" ]; do
  sleep 5
done
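If a resource type ever lacks --wait, at least bound the polling loop instead of looping forever. The sketch below mirrors the CLI's timeout exit code (8); db_is_active in the trailing comment is a hypothetical check command:

```shell
# wait_for TIMEOUT INTERVAL CMD... - poll CMD until it succeeds or TIMEOUT
# seconds of sleeping have elapsed; returns 8 on timeout, matching szc.
wait_for() {
  local timeout=$1 interval=$2
  shift 2
  local elapsed=0
  until "$@"; do
    elapsed=$((elapsed + interval))
    if [ "$elapsed" -ge "$timeout" ]; then
      return 8
    fi
    sleep "$interval"
  done
}

# Example (hypothetical): wait_for 300 5 db_is_active my-db
```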

5. Secure Credentials

Use environment variables or secrets management:

# Good - from environment
export SPARBZ_API_KEY="szc_prod_..."
szc database list

# Bad - in command line (visible in ps/history)
szc auth login --api-key "szc_prod_..."

6. Use Dry Run

Preview changes before applying:

# Preview first
szc stack apply my-stack --dry-run

# Then apply if it looks good
szc stack apply my-stack