MongoDB

Fully managed MongoDB databases with replica sets, automated backups, and scaling.

Overview

Sparbz Cloud MongoDB provides production-ready MongoDB clusters with automatic failover, backups, and monitoring.

Features

  • Replica Sets: 3-node replica sets for high availability (see the connection sketch after this list)
  • Automated Backups: Continuous backups with point-in-time recovery
  • Scaling: Vertical and horizontal scaling on demand
  • Monitoring: Real-time metrics and query analysis
  • Security: Encryption, authentication, and network isolation
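
Because each database is a three-node replica set, drivers can fail over automatically and, where eventually consistent reads are acceptable, route queries to secondaries. Below is a minimal sketch of a replica-set-aware connection string; the hosts and replica set name are placeholders, not actual Sparbz Cloud values (use the details returned by szc database get).

// Hypothetical multi-host URI: the driver discovers the primary and can
// serve reads from secondaries via readPreference.
const { MongoClient } = require('mongodb');

const uri = "mongodb://<username>:<password>@<host-1>:27017,<host-2>:27017,<host-3>:27017/<database>" +
  "?authSource=admin&tls=true&replicaSet=<replica-set-name>&readPreference=secondaryPreferred";

const client = new MongoClient(uri);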

Versions

Version       Status      End of Life
MongoDB 7.0   Current     2027
MongoDB 6.0   Supported   2025

Tiers

Tier         vCPU   Memory   Storage   Connections
Starter      1      2 GB     10 GB     100
Pro          2      8 GB     100 GB    1000
Enterprise   4+     32+ GB   1 TB+     10000+

Getting Started

Create a Database

# Create a MongoDB database
szc database create my-mongo --engine mongodb

# Create with specific version
szc database create my-mongo --engine mongodb --version 7.0

# Create with pro tier
szc database create my-mongo --engine mongodb --tier pro

# Wait for database to be ready
szc database create my-mongo --engine mongodb --wait

Get Connection Details

# Get connection info
szc database get my-mongo

# Get connection string only
szc database get my-mongo --json --fields connection_string

# Get credentials
szc database credentials my-mongo

Connect to Database

# Using mongosh
mongosh "mongodb://<host>:27017/<database>?authSource=admin" \
--username <username> \
--password <password>

# Using connection string
mongosh "mongodb+srv://<username>:<password>@<host>/<database>?retryWrites=true"

Connection Examples

Node.js with mongodb

const { MongoClient } = require('mongodb');

const uri = "mongodb://<username>:<password>@<host>:27017/<database>?authSource=admin&tls=true";
const client = new MongoClient(uri);

async function run() {
  await client.connect();
  const db = client.db('<database>');
  const collection = db.collection('users');

  // Insert
  await collection.insertOne({ name: 'John', email: 'john@example.com' });

  // Query
  const user = await collection.findOne({ name: 'John' });
  console.log(user);
}

run().finally(() => client.close());

Python with pymongo

from pymongo import MongoClient

uri = "mongodb://<username>:<password>@<host>:27017/<database>?authSource=admin&tls=true"
client = MongoClient(uri)
db = client['<database>']
collection = db['users']

# Insert
collection.insert_one({'name': 'John', 'email': 'john@example.com'})

# Query
user = collection.find_one({'name': 'John'})
print(user)

Go with mongo-driver

import (
    "context"
    "log"

    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

uri := "mongodb://<username>:<password>@<host>:27017/<database>?authSource=admin&tls=true"
client, err := mongo.Connect(context.TODO(), options.Client().ApplyURI(uri))
if err != nil {
    log.Fatal(err)
}
defer client.Disconnect(context.TODO())

collection := client.Database("<database>").Collection("users")

// Insert
_, err = collection.InsertOne(context.TODO(), bson.D{
    {Key: "name", Value: "John"},
    {Key: "email", Value: "john@example.com"},
})

// Query
var result bson.M
err = collection.FindOne(context.TODO(), bson.D{{Key: "name", Value: "John"}}).Decode(&result)

Java with MongoDB Driver

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

String uri = "mongodb://<username>:<password>@<host>:27017/<database>?authSource=admin&tls=true";
MongoClient client = MongoClients.create(uri);
MongoDatabase db = client.getDatabase("<database>");
MongoCollection<Document> collection = db.getCollection("users");

// Insert
collection.insertOne(new Document("name", "John").append("email", "john@example.com"));

// Query
Document user = collection.find(new Document("name", "John")).first();

Database Management

Scale Resources

# Scale to pro tier
szc database scale my-mongo --tier pro

# Scale storage
szc database scale my-mongo --storage 200

Upgrade Version

# Upgrade to latest minor version
szc database upgrade my-mongo

# Upgrade to specific version
szc database upgrade my-mongo --version 7.0

Delete Database

# Delete database (creates final backup)
szc database delete my-mongo

# Delete without backup
szc database delete my-mongo --skip-final-backup

Backups

Automatic Backups

All databases include continuous backups with configurable retention:

Tier         Retention   Point-in-Time
Starter      7 days      No
Pro          30 days     Yes
Enterprise   90 days     Yes

Manual Backups

# Create manual backup
szc backup create my-mongo --type database

# List backups
szc backup list --resource my-mongo

# Restore from backup
szc backup restore bkp_abc123 --target my-mongo-restored

Point-in-Time Recovery

# Restore to specific point in time (Pro/Enterprise only)
szc database restore my-mongo --point-in-time "2024-01-15T10:30:00Z"

Indexes

Create Indexes

// In mongosh
db.users.createIndex({ email: 1 }, { unique: true });
db.users.createIndex({ name: 1, created_at: -1 });
db.users.createIndex({ location: "2dsphere" });
db.logs.createIndex({ timestamp: 1 }, { expireAfterSeconds: 604800 });

List Indexes

db.users.getIndexes();

Analyze Index Usage

db.users.aggregate([{ $indexStats: {} }]);

Aggregation Pipeline

// Example: Group and count orders by status
db.orders.aggregate([
  { $match: { created_at: { $gte: ISODate("2024-01-01") } } },
  { $group: { _id: "$status", count: { $sum: 1 }, total: { $sum: "$amount" } } },
  { $sort: { count: -1 } }
]);

// Example: Lookup related documents
db.orders.aggregate([
  { $match: { user_id: ObjectId("...") } },
  { $lookup: {
      from: "products",
      localField: "product_ids",
      foreignField: "_id",
      as: "products"
  }},
  { $project: { _id: 1, total: 1, "products.name": 1 } }
]);

Monitoring

Available Metrics

  • CPU and memory utilization
  • Storage usage and IOPS
  • Connection count
  • Operations per second (query, insert, update, delete)
  • Replication lag (see the mongosh check after this list)
  • Slow queries
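
To spot-check replication lag by hand, rs.status() in mongosh reports each member's state and last applied optime. This is a standard MongoDB command, not a Sparbz-specific one; whether it is permitted depends on the privileges granted to your user on a managed cluster.

// In mongosh: print each member's name, state, and last applied optime
rs.status().members.forEach(m => print(m.name, m.stateStr, m.optimeDate));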

View Metrics

# Get current metrics
szc database metrics my-mongo

# Get metrics for specific period
szc database metrics my-mongo --start "2024-01-15T00:00:00Z" --end "2024-01-16T00:00:00Z"

Profiling

// Enable profiling for slow queries (> 100ms)
db.setProfilingLevel(1, { slowms: 100 });

// View profiled queries
db.system.profile.find().sort({ ts: -1 }).limit(10);

Security

Authentication

All connections require authentication. Users are managed via the CLI:

# Create additional user
szc database user create my-mongo app_user

# Create read-only user
szc database user create my-mongo readonly_user --privileges read

# Rotate password
szc database user rotate-password my-mongo app_user

Roles

Role        Description
readWrite   Read and write to assigned database
read        Read-only access to assigned database
dbAdmin     Database administration tasks
userAdmin   User management for assigned database
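
To verify which roles a user actually holds, you can inspect it from mongosh with the standard db.getUser() helper. This sketch assumes users live in the admin database, as the authSource=admin connection strings above imply, and uses the app_user created earlier as an example.

// In mongosh: show the roles granted to app_user
// (requires user-administration privileges, or viewing your own account)
db.getSiblingDB("admin").getUser("app_user")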

Encryption

  • At rest: AES-256 encryption using managed keys
  • In transit: TLS 1.3 required for all connections

Migration

Import Data

# Import from JSON
mongoimport --uri="mongodb://<connection-string>" \
  --collection=users --file=users.json --jsonArray

# Import from BSON dump
mongorestore --uri="mongodb://<connection-string>" \
  --archive=backup.archive

Export Data

# Export to JSON
mongoexport --uri="mongodb://<connection-string>" \
  --collection=users --out=users.json --jsonArray

# Export to BSON dump
mongodump --uri="mongodb://<connection-string>" \
  --archive=backup.archive

Change Streams

Enable real-time notifications for document changes:

const { MongoClient } = require('mongodb');

async function watchChanges() {
  // uri as in the connection examples above
  const client = new MongoClient(uri);
  await client.connect();

  const collection = client.db('<database>').collection('orders');
  const changeStream = collection.watch();

  changeStream.on('change', (change) => {
    console.log('Change detected:', change);
  });
}

Best Practices

Schema Design

  • Embed data that is frequently accessed together
  • Use references for large, frequently-updated documents
  • Create indexes for common query patterns
  • Use time-series collections for IoT/metrics data (see the sketch after this list)
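
For the time-series point above, here is a minimal sketch of creating a time-series collection (standard MongoDB 5.0+ syntax; the collection and field names are illustrative):

// In mongosh: time-series collections bucket measurements by time and metadata
db.createCollection("device_metrics", {
  timeseries: { timeField: "timestamp", metaField: "deviceId", granularity: "minutes" }
});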

Performance

  • Use projection to limit returned fields (see the Node.js sketch after this list)
  • Create compound indexes for multi-field queries
  • Use aggregation pipeline for complex queries
  • Enable connection pooling in applications
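
A minimal Node.js sketch of the projection and pooling points above, reusing the MongoClient import and uri from the connection examples; the option values are illustrative, not Sparbz Cloud defaults.

// Reuse a single pooled client per process; maxPoolSize caps concurrent connections
const client = new MongoClient(uri, { maxPoolSize: 20 });

// Projection returns only the fields the caller needs
const users = await client.db('<database>')
  .collection('users')
  .find({ status: 'active' }, { projection: { name: 1, email: 1, _id: 0 } })
  .toArray();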

Security

  • Use dedicated users per application
  • Grant minimal required roles
  • Enable audit logging for compliance
  • Keep public access disabled

Pricing

Tier         Monthly   Included Storage   Overage
Starter      $15       10 GB              $0.10/GB
Pro          $49       100 GB             $0.08/GB
Enterprise   Custom    Custom             Custom

Backup storage is billed separately at $0.05/GB/month.
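
For example, assuming overage is billed only on storage beyond the included amount, a Pro database holding 150 GB of data with 60 GB of backup storage would cost roughly $49 + (50 GB × $0.08/GB) + (60 GB × $0.05/GB) = $56 per month.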