Deployment Guide

Deploy Pushy to production with Docker, Kubernetes, and AWS.

Local Development

The fastest way to get started with local development is Docker Compose:

Quick Start
# Start all services
docker-compose up -d
# Check service health
docker-compose ps
# View logs
docker-compose logs -f
# Stop services
docker-compose down

Service Ports

API Server      localhost:3000
PostgreSQL      localhost:5432
Redis           localhost:6379
LocalStack      localhost:4566
MailHog SMTP    localhost:1025
MailHog UI      localhost:8025
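
Once the stack is up, a quick smoke test confirms everything is reachable. The /health path and the postgres/redis service names below are assumptions; match them to the actual docker-compose.yml and health endpoint:

# Check the API (replace /health with the actual health endpoint)
curl -i http://localhost:3000/health
# Check PostgreSQL inside the compose network
docker-compose exec postgres pg_isready -U pushy
# Check Redis
docker-compose exec redis redis-cli ping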

Docker Deployment

1. Build Docker Image

Create a production-ready Docker image:

Dockerfile
FROM rust:1.75-alpine AS builder
RUN apk add --no-cache musl-dev openssl-dev pkgconfig
WORKDIR /app
COPY Cargo.toml Cargo.lock ./
COPY src ./src
COPY migrations ./migrations
RUN cargo build --release
FROM alpine:3.19
RUN apk add --no-cache ca-certificates openssl
COPY --from=builder /app/target/release/pushy /usr/local/bin/pushy
COPY --from=builder /app/target/release/worker /usr/local/bin/worker
COPY migrations /migrations
EXPOSE 3000
CMD ["pushy"]
Build Commands
# Build the image
docker build -t pushy:latest .
# Tag for registry
docker tag pushy:latest your-registry/pushy:v1.0.0
# Push to registry
docker push your-registry/pushy:v1.0.0
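
To sanity-check the image before pushing, run it locally. The connection strings below are placeholders (host.docker.internal resolves on Docker Desktop); add any other required variables such as the AWS credentials and SQS_QUEUE_URL:

# Run the freshly built image against locally running Postgres/Redis (placeholder values)
docker run --rm -p 3000:3000 \
-e DATABASE_URL=postgresql://pushy:pushy@host.docker.internal:5432/pushy \
-e REDIS_URL=redis://host.docker.internal:6379 \
pushy:latest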

2. Production Docker Compose

docker-compose.prod.yml
version: '3.8'

services:
  api:
    image: your-registry/pushy:v1.0.0
    environment:
      - DATABASE_URL=postgresql://pushy:${DB_PASSWORD}@postgres:5432/pushy
      - REDIS_URL=redis://redis:6379
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - SQS_QUEUE_URL=${SQS_QUEUE_URL}
    ports:
      - "3000:3000"
    depends_on:
      - postgres
      - redis
    restart: unless-stopped

  worker:
    image: your-registry/pushy:v1.0.0
    command: ["worker"]
    environment:
      - DATABASE_URL=postgresql://pushy:${DB_PASSWORD}@postgres:5432/pushy
      - REDIS_URL=redis://redis:6379
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - SQS_QUEUE_URL=${SQS_QUEUE_URL}
    depends_on:
      - postgres
      - redis
    restart: unless-stopped
    scale: 3  # Run 3 worker instances

  postgres:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=pushy
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=pushy
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    command: redis-server --maxmemory 512mb --maxmemory-policy allkeys-lru
    volumes:
      - redis_data:/data
    restart: unless-stopped

volumes:
  postgres_data:
  redis_data:
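
The ${...} references are resolved from the shell environment or an env file at deploy time. A minimal sketch with example values only; DB_PASSWORD and POSTGRES_PASSWORD must hold the same password for the API and worker to connect:

# .env.prod -- keep this file out of version control
DB_PASSWORD=change-me
POSTGRES_PASSWORD=change-me
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
SQS_QUEUE_URL=https://sqs.us-east-1.amazonaws.com/xxx/pushy-notifications.fifo
# Deploy with the production compose file
docker-compose -f docker-compose.prod.yml --env-file .env.prod up -d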

Kubernetes Deployment

Prerequisites

Kubernetes cluster (EKS, GKE, AKS, or self-managed)
kubectl configured to access your cluster
Helm 3.x installed

1. Current Namespace and Secrets

Pushy is deployed in the pushy namespace in our AWS EKS cluster.

# Verify namespace exists
kubectl get namespace pushy
# Check existing secrets
kubectl get secrets -n pushy
# View current deployments
kubectl get deployments -n pushy
# Check running pods
kubectl get pods -n pushy

2. Deploy with Helm

Current Deployment
# Our LoadBalancer endpoint
http://a9a18d9bb21ed4e6e9b07bbd4b8d9f16-251019334.sa-east-1.elb.amazonaws.com
# Get service details
kubectl get svc pushy-api -n pushy -o wide
# Check deployment rollout status
kubectl rollout status deployment/pushy-api -n pushy
values.production.yaml
replicaCount: 3

image:
  repository: your-registry/pushy
  tag: v1.0.0
  pullPolicy: IfNotPresent

service:
  type: LoadBalancer
  port: 3000

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: api.pushy.ar
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: pushy-tls
      hosts:
        - api.pushy.ar

postgresql:
  enabled: false
  external:
    host: your-rds-endpoint.amazonaws.com
    port: 5432
    database: pushy
    username: pushy
    existingSecret: pushy-db-secret

redis:
  enabled: false
  external:
    host: your-redis-endpoint.cache.amazonaws.com
    port: 6379

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80

resources:
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 500m
    memory: 512Mi
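
With the values file in place, install or upgrade the release. The chart path ./helm/pushy is an assumption; point it at wherever the Pushy chart actually lives:

# Install or upgrade the release into the pushy namespace
helm upgrade --install pushy ./helm/pushy \
-n pushy --create-namespace \
-f values.production.yaml
# Review the release
helm status pushy -n pushy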

3. Verify Deployment

# Check pods
kubectl get pods -n pushy
# Check services
kubectl get svc -n pushy
# Check ingress
kubectl get ingress -n pushy
# View logs
kubectl logs -n pushy -l app.kubernetes.io/name=pushy -f
# Port forward for testing
kubectl port-forward -n pushy svc/pushy-api 3000:3000
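
With the port-forward running, a request against localhost confirms the pods are serving traffic (the /health path is an assumption):

# Smoke test through the port-forward
curl -i http://localhost:3000/health
# Or through the public hostname once DNS and TLS resolve
curl -i https://api.pushy.ar/health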

AWS Infrastructure Setup

1. RDS PostgreSQL

Create RDS Instance
# Create DB subnet group
aws rds create-db-subnet-group \
--db-subnet-group-name pushy-db-subnets \
--db-subnet-group-description "Subnet group for pushy database" \
--subnet-ids subnet-xxx subnet-yyy \
--region us-east-1
# Create RDS instance
aws rds create-db-instance \
--db-instance-identifier pushy-db \
--db-instance-class db.t3.medium \
--engine postgres \
--engine-version 15.8 \
--master-username pushy \
--master-user-password "$DB_PASSWORD" \
--allocated-storage 100 \
--storage-encrypted \
--backup-retention-period 7 \
--multi-az \
--db-subnet-group-name pushy-db-subnets \
--vpc-security-group-ids sg-xxx \
--region us-east-1
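
values.production.yaml references an existing secret named pushy-db-secret. A sketch of creating it and reading back the RDS endpoint (the password key name is an assumption; match whatever key the chart expects):

# Store the database password for the Helm chart to consume
kubectl create secret generic pushy-db-secret \
-n pushy \
--from-literal=password="$DB_PASSWORD"
# Fetch the RDS endpoint for the postgresql.external.host value
aws rds describe-db-instances \
--db-instance-identifier pushy-db \
--query "DBInstances[0].Endpoint.Address" \
--output text \
--region us-east-1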

2. ElastiCache Redis

Create Redis Cluster
# Create cache subnet group
aws elasticache create-cache-subnet-group \
--cache-subnet-group-name pushy-redis-subnets \
--cache-subnet-group-description "Subnet group for pushy redis" \
--subnet-ids subnet-xxx subnet-yyy \
--region us-east-1
# Create Redis replication group (create-cache-cluster supports only a single
# Redis node, so a replication group is used for 3 nodes across AZs)
aws elasticache create-replication-group \
--replication-group-id pushy-redis \
--replication-group-description "Redis for pushy" \
--engine redis \
--cache-node-type cache.r6g.xlarge \
--num-cache-clusters 3 \
--automatic-failover-enabled \
--multi-az-enabled \
--cache-subnet-group-name pushy-redis-subnets \
--security-group-ids sg-yyy \
--region us-east-1
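
Once the replication group reports available, read its primary endpoint for the redis.external.host value:

# Read the primary endpoint of the Redis replication group
aws elasticache describe-replication-groups \
--replication-group-id pushy-redis \
--query "ReplicationGroups[0].NodeGroups[0].PrimaryEndpoint.Address" \
--output text \
--region us-east-1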

3. SQS Queue Setup

Create SQS Queues
# Create main notification queue (FIFO)
aws sqs create-queue \
--queue-name pushy-notifications.fifo \
--attributes '{
"FifoQueue": "true",
"ContentBasedDeduplication": "true",
"MessageRetentionPeriod": "1209600",
"VisibilityTimeout": "300"
}' \
--region us-east-1
# Create dead letter queue
aws sqs create-queue \
--queue-name pushy-notifications-dlq.fifo \
--attributes '{
"FifoQueue": "true",
"MessageRetentionPeriod": "1209600"
}' \
--region us-east-1
# Set up redrive policy (the RedrivePolicy value must be a single-line escaped JSON string)
aws sqs set-queue-attributes \
--queue-url https://sqs.us-east-1.amazonaws.com/xxx/pushy-notifications.fifo \
--attributes '{
"RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:xxx:pushy-notifications-dlq.fifo\",\"maxReceiveCount\":\"3\"}"
}'
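
The API and workers read the queue from SQS_QUEUE_URL; the URL and the DLQ ARN can be read back after creation:

# Resolve the queue URL for the SQS_QUEUE_URL environment variable
aws sqs get-queue-url \
--queue-name pushy-notifications.fifo \
--region us-east-1
# Read the DLQ ARN referenced by the redrive policy
aws sqs get-queue-attributes \
--queue-url https://sqs.us-east-1.amazonaws.com/xxx/pushy-notifications-dlq.fifo \
--attribute-names QueueArn \
--region us-east-1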

4. EKS Cluster Setup

Create EKS Cluster
# Create EKS cluster
eksctl create cluster \
--name pushy-cluster \
--region us-east-1 \
--nodegroup-name pushy-nodes \
--node-type t3.large \
--nodes 3 \
--nodes-min 3 \
--nodes-max 10 \
--managed
# Update kubeconfig
aws eks update-kubeconfig --region us-east-1 --name pushy-cluster
# Verify cluster
kubectl get nodes
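
The Helm values above assume an nginx ingress class and a letsencrypt-prod ClusterIssuer. A sketch of installing those prerequisites and creating the namespace (chart versions are unpinned here; pin them for production, and apply the ClusterIssuer manifest separately):

# Create the application namespace
kubectl create namespace pushy
# Install the NGINX ingress controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace
# Install cert-manager for TLS certificates
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager -n cert-manager --create-namespace --set installCRDs=true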

Production Checklist

Before Going Live

SSL/TLS certificates configured
Database backups scheduled
Monitoring and alerting configured
Log aggregation set up (CloudWatch or ELK)
Auto-scaling policies configured
Security groups reviewed
IAM roles and policies configured
Environment variables secured in Secrets Manager
Load testing completed
Disaster recovery plan documented

Production Monitoring

Set up monitoring with Prometheus and Grafana:

# Install Prometheus operator
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring
# Access Grafana
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80
# Default credentials: admin/prom-operator

Ready to troubleshoot issues?

View Troubleshooting Guide