Deployment Guide
Deploy Pushy to production with Docker, Kubernetes, and AWS.
Local Development
The fastest way to get started with local development is Docker Compose:
Quick Start
# Start all services
docker-compose up -d

# Check service health
docker-compose ps

# View logs
docker-compose logs -f

# Stop services
docker-compose down

Service Ports
API Server: localhost:3000
PostgreSQL: localhost:5432
Redis: localhost:6379
LocalStack: localhost:4566
MailHog SMTP: localhost:1025
MailHog UI: localhost:8025
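If you run the API binary outside of Docker while the backing services stay in Compose, the environment below points it at the ports listed above. This is a minimal sketch: DATABASE_URL, REDIS_URL, and SQS_QUEUE_URL mirror the production compose file later in this guide, while the remaining variable names (and the local Postgres credentials) are assumptions to adjust to the codebase.

.env.local
# Hypothetical local-development environment; confirm variable names against the code.
DATABASE_URL=postgresql://pushy:pushy@localhost:5432/pushy
REDIS_URL=redis://localhost:6379
# LocalStack stands in for AWS (SQS) on port 4566
AWS_ENDPOINT_URL=http://localhost:4566
AWS_ACCESS_KEY_ID=test
AWS_SECRET_ACCESS_KEY=test
SQS_QUEUE_URL=http://localhost:4566/000000000000/pushy-notifications.fifo
# MailHog captures outgoing email on 1025; view it in the UI on 8025
SMTP_HOST=localhost
SMTP_PORT=1025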
Docker Deployment
1. Build Docker Image
Create a production-ready Docker image:
Dockerfile
FROM rust:1.75-alpine AS builder

RUN apk add --no-cache musl-dev openssl-dev pkgconfig

WORKDIR /app
COPY Cargo.toml Cargo.lock ./
COPY src ./src
COPY migrations ./migrations

RUN cargo build --release

FROM alpine:3.19

RUN apk add --no-cache ca-certificates openssl

COPY --from=builder /app/target/release/pushy /usr/local/bin/pushy
COPY --from=builder /app/target/release/worker /usr/local/bin/worker
COPY migrations /migrations

EXPOSE 3000

CMD ["pushy"]
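To keep the build context small and avoid copying local artifacts into the image, a .dockerignore next to the Dockerfile helps. The entries below are a sketch for a typical Rust project layout, not a confirmed list for this repository.

.dockerignore
# Hypothetical ignore list for a typical Rust repository
target/
.git/
.env
docker-compose*.yml
*.md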
Build Commands

# Build the image
docker build -t pushy:latest .

# Tag for registry
docker tag pushy:latest your-registry/pushy:v1.0.0

# Push to registry
docker push your-registry/pushy:v1.0.0

2. Production Docker Compose
docker-compose.prod.yml
version: '3.8'

services:
  api:
    image: your-registry/pushy:v1.0.0
    environment:
      - DATABASE_URL=postgresql://pushy:${DB_PASSWORD}@postgres:5432/pushy
      - REDIS_URL=redis://redis:6379
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - SQS_QUEUE_URL=${SQS_QUEUE_URL}
    ports:
      - "3000:3000"
    depends_on:
      - postgres
      - redis
    restart: unless-stopped

  worker:
    image: your-registry/pushy:v1.0.0
    command: ["worker"]
    environment:
      - DATABASE_URL=postgresql://pushy:${DB_PASSWORD}@postgres:5432/pushy
      - REDIS_URL=redis://redis:6379
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - SQS_QUEUE_URL=${SQS_QUEUE_URL}
    depends_on:
      - postgres
      - redis
    restart: unless-stopped
    scale: 3  # Run 3 worker instances

  postgres:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=pushy
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=pushy
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    command: redis-server --maxmemory 512mb --maxmemory-policy allkeys-lru
    volumes:
      - redis_data:/data
    restart: unless-stopped

volumes:
  postgres_data:
  redis_data:
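With the image pushed and the secrets exported (or kept in an env file), the stack can be brought up with Compose. The .env.production filename below is an assumption; the compose filename is the one used in this guide.

bash
# Start the production stack (reads DB_PASSWORD, AWS_* etc. from the env file)
docker compose -f docker-compose.prod.yml --env-file .env.production up -d

# Override the worker count defined in the compose file if needed
docker compose -f docker-compose.prod.yml --env-file .env.production up -d --scale worker=3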
Kubernetes Deployment

Prerequisites
Kubernetes cluster (EKS, GKE, AKS, or self-managed)
kubectl configured to access your cluster
Helm 3.x installed
1. Current Namespace and Secrets
Pushy is deployed in the pushy namespace in our AWS EKS cluster.
bash
# Verify namespace exists
kubectl get namespace pushy

# Check existing secrets
kubectl get secrets -n pushy

# View current deployments
kubectl get deployments -n pushy

# Check running pods
kubectl get pods -n pushy
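If the pushy-db-secret referenced by the Helm values below does not exist yet, it can be created directly with kubectl. The key name (password) is an assumption and should match whatever key the chart actually reads.

bash
# Hypothetical secret layout; confirm the key names the chart expects
kubectl create secret generic pushy-db-secret \
  --namespace pushy \
  --from-literal=password="$DB_PASSWORD"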
2. Deploy with Helm

Current Deployment
# Our LoadBalancer endpoint
http://a9a18d9bb21ed4e6e9b07bbd4b8d9f16-251019334.sa-east-1.elb.amazonaws.com

# Get service details
kubectl get svc pushy-api -n pushy -o wide

# Check deployment rollout status
kubectl rollout status deployment/pushy-api -n pushy

values.production.yaml
replicaCount: 3

image:
  repository: your-registry/pushy
  tag: v1.0.0
  pullPolicy: IfNotPresent

service:
  type: LoadBalancer
  port: 3000

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: api.pushy.ar
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: pushy-tls
      hosts:
        - api.pushy.ar

postgresql:
  enabled: false
  external:
    host: your-rds-endpoint.amazonaws.com
    port: 5432
    database: pushy
    username: pushy
    existingSecret: pushy-db-secret

redis:
  enabled: false
  external:
    host: your-redis-endpoint.cache.amazonaws.com
    port: 6379

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80

resources:
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 500m
    memory: 512Mi
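The guide does not spell out the install command itself. Assuming the chart lives in the repository (the ./charts/pushy path and the pushy release name are assumptions), a typical install or upgrade with these values would look like:

bash
# Install or upgrade the release with the production values
helm upgrade --install pushy ./charts/pushy \
  --namespace pushy \
  --values values.production.yaml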
3. Verify Deployment

bash
# Check pods
kubectl get pods -n pushy

# Check services
kubectl get svc -n pushy

# Check ingress
kubectl get ingress -n pushy

# View logs
kubectl logs -n pushy -l app.kubernetes.io/name=pushy -f

# Port forward for testing
kubectl port-forward -n pushy svc/pushy 3000:3000

AWS Infrastructure Setup
1. RDS PostgreSQL
Create RDS Instance
# Create DB subnet group
aws rds create-db-subnet-group \
  --db-subnet-group-name pushy-db-subnets \
  --db-subnet-group-description "Subnet group for pushy database" \
  --subnet-ids subnet-xxx subnet-yyy \
  --region us-east-1

# Create RDS instance
aws rds create-db-instance \
  --db-instance-identifier pushy-db \
  --db-instance-class db.t3.medium \
  --engine postgres \
  --engine-version 15.8 \
  --master-username pushy \
  --master-user-password "$DB_PASSWORD" \
  --allocated-storage 100 \
  --storage-encrypted \
  --backup-retention-period 7 \
  --multi-az \
  --db-subnet-group-name pushy-db-subnets \
  --vpc-security-group-ids sg-xxx \
  --region us-east-1
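Once the instance is available, the endpoint needed for DATABASE_URL (and for the Helm values above) can be read back from the API; the --query expression below is a sketch.

bash
# Wait for the instance, then read its endpoint
aws rds wait db-instance-available --db-instance-identifier pushy-db --region us-east-1
aws rds describe-db-instances \
  --db-instance-identifier pushy-db \
  --region us-east-1 \
  --query "DBInstances[0].Endpoint.Address" \
  --output text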
2. ElastiCache Redis

Create Redis Cluster
# Create cache subnet group
aws elasticache create-cache-subnet-group \
  --cache-subnet-group-name pushy-redis-subnets \
  --cache-subnet-group-description "Subnet group for pushy redis" \
  --subnet-ids subnet-xxx subnet-yyy \
  --region us-east-1

# Create a 3-node, Multi-AZ Redis replication group
# (create-cache-cluster only allows a single node when the engine is redis)
aws elasticache create-replication-group \
  --replication-group-id pushy-redis \
  --replication-group-description "Pushy Redis" \
  --engine redis \
  --cache-node-type cache.r6g.xlarge \
  --num-cache-clusters 3 \
  --automatic-failover-enabled \
  --multi-az-enabled \
  --cache-subnet-group-name pushy-redis-subnets \
  --security-group-ids sg-yyy \
  --region us-east-1
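As with RDS, the Redis endpoint used in the Helm values can be read back once the group is available; the --query path below assumes the non-cluster-mode layout created above.

bash
# Read the primary endpoint for REDIS_URL / the Helm values
aws elasticache describe-replication-groups \
  --replication-group-id pushy-redis \
  --region us-east-1 \
  --query "ReplicationGroups[0].NodeGroups[0].PrimaryEndpoint.Address" \
  --output text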
3. SQS Queue Setup

Create SQS Queues
# Create main notification queue (FIFO)
aws sqs create-queue \
  --queue-name pushy-notifications.fifo \
  --attributes '{
    "FifoQueue": "true",
    "ContentBasedDeduplication": "true",
    "MessageRetentionPeriod": "1209600",
    "VisibilityTimeout": "300"
  }' \
  --region us-east-1

# Create dead letter queue
aws sqs create-queue \
  --queue-name pushy-notifications-dlq.fifo \
  --attributes '{
    "FifoQueue": "true",
    "MessageRetentionPeriod": "1209600"
  }' \
  --region us-east-1

# Set up redrive policy
aws sqs set-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/xxx/pushy-notifications.fifo \
  --attributes '{
    "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:xxx:pushy-notifications-dlq.fifo\",\"maxReceiveCount\":3}"
  }'
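The redrive policy needs the DLQ's ARN and the main queue's URL; both can be looked up rather than hand-built (the xxx account placeholders above stay placeholders):

bash
# Look up the main queue URL and the DLQ ARN for the redrive policy
aws sqs get-queue-url --queue-name pushy-notifications.fifo --region us-east-1
aws sqs get-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/xxx/pushy-notifications-dlq.fifo \
  --attribute-names QueueArn \
  --region us-east-1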
4. EKS Cluster Setup

Create EKS Cluster
# Create EKS cluster
eksctl create cluster \
  --name pushy-cluster \
  --region us-east-1 \
  --nodegroup-name pushy-nodes \
  --node-type t3.large \
  --nodes 3 \
  --nodes-min 3 \
  --nodes-max 10 \
  --managed

# Update kubeconfig
aws eks update-kubeconfig --region us-east-1 --name pushy-cluster

# Verify cluster
kubectl get nodes
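The Helm values above assume an nginx ingress class and a letsencrypt-prod ClusterIssuer, neither of which ships with a fresh EKS cluster. If they are not already installed, something like the following puts them in place (the ClusterIssuer manifest itself is not shown here).

bash
# Install ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# Install cert-manager, which provides the ClusterIssuer mechanism
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true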
Production Checklist

Before Going Live
SSL/TLS certificates configured
Database backups scheduled
Monitoring and alerting configured
Log aggregation set up (CloudWatch, ELK)
Auto-scaling policies configured
Security groups reviewed
IAM roles and policies configured
Environment variables secured in Secrets Manager (see the sketch after this list)
Load testing completed
Disaster recovery plan documented
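For the Secrets Manager item above, a minimal sketch of storing the database password and reading it back when (re)creating the pushy-db-secret shown earlier; the secret name is an assumption, and a tool such as External Secrets Operator would automate the sync.

bash
# Store the password in AWS Secrets Manager (name is an assumption)
aws secretsmanager create-secret \
  --name pushy/prod/db-password \
  --secret-string "$DB_PASSWORD" \
  --region us-east-1

# Read it back when recreating the Kubernetes secret used by the chart
aws secretsmanager get-secret-value \
  --secret-id pushy/prod/db-password \
  --region us-east-1 \
  --query SecretString --output text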
Production Monitoring
Set up monitoring with Prometheus and Grafana:
bash
# Install Prometheus operator
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace

# Access Grafana
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80

# Default credentials: admin/prom-operator
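If the Pushy API exposes Prometheus metrics (this guide does not confirm an endpoint; the /metrics path and the http port name below are assumptions), a ServiceMonitor tells the operator to scrape it:

servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: pushy
  namespace: pushy
  labels:
    release: prometheus   # must match the kube-prometheus-stack release selector
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: pushy
  endpoints:
    - port: http          # assumed service port name
      path: /metrics      # assumed metrics path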
Ready to troubleshoot issues?

View Troubleshooting Guide