An e-commerce backend built with Node.js, TypeScript, and Docker. Load tested with 500 concurrent users at roughly 10x the throughput of the first version.
Four services split the work: an API Gateway routes traffic, and three backend services handle user auth, the product catalog, and order management. Nginx balances load, Redis caches database queries, and MongoDB holds the data.
```
Client --> Nginx (port 80) --> API Gateway (port 3000)
                                        |
               +------------------------+------------------------+
               |                        |                        |
         User Service            Product Service           Order Service
         (port 3001)              (port 3002)               (port 3003)
               |                        |                        |
               +------------------------+------------------------+
                                        |
                                MongoDB + Redis
```
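The gateway's job in the diagram above is path-based routing. A minimal TypeScript sketch of that idea follows; the `routeFor` helper and its route table are illustrative (the prefixes and service URLs mirror the diagram, not the project's actual gateway code):

```typescript
// Hypothetical sketch of the API Gateway's path-based routing.
// Prefixes and upstream URLs mirror the architecture diagram above.
const routes: Record<string, string> = {
  "/api/auth": "http://user-service:3001",
  "/api/products": "http://product-service:3002",
  "/api/orders": "http://order-service:3003",
};

// Return the upstream service for a request path, or null if unrouted.
function routeFor(path: string): string | null {
  for (const [prefix, target] of Object.entries(routes)) {
    if (path === prefix || path.startsWith(prefix + "/")) return target;
  }
  return null;
}
```

A real gateway would then proxy the request to the returned upstream and layer on rate limiting.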
Load tested with Artillery and k6, before and after optimization:

| Metric | Before | After |
|---|---|---|
| Requests/second | 48.67 | 520-600 |
| Success rate | 75% | 95-100% |
| Response time (avg) | 1,002ms | 150-300ms |
| Max concurrent users | 50 | 500 |
10x more requests per second. 70% faster responses.
A single-container setup drops to a 15% success rate at 500 concurrent users; that is where it breaks down. To handle all 500 users:
- Run multiple API Gateway containers
- Keep the MongoDB connection pool at 50 (already configured)
- Scale Redis with clustering
At 200 users, 100% success rate with responses under one second.
```bash
# Clone and start
git clone <repo-url>
cd ecommerce-microservices-api
docker-compose up --build

# Verify
curl http://localhost/health
curl http://localhost/api/products
```

Ports:
- 80: Nginx (main entry)
- 3000: API Gateway
- 3001: User Service
- 3002: Product Service
- 3003: Order Service
- 9090: Prometheus
- 3030: Grafana
```bash
# Register
curl -X POST http://localhost/api/auth/register \
  -H "Content-Type: application/json" \
  -d '{"email":"user@test.com","password":"pass123","name":"Test User"}'

# Login
curl -X POST http://localhost/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email":"user@test.com","password":"pass123"}'

# Get profile (requires token)
curl http://localhost/api/auth/profile \
  -H "Authorization: Bearer <token>"
```

```bash
# List products
curl http://localhost/api/products

# Get single product
curl http://localhost/api/products/<id>

# Create product
curl -X POST http://localhost/api/products \
  -H "Content-Type: application/json" \
  -d '{"name":"Widget","price":29.99,"category":"Electronics","stock":100}'
```

```bash
# Create order
curl -X POST http://localhost/api/orders \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <token>" \
  -d '{"userId":"<user_id>","items":[{"productId":"<id>","quantity":2}]}'

# Get user orders
curl http://localhost/api/orders \
  -H "Authorization: Bearer <token>"
```

```
ecommerce-microservices-api/
  api-gateway/        # Routes requests, rate limiting
  user-service/       # Auth, JWT, profiles
  product-service/    # Products, categories
  order-service/      # Orders, inventory checks
  load-tests/         # Artillery, k6, custom tests
  monitoring/         # Prometheus config
  docker-compose.yml  # Container orchestration
  nginx.conf          # Reverse proxy config
```
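The curl calls above can also be scripted. The sketch below builds the same requests in TypeScript without sending them; the `jsonPost` helper is illustrative, and passing its result to `fetch(req.url, req.init)` would perform the actual call:

```typescript
// Build the same requests as the curl examples, without sending them.
// Pure request construction keeps this sketch testable offline.
interface Req {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
}

const BASE = "http://localhost";

// Assemble a JSON POST, optionally with a Bearer token.
function jsonPost(path: string, payload: unknown, token?: string): Req {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (token) headers["Authorization"] = `Bearer ${token}`;
  return {
    url: BASE + path,
    init: { method: "POST", headers, body: JSON.stringify(payload) },
  };
}

// Mirrors the login and create-order curl examples above.
const login = jsonPost("/api/auth/login", {
  email: "user@test.com",
  password: "pass123",
});

const order = jsonPost(
  "/api/orders",
  { userId: "<user_id>", items: [{ productId: "<id>", quantity: 2 }] },
  "<token>",
);
```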
- MongoDB connection pool raised to 50 (was 10)
- Indexes on category, price, userId, status
- Text search indexes on products
- Redis with a 1GB memory limit
- 5-minute cache TTL on products
- JWT tokens cached
- User profiles cached
- bcrypt cost factor lowered to 8 (was 10, roughly 4x faster hashing)
- Parallel product checks in orders
- Circuit breaker on inter-service calls
- UV_THREADPOOL_SIZE set to 16
- Nginx keepalive: 32 connections
- Gzip compression
- Proxy cache on read paths
- 1GB RAM per container
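The circuit breaker in the list above can be sketched as a small state machine. This is an illustrative implementation, not the project's actual code (in practice a library such as opossum is often used); the clock is injectable so the cooldown behavior is deterministic:

```typescript
// Minimal circuit-breaker sketch: opens after `threshold` consecutive
// failures, rejects calls while open, and allows one probe call once
// `cooldownMs` has passed.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(
    private threshold = 5,
    private cooldownMs = 10_000,
    private now: () => number = Date.now,
  ) {}

  recordSuccess(): void {
    this.failures = 0; // any success closes the circuit
  }

  recordFailure(): void {
    this.failures++;
    if (this.failures === this.threshold) this.openedAt = this.now();
  }

  isOpen(): boolean {
    if (this.failures < this.threshold) return false;
    if (this.now() - this.openedAt >= this.cooldownMs) {
      this.failures = this.threshold - 1; // half-open: allow one probe call
      return false;
    }
    return true;
  }

  // Wrap an inter-service call, e.g. order-service checking product stock.
  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.isOpen()) throw new Error("circuit open");
    try {
      const result = await fn();
      this.recordSuccess();
      return result;
    } catch (err) {
      this.recordFailure();
      throw err;
    }
  }
}
```

Opening the circuit turns a slow, failing downstream dependency into a fast local error, which is what keeps one unhealthy service from stalling the others under load.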
Three tools included:

```bash
# Custom Node.js test
cd load-tests
node simple-test.js

# Artillery
cd load-tests
npx artillery quick --count 100 --num 50 http://localhost/api/products

# k6
k6 run --vus 200 --duration 30s -e BASE_URL=http://localhost k6-simple-test.js
```

k6 results at 200 users:

```
http_reqs:         18332 (599/sec)
http_req_duration: p95=902ms
http_req_failed:   0.00%
```
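The quantities k6 reports above can be recomputed from raw per-request durations; a small sketch (the `stats` helper is illustrative, using the common nearest-rank percentile):

```typescript
// Recompute throughput and p95 latency from raw request durations,
// the same quantities k6 reports. Sample data is illustrative.
function stats(durationsMs: number[], testSeconds: number) {
  const sorted = [...durationsMs].sort((a, b) => a - b);
  const p95 = sorted[Math.ceil(0.95 * sorted.length) - 1]; // nearest-rank p95
  const reqPerSec = durationsMs.length / testSeconds;
  return { p95, reqPerSec };
}
```

For example, 18,332 requests over roughly 30.6 seconds of test time is how the 599/sec figure above arises.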
Each service reads from .env:

```bash
# user-service/.env
PORT=3001
MONGODB_URL=mongodb+srv://...
JWT_SECRET=your-secret-key
REDIS_HOST=redis

# product-service/.env
PORT=3002
MONGODB_URL=mongodb+srv://...
REDIS_HOST=redis

# order-service/.env
PORT=3003
MONGODB_URL=mongodb+srv://...
USER_SERVICE_URL=http://user-service:3001
PRODUCT_SERVICE_URL=http://product-service:3002
REDIS_HOST=redis
```

Prometheus collects metrics. Grafana shows dashboards.
```bash
# Prometheus
open http://localhost:9090

# Grafana (admin/admin)
open http://localhost:3030
```

Metrics tracked:
- HTTP request rates and latencies
- Container CPU and memory
- MongoDB connection pool
- Redis hit/miss ratios
Run multiple containers:

```bash
docker-compose up --scale api-gateway=3 --scale user-service=2
```

Add to nginx.conf:

```nginx
upstream api_gateway {
    least_conn;
    server api-gateway:3000;
    server api-gateway-2:3000;
    server api-gateway-3:3000;
}
```

Production options:
- Kubernetes for auto-scaling
- MongoDB Atlas dedicated cluster
- Redis Sentinel or Cluster
- CDN for static files
- Runtime: Node.js 18
- Language: TypeScript
- Framework: Express 5
- Database: MongoDB Atlas
- Cache: Redis 7
- Reverse Proxy: Nginx
- Containers: Docker, Docker Compose
- Monitoring: Prometheus, Grafana, cAdvisor
- Load Testing: Artillery, k6
MIT