In a production environment where SpreadJS Collaboration Server must handle high concurrency or achieve high availability, it is common to deploy multiple server instances behind a load balancer.
However, when users are connected to different servers, they may face message-broadcasting issues. For example, a user connected to “Server A” cannot communicate or share real-time updates with a user connected to “Server B” within the same collaboration room.
To solve this, SpreadJS Collaboration supports the integration of a Socket.IO Adapter. This adapter ensures that all collaboration events and messages are distributed across server instances seamlessly, enabling real-time synchronization between all connected clients regardless of the underlying node.
| Component | Description |
|---|---|
| Node.js | v16 or later |
| Redis Server | v7.0 or later, accessible to all collaboration servers |
| @mescius/js-collaboration | SpreadJS Collaboration server package |
| @socket.io/redis-adapter | Redis adapter for multi-node communication |
| Load Balancer | Nginx, Kubernetes Service, or equivalent |
- Supports multi-server deployment using the Socket.IO Redis Adapter.
- Synchronizes real-time collaboration data (workbook updates, selection states, connection events) between servers.
- Ensures data consistency across all user sessions.
- Compatible with load-balancing systems (Nginx, Kubernetes, etc.).
The adapter uses Redis Publish/Subscribe (Pub/Sub) to:

- Broadcast socket events from one server to the others.
- Inform all servers of updates in the same collaboration room.
- Maintain synchronization between users connected through different instances.
Run the following commands in your server project directory, and ensure your Redis server is running and reachable by all nodes:

```bash
npm install @mescius/js-collaboration
npm install redis
npm install @socket.io/redis-adapter
```

Insert the following code in your Collaboration Server setup file (e.g., server.js):
```javascript
import { Server } from '@mescius/js-collaboration';
import { createClient } from 'redis';
import { createAdapter } from '@socket.io/redis-adapter';

// 1. Initialize the Pub/Sub clients
const pubClient = createClient({ url: 'redis://localhost:6379' });
const subClient = pubClient.duplicate();
await Promise.all([pubClient.connect(), subClient.connect()]);

// 2. Inject the adapter
const server = new Server({
    port: 8080,
    socketIoAdapter: createAdapter(pubClient, subClient)
});
```

A load balancer is required to distribute user traffic among multiple server instances. Below are two recommended setups.
Using Nginx as a reverse proxy, you define the server list in an upstream block and configure the headers needed for the WebSocket protocol upgrade.

- Upstream: Define the IP addresses and ports of the backend server nodes.
- Upgrade Headers: Explicitly set the Upgrade and Connection headers to support WebSocket handshakes.
```nginx
http {
    # Define backend server list
    upstream nodes {
        server 103.32.2.101:3000;
        server 103.32.2.102:3000;
        server 103.32.2.103:3000;
    }

    server {
        listen 3000;
        server_name yourhost.com;

        location /collaboration {
            proxy_pass http://nodes;

            # Enable WebSocket support
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            # Pass real IP (optional)
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
        }
    }
}
```

Kubernetes provides a scalable and self-healing deployment option using replicas and service exposure.
- replicas: Set to 3 or more for a multi-instance deployment.
- Environment Variables: Pass the Redis and database addresses via env variables so that all Pods connect to the same Redis service.
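On the application side, a minimal sketch for turning these variables into the Redis connection URL (the REDIS_HOST/REDIS_PORT names are assumptions matching the manifest below, not something the adapter requires):

```javascript
// Derive the Redis URL from environment variables so that every
// instance points at the same shared Redis service.
function redisUrlFromEnv(env = process.env) {
  const host = env.REDIS_HOST || 'localhost';
  const port = env.REDIS_PORT || '6379';
  return `redis://${host}:${port}`;
}

// e.g. createClient({ url: redisUrlFromEnv() })
console.log(redisUrlFromEnv({ REDIS_HOST: 'redis-service' }));
// redis://redis-service:6379
```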
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: collaboration-app
spec:
  replicas: 3 # Number of replicas for multi-server setup
  selector:
    matchLabels:
      app: collaboration-app
  template:
    metadata:
      labels:
        app: collaboration-app
    spec:
      containers:
        - name: collaboration-app
          image: collaboration-app:v1 # Replace with your actual image
          ports:
            - containerPort: 3000
          # Configure environment variables to ensure connection to shared resources
          env:
            - name: DB_HOST
              value: "postgres-service" # Name of the K8s DB Service
            - name: REDIS_HOST
              value: "redis-service" # Name of the K8s Redis Service
---
apiVersion: v1
kind: Service
metadata:
  name: collaboration-app-service
spec:
  selector:
    app: collaboration-app
  ports:
    - port: 3000
      targetPort: 3000
  type: LoadBalancer
```
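One caveat worth noting: if clients are allowed to fall back to HTTP long-polling, Socket.IO requires session affinity (sticky sessions) so that all requests of a polling session reach the same instance. With Nginx this can be achieved by adding `ip_hash;` inside the `upstream` block; with a Kubernetes Service, a sketch is to enable client-IP affinity on the Service defined above:

```yaml
# Sketch: route each client IP to the same Pod
# (added to the spec of the Service above)
spec:
  sessionAffinity: ClientIP
```

Alternatively, restricting clients to the `websocket` transport avoids the need for session affinity altogether.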