
Lavalink Load Balancing Guide

February 24, 2026
15 min read
Manas

As your Discord music bot grows, a single Lavalink server may not be enough. Load balancing across multiple nodes provides scalability, redundancy, and better performance. This guide covers everything you need to know.

When Do You Need Load Balancing?

Consider multiple Lavalink nodes when:

  • 500+ concurrent players - Single node struggles
  • High availability required - Cannot afford downtime
  • Geographic distribution - Users in multiple regions
  • Resource limits reached - Maxing out CPU/RAM

Load Balancing Strategies

Round Robin

Distributes players evenly across nodes in order.

Pros:

  • Simple to implement
  • Even distribution

Cons:

  • Does not consider node load
  • May overload struggling nodes
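
A minimal round-robin selector is easy to sketch. This is illustrative and not tied to any client library; `nodes` is any array of node objects exposing a `state` field, matching the Shoukaku examples later in this guide:

```javascript
// Round-robin node selector: cycles through connected nodes in order.
// The `state === 'CONNECTED'` check mirrors the node objects used later in this guide.
function createRoundRobinSelector(nodes) {
  let cursor = 0;
  return function nextNode() {
    const connected = nodes.filter(node => node.state === 'CONNECTED');
    if (connected.length === 0) return null; // no healthy node available
    const node = connected[cursor % connected.length];
    cursor += 1;
    return node;
  };
}
```

Note the weakness from the cons list above: the cursor advances regardless of how loaded each node actually is.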

Least Connections

Sends new players to the node with fewest active players.

Pros:

  • Better load distribution
  • Adapts to varying loads

Cons:

  • Slightly more complex
  • May not account for resource differences

Weighted

Assigns weights based on node capacity.

Pros:

  • Accounts for different server specs
  • Optimal resource utilization

Cons:

  • Requires manual configuration
  • Needs adjustment when adding nodes

Geographic

Routes players to the nearest node.

Pros:

  • Lowest latency for users
  • Better audio quality

Cons:

  • Uneven load distribution
  • Requires multiple regions

Setting Up Multiple Nodes

Node Configuration

Each node runs the same core configuration; the bot tells them apart by hostname and identifier:

Node 1 (US East):

server:
  port: 2333
  address: 0.0.0.0

lavalink:
  server:
    password: "shared_password"

Node 2 (Europe):

server:
  port: 2333
  address: 0.0.0.0

lavalink:
  server:
    password: "shared_password"

Bot Configuration

Configure your bot to connect to multiple nodes:

Shoukaku (Node.js):

const { Shoukaku, Connectors } = require('shoukaku');

const nodes = [
  {
    name: 'us-east',
    url: 'lavalink-us.example.com:2333',
    auth: 'shared_password'
  },
  {
    name: 'europe',
    url: 'lavalink-eu.example.com:2333',
    auth: 'shared_password'
  }
];

const shoukaku = new Shoukaku(
  new Connectors.DiscordJS(client),
  nodes,
  {
    moveOnDisconnect: true,
    resumable: true,
    resumableTimeout: 30
  }
);

Erela.js:

const { Manager } = require('erela.js');

const manager = new Manager({
  nodes: [
    {
      host: 'lavalink-us.example.com',
      port: 2333,
      password: 'shared_password',
      identifier: 'us-east'
    },
    {
      host: 'lavalink-eu.example.com',
      port: 2333,
      password: 'shared_password',
      identifier: 'europe'
    }
  ],
  autoPlay: true
});

Implementing Load Balancing

Least Players Strategy

function getLeastLoadedNode(shoukaku) {
  const nodes = [...shoukaku.nodes.values()];
  
  return nodes
    .filter(node => node.state === 'CONNECTED')
    .sort((a, b) => a.players.size - b.players.size)[0];
}

// Usage
const node = getLeastLoadedNode(shoukaku);
const player = await node.joinChannel({
  guildId: guildId,
  channelId: channelId,
  shardId: 0
});

Weighted Selection

const nodeWeights = {
  'us-east': 3,    // 3x capacity
  'europe': 2,     // 2x capacity
  'asia': 1        // 1x capacity
};

function getWeightedNode(shoukaku) {
  const nodes = [...shoukaku.nodes.values()]
    .filter(node => node.state === 'CONNECTED');
  if (nodes.length === 0) return null; // guard: avoid crashing when no node is connected

  let totalWeight = 0;
  const weightedNodes = nodes.map(node => {
    const weight = nodeWeights[node.name] || 1;
    const adjustedWeight = weight / (node.players.size + 1);
    totalWeight += adjustedWeight;
    return { node, weight: adjustedWeight };
  });
  
  let random = Math.random() * totalWeight;
  for (const { node, weight } of weightedNodes) {
    random -= weight;
    if (random <= 0) return node;
  }
  
  return weightedNodes[0].node;
}

Geographic Selection

const guildRegions = new Map();

function getRegionalNode(shoukaku, guildId) {
  const region = guildRegions.get(guildId) || 'us-east';
  
  const preferredNode = shoukaku.nodes.get(region);
  if (preferredNode?.state === 'CONNECTED') {
    return preferredNode;
  }
  
  // Fallback to any available node
  return getLeastLoadedNode(shoukaku);
}

// Set region based on voice server
client.on('voiceStateUpdate', (oldState, newState) => {
  if (newState.channel) {
    const region = newState.guild.preferredLocale;
    guildRegions.set(newState.guild.id, mapLocaleToNode(region));
  }
});
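
The `mapLocaleToNode` helper above is left undefined; here is a minimal sketch, assuming the node identifiers from the bot configuration (`us-east`, `europe`). The locale list is an assumption to adjust for your regions:

```javascript
// Map a Discord preferredLocale (e.g. 'en-US', 'de') to a node identifier.
// The language prefixes and node names are assumptions; extend for your setup.
function mapLocaleToNode(locale) {
  const europeanLanguages = ['de', 'fr', 'es', 'it', 'nl', 'pl', 'sv'];
  const language = String(locale).split('-')[0];
  if (europeanLanguages.includes(language)) return 'europe';
  return 'us-east'; // default node
}
```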

Failover Handling

Automatic Node Switching

shoukaku.on('nodeDisconnect', (name, reason) => {
  console.log(`Node ${name} disconnected: ${reason}`);
  
  // Move players to another node
  const disconnectedNode = shoukaku.nodes.get(name);
  if (!disconnectedNode) return;
  
  for (const [guildId, player] of disconnectedNode.players) {
    const newNode = getLeastLoadedNode(shoukaku);
    if (newNode) {
      player.move(newNode.name);
    }
  }
});

Reconnection Logic

shoukaku.on('nodeReconnect', (name) => {
  console.log(`Node ${name} reconnected`);
  // Optionally rebalance players
});

shoukaku.on('nodeError', (name, error) => {
  console.error(`Node ${name} error:`, error);
});

Monitoring Multiple Nodes

Health Checks

setInterval(() => {
  for (const [name, node] of shoukaku.nodes) {
    console.log(`Node ${name}:`, {
      state: node.state,
      players: node.players.size,
      cpu: node.stats?.cpu?.systemLoad ?? 'N/A'
    });
  }
}, 30000);

Metrics Collection

function collectNodeMetrics(shoukaku) {
  const metrics = [];
  
  for (const [name, node] of shoukaku.nodes) {
    metrics.push({
      name,
      state: node.state,
      players: node.players.size,
      cpu: node.stats?.cpu?.systemLoad || 0,
      memory: node.stats?.memory?.used || 0,
      uptime: node.stats?.uptime || 0
    });
  }
  
  return metrics;
}

Infrastructure Setup

Docker Compose Multi-Node

version: '3.8'

services:
  lavalink-us:
    image: ghcr.io/lavalink-devs/lavalink:4
    ports:
      - "2333:2333"
    environment:
      - _JAVA_OPTIONS=-Xmx2G
    volumes:
      - ./application.yml:/opt/Lavalink/application.yml

  lavalink-eu:
    image: ghcr.io/lavalink-devs/lavalink:4
    ports:
      - "2334:2333"
    environment:
      - _JAVA_OPTIONS=-Xmx2G
    volumes:
      - ./application.yml:/opt/Lavalink/application.yml

Recommended Node Specs

| Scale | Nodes | RAM per Node | Total Capacity |
|-------|-------|--------------|----------------|
| Medium | 2 | 1GB | ~300 players |
| Large | 3 | 2GB | ~750 players |
| Enterprise | 5+ | 4GB | 2000+ players |

Best Practices

1. Plan for Spare Capacity

Lavalink nodes are independent and do not vote or form a quorum, so an odd node count is not required. Instead, keep at least one node's worth of spare capacity (N+1) so the cluster absorbs a single-node failure.

2. Geographic Distribution

Place nodes in different regions for:

  • Lower latency
  • Disaster recovery
  • Compliance requirements

3. Consistent Configuration

Keep application.yml identical across nodes; node identifiers live in the bot's node list, not in the Lavalink config.

4. Monitor All Nodes

Set up alerting for each node individually.
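
Building on the `collectNodeMetrics` helper from earlier, a simple check that flags nodes worth alerting on (the 80% CPU threshold is an arbitrary starting point):

```javascript
// Flag nodes that are disconnected or above a CPU load threshold.
// `metrics` is the array produced by collectNodeMetrics(); threshold is 0..1.
function findUnhealthyNodes(metrics, cpuThreshold = 0.8) {
  return metrics
    .filter(m => m.state !== 'CONNECTED' || m.cpu > cpuThreshold)
    .map(m => m.name);
}
```

Feed the result into whatever alerting channel you already use (a webhook, a pager, etc.).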

5. Test Failover

Regularly test node failure scenarios.

Cost Considerations

Self-Hosted

| Nodes | Monthly Cost |
|-------|--------------|
| 2 | ~₹400-600 |
| 3 | ~₹600-900 |
| 5 | ~₹1000-1500 |

Managed Hosting

Providers like HeavenCloud offer multi-node setups with:

  • Automatic failover
  • Load balancing included
  • Single dashboard management

Frequently Asked Questions

How many nodes do I need?

Start with 2 for redundancy. Add more based on player count (roughly 1 node per 200-300 concurrent players).
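
That rule of thumb can be written down as a tiny helper. The 250-players-per-node figure is the midpoint of the 200-300 range, with one spare node added for failover; both numbers are assumptions to tune for your workload:

```javascript
// Estimate node count from concurrent players: ceil(players / capacity) + 1 spare,
// never fewer than 2 so a single failure still leaves a working node.
function estimateNodeCount(concurrentPlayers, capacityPerNode = 250) {
  const needed = Math.ceil(concurrentPlayers / capacityPerNode);
  return Math.max(2, needed + 1);
}
```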

Can nodes have different specs?

Yes, use weighted load balancing to account for differences.

What happens during node failure?

With proper failover, players automatically move to healthy nodes with brief interruption.

Should all nodes be in the same region?

For redundancy, at least one node should be in a different region.

Conclusion

Load balancing Lavalink nodes enables your music bot to scale beyond single-server limits while providing redundancy. Start with two nodes for failover, then add more as your bot grows.

HeavenCloud offers managed Lavalink clusters with built-in load balancing, automatic failover, and unified monitoring for hassle-free scaling.

Start building your community

Deploy high-performance Discord bots, Lavalink nodes, and VPS servers with instant setup on HeavenCloud.