:learning-objective-1: Enable Continuous Data Balancing on a Redpanda cluster
:learning-objective-2: Check data balancing status using rpk
:learning-objective-3: Cancel partition balancing moves for a specific node
[NOTE]
====
include::shared:partial$enterprise-license.adoc[]
====

Continuous Data Balancing continuously monitors your node and rack availability and disk usage, dynamically balancing partitions to maintain smooth operations and optimal cluster performance.

Continuous Data Balancing also maintains the configured replication level, even after infrastructure failure. Node availability has the highest priority in data balancing. After a rack (with all nodes belonging to it) becomes unavailable, Redpanda moves partition replicas to the remaining nodes. This violates the rack awareness constraint. After the rack (or a replacement rack) becomes available, Redpanda repairs the constraint by moving excess replicas from racks that have more than one replica to the newly-available rack.

After reading this page, you will be able to:

* [ ] {learning-objective-1}
* [ ] {learning-objective-2}
* [ ] {learning-objective-3}

== Set Continuous Data Balancing properties

To enable Continuous Data Balancing, set the `partition_autobalancing_mode` property to `continuous`. Customize the following properties to monitor node availability and disk usage.
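
For example, you can enable continuous balancing with `rpk` (this sketch assumes `rpk` is already configured to reach your cluster):

[,bash]
----
rpk cluster config set partition_autobalancing_mode continuous
----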

[cols="1,2",options="header"]
|===
| Property | Description

| `partition_autobalancing_node_availability_timeout_sec`
| When a node is unreachable for the specified amount of time, Redpanda acts as if the node had been decommissioned: rebalancing begins, re-creating all of its replicas on other nodes in the cluster. +
+
The node remains part of the cluster and can rejoin when it comes back online. A node that was actually decommissioned is removed from the cluster.

| `partition_autobalancing_node_autodecommission_timeout_sec`
| When a node is unavailable for this timeout duration, Redpanda automatically and permanently decommissions the node. This property only applies when `partition_autobalancing_mode` is set to `continuous`. Unlike `partition_autobalancing_node_availability_timeout_sec`, which moves partitions while keeping the node in the cluster, this property removes the node from the cluster entirely. A decommissioned node cannot rejoin the cluster. +
+
Only one node is decommissioned at a time. If a decommission is already in progress, automatic decommission does not trigger until it completes. If the decommission stalls (for example, because the node holds the only replica of a partition), manual intervention is required. See xref:manage:cluster-maintenance/nodewise-partition-recovery.adoc[]. +
+
By default, this property is null and automatic decommission is disabled.

| `partition_autobalancing_max_disk_usage_percent`
| When a node fills up to this disk usage percentage, Redpanda starts moving replicas off the node to other nodes with disk utilization below the percentage. +
+
Default is 80%.
|===
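
For example, to automatically decommission nodes that remain unavailable for 30 minutes (the 1800-second value here is only illustrative):

[,bash]
----
rpk cluster config set partition_autobalancing_node_autodecommission_timeout_sec 1800
----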

For the other `partition_autobalancing_mode` options, see xref:manage:cluster-maintenance/cluster-balancing.adoc[Cluster balancing].

== Use data balancing commands

Use the following `rpk` commands to monitor and control data balancing.

=== Check data balancing status

To see the status, run:

[,bash]
----
rpk cluster partitions balancer-status
----

This shows the time since the last data balancing, the number of replica movements in progress, the nodes that are unavailable, and the nodes that are over the disk space threshold (default = 80%).

It also returns a data balancing status: `off`, `ready`, `starting`, `in-progress`, or `stalled`. If the command reports a `stalled` status, verify:

* Are there enough healthy nodes? For example, in a three-node cluster, no movements are possible for partitions with three replicas.
* Does the cluster have sufficient space? Partitions are not moved if all nodes in the cluster are utilizing more than their disk space threshold.
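
To see which nodes are down and which partitions are unhealthy while diagnosing a `stalled` status, you can, for example, run:

[,bash]
----
rpk cluster health
----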

=== Cancel data balancing moves

To cancel the current partition balancing moves, run:

[,bash]
----
rpk cluster partitions movement-cancel
----

To cancel partition moves on a specific node, use the `--node` flag. For example:

[,bash]
----
rpk cluster partitions movement-cancel --node 1
----

NOTE: If continuous balancing is still enabled and the cluster remains unbalanced, Redpanda schedules another partition balancing round. To stop all balancing, first set `partition_autobalancing_mode` to `off`, then cancel the current data balancing moves.
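
As a sketch, the two-step shutdown described in the note looks like this:

[,bash]
----
# Stop scheduling new balancing rounds
rpk cluster config set partition_autobalancing_mode off

# Then cancel any moves that are still in flight
rpk cluster partitions movement-cancel
----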

// modules/manage/pages/cluster-maintenance/decommission-brokers.adoc

CAUTION: When a broker is decommissioned, it cannot rejoin the cluster. If a broker with the same ID tries to rejoin the cluster, it is rejected.

== Decommissioning methods

There are two ways to decommission brokers in Redpanda:

* Manual decommissioning (described in this guide): Use `rpk` commands to explicitly decommission a broker when you need full control over the timing and selection of brokers to remove.
* Automatic decommissioning: When xref:manage:cluster-maintenance/continuous-data-balancing.adoc[Continuous Data Balancing] is enabled, you can configure the xref:manage:cluster-maintenance/continuous-data-balancing.adoc#partition_autobalancing_node_autodecommission_timeout_sec[partition_autobalancing_node_autodecommission_timeout_sec] property to automatically decommission brokers that remain unavailable for a specified duration.

Both methods permanently remove the broker from the cluster. Decommissioned brokers cannot rejoin.
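
As a sketch of the manual path, you can decommission a broker by its ID (the ID `1` here is illustrative; list the brokers first to find the right one):

[,bash]
----
# List brokers and their IDs
rpk redpanda admin brokers list

# Decommission the broker with ID 1
rpk redpanda admin brokers decommission 1
----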

== What happens when a broker is decommissioned?

When a broker is decommissioned, the controller leader creates a reallocation plan for all partition replicas that are allocated to that broker. By default, this reallocation is done in batches of 50 to avoid overwhelming the remaining brokers with Raft recovery. See xref:reference:tunable-properties.adoc#partition_autobalancing_concurrent_moves[`partition_autobalancing_concurrent_moves`].

// modules/manage/pages/kubernetes/k-decommission-brokers.adoc

NOTE: When a broker is decommissioned, it cannot rejoin the cluster. If a broker with the same ID tries to rejoin the cluster, it is rejected.

== Decommissioning methods

There are two ways to decommission brokers in Redpanda:

* Manual decommissioning (described in this guide): Use `rpk` commands or Kubernetes automation to explicitly decommission a broker when you need full control over the timing and selection of brokers to remove.
* Automatic decommissioning: When xref:manage:cluster-maintenance/continuous-data-balancing.adoc[Continuous Data Balancing] is enabled, you can configure the xref:manage:cluster-maintenance/continuous-data-balancing.adoc#partition_autobalancing_node_autodecommission_timeout_sec[partition_autobalancing_node_autodecommission_timeout_sec] property to automatically decommission brokers that remain unavailable for a specified duration.

Both methods permanently remove the broker from the cluster. Decommissioned brokers cannot rejoin.
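
In Kubernetes, the same `rpk` commands can be run inside a broker Pod. A minimal sketch, assuming a StatefulSet named `redpanda` in the `redpanda` namespace (adjust the Pod, container, and namespace names for your deployment; the broker ID `1` is illustrative):

[,bash]
----
# Run rpk from an existing broker Pod to decommission broker 1
kubectl exec -n redpanda redpanda-0 -c redpanda -- \
  rpk redpanda admin brokers decommission 1
----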