## Installation

### Using Cargo

```shell
cargo build --release
# Binary will be in target/release/kbridge
```
## How does it work

The restoration comes in 3 steps.
Each of these steps can be performed separately, and the result can be checked before proceeding to the next step.
```mermaid
graph LR
A[Fetch]-->B[Calculate]-->C[Apply]
```
1. **Fetch** - Grabs all committed consumer group offsets from your source cluster and dumps them to a CSV file.
2. **Calculate** - Takes the CSV of source offsets and figures out the equivalent offset on the target cluster. It does this by looking for messages that carry the source offset in a header (offsets differ between clusters, but the header tells us which message is which). Outputs another CSV with the mapped target offsets.
3. **Apply** - Takes the transformed CSV and commits those offsets to the target cluster so your consumers can resume right where they left off.
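The Calculate step's mapping can be sketched as pure logic, independent of any Kafka client: for each committed source offset, find the first message on the target cluster whose source-offset header is at or past the committed position (a committed offset in Kafka is the next message to consume). This is a minimal sketch of the idea, not kbridge's actual implementation; the struct layout and field names below are assumptions for illustration.

```rust
/// A message observed on the target cluster: its own offset, plus the
/// source-cluster offset recovered from the configured header.
/// (Hypothetical shape, for illustration only.)
struct TargetMessage {
    target_offset: i64,
    source_offset: i64,
}

/// Map a committed source offset to the equivalent target offset:
/// the offset of the first target message whose source offset is
/// >= the committed position, i.e. the first message not yet consumed.
/// Returns None if no such message exists on the target cluster.
fn map_offset(messages: &[TargetMessage], committed_source_offset: i64) -> Option<i64> {
    messages
        .iter()
        .filter(|m| m.source_offset >= committed_source_offset)
        .map(|m| m.target_offset)
        .min()
}

fn main() {
    // Offsets differ between clusters (e.g. replication started mid-topic,
    // or some records were compacted away), but the header ties them together.
    let messages = vec![
        TargetMessage { target_offset: 0, source_offset: 100 },
        TargetMessage { target_offset: 1, source_offset: 101 },
        TargetMessage { target_offset: 2, source_offset: 103 },
    ];
    // A group committed at source offset 101 resumes at target offset 1.
    println!("{:?}", map_offset(&messages, 101)); // Some(1)
}
```

Note that if the exact committed source offset is missing on the target (offset 102 above), the mapping falls forward to the next surviving message rather than failing.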
### Use Cases

- **Cluster Migration**: Move consumer groups from one Kafka cluster to another
- **Disaster Recovery**: Restore consumer positions after cluster failures
- **Environment Promotion**: Sync consumer states between dev/staging/production
- **Data Replication**: Maintain consumer offset consistency across replicated clusters
### Prerequisites

- Kafka clusters must be accessible via bootstrap servers and credentials
- Messages on the target cluster must contain **source offset information** in headers (required for the calculate step); the header name is configurable
- Appropriate permissions to read consumer group metadata and commit offsets
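To make the header prerequisite concrete, here is a small sketch of extracting a source offset from a message's headers. The header name `source-offset` and the UTF-8 decimal encoding of its value are assumptions for illustration; kbridge only requires that some configurable header carries the source offset.

```rust
/// Look up a header by name and parse its value as a decimal offset.
/// Headers are modeled as (name, raw bytes) pairs, matching how Kafka
/// record headers are exposed by most client libraries.
fn source_offset_from_headers(headers: &[(String, Vec<u8>)], header_name: &str) -> Option<i64> {
    headers
        .iter()
        .find(|(k, _)| k.as_str() == header_name)
        .and_then(|(_, v)| std::str::from_utf8(v).ok())
        .and_then(|s| s.parse::<i64>().ok())
}

fn main() {
    // A replicator would have attached "source-offset" when mirroring
    // the record; unrelated headers are ignored.
    let headers = vec![
        ("trace-id".to_string(), b"abc123".to_vec()),
        ("source-offset".to_string(), b"42".to_vec()),
    ];
    println!("{:?}", source_offset_from_headers(&headers, "source-offset")); // Some(42)
}
```

A message without the header (or with an unparsable value) yields `None`, which is exactly the case the prerequisite rules out.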