Commit 12fbe39

chore: update README
1 parent 196ecf6 commit 12fbe39

1 file changed

Lines changed: 40 additions & 50 deletions

File tree

README.md

```diff
@@ -60,35 +60,6 @@ my-consumer-group,orders,1,12346
 my-consumer-group,payments,0,5678
 ```
 
-## How does it work
-
-The restoration comes in 3 steps.
-Each of these steps can be performed separately and the result can be checked before proceeding to the next step.
-
-```mermaid
-graph LR
-A[Fetch]-->B[Calculate]-->C[Apply]
-```
-
-1. **Fetch** - Grabs all committed consumer group offsets from your source cluster and dumps them to CSV.
-
-2. **Calculate** - Takes a CSV of source offsets and figures out the equivalent offset on the target cluster. It does this by looking for messages that have the source offset stored in a header (offsets differ between clusters, but the header tells us which message is which). Outputs another CSV with the mapped target offsets.
-
-3. **Apply** - Takes the transformed CSV and commits those offsets to the target cluster so your consumers can resume right where they left off.
-
-### Use Cases
-
-- **Cluster Migration**: Move consumer groups from one Kafka cluster to another
-- **Disaster Recovery**: Restore consumer positions after cluster failures
-- **Environment Promotion**: Sync consumer states between dev/staging/production
-- **Data Replication**: Maintain consumer offset consistency across replicated clusters
-
-### Prerequisites
-
-- Kafka clusters must be accessible via bootstrap servers and credentials
-- Messages on target cluster must contain **source offset information** in headers (for transformation step) the name of the header is configurable
-- Appropriate permissions to read consumer group metadata and commit offsets
-
 ## Installation
 
 ### Using Cargo
```
```diff
@@ -128,6 +99,35 @@ cargo build --release
 # Binary will be in target/release/kbridge
 ```
 
+## How does it work
+
+The restoration comes in three steps.
+Each step can be performed separately, and its result can be checked before proceeding to the next one.
+
+```mermaid
+graph LR
+A[Fetch]-->B[Calculate]-->C[Apply]
+```
+
+1. **Fetch** - Grabs all committed consumer group offsets from your source cluster and dumps them to CSV.
+
+2. **Calculate** - Takes a CSV of source offsets and figures out the equivalent offset on the target cluster. It does this by looking for messages that have the source offset stored in a header (offsets differ between clusters, but the header tells us which message is which). Outputs another CSV with the mapped target offsets.
+
+3. **Apply** - Takes the transformed CSV and commits those offsets to the target cluster so your consumers can resume right where they left off.
+
+### Use Cases
+
+- **Cluster Migration**: Move consumer groups from one Kafka cluster to another
+- **Disaster Recovery**: Restore consumer positions after cluster failures
+- **Environment Promotion**: Sync consumer states between dev/staging/production
+- **Data Replication**: Maintain consumer offset consistency across replicated clusters
+
+### Prerequisites
+
+- Kafka clusters must be accessible via bootstrap servers and credentials
+- Messages on the target cluster must contain **source offset information** in headers (for the transformation step); the header name is configurable
+- Appropriate permissions to read consumer group metadata and commit offsets
```
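The Calculate step added above can be sketched in a few lines. The following is a simplified, in-memory illustration of the idea, not kbridge's actual implementation: it assumes a committed source offset should map to the target offset of the message whose header carries that source offset, and every name and data shape here is hypothetical.

```python
def calculate(source_offsets, target_messages, header="Offset"):
    """Map committed source offsets to target offsets via a message header.

    source_offsets:  {(topic, partition): committed_source_offset}
    target_messages: {(topic, partition): [(target_offset, headers_dict), ...]}
    Returns:         {(topic, partition): target_offset_to_commit}
    """
    mapped = {}
    for tp, src_offset in source_offsets.items():
        for tgt_offset, headers in target_messages.get(tp, []):
            # One plausible convention: commit the target offset of the
            # message whose header stores the committed source offset.
            if headers.get(header) == str(src_offset):
                mapped[tp] = tgt_offset
                break
    return mapped

# Hypothetical data in the spirit of the CSV shown earlier
src = {("orders", 1): 12346}
msgs = {("orders", 1): [(100, {"Offset": "12345"}), (101, {"Offset": "12346"})]}
print(calculate(src, msgs))  # {('orders', 1): 101}
```

Partitions with no matching header on the target are simply omitted from the result, which is why checking each step's CSV output before proceeding is worthwhile.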
```diff
 ## Advanced Options
 
 #### Filter by Topics
```
```diff
@@ -144,6 +144,17 @@ kbridge fetch -b localhost:9092 -t topic1 -t topic2 -t topic3
 kbridge calculate -b localhost:9093 -H CustomOffsetHeader -i offsets.csv
 ```
 
+### Dry Run
+
+To see what offsets would be applied without actually committing them,
+use the `--dry-run` flag.
+
+```bash
+# Calculate and review target offsets before applying
+kbridge fetch -b source:9092 | \
+  kbridge calculate -b target:9093 -l Offset | \
+  kbridge apply -b target:9093 -i - --dry-run
+```
```
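The dry-run flow added above boils down to a simple pattern: parse the mapped-offset CSV and either report or commit each row. A minimal, tool-agnostic sketch using the CSV format shown earlier (`group,topic,partition,offset`); `apply_offsets` and `commit_fn` are hypothetical names, not part of kbridge:

```python
import csv
import io


def apply_offsets(csv_text, commit_fn, dry_run=False):
    """Commit offsets from CSV rows of the form: group,topic,partition,offset.

    With dry_run=True, only report what would be committed and touch nothing.
    Returns the list of offsets actually committed.
    """
    applied = []
    for group, topic, partition, offset in csv.reader(io.StringIO(csv_text)):
        if dry_run:
            print(f"would commit {group} {topic}[{partition}] -> {offset}")
        else:
            commit_fn(group, topic, int(partition), int(offset))
            applied.append((group, topic, int(partition), int(offset)))
    return applied


rows = "my-consumer-group,orders,1,12346\nmy-consumer-group,payments,0,5678\n"
apply_offsets(rows, commit_fn=lambda *a: None, dry_run=True)  # prints only
```

The dry run prints each planned commit without invoking `commit_fn`, mirroring the idea of reviewing the piped output before a real apply.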
```diff
 
 #### SASL/SSL (Confluent Cloud)
```

```diff
@@ -199,27 +210,6 @@ kbridge fetch -b <bootstrap-url> \
 - Check network connectivity and firewall rules
 - Ensure Kafka cluster is running and healthy
 
-### Debug Mode
-
-Enable verbose logging for troubleshooting (see [Logging](#logging) for more options):
-
-```bash
-kbridge --verbose fetch -b localhost:9092
-```
-
-### Dry Run
-
-To see what offsets would be applied without actually committing them:
-
-```bash
-# Calculate and review target offsets before applying
-kbridge fetch -b source:9092 | \
-kbridge calculate -b target:9093 -l Offset
-
-# Review the CSV file, then apply if satisfied
-kbridge apply -b target:9093 -i review_offsets.csv
-```
-
 ## Logging
 
 By default, kbridge logs at `info` level with internal Kafka client logs suppressed for cleaner output.
```
