Commit 196ecf6
chore: update README
1 parent 9e36f16 commit 196ecf6

1 file changed: README.md (65 additions & 64 deletions)

@@ -8,19 +8,73 @@
A powerful tool for migrating Kafka consumer group offsets between clusters, enabling seamless cluster migrations and disaster recovery scenarios.
## Quick Start

### Basic Commands

```bash
# Show help
kbridge --help

# Show help for a specific command
kbridge fetch --help
kbridge calculate --help
kbridge apply --help
```
### Simple Local Example

Against a local source (localhost:9092) and target (localhost:9093) cluster:

```bash
# Step 1: Fetch offsets from the source cluster
kbridge fetch -b localhost:9092 > source_offsets.csv

# Step 2: Calculate target offsets
kbridge calculate -b localhost:9093 -H Offset -i source_offsets.csv > target_offsets.csv

# Step 3: Apply target offsets (with confirmation prompt)
kbridge apply -b localhost:9093 -i target_offsets.csv
```
### Chained Pipeline

All three steps can be chained together for streamlined execution:

```bash
kbridge fetch -b localhost:9092 | \
kbridge calculate -b localhost:9093 -H Offset | \
kbridge apply -b localhost:9093
```
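If you want the streamlined pipeline but still want the intermediate CSVs on disk for auditing, standard `tee` can sit between the stages — a sketch, using only the kbridge flags shown above:

```bash
# Keep a copy of each intermediate CSV while still piping stage to stage
kbridge fetch -b localhost:9092 | tee source_offsets.csv | \
kbridge calculate -b localhost:9093 -H Offset | tee target_offsets.csv | \
kbridge apply -b localhost:9093
```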
> ⚠️ **Safety First**: Before offsets are applied, a confirmation prompt is shown to prevent accidental modifications. The prompt can be skipped by passing the `-y` flag to the apply step (see the help output for more info).
### CSV Format

The tool uses CSV format for offset data with the following columns:

```csv
consumer_group,topic,partition,offset
my-consumer-group,orders,0,12345
my-consumer-group,orders,1,12346
my-consumer-group,payments,0,5678
```
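Before running the calculate step it can be worth sanity-checking a fetched CSV. A quick sketch using standard `awk`, assuming the four-column layout above (no embedded commas) and the `source_offsets.csv` file from the earlier example:

```bash
# Row counts per consumer group and topic, skipping the header row
awk -F, 'NR > 1 { count[$1 "," $2]++ }
         END { for (k in count) print k, count[k] }' source_offsets.csv
```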
## How does it work

The restoration comes in three steps. Each step can be run separately, and its result can be checked before proceeding to the next one.

```mermaid
graph LR
A[Fetch]-->B[Calculate]-->C[Apply]
```

1. **Fetch** - Grabs all committed consumer group offsets from your source cluster and dumps them to CSV.

2. **Calculate** - Takes a CSV of source offsets and figures out the equivalent offsets on the target cluster. It does this by looking for messages that carry the source offset in a header (offsets differ between clusters, but the header tells us which message is which). Outputs another CSV with the mapped target offsets.

3. **Apply** - Takes the calculated CSV and commits those offsets to the target cluster so your consumers can resume right where they left off.

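To make the Calculate step concrete, here is a toy illustration of the header-based lookup. kbridge performs this internally by consuming the target topic; the `target_dump.csv` file and its layout are hypothetical, invented purely for illustration:

```bash
# Toy sketch of the Calculate lookup (kbridge does this internally).
# Hypothetical target_dump.csv columns: partition,target_offset,source_offset_header
src_offset=12345
awk -F, -v src="$src_offset" \
    '$3 == src { print "target offset for source " src " is " $2 }' \
    target_dump.csv
```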
### Use Cases

@@ -74,48 +128,22 @@ cargo build --release
# Binary will be in target/release/kbridge
```

## Advanced Options

#### Filter by Topics

```bash
# Only process specific topics
kbridge fetch -b localhost:9092 -t topic1 -t topic2 -t topic3
```

#### Custom Header Key

```bash
# Use a custom header key for offset mapping
kbridge calculate -b localhost:9093 -H CustomOffsetHeader -i offsets.csv
```

#### SASL/SSL (Confluent Cloud)

@@ -148,33 +176,6 @@ kbridge fetch -b <bootstrap-url> \
  -o ssl.key.location=/path/to/client-key
```

## Troubleshooting
### Common Issues
