
Commit 5412a88

Fix nemo-evaluator setup
Signed-off-by: Keval Morabia <28916987+kevalmorabia97@users.noreply.github.com>
1 parent 2a279c5 commit 5412a88

2 files changed: 28 additions & 17 deletions


examples/pruning/minitron/NVIDIA-Nemotron-Nano-9B-v2/README.md

Lines changed: 7 additions & 7 deletions
````diff
@@ -231,25 +231,25 @@ Before running, update the following fields in the yaml:
 - `deployment.checkpoint_path` — path to your converted `hf_iter_XXXXXX` checkpoint
 - `evaluation.nemo_evaluator_config.config.params.extra.tokenizer` — same path as `checkpoint_path`
 
-Set the required environment variables and run:
-
 > [!TIP]
 > Uncomment `limit_samples` under any task to run a small subset and verify the end-to-end eval pipeline before launching full evals.
 
 ```bash
 pip install "nemo-evaluator-launcher[all]==0.1.90"
 
-# Required environment variables
+# Set required environment variables:
 export HF_TOKEN=<your_huggingface_token>
-export API_KEY=<your_api_key>
-export INFERENCE_API_KEY=<your_inference_api_key>
 export JUDGE_API_KEY=<your_judge_api_key>
-export OPENAI_CLIENT_ID=<your_openai_client_id>
-export OPENAI_CLIENT_SECRET=<your_openai_client_secret>
 export SLURM_JOB_DIR=<path_to_slurm_job_output_dir>
 export HF_HOME=<path_to_huggingface_cache>
 export VLLM_CACHE_ROOT=<path_to_vllm_cache>
 
+# Set additional unused but required environment variables:
+export API_KEY=xxxxxx
+export INFERENCE_API_KEY=xxxxxx
+export OPENAI_CLIENT_ID=xxxxxx
+export OPENAI_CLIENT_SECRET=xxxxxx
+
 nemo-evaluator-launcher run --config nemo_evaluator.yaml
 ```
````
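The README change above distinguishes variables that need real values from placeholders the launcher merely requires to exist. A minimal pre-flight check of that split could look like the sketch below; the variable list is taken from the diff, while the helper itself is illustrative and not part of the commit:

```python
import os

# Variables that need real values, per the README diff.
REQUIRED = ["HF_TOKEN", "JUDGE_API_KEY", "SLURM_JOB_DIR", "HF_HOME", "VLLM_CACHE_ROOT"]
# Unused by the evals but still required by the launcher, per the diff.
PLACEHOLDER = ["API_KEY", "INFERENCE_API_KEY", "OPENAI_CLIENT_ID", "OPENAI_CLIENT_SECRET"]

def missing_env_vars(names):
    """Return the names that are unset or empty in the current environment."""
    return [n for n in names if not os.environ.get(n)]

missing = missing_env_vars(REQUIRED + PLACEHOLDER)
if missing:
    print("Set before launching:", ", ".join(missing))
```

Running this before `nemo-evaluator-launcher run --config nemo_evaluator.yaml` catches a missing variable up front rather than partway through a Slurm job.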

examples/pruning/minitron/NVIDIA-Nemotron-Nano-9B-v2/nemo_evaluator.yaml

Lines changed: 21 additions & 10 deletions
```diff
@@ -1,20 +1,27 @@
-# NeMo Evaluator Launcher config for Nemotron-Nano-9B-v2 Pruned 7B evaluation
+# NeMo Evaluator Launcher config for Nemotron-Nano-9B-v2 and Pruned variants
+# --------------------------------------------------------------------------
+# Before running, update the following fields in the yaml:
+# - `execution.hostname` — your Slurm login node hostname
+# - `execution.account` — your Slurm account
+# - `deployment.checkpoint_path` — path to your converted `hf_iter_XXXXXX` checkpoint
+# - `evaluation.nemo_evaluator_config.config.params.extra.tokenizer` — same path as `checkpoint_path`
 #
 # Usage:
 #   pip install "nemo-evaluator-launcher[all]==0.1.90"
 #
 #   # Set required environment variables:
 #   export HF_TOKEN=<your_huggingface_token>
-#   export API_KEY=<your_api_key>
-#   export INFERENCE_API_KEY=<your_inference_api_key>
 #   export JUDGE_API_KEY=<your_judge_api_key>
-#   export OPENAI_CLIENT_ID=<your_openai_client_id>
-#   export OPENAI_CLIENT_SECRET=<your_openai_client_secret>
 #   export SLURM_JOB_DIR=<path_to_slurm_job_output_dir>
 #   export HF_HOME=<path_to_huggingface_cache>
 #   export VLLM_CACHE_ROOT=<path_to_vllm_cache>
 #
-#   # Update checkpoint_path below, then run:
+#   # Set additional unused but required environment variables:
+#   export API_KEY=xxxxxx
+#   export INFERENCE_API_KEY=xxxxxx
+#   export OPENAI_CLIENT_ID=xxxxxx
+#   export OPENAI_CLIENT_SECRET=xxxxxx
+#
 #   nemo-evaluator-launcher run --config nemo_evaluator.yaml
 #
 
@@ -49,9 +56,9 @@ execution:
 
 # Note: Only tp=1 works for Nano (Mamba-based architecture)
 deployment:
-  # Update this to your distilled checkpoint path
-  checkpoint_path: /path/to/distilled/checkpoint/hf_iter_XXXXXX
-  served_model_name: nemotron-nano-9b-v2-pruned-7b
+  # Update this to your Hugging Face checkpoint path (original, pruned or quantized)
+  checkpoint_path: <hf_model_name_or_path>
+  served_model_name: Nemotron-Nano-9B-v2
   port: 8000
   tensor_parallel_size: 1
   pipeline_parallel_size: 1
@@ -96,7 +103,7 @@ evaluation:
       extra:
         tokenizer_backend: huggingface
         # Update tokenizer path to match checkpoint_path above
-        tokenizer: /path/to/distilled/checkpoint/hf_iter_XXXXXX
+        tokenizer: <hf_model_name_or_path>
       env_vars:
         API_KEY: API_KEY
         HF_TOKEN: HF_TOKEN
@@ -169,6 +176,10 @@ evaluation:
   - name: ns_ifeval
     env_vars:
       HF_TOKEN: HF_TOKEN
+    # nemo_evaluator_config:
+    #   config:
+    #     params:
+    #       limit_samples: 8
 
   # 7. SciCode
   - name: ns_scicode
```
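The yaml comment above requires the eval tokenizer to point at the same path as `deployment.checkpoint_path`. After loading the config (e.g. with `yaml.safe_load`), that invariant can be checked with a small helper; the key paths follow the field names given in the README, and the helper itself is an illustrative sketch, not part of the commit:

```python
def tokenizer_matches_checkpoint(cfg: dict) -> bool:
    """True when the eval tokenizer equals the deployment checkpoint path.

    Key paths mirror `deployment.checkpoint_path` and
    `evaluation.nemo_evaluator_config.config.params.extra.tokenizer`
    as named in the README; this helper is illustrative only.
    """
    checkpoint = cfg["deployment"]["checkpoint_path"]
    tokenizer = (cfg["evaluation"]["nemo_evaluator_config"]
                    ["config"]["params"]["extra"]["tokenizer"])
    return checkpoint == tokenizer
```

A `False` result here would mean the two fields drifted apart while editing the yaml, which is the mistake the commit's inline comments warn about.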
