Commit b762bfb

update wandb

Duy Phung committed Sep 13, 2023
1 parent 362737f commit b762bfb
Showing 2 changed files with 5 additions and 5 deletions.
examples/llama_nemo/README.md (1 addition, 1 deletion)

````diff
@@ -25,7 +25,7 @@ python convert_llama_to_nemo.py --model_path NousResearch/Llama-2-7b-hf --output
 ```

 ### Training:
-Example: [wandb](https://wandb.ai/carperai/trlxnemo/runs/6ne5vjxr?workspace=user-pvduy)
+Example: [wandb](https://wandb.ai/carperai/trlxnemo/runs/v7592y73?workspace=user-pvduy)

 ```bash
 sbatch dist_train.sh
````
examples/llama_nemo/nemo_llama2_ppo_sentiments.py (4 additions, 4 deletions)

```diff
@@ -33,9 +33,9 @@ def main(hparams={}):
     cfg_name = "llama2-7b"
     config = default_config.evolve(
         train=dict(
-            total_steps=2048,
+            total_steps=1600,
             seq_length=256,
-            batch_size=32,
+            batch_size=16,
             epochs=100,
             eval_interval=100,
             trainer="NeMoPPOTrainer",
@@ -71,8 +71,8 @@ def main(hparams={}):
             lam=0.95,
             cliprange=0.2,
             cliprange_value=0.2,
-            gen_kwargs=dict(temperature=1.0, max_new_tokens=40),
-            chunk_size=128,
+            gen_kwargs=dict(temperature=1.0, max_new_tokens=64),
+            chunk_size=64,
             ppo_epochs=4,
         ),
     )
```
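The change above overrides a handful of fields on the base config via `default_config.evolve(...)`, which merges the keyword overrides into the existing config while leaving untouched fields (like `epochs` and `ppo_epochs`) intact. As a minimal sketch of that merge pattern, here is a generic nested-dict `evolve`; this is an illustrative stand-in, not trlx's actual implementation, which operates on typed config objects:

```python
import copy


def evolve(base: dict, **overrides) -> dict:
    """Return a copy of `base` with nested dict overrides merged in.

    Hypothetical stand-in for trlx's `default_config.evolve`:
    nested dicts are merged recursively, scalars are replaced.
    """
    cfg = copy.deepcopy(base)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(cfg.get(key), dict):
            cfg[key] = evolve(cfg[key], **value)
        else:
            cfg[key] = value
    return cfg


# Base values matching the pre-commit config in this diff.
default_config = {
    "train": {"total_steps": 2048, "batch_size": 32, "epochs": 100},
    "method": {"chunk_size": 128, "ppo_epochs": 4},
}

# Apply the overrides introduced by this commit.
config = evolve(
    default_config,
    train=dict(total_steps=1600, batch_size=16),
    method=dict(chunk_size=64),
)
print(config["train"]["total_steps"])  # 1600
print(config["method"]["ppo_epochs"])  # 4 (untouched by the override)
```

Note that only the overridden leaves change; sibling keys in the same nested dict survive the merge, which is why the diff touches four values without restating the rest of the config.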
