Curious to try this with DiLoCo: you would still do bs=1 on the inner optimizer and still benefit from data parallelism.
Micah Goldblum, Jul 10, 22:12
🚨 Did you know that small-batch vanilla SGD without momentum (i.e. the first optimizer you learn about in intro ML) is virtually as fast as AdamW for LLM pretraining on a per-FLOP basis? 📜 1/n
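A minimal single-process sketch of that combination, assuming the general DiLoCo recipe: each simulated data-parallel worker runs H inner steps of vanilla SGD (no momentum) at batch size 1, and an outer Nesterov-momentum step is applied to the averaged pseudo-gradients. The worker count, step counts, learning rates, and toy model/data below are illustrative assumptions, not values from this thread.

```python
# Sketch: DiLoCo-style outer loop with a bs=1, momentum-free SGD inner
# optimizer. All hyperparameters and the toy model/data are assumptions
# for illustration only.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_model():
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

global_model = make_model()
# Outer optimizer: Nesterov-momentum SGD on the global parameters,
# applied to the averaged pseudo-gradients (per the DiLoCo recipe).
outer_opt = torch.optim.SGD(global_model.parameters(),
                            lr=0.7, momentum=0.9, nesterov=True)

NUM_WORKERS = 4    # simulated data-parallel replicas (assumption)
INNER_STEPS = 32   # H local steps between synchronizations (assumption)
loss_fn = nn.MSELoss()

for outer_step in range(10):
    # Snapshot of the global parameters at the start of the round.
    start = [p.detach().clone() for p in global_model.parameters()]
    # Accumulator for the workers' averaged pseudo-gradients.
    pseudo_grads = [torch.zeros_like(p) for p in start]

    for _ in range(NUM_WORKERS):
        worker = copy.deepcopy(global_model)
        # Inner optimizer: vanilla SGD, no momentum, batch size 1.
        inner_opt = torch.optim.SGD(worker.parameters(), lr=1e-3)
        for _ in range(INNER_STEPS):
            x = torch.randn(1, 16)  # bs=1 sample (toy data)
            y = torch.randn(1, 1)
            inner_opt.zero_grad()
            loss_fn(worker(x), y).backward()
            inner_opt.step()
        # Pseudo-gradient = start - end of the local trajectory.
        for acc, p0, p in zip(pseudo_grads, start, worker.parameters()):
            acc += (p0 - p.detach()) / NUM_WORKERS

    # Apply the averaged pseudo-gradient with the outer optimizer.
    outer_opt.zero_grad()
    for p, g in zip(global_model.parameters(), pseudo_grads):
        p.grad = g
    outer_opt.step()
```

The point of the combination: the inner optimizer stays as simple as possible (plain SGD, bs=1, no optimizer state), while data parallelism is recovered at the outer level by averaging pseudo-gradients across workers instead of averaging per-step gradients.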