🚨 Did you know that small-batch vanilla SGD without momentum (i.e. the first optimizer you learn about in intro ML) is virtually as fast as AdamW for LLM pretraining on a per-FLOP basis? 📜 1/n
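
As a minimal sketch of the setup this tweet describes, here is batch-size-1 training with plain `torch.optim.SGD` and `momentum=0.0`; the toy model, random data, and learning rate are illustrative placeholders, not the authors' pretraining configuration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, dim, ctx = 1000, 64, 8

# Toy "language model": embed a context window of tokens, predict the next token.
model = nn.Sequential(
    nn.Embedding(vocab, dim),   # (1, ctx) -> (1, ctx, dim)
    nn.Flatten(),               # (1, ctx, dim) -> (1, ctx * dim)
    nn.Linear(ctx * dim, vocab)
)

# Vanilla SGD, no momentum -- the "intro ML" optimizer the thread refers to.
# (Swap in torch.optim.AdamW here for the baseline being compared against.)
opt = torch.optim.SGD(model.parameters(), lr=0.5, momentum=0.0)
loss_fn = nn.CrossEntropyLoss()

# Batch size 1: one sequence per optimizer step, no gradient accumulation.
for step in range(100):
    x = torch.randint(vocab, (1, ctx))   # one context window (random toy data)
    y = torch.randint(vocab, (1,))       # its next token
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```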

Small-batch LLM training is thought to be slow per FLOP, motivating gradient accumulation to simulate larger batches, even in small-scale academic runs. We show that a simple rule for scaling Adam hyperparameters allows efficient per-FLOP training down to batch size 1. 4/n
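
The excerpt does not spell out the scaling rule itself. One plausible form, assumed here for illustration rather than taken from the source, is to rescale Adam's EMA decays so that their half-life measured in tokens stays fixed as the batch size shrinks: with batch size B, a decay beta spans roughly 1/(1-beta) steps, i.e. roughly B/(1-beta) tokens, which gives beta_new = beta_ref ** (B_new / B_ref):

```python
def scale_beta(beta_ref: float, batch_ref: int, batch_new: int) -> float:
    """Rescale an EMA decay so its half-life, measured in tokens, is unchanged.

    Hypothetical rule for illustration; the thread only says "a simple rule
    for scaling Adam hyperparameters" without stating its form here.
    """
    return beta_ref ** (batch_new / batch_ref)

# Example: a beta2 of 0.95 tuned at batch size 512, moved to batch size 1.
beta2 = scale_beta(0.95, batch_ref=512, batch_new=1)
print(beta2)  # ~0.9999 -- much slower decay per step, same decay per token
```

With many more (smaller) steps per token at batch size 1, a per-step decay tuned for large batches would forget far too quickly; holding the token horizon constant is one way to keep the optimizer's effective memory unchanged.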

We observe that small-batch training is highly robust to optimizer hyperparameters like learning rate and momentum. This means that on a fixed hyperparameter tuning budget, you will find better hyperparameters in the small-batch regime. 6/n
