
Genesis 1B: Run 2 Extended — 20k → 40k Steps

Author: Robin, Kroonen AI Inc.

Tags: Genesis 1B · Run 2 · pretraining · rtx-4090 · training · live

🟢 Live — Step ~22,250 / 40,000 (~55.6%), ETA ~6 days

Run 2 hit 20,000 steps on March 31, 2026 and was then extended to 40,000 steps (~21B tokens, slightly above Chinchilla-optimal). Training continues on 2× RTX 4090: loss ~2.05, throughput ~18,900 tok/s.

Model: Genesis 1B

Parameters: 1,000M (1.0B)
Architecture: Llama-style decoder-only transformer
Hidden dim: 1536
Layers: 32
Attention heads: 12 (6 KV heads, GQA)
FFN dim: 4736 (SwiGLU)
Context length: 2048
Vocab size: 49,152
Precision: bfloat16
Positional encoding: RoPE (θ=500,000)
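As a cross-check on the 1.0B figure, here is a back-of-envelope parameter count from the dimensions above. It assumes tied input/output embeddings and ignores norm parameters; neither detail is stated in the post.

```python
# Rough parameter count for the Genesis 1B config.
# Assumes tied embeddings; norm/bias params are negligible at this scale.
vocab, dim, layers = 49_152, 1536, 32
n_heads, n_kv_heads = 12, 6
head_dim = dim // n_heads                  # 128
ffn_dim = 4736

embed = vocab * dim                        # input embedding (tied output head)
attn  = dim * (n_heads * head_dim)         # Q projection
attn += 2 * dim * (n_kv_heads * head_dim)  # K and V (GQA: 6 KV heads)
attn += (n_heads * head_dim) * dim         # output projection
ffn   = 3 * dim * ffn_dim                  # SwiGLU: gate, up, down matrices
total = embed + layers * (attn + ffn)
print(f"{total/1e9:.2f}B params")          # 1.00B params
```

The count lands at ~1.00B, consistent with the headline figure.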

Training Configuration

GPUs: 2× RTX 4090 (PCIe, no NVLink)
Batch size: 4 per GPU
Gradient accumulation: 32 steps
Effective batch: 524,288 tokens/step
Learning rate: 1e-4 → 1e-5 (cosine decay)
Warmup: 1,000 steps
Optimizer: AdamW (β1=0.9, β2=0.95, wd=0.1)
Activation checkpointing: Enabled (per TransformerBlock)
DCP resume: ShardedStateDictConfig(offload_to_cpu=True)
CUDA allocator: expandable_segments:True
VRAM per GPU: ~20 GB with activation checkpointing
Throughput: ~19,000 tok/s
Target: ~21B tokens (40,000 steps, above Chinchilla-optimal ~38,150)
Script: pretrainv3.py
NCCL: NCCL_P2P_DISABLE=1
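A quick sanity check on the effective batch and step targets quoted above, using the ~20 tokens-per-parameter Chinchilla heuristic:

```python
# Effective batch: GPUs × micro-batch × grad-accum × sequence length.
gpus, micro_batch, grad_accum, seq_len = 2, 4, 32, 2048
tokens_per_step = gpus * micro_batch * grad_accum * seq_len
print(tokens_per_step)                    # 524288

target_tokens = 40_000 * tokens_per_step  # ≈ 20.97B (~21B)
chinchilla_tokens = 20e9                  # ~20 tokens/param for a 1B model
chinchilla_steps = round(chinchilla_tokens / tokens_per_step)
print(target_tokens, chinchilla_steps)    # 20971520000 38147
```

The 40,000-step target thus overshoots the ~38,150-step Chinchilla-optimal budget by roughly 5%.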

Run 2: Training Progress (20k → 40k Extension)

Run 2 launched March 24, 2026 with a redesigned 32-layer architecture and reached 20,000 steps on March 31, 2026. Rather than stopping there, the run was extended to 40,000 steps (~21B tokens). The Chinchilla-optimal token budget for a 1B-parameter model is ~20B tokens (~38,150 steps at 524K tokens/step); 40,000 steps slightly exceeds that intentionally, producing a stronger pre-trained base. Training is currently live at step ~22,250.

Step     Loss      Grad Norm   tok/s
0        11.1377   20.00       17,425
1,000    3.4161    0.74        18,936
2,000    3.0866    0.30        18,954
3,000    2.5517    0.22        18,948
4,000    2.6568    0.22        18,958
5,000    2.2971    0.17        18,946
6,000    2.2877    0.18        18,935
7,000    2.2235    0.17        18,936
8,000    2.1325    0.16        18,947
9,000    2.2878    0.16        18,830
10,000   2.1776    0.16        18,955
11,000   2.1164    0.16        18,960
12,000   2.2426    0.16        18,967
13,000   2.1838    0.16        18,971
14,000   2.0864    0.17        18,978
15,000   1.9520    0.17        18,975
16,000   1.8105    0.15        18,965
17,000   2.1301    0.16        18,956
18,000   2.1521    0.18        18,869
19,000   1.8729    0.16        18,973
20,000   1.8369    0.17        18,967
21,000   ~2.05     0.18        18,894
22,250   2.051     0.18        18,894

Training loss curve

[Figure: training loss on the y-axis (≈1.19–11.70), steps 0–20,000 on the x-axis]

Full loss curve reconstructed from the local run-0a3gme49.wandb run file, covering step 0 through the end of Run 2.

At 20k steps, loss was 1.8369. After the extension to 40k, the learning-rate schedule was reset to continue cosine decay toward 1e-5 over the longer horizon, and the run is progressing with healthy gradient norms. The latest checkpoint (step 22,250) shows loss 2.051; a modest uptick is expected as the extended run moves through new data under the reset schedule. Throughput holds steady at ~18,900 tok/s.
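For illustration, here is a minimal warmup-plus-cosine schedule consistent with the configuration above (1,000 warmup steps, 1e-4 → 1e-5). The exact scheduler in pretrainv3.py may differ; this is a sketch of the shape, not the actual code.

```python
import math

def lr_at(step, max_steps, peak_lr=1e-4, min_lr=1e-5, warmup=1000):
    """Linear warmup to peak_lr, then cosine decay to min_lr at max_steps."""
    if step < warmup:
        return peak_lr * step / warmup
    progress = (step - warmup) / (max_steps - warmup)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

# Stretching the decay horizon from 20k to 40k steps means the LR at the
# current step is well above the 1e-5 floor the original schedule would
# have reached by now.
print(lr_at(22_250, max_steps=40_000))   # ≈ 4.9e-5
```

Resetting `max_steps` to 40,000 is what keeps the decay smooth across the extension instead of pinning the run at the floor LR for 20,000 extra steps.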

Checkpoints are backed up locally every 10 minutes and uploaded to HuggingFace. Try them in the live playground.

The Dataset

~60B tokens, curated from public sources.

All tokenized with a custom SentencePiece BPE tokenizer trained on the corpus itself.
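For reference, training such a tokenizer with the SentencePiece Python API might look like the following. The input filename, model prefix, and all flags other than `model_type` and `vocab_size` are illustrative assumptions, not the actual Genesis recipe.

```python
import sentencepiece as spm

# Hypothetical invocation: train a BPE tokenizer on a plain-text shard of
# the corpus. Only model_type=bpe and vocab_size=49152 come from the post.
spm.SentencePieceTrainer.train(
    input="corpus_sample.txt",   # assumed filename
    model_prefix="genesis_bpe",  # assumed prefix
    model_type="bpe",
    vocab_size=49_152,           # matches the model's vocab size
    character_coverage=0.9995,   # common default for mixed corpora
    byte_fallback=True,          # avoid <unk> on rare byte sequences
)
```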

The Road to Genesis 1B v0.1

Pre-training is only the first phase. The full pipeline has four stages:

Phase 1: Pre-training (in progress — extended to 40k)

The initial 20k-step milestone was reached March 31, 2026, and the run was immediately extended to 40,000 steps (~21B tokens), deliberately overshooting the Chinchilla-optimal ~20B tokens to give a stronger base. Currently at step ~22,250, with ~17,750 steps remaining; estimated completion ~April 7, 2026.
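The ETA follows directly from the remaining steps and the observed throughput:

```python
# Back-of-envelope ETA for the remaining pre-training steps.
steps_left = 40_000 - 22_250              # 17,750 steps
tokens_per_step = 524_288
throughput = 18_900                       # tok/s across both GPUs
seconds = steps_left * tokens_per_step / throughput
print(f"{seconds / 86_400:.1f} days")     # 5.7 days
```

~5.7 days from April 1 lands on the stated ~April 7 completion date.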

Phase 2: SFT (Supervised Fine-Tuning) — in progress

Already underway. Early SFT checkpoints have been produced on top of the pre-trained base. Full SFT will run once the 40k pre-training base is complete. The approach is inspired by Constitutional AI: define a set of principles and train the model to follow them. The goal is a model with genuine personality, not a model optimized for refusal rates.

Phase 3: DPO (Direct Preference Optimization)

Refine taste and style. Train the model to prefer interesting, thoughtful responses over generic safe ones. Preference pairs are constructed to reward curiosity and penalize hedging.
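The DPO objective on a single preference pair can be sketched as follows. β=0.1 is a commonly used value in the DPO literature, not a confirmed Genesis hyperparameter, and the log-probabilities here are made-up numbers for illustration.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one pair: -log sigmoid(beta * margin), where the margin
    compares policy-vs-reference log-ratios of the chosen (w) and
    rejected (l) responses."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -math.log(1 / (1 + math.exp(-beta * margin)))

# Policy already prefers the chosen response -> small loss...
print(dpo_loss(-10.0, -14.0, -12.0, -12.0))
# ...policy prefers the rejected response -> larger loss.
print(dpo_loss(-14.0, -10.0, -12.0, -12.0))
```

Minimizing this pushes the policy's log-ratio margin toward preferring the chosen response, without needing a separate reward model.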

Phase 4: Continued pre-training cycles

Complete the current pre-training run to 40,000 steps (~21B tokens), then run SFT and DPO again from the stronger base. Repeat at 60,000 and 80,000+ steps. Each cycle produces a better pre-trained foundation, which produces a better-aligned model.

The 60B token corpus means zero data repetition even at extended step counts. Every token the model sees is genuinely new data.

Run 1: What Happened (Historical)

📜 Run 1 History (steps 0–8,500, March 17–24)

Run 1 used a different architecture: 20 layers, dim 2048, 16 heads, batch size 1. It achieved 6,500 tok/s and was on track for ~13 days to 20k steps. Two critical failures occurred:

1. FSDP Checkpoint Deadlock

Checkpoint saves hung indefinitely due to NCCL ALLGATHER over PCIe without NVLink. Fixed by switching to DCP sharded checkpoints.

2. Optimizer State Bug (Silent)

The DCP resume path only loaded model weights, not AdamW optimizer state. This produced a false recovery — loss looked healthy for ~50 steps, then diverged. The fix: load optimizer state alongside model weights with try/except fallback.
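Here is a framework-agnostic sketch of the fixed resume pattern. Names and the dict-based "checkpoint" are illustrative; the real run restores torch DCP sharded checkpoints, but the shape of the fix is the same: restore both states together, and fail loudly when the optimizer state is missing.

```python
import warnings

def resume(checkpoint, model_state, optim_state):
    # Restore model weights AND optimizer state together. Loading only the
    # weights silently resets AdamW's moments (exp_avg / exp_avg_sq), which
    # is exactly the false-recovery failure described above.
    model_state.update(checkpoint["model"])
    try:
        optim_state.update(checkpoint["optim"])
    except KeyError:
        # Older checkpoints lack optimizer state: warn loudly instead of
        # pretending the resume was clean.
        warnings.warn("checkpoint has no optimizer state; "
                      "AdamW moments restart from zero")

old_ckpt = {"model": {"w": 1.0}}   # hypothetical pre-fix checkpoint: weights only
model, optim = {}, {}
resume(old_ckpt, model, optim)     # warns instead of diverging 50 steps later
print(model, optim)
```

The warning is the important part: a resumed run with freshly zeroed moments looks healthy at first and only diverges later, so the failure mode must be surfaced at load time.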

These failures led to the Run 2 redesign. See the full postmortems: FSDP Deadlock · Optimizer State Bug

Run 1 Loss Data

Step     Loss     Step     Loss
0        11.17    3,400    2.73
200      4.87     3,600    2.42
400      4.34     3,800    2.45
600      3.55     4,000    2.25
800      3.03     4,200    2.35
1,000    3.27     4,400    2.19
1,200    3.02     4,600    2.46
1,400    3.02     4,800    2.10
1,600    2.94     5,000    2.39
1,800    2.74     5,500    2.26
2,000    2.54     6,000    2.20
2,200    2.36     6,500    2.15
2,400    2.44     7,000    1.90
2,600    2.54     7,500    1.69
2,800    2.62     8,000    1.53
3,000    2.68     8,500    1.42

Try It Yourself

The model is ready to inspect. Select a checkpoint and generate text to see how it evolved across the run:

Powered by HuggingFace ZeroGPU, free inference on NVIDIA H200

Contact

If you are a founder, independent researcher, or small lab working on multi-GPU local training and have encountered similar checkpoint or synchronization failures on consumer hardware, reach out at [email protected].