# Detailed technical documentation for each DeepMind weather model
## GraphCast: Learning skillful medium-range global weather forecasting (Science, 2023)
GraphCast uses an encode-process-decode GNN architecture operating on a multi-scale icosahedral mesh.
Key Innovation: Multi-mesh retains edges from all refinement levels (M0-M6), enabling both local detail and global information flow in a single message-passing step.
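The multi-mesh idea can be sketched in a few lines: edges from every refinement level are unioned into one edge set, so a single round of message passing mixes both long-range (coarse) and short-range (fine) information. This is a toy illustration on a 1-D "mesh" with made-up edge lists, not GraphCast's actual icosahedral construction:

```python
import numpy as np

# Hypothetical toy edge lists standing in for refinement levels M0-M2.
# In the real multi-mesh, edges from M0-M6 over the finest node set are merged.
edges_per_level = {
    "M0": [(0, 4), (4, 8)],                      # long-range (coarse) edges
    "M1": [(0, 2), (2, 4), (4, 6), (6, 8)],
    "M2": [(i, i + 1) for i in range(8)],        # short-range (fine) edges
}

# Union of all levels: one edge set used in every message-passing step.
edges = sorted({e for lvl in edges_per_level.values() for e in lvl})

num_nodes = 9
h = np.random.default_rng(0).normal(size=(num_nodes, 4))  # node features

# One round of mean-aggregation message passing over the merged edge set.
agg = np.zeros_like(h)
deg = np.zeros(num_nodes)
for s, r in edges:
    for src, dst in ((s, r), (r, s)):            # treat edges as bidirectional
        agg[dst] += h[src]
        deg[dst] += 1
h_new = h + agg / np.maximum(deg, 1)[:, None]    # residual update
```

Because coarse edges survive in the union, node 0 receives information from node 4 (half the domain away) in the same step that it hears from its immediate neighbor.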
## GenCast: Diffusion-based ensemble forecasting for medium-range weather (Nature, 2024)
GenCast is a diffusion model adapted to spherical Earth geometry. It generates probabilistic weather trajectories by iteratively denoising a noise-corrupted future state, conditioned on recent atmospheric states.
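The iterative denoising loop can be sketched as follows. This assumes an EDM-style sampler with a log-spaced noise schedule; the `denoiser` function, schedule endpoints, and step count are illustrative stand-ins, not GenCast's actual network or configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoiser(x_noisy, sigma, conditioning):
    # Stand-in for the learned network D(x, sigma | recent weather states).
    # A dummy rule that shrinks the noisy sample toward the conditioning field.
    return conditioning + 0.5 * (x_noisy - conditioning) * sigma / (sigma + 1)

def sample_forecast(conditioning, n_steps=20, sigma_max=80.0, sigma_min=0.03):
    sigmas = np.geomspace(sigma_max, sigma_min, n_steps)   # noise schedule
    x = rng.normal(size=conditioning.shape) * sigma_max    # start from pure noise
    for i, sigma in enumerate(sigmas):
        denoised = denoiser(x, sigma, conditioning)
        sigma_next = sigmas[i + 1] if i + 1 < n_steps else 0.0
        # Euler step: move part of the way toward the denoised estimate.
        x = denoised + (x - denoised) * (sigma_next / sigma)
    return x

prev_state = np.zeros((16, 16))  # toy "recent weather" conditioning field
ensemble = [sample_forecast(prev_state) for _ in range(4)]  # 4 ensemble members
```

Each call starts from a different noise draw, so repeating the sampler yields distinct but plausible ensemble members, which is how a diffusion forecaster produces a probabilistic ensemble.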
Key Innovation: Uses sparse transformer attention (neighbors only) in the processor, enabling efficient computation while capturing local atmospheric dynamics.
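Neighbor-only attention can be illustrated by masking the attention scores so each node attends only to itself and its mesh neighbors. This toy version builds a dense mask for clarity on a ring "mesh" (a real sparse implementation would index neighbor features directly instead of materializing an n-by-n matrix):

```python
import numpy as np

def local_attention(h, neighbors):
    """Attention where each node attends only to itself and its mesh neighbors."""
    n, d = h.shape
    scores = h @ h.T / np.sqrt(d)                 # toy dot-product scores
    mask = np.full((n, n), -np.inf)
    for i, nbrs in enumerate(neighbors):
        for j in [i, *nbrs]:                      # self plus neighbors
            mask[i, j] = 0.0
    scores = scores + mask                        # non-neighbors get -inf
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ h

# Toy ring mesh: node i's neighbors are i-1 and i+1 (mod n).
n = 6
neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
h = np.random.default_rng(1).normal(size=(n, 8))
out = local_attention(h, neighbors)
```

Restricting attention to neighbors keeps the cost proportional to the number of edges rather than the square of the number of nodes, while still matching the locality of atmospheric dynamics.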
## FGN: Skillful joint probabilistic weather forecasting from marginals (2025)
FGN generates ensembles via learned functional perturbations: injecting low-dimensional noise directly into the architecture.
Key Insight: Trained only on per-location CRPS (marginals), but the shared global noise forces the model to learn realistic spatial correlations (joints).
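The training setup can be sketched with a sample-based CRPS estimator: the loss is computed independently at each location (marginals only), but because every location in a given ensemble member sees the *same* low-dimensional noise draw, the model is pushed toward spatially coherent perturbations. The `model` function and noise dimensionality below are illustrative stand-ins, not FGN's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(z, conditioning):
    # Stand-in for FGN: ONE shared low-dimensional noise vector z perturbs the
    # forecast at every location, coupling the spatial field.
    return conditioning + z[0] + 0.1 * z[1] * conditioning

def crps_ensemble(samples, y):
    # Sample-based CRPS estimator, computed per location, then averaged.
    term1 = np.abs(samples - y).mean(axis=0)                        # E|X - y|
    term2 = np.abs(samples[:, None] - samples[None, :]).mean(axis=(0, 1))
    return (term1 - 0.5 * term2).mean()

conditioning = np.linspace(0.0, 1.0, 32)   # toy 1-D "grid"
target = conditioning + 0.3
# Each ensemble member uses one shared 2-D noise vector for all 32 locations.
ensemble = np.stack([model(rng.normal(size=2), conditioning) for _ in range(8)])
loss = crps_ensemble(ensemble, target)
```

Nothing in `crps_ensemble` ever compares two locations, yet the shared noise means the only way to score well everywhere at once is to learn perturbations with realistic spatial structure.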
## The spherical representation powering all three models
Traditional lat-lon grids have fundamental problems for global weather modeling: grid cells shrink toward the poles, so resolution is highly non-uniform across the sphere, and the poles themselves are singular points where many cells converge. Icosahedral meshes solve these problems with near-uniform node spacing over the whole sphere.
| Level | Nodes | Resolution |
|---|---|---|
| M0 | 12 | ~7,000 km |
| M1 | 42 | ~3,500 km |
| M2 | 162 | ~1,750 km |
| M3 | 642 | ~875 km |
| M4 | 2,562 | ~440 km |
| M5 | 10,242 | ~220 km |
| M6 | 40,962 | ~110 km |
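The table follows a simple pattern: each refinement level splits every triangular face into four, so the node count is N_L = 10 · 4^L + 2 and the spacing roughly halves per level. A short sketch (the ~7,000 km base spacing is taken from the M0 row above):

```python
# Icosahedral refinement: each level splits every triangle into four,
# so node count follows N_L = 10 * 4**L + 2 and spacing roughly halves.
def mesh_nodes(level):
    return 10 * 4 ** level + 2

def approx_resolution_km(level, base_km=7000.0):
    # Edge length roughly halves per level (base_km ~ the M0 edge length).
    return base_km / 2 ** level

for lvl in range(7):
    print(f"M{lvl}: {mesh_nodes(lvl):>6} nodes, ~{approx_resolution_km(lvl):.0f} km")
```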