24h average error, verified:
GFS     4.2°F
ECMWF   3.3°F
HRRR    4.8°F
ICON    3.8°F
Meld    2.1°F

Models tracked: 4
US cities validated: 30
Training data: 6 years
Methodology: ensemble blending
Consumer price: $0
Ads: None. Ever.
Four models. One answer.

Every model.
One verdict.

WeatherMeld tracks every major weather model's verified performance across 30 US cities, weights them dynamically by demonstrated accuracy, and blends them into a single confidence-scored forecast. The Meld. Free for everyone. No ads. No degraded tier. No exceptions.

Meld Score · Chicago O'Hare · 24h lead (illustrative, not live data) — example weights: ECMWF 34%, GFS 28%, ICON 24%, HRRR 14%
Founding principles
Rule 01
The forecast is never downgraded
Every user receives the full Meld. No premium tier with better accuracy. No free tier with worse accuracy. The forecast is identical for everyone.
Rule 02
No ads. No data selling. Ever.
The consumer product is funded entirely by B2B API clients who pay for volume, SLA, and integration — not a better forecast. Your attention is not the product.
Rule 03
Show your work
Every forecast shows which models contributed, their weights, and their verified accuracy in your location. The Model Leaderboard is public. No black boxes.

Not a forecast.
A verdict.

Every major weather model makes different predictions for the same moment. They're all wrong sometimes. The question is which one is least wrong — where, and when.

WeatherMeld tracks each model's performance across 30 US cities over six years, segmented by location, lead time, season, and weather type. Weights update continuously. The model that's been right in Chicago in January gets more say in Chicago in January.

The result is the Meld: a probability-weighted composite with a confidence score derived from inter-model agreement.
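The composite described above can be sketched in a few lines: a weighted average of the model forecasts, with a confidence score that falls as the models disagree. This is a minimal illustration under assumptions — the function name, the example numbers, and the mapping from weighted spread to a 0–100 score are invented here, not WeatherMeld's published method.

```python
# Hypothetical sketch of a probability-weighted composite with a
# confidence score derived from inter-model agreement.
def meld(forecasts, weights):
    """forecasts/weights: dicts keyed by model name; weights sum to 1."""
    blended = sum(weights[m] * forecasts[m] for m in forecasts)
    # Agreement: weighted absolute spread around the blend.
    spread = sum(weights[m] * abs(forecasts[m] - blended) for m in forecasts)
    # Tighter agreement -> higher confidence (assumed 0-100 mapping).
    confidence = round(100 / (1 + spread))
    return blended, confidence

# Example temperatures (°F) and skill weights -- illustrative only.
temps = {"ECMWF": 41.0, "GFS": 43.5, "ICON": 42.0, "HRRR": 44.0}
wts = {"ECMWF": 0.34, "GFS": 0.28, "ICON": 0.24, "HRRR": 0.14}
blended, confidence = meld(temps, wts)
```

The design choice to score confidence from spread means a forecast where all four models agree closely reports high confidence even if the blended value later verifies poorly; agreement and accuracy are related but distinct signals.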

4 models blended per forecast
30 US cities validated
6 years of training data
01
Ingest
Four model runs arrive 4× daily — GFS, HRRR, ECMWF, ICON — parsed to the variables that matter: temperature, precipitation, wind, humidity, cloud cover.
02
Score
Each prior forecast is verified against observed conditions. RMSE, Brier Score, and a composite skill metric are computed per model, per location, per lead time.
03
Weight
A rolling skill window feeds a softmax weighting function. Recent performance matters more. A model accurate last week outweighs one accurate last year.
04
Blend
Weights are applied to current model outputs. The result is the Meld: a single forecast with a confidence score derived from inter-model agreement.
05
Deliver
Free to consumers. Programmatic access for B2B clients. Same forecast, different delivery. Volume, SLA, and integration are what B2B pays for — not accuracy.
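Steps 02–04 can be sketched as: score each model by its recent errors, pass a recency-decayed skill measure through a softmax, and use the resulting weights to blend. This is a hedged illustration of the stated technique; the window shape, decay rate, temperature, and example error histories are all assumptions, not WeatherMeld's actual parameters.

```python
import math

def softmax_weights(errors_by_model, decay=0.8, temperature=1.0):
    """errors_by_model: model -> list of past absolute errors, oldest first.
    A recency-decayed mean error feeds a softmax so lower-error models
    receive larger blend weights."""
    skill = {}
    for model, errs in errors_by_model.items():
        # Exponential decay: the most recent error counts most.
        w = [decay ** (len(errs) - 1 - i) for i in range(len(errs))]
        skill[model] = sum(wi * ei for wi, ei in zip(w, errs)) / sum(w)
    # Softmax over negative error: smaller error -> larger weight.
    exps = {m: math.exp(-s / temperature) for m, s in skill.items()}
    total = sum(exps.values())
    return {m: v / total for m, v in exps.items()}

# Illustrative 24h error histories (°F), oldest first.
history = {
    "ECMWF": [3.5, 3.2, 3.1],
    "GFS":   [4.0, 4.3, 4.4],
    "ICON":  [3.9, 3.8, 3.7],
    "HRRR":  [4.9, 4.7, 4.9],
}
weights = softmax_weights(history)
# Weights sum to 1; the model with the lowest recent error gets the
# largest share, matching "recent performance matters more."
```

The softmax temperature controls how sharply weights concentrate on the best model: a low temperature approaches winner-take-all, a high one approaches equal weighting.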