Fetch Event Outputs
How to retrieve a single ADAM event and follow the dynamic fields it carries to derived products — population impact tables, hazard rasters, shapefile bundles.
This is ADAM's signature value-add on top of standard OGC Features: each event feature links to the artefacts produced by the hazard module for that specific event.
The shape of dynamic fields
For earthquake and cyclone event collections, a single feature looks roughly like:
{
  "type": "Feature",
  "id": "eq-2026-04-15-12345",
  "geometry": { "type": "Point", "coordinates": [34.5, 38.1] },
  "properties": {
    "published_at": "2026-04-15T13:42:11Z",
    "mag": 6.7,
    "depth": 23,
    "source": "USGS",
    "iso3": "TUR",
    "place": "central Turkey",
    "population_table_url": "https://.../eq-2026-04-15-12345/population.json",
    "shakemap_url": "https://.../eq-2026-04-15-12345/shakemap.tif",
    "shapefile_url": "https://.../eq-2026-04-15-12345/shapefile.zip"
  },
  "links": [ ... ]
}
The extra URLs beyond the standard GeoJSON properties are what we call dynamic fields. They are absolute — just follow them with any HTTP client.
Exact field names
Dynamic field names vary by collection and evolve over time. Do not hard-code them. Treat properties as a dictionary and pick up whatever URLs it carries.
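Since names cannot be hard-coded, discovery has to be generic. A minimal sketch of that approach — the helper name and the sample values are illustrative, not part of the API:

```python
def extract_dynamic_fields(props: dict) -> dict:
    """Return every string-valued property that looks like an absolute URL."""
    return {
        key: value
        for key, value in props.items()
        if isinstance(value, str) and value.startswith(("http://", "https://"))
    }

# Illustrative properties dict, modelled on the earthquake example above
sample_props = {
    "mag": 6.7,
    "iso3": "TUR",
    "population_table_url": "https://example.org/eq-1/population.json",
    "shakemap_url": "https://example.org/eq-1/shakemap.tif",
}
print(extract_dynamic_fields(sample_props))
# the two *_url entries survive; mag and iso3 are filtered out
```

Because the helper only pattern-matches, it keeps working when a collection adds a new artefact URL.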
Worked example — a single earthquake
Step 1 — find the event
Either browse the live HTML viewer and grab an id, or filter programmatically:
import httpx
BASE = "https://api.adam.geospatial.wfp.org/api"
COLLECTION = "adam.adam_eq_events"
recent = httpx.get(
    f"{BASE}/collections/{COLLECTION}/items",
    params={"limit": 1, "sortby": "-published_at", "filter": "mag >= 6.0"},
).json()
event_id = recent["features"][0]["id"]
Step 2 — fetch the full feature
feature = httpx.get(f"{BASE}/collections/{COLLECTION}/items/{event_id}").json()
props = feature["properties"]
Step 3 — pick up the dynamic fields
# Anything in `properties` that looks like a URL is a dynamic field
dynamic_urls = {k: v for k, v in props.items() if isinstance(v, str) and v.startswith("http")}
for field, url in dynamic_urls.items():
    print(f"{field:<30} → {url}")
Step 4 — consume the products
Each URL points to a specific artefact type. Typical patterns:
# JSON tables
if "population_table_url" in props:
    table = httpx.get(props["population_table_url"]).json()
    # ... pandas.DataFrame.from_records(table) etc.

# Downloadable binaries — stream them
if "shapefile_url" in props:
    with httpx.stream("GET", props["shapefile_url"]) as r:
        with open(f"{event_id}.zip", "wb") as f:
            for chunk in r.iter_bytes():
                f.write(chunk)
# Rasters (e.g. ShakeMaps) — typically Cloud-Optimised GeoTIFFs
# Read directly with rasterio or rio-tiler; no local download needed.
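As a sketch of the raster path, assuming the ShakeMap URL is a Cloud-Optimised GeoTIFF reachable over plain HTTP (rasterio opens remote files through GDAL's /vsicurl/ machinery; the function name is illustrative):

```python
def read_shakemap_band(url: str):
    """Read band 1 of a remote Cloud-Optimised GeoTIFF without downloading it.

    Requires rasterio (pip install rasterio); the import is local so the
    rest of a script still runs where rasterio is not installed.
    """
    import rasterio  # third-party

    with rasterio.open(url) as src:
        return src.read(1)  # numpy array of the first band

# Usage (illustrative):
# intensity = read_shakemap_band(props["shakemap_url"])
```

Because COGs are internally tiled, rasterio fetches only the byte ranges it needs, so partial reads stay cheap even for large rasters.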
Cyclone events — same pattern, richer links
Tropical storm events in adam.adam_ts_events surface the same pattern with different artefacts: rainfall rasters, population impact tables by municipality, WFP facility exposure reports.
Related geometries (forecast tracks, impact buffers, forecast nodes) live in sibling collections. You can join them by the shared event identifier — query adam.adam_ts_tracks filtered on the event id from adam.adam_ts_events.
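One way to express that join, sketched below. The property name `event_id` on the track features is an assumption — check the track collection's queryables for the real name:

```python
BASE = "https://api.adam.geospatial.wfp.org/api"

def track_query(event_id: str) -> tuple[str, dict]:
    """Build the URL and query params to fetch tracks for one cyclone event.

    Assumes track features carry the parent event id in an `event_id`
    property; verify against the collection's queryables before relying on it.
    """
    url = f"{BASE}/collections/adam.adam_ts_tracks/items"
    params = {"filter": f"event_id = '{event_id}'"}
    return url, params

# Usage (illustrative):
# url, params = track_query(event["id"])
# tracks = httpx.get(url, params=params).json()
```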
When fields are missing
Events are processed over time. On a newly detected event, some dynamic fields may be null or absent while the hazard module completes its run. Revisit the feature a few minutes later; the properties are enriched as artefacts become available.
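That revisit pattern can be sketched as a small polling loop. `fetch_feature` stands in for the httpx call from Step 2, and the field names you wait on are up to you:

```python
import time

def wait_for_fields(fetch_feature, wanted, attempts=5, delay=60):
    """Re-fetch a feature until every wanted dynamic field is present and non-null.

    fetch_feature: zero-argument callable returning the feature dict
    wanted: iterable of property names to wait for
    """
    for attempt in range(attempts):
        props = fetch_feature()["properties"]
        if all(props.get(name) for name in wanted):
            return props
        if attempt < attempts - 1:
            time.sleep(delay)
    raise TimeoutError(f"fields still missing after {attempts} attempts: {wanted}")
```

Keep the attempt count bounded — if an artefact never appears, the hazard run may simply not produce it for that event.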
Defensive coding
- Do not assume field names — iterate `properties` and pattern-match URLs.
- Check for `null` — fields may be empty on recent events.
- Stream binaries — shapefiles and rasters can be multi-MB.
- Cache by event id — once a feature's artefacts are finalised, they do not change.
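The caching point can be as simple as keying downloads by event id. A sketch with a local directory — the helper name is illustrative, and the fetch is injected as a callable (in real use, the streaming httpx download from Step 4):

```python
from pathlib import Path

def cached_download(cache_dir: Path, event_id: str, name: str, fetch_bytes) -> Path:
    """Fetch an artefact once per event id; later calls reuse the file on disk.

    fetch_bytes: zero-argument callable returning the artefact's bytes
    (in practice a streamed httpx download of the dynamic field URL).
    """
    target = cache_dir / event_id / name
    if not target.exists():
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(fetch_bytes())
    return target
```

Since finalised artefacts are immutable, a plain existence check is enough — no ETags or revalidation needed.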
Next steps
- Query Features — find the events you want to drill into
- Collections Reference — which collections carry dynamic fields
Live references
- Single feature endpoint on Swagger → item_collections__collectionId__items__itemId__get
- Browse earthquake events live → /api/collections/adam.adam_eq_events/items
- Browse cyclone events live → /api/collections/adam.adam_ts_events/items