March 31, 2026

GPU vs CPU — What It Actually Means for Your Map

At some point, every GIS mapper hits a wall.

You're building something that matters — a damage assessment map with 40,000 points, a shelter monitor with real-time feeds, an evacuation flow visualization with routes across five states — and the map starts to choke. Points render slowly. Pan and zoom lag. The browser freezes when you try to filter. You add a loading spinner and apologize to your stakeholders.

This is not a data problem. It's a rendering problem. And understanding why it happens — and why some tools don't have it — changes what you build and how.


Two Processors, Two Jobs

Your computer has two processors doing very different work.

The CPU (Central Processing Unit) is the generalist. It runs your operating system, executes JavaScript, opens files, handles network requests. A modern laptop CPU has 8 to 16 cores, which means it can do 8 to 16 things simultaneously. It's fast, intelligent, and completely capable of handling complex logic.

The GPU (Graphics Processing Unit) is the specialist. It was built specifically to calculate the color of millions of pixels simultaneously. A modern GPU doesn't have 8 cores — it has thousands. Not because it's smarter than the CPU, but because calculating the color of one pixel is simple, and there are millions of pixels on your screen that all need updating at the same time.

This difference matters enormously for mapping.


How Leaflet and ArcGIS Dashboards Work

Leaflet, the most widely deployed web mapping library ever built, renders on the CPU. When you add a layer with 10,000 points, Leaflet loops through each point in JavaScript, calculates where it goes on screen, and draws it — one at a time, or in small batches.
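Conceptually, that CPU path is a per-feature loop: project each coordinate, then draw it. A minimal sketch in plain JavaScript (the projection math is standard Web Mercator; drawPoint stands in for the per-feature SVG/Canvas work a library like Leaflet does internally):

```javascript
// CPU-style rendering sketch: project every feature one at a time.
const TILE = 256;

function project([lon, lat], zoom) {
  // Standard Web Mercator projection from lon/lat to pixel coordinates.
  const scale = TILE * Math.pow(2, zoom);
  const x = ((lon + 180) / 360) * scale;
  const sinLat = Math.sin((lat * Math.PI) / 180);
  const y =
    (0.5 - Math.log((1 + sinLat) / (1 - sinLat)) / (4 * Math.PI)) * scale;
  return [x, y];
}

function renderCpu(features, zoom, drawPoint) {
  // One iteration per feature, every redraw. This loop is the wall.
  for (const f of features) {
    drawPoint(project(f.coords, zoom));
  }
}

// 10,000 synthetic points: the loop body runs 10,000 times per redraw.
const features = Array.from({ length: 10000 }, (_, i) => ({
  coords: [-98 + (i % 100) * 0.01, 30 + Math.floor(i / 100) * 0.01],
}));
let drawn = 0;
renderCpu(features, 4, () => { drawn++; });
console.log(drawn); // 10000
```

Every pan or zoom re-runs that loop over the full dataset, which is why the lag grows linearly with feature count.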

ArcGIS Dashboards and many standard web map renderers do the same thing. The work happens in JavaScript, which means it happens on the CPU.

This works beautifully up to a point. For a few hundred features, the CPU handles it without visible effort. For a few thousand, you start to notice. Around 10,000 features, pan and zoom begin to feel sluggish. Past 50,000, you're in real trouble.

The wall is not a bug. It's a consequence of asking a tool designed for the CPU to handle a volume of parallel work that the CPU was never built for.


How Deck.gl Works

Deck.gl, originally built by Uber's data visualization team and now maintained as part of the open-source vis.gl suite, takes a different approach entirely. It sends your data to the GPU using WebGL — a browser API that gives JavaScript direct access to the GPU's parallel processing power.

When you add 50,000 shelter points to a Deck.gl layer, Deck.gl doesn't loop through them in JavaScript. It packages the entire dataset as a block of data and ships it to the GPU in one operation. The GPU then calculates the position and color of every point simultaneously, across thousands of cores, in a single pass.
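The "one operation" step is easy to picture: every point's attributes get packed into flat typed arrays, the contiguous memory layout the GPU consumes in a single buffer upload. A rough sketch of the idea (the function and field names here are illustrative, not Deck.gl's actual internals):

```javascript
// Pack N points into flat typed arrays: one contiguous block per attribute,
// which is what gets shipped to the GPU in a single buffer transfer.
function packAttributes(points) {
  const positions = new Float32Array(points.length * 2); // [lon, lat, lon, lat, ...]
  const colors = new Uint8Array(points.length * 3);      // [r, g, b, r, g, b, ...]
  points.forEach((p, i) => {
    positions[i * 2] = p.lon;
    positions[i * 2 + 1] = p.lat;
    colors.set(p.color, i * 3);
  });
  return { positions, colors };
}

const points = [
  { lon: -90.07, lat: 29.95, color: [237, 27, 46] },
  { lon: -95.37, lat: 29.76, color: [237, 27, 46] },
];
const { positions, colors } = packAttributes(points);
console.log(positions.length, colors.length); // 4 6
```

Once those arrays are on the GPU, the per-point position and color math runs in parallel across all cores; JavaScript never touches individual points again until the data changes.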

The result: Deck.gl renders 50,000 points in approximately the same time it renders 500. The GPU doesn't notice the difference, because it's doing all the work in parallel regardless of the count.


The Practical Ceiling

Tool            Renderer            Comfortable limit
Leaflet         CPU / SVG           ~10,000 features
ArcGIS JS SDK   CPU + limited GPU   ~100,000 features
Deck.gl         Pure GPU            10,000,000+ features

For most Red Cross work today — shelter rosters, feeding site locations, open damage assessments — you're well under these limits with any tool. The wall only appears when you start working with data at scale: FEMA damage records for a major hurricane (hundreds of thousands of structures), census tract analysis across a region, or real-time feeds that accumulate over days.

When you hit that scale, Leaflet falls over and Deck.gl doesn't notice.


What WebGL Is (and Isn't)

WebGL is the bridge between JavaScript and the GPU. It's built into every modern browser — Chrome, Firefox, Safari — with no installation required. When a mapping library uses WebGL, it's using this bridge to send rendering instructions directly to the GPU instead of handling them in JavaScript.

You never write WebGL directly. It operates at an extremely low level — closer to machine code than to the JavaScript you write every day. Libraries like Deck.gl write the WebGL for you. You write this:

new deck.ScatterplotLayer({
  data: shelters,
  getPosition: d => [d.lon, d.lat],
  getRadius: 5000,
  getFillColor: [237, 27, 46]
})

And Deck.gl translates that into WebGL instructions, ships them to the GPU, and the GPU draws all your shelter circles in parallel. The layer configuration you write is a description of what you want rendered. The GPU figures out how.


The Scenario You Already Know

Here's how the wall shows up for a GIS mapper:

You're on a response deployment. Leadership wants a live map of all damage assessments coming in from field teams — updated every 15 minutes, color-coded by damage category, filterable by county. Day one, you have 800 records. The Leaflet-based dashboard works fine. Day three, you have 18,000 records. Filtering now takes four seconds. Day five, you have 65,000. The map is borderline unusable, and you're rebuilding it the night before the morning brief.

That's the CPU wall.

The GPU version of that same map — Deck.gl rendering 65,000 points with live filtering — responds in milliseconds regardless of record count, because the GPU is computing all 65,000 positions and colors simultaneously every time you move the slider.
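The idiomatic way to build that slider in Deck.gl is the DataFilterExtension: the layer gets a numeric value per record (getFilterValue) and a filterRange, and the GPU discards out-of-range points during rendering, so the data never needs re-uploading when the filter changes. The accessor itself is just a plain function over your records. A sketch, with made-up field names (countyFips, damageCategory):

```javascript
// The accessor deck.gl would evaluate once per record; the range comparison
// then runs on the GPU every frame. Field names are illustrative.
const getFilterValue = (d) => [d.countyFips, d.damageCategory];

// In the layer config this would look roughly like (sketch, not runnable here):
//   new deck.ScatterplotLayer({
//     data: assessments,
//     getFilterValue,
//     filterRange: [[22071, 22071], [2, 4]],  // one county, major/destroyed
//     extensions: [new deck.DataFilterExtension({ filterSize: 2 })],
//   })

const assessments = [
  { countyFips: 22071, damageCategory: 3 },
  { countyFips: 22051, damageCategory: 1 },
];
const filterValues = assessments.map(getFilterValue);
console.log(filterValues); // [ [ 22071, 3 ], [ 22051, 1 ] ]
```

Moving the slider only changes filterRange, a handful of numbers, which is why the response time stays flat at 65,000 records.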


What This Doesn't Mean

Using Deck.gl doesn't mean abandoning AGOL or your existing GIS workflow. Your data still lives in AGOL. Your Feature Layers are still the authoritative source. queryFeatures() is still the call that gets the data out.

The only thing that changes is what happens after the data arrives in the browser. Instead of handing it to a Leaflet layer or an ESRI map view, you hand it to Deck.gl. The GPU takes over from there.
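The handoff is a small transform: the features that come back from queryFeatures() become the flat records that Deck.gl accessors (getPosition, getFillColor) read. Assuming point geometries queried in WGS84, the reshape might look like this (the STATUS attribute name is illustrative):

```javascript
// Reshape an ArcGIS queryFeatures() result into flat records for Deck.gl.
// Assumes point geometries returned in WGS84; attribute names are illustrative.
function toDeckData(featureSet) {
  return featureSet.features.map((f) => ({
    lon: f.geometry.x,
    lat: f.geometry.y,
    status: f.attributes.STATUS,
  }));
}

// Sample shaped like an ArcGIS FeatureSet:
const featureSet = {
  features: [
    { geometry: { x: -90.07, y: 29.95 }, attributes: { STATUS: "Open" } },
  ],
};
const deckData = toDeckData(featureSet);
console.log(deckData);
```

The transformed array is what you pass as the layer's data prop; everything upstream of this function is your existing AGOL workflow, untouched.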

Your data stays where it is. The rendering gets a different engine.


Related: The AGOL Bridge: queryFeatures() to Deck.gl