
Beyond Markers: Trip Replay, Spiderfied Clusters, and Dual Data Sources on Google Maps

Dropping a few markers on a map is straightforward. I thought the maps feature would be one of the simpler parts of the app. I was wrong.

By the time it was fully built, the maps page handled 200+ vehicles clustered across South Africa; depot yards where 20 trucks parked on the exact same GPS coordinates and stacked into unclickable piles; and trip history replayed as polylines with camera snapshots aligned to GPS points by timestamp. The whole thing also had to work against both Firebase Realtime Database and a REST API simultaneously, because we were mid-migration and couldn't do a hard cutover.

My first version was a single component that did all of this together - marker rendering, clustering, trip replay, data fetching, UI state, snapshot alignment, polling. Around 2,000 lines. It worked. It was also the most fragile thing in the codebase. Every bug fix was an archaeology expedition. Changing trip replay logic risked breaking marker clustering. Adding a new data source meant touching code that also handled video overlays. I eventually hit a point where I was genuinely afraid to open the file.

I decomposed it into six focused services. This article is what I learned building all of it.

The service architecture - what the 2,000-line component became

After the refactor the component became a thin orchestrator. The actual work lives in six services:

  • MapRenderingService (~2,400 lines) - marker creation, clustering, spiderfier initialisation, polyline drawing. Owns the google.maps.Map instance.
  • MapsDataService - data source abstraction. Firebase or REST, the rest of the app doesn't know or care.
  • MapService - HTTP calls to the BFF with dual-server failover. Tracking, GPS, geocoding.
  • TripReplayFacade - orchestrates rendering: polylines, trip markers, snapshot attachment.
  • TripReplayService - data loading for trip replays.
  • MapUiStateStore - loading indicators, progress bars, UI toggles. One BehaviorSubject, one source of truth.

A developer working on trip replay now doesn't need to understand how clustering works. A developer adding a map layer doesn't need to touch the data source abstraction. That isolation was the entire point of the refactor - and the reason I should have structured it that way from the start.
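One item in that list - MapService's dual-server failover - never gets its own section in this article, so here is a rough, dependency-free sketch of the pattern. The URLs, the Fetcher type, and the withFailover name are invented for illustration; they are not the app's real API:

```typescript
// Hypothetical sketch of dual-server failover: try the primary BFF,
// and on any failure retry the same request against the secondary.
type Fetcher = (url: string) => Promise<string>;

async function withFailover(
  path: string,
  fetcher: Fetcher,
  primary = "https://bff-a.example.com",
  fallback = "https://bff-b.example.com"
): Promise<string> {
  try {
    return await fetcher(`${primary}${path}`);
  } catch {
    // Primary unreachable - repeat the identical call against the secondary.
    return fetcher(`${fallback}${path}`);
  }
}
```

Centralising the retry in one function is the point: callers ask for tracking or GPS data and never learn which BFF instance answered.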

Marker clustering - and why I stopped trying to update in-place

Two hundred individual markers on a map of South Africa tell you nothing. They overlap, obscure each other, and make the data unreadable. I use @googlemaps/markerclusterer with the SuperCluster algorithm:

```typescript
private createClusterer(markers: google.maps.Marker[]): MarkerClusterer {
  return new MarkerClusterer({
    map: this.map!,
    markers,
    algorithm: new SuperClusterAlgorithm({ maxZoom: 10 }),
  });
}
```

maxZoom: 10 means clusters dissolve at zoom level 10. I tried higher values - 12, then 14 - but South African depot yards where 20+ vehicles park overnight were still unreadable piles at zoom 11 and 12. At zoom 10 the transition from cluster to individual markers feels natural.

Fleet operators toggle between views - all vehicles, trucks only, trackers only. Each toggle needs to rebuild the cluster with a filtered marker subset. My first attempt was updating the existing cluster in-place with addMarkers() and removeMarkers(). SuperCluster's spatial index got stale - markers appeared in wrong clusters after filtering, clusters persisted at zoom levels where they should have dissolved. I wasted an afternoon chasing what I thought was a rendering bug before I understood the problem.

The fix was blunt: destroy and recreate every time:

```typescript
showVehicleMarkers(): void {
  this.markerCluster?.clearMarkers();
  const vehicleMarkers = this.allMarkers.filter(m => m.type === 'vehicle');
  this.markerCluster = new MarkerClusterer({
    map: this.map,
    markers: vehicleMarkers,
    algorithm: new SuperClusterAlgorithm({ maxZoom: 10 }),
  });
}
```

Heavier than in-place updates. Correct every time.

The spiderfier - when 20 trucks park in the same spot

Clustering handles density at overview zoom. At close zoom, when you're looking at a specific depot, you hit a different problem: 15 trucks parked at the same yard have markers sitting on the exact same GPS coordinates. Clicking the pile selects whichever marker happens to be on top - usually not the one the operator wanted.

I watched a fleet manager click the same pile five times trying to find a specific truck before I understood this needed a proper fix. The Overlapping Marker Spiderfier library spreads stacked markers into a spiral when clicked. I load it lazily because it only matters at close zoom and most users spend most of their time at overview:

```typescript
private async initializeSpiderfier() {
  if (!this.map) return;
  try {
    const OverlappingMarkerSpiderfier = (
      await import("overlapping-marker-spiderfier")
    ).default;
    const options = {
      legWeight: 1.5,
      keepSpiderfied: true,
      nearbyDistance: 50,
      circleFootSeparation: 50,
      lineToCenter: true,
      circleSpiralSwitchover: 5,
      spiralFootSeparation: 9,
      minZoomLevel: 12,
    };
    this.oms = new OverlappingMarkerSpiderfier(this.map, options);
  } catch (error) {
    // Retry once - dynamic import can fail on slow connections
  }
}
```

minZoomLevel: 12 means spiderfication only activates where stacking is actually a problem. circleSpiralSwitchover: 5 means small groups use a circle - cleaner visually - and larger groups use a spiral that scales better for big depot yards.

This library is also the source of the passive event listener warnings that required a global monkey-patch in main.ts. The spiderfier calls preventDefault() on touch events without marking them passive, which Chrome flags as a scroll performance violation on Android. I couldn't patch the library, so I intercepted addEventListener at the prototype level to force { passive: true } on all touch events before Angular even boots - covered in the cross-platform article.
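That patch lives in main.ts and is covered properly in the cross-platform article, but the idea is simple enough to sketch here. This is a simplified stand-in, not the production patch:

```typescript
// Simplified sketch of the main.ts monkey-patch: intercept addEventListener
// at the prototype level and force { passive: true } on touch/wheel events,
// so third-party libraries can't register scroll-blocking listeners.
const FORCED_PASSIVE = new Set(["touchstart", "touchmove", "touchend", "wheel"]);

const originalAdd = EventTarget.prototype.addEventListener;

EventTarget.prototype.addEventListener = function (
  this: EventTarget,
  type: string,
  listener: any,
  options?: any
): void {
  if (FORCED_PASSIVE.has(type)) {
    // Normalise the options argument to an object, then force passive mode.
    const normalised =
      typeof options === "object" && options !== null
        ? { ...options }
        : { capture: options === true };
    normalised.passive = true;
    return originalAdd.call(this, type, listener, normalised);
  }
  return originalAdd.call(this, type, listener, options);
};
```

Because it runs before Angular boots, every listener registered afterwards - the spiderfier's included - goes through the wrapper; non-touch events pass through untouched.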

Dual data sources - migrating from Firebase to REST without a hard cutover

The fleet originally used Firebase Realtime Database for live vehicle positions. Firebase gives you real-time push updates via WebSocket - a vehicle moves, the map updates instantly, no polling needed. It was the right choice early on.

Then we needed to migrate to the REST API through the BFF. More control, better backend integration, one fewer external dependency. The problem was doing it safely across a live fleet. A hard cutover on all clients simultaneously wasn't acceptable - if REST had problems we needed to roll back per-client without a code deployment.

The MapsDataService makes both sources interchangeable behind a single Observable:

```typescript
@Injectable({ providedIn: "root" })
export class MapsDataService {
  get deviceSnapshots$(): Observable<VehicleSnapshot[]> {
    return this.sharedService.getVehicleDataSource().pipe(
      switchMap((mode) =>
        mode === "firebase"
          ? this.getFirebaseDeviceSnapshots$()
          : this.getRestDeviceSnapshots$()
      ),
      shareReplay({ bufferSize: 1, refCount: true })
    );
  }

  get vehicleSnapshots$(): Observable<VehicleSnapshot[]> {
    return this.deviceSnapshots$.pipe(
      map(snapshots => snapshots.filter(s => s.mdvr !== "0"))
    );
  }

  get trackerSnapshots$(): Observable<VehicleSnapshot[]> {
    return this.deviceSnapshots$.pipe(
      map(snapshots => snapshots.filter(s => s.mdvr === "0"))
    );
  }
}
```

The shareReplay({ bufferSize: 1, refCount: true }) matters. Multiple parts of the app subscribe to deviceSnapshots$ - the map for markers, the sidebar vehicle list, the trip replay panel. Without it each subscriber triggers an independent Firebase listener or REST call. With it, one stream serves all consumers and cleans up when the last subscriber unsubscribes.
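To make those semantics concrete without dragging in rxjs internals, here is a toy stand-in - emphatically not rxjs code - showing the three behaviours the operator combines: one upstream connection shared by all subscribers, replay of the latest value to late subscribers, and teardown when the last subscriber leaves:

```typescript
// Hand-rolled illustration of shareReplay({ bufferSize: 1, refCount: true }).
type Teardown = () => void;
type Source<T> = (next: (v: T) => void) => Teardown;

function shareReplayLatest<T>(source: Source<T>): Source<T> {
  let subscribers: Array<(v: T) => void> = [];
  let upstream: Teardown | null = null;
  let last: { value: T } | null = null;

  return (next) => {
    subscribers.push(next);
    if (last) next(last.value);          // replay buffer of 1 for late subscribers
    if (!upstream) {
      upstream = source((v) => {         // single upstream connection, shared
        last = { value: v };
        subscribers.forEach((s) => s(v));
      });
    }
    return () => {                       // refCount: tear down at zero subscribers
      subscribers = subscribers.filter((s) => s !== next);
      if (subscribers.length === 0 && upstream) {
        upstream();
        upstream = null;
        last = null;
      }
    };
  };
}
```

Swap `source` for a Firebase listener or a REST polling loop and the behaviour is exactly what the map needs: N consumers, one connection, no leaks.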

The switchMap on the data source mode means toggling mid-session just works - the old Firebase listener tears down, REST polling starts, and every subscriber gets the new data without knowing anything changed. We migrated clients one at a time and rolled back two of them when we found edge cases. No incidents.

The duplicate markers problem

Firebase data sometimes had two records for the same physical truck - same registration, different device numbers. This happens when tracking hardware gets replaced in the field but the old record isn't cleaned up. The operator sees two markers for one truck and calls asking why their truck is in two places simultaneously.

```typescript
export function dedupeVehiclesByRegNumber(
  snapshots: VehicleSnapshot[]
): VehicleSnapshot[] {
  const deduped = new Map<string, VehicleSnapshot>();
  for (const snapshot of snapshots) {
    if (!snapshot.reg_nr) {
      deduped.set(`${snapshot.device_nr}-tracker`, snapshot);
      continue;
    }
    const key = `${snapshot.reg_nr}_${
      snapshot.mdvr === "0" ? "tracker" : "vehicle"
    }`;
    if (!deduped.has(key)) {
      deduped.set(key, snapshot);
    }
  }
  return Array.from(deduped.values());
}
```

GPS-only trackers are keyed by device number since they have no registration. Vehicles with registrations are keyed by registration plus type, first-seen wins. The key insight was deduping by business key - the registration plate the operator cares about - not the technical key that changes every time hardware gets swapped.
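A quick self-contained check of that rule - the snapshot shape trimmed to the three fields the keying uses, same logic as the function above:

```typescript
// Trimmed snapshot shape: only the fields the dedupe key depends on.
interface Snapshot {
  reg_nr: string | null;
  device_nr: string;
  mdvr: string;
}

function dedupe(snapshots: Snapshot[]): Snapshot[] {
  const deduped = new Map<string, Snapshot>();
  for (const snapshot of snapshots) {
    if (!snapshot.reg_nr) {
      // GPS-only tracker: no registration, key by device number
      deduped.set(`${snapshot.device_nr}-tracker`, snapshot);
      continue;
    }
    const key = `${snapshot.reg_nr}_${snapshot.mdvr === "0" ? "tracker" : "vehicle"}`;
    if (!deduped.has(key)) deduped.set(key, snapshot); // first-seen wins
  }
  return Array.from(deduped.values());
}

// Old device 1111 was replaced in the field by 2222 but never cleaned up:
const markers = dedupe([
  { reg_nr: "CA 123-456", device_nr: "1111", mdvr: "1" },
  { reg_nr: "CA 123-456", device_nr: "2222", mdvr: "1" },
]);
// One marker survives, keyed by the registration the operator cares about.
```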

Trip replay - the feature fleet managers use most

Fleet managers use trip replay constantly - reviewing a route after an incident, verifying a delivery, investigating a speeding report. Select a vehicle, a date and a time window, and the map renders the historical path as a polyline with clickable markers at each GPS point. Each marker can have camera snapshots attached so the manager can see what the camera recorded at that location.

The architecture splits this into loading and rendering:

```typescript
// Loading
async loadTripReplay(params: TripReplayParams, progressCallback) {
  const result = await this.mapService.getTracking(params).toPromise();
  const snaps = await this.snapshotService.getSnapshots(params);
  return { tripData: result, snaps, date: params.date };
}

// Rendering
async renderTripReplay(config) {
  this.mapRenderingService.resetMap();
  const tripMarkers = config.tripData.map((point, i) => {
    const marker = new google.maps.Marker({
      position: {
        lat: parseFloat(point.latitude),
        lng: parseFloat(point.longitude),
      },
      map: this.map,
      icon: this.getTripPointIcon(i, config.tripData.length),
    });
    const infoWindow = this.buildTripInfoWindow(point);
    marker.addListener('click', () => infoWindow.open(this.map, marker));
    return marker;
  });
  this.drawTripPolyline(config.tripData);
  this.attachSnapshotsToTripPoints(config.tripData, config.snaps);
  const bounds = new google.maps.LatLngBounds();
  config.tripData.forEach(p =>
    bounds.extend({
      lat: parseFloat(p.latitude),
      lng: parseFloat(p.longitude),
    })
  );
  this.map.fitBounds(bounds);
}
```

The snapshot alignment problem

The hardest part of trip replay was aligning camera snapshots to GPS points. GPS units and cameras run on different internal clocks and log independently. A GPS unit might record a position at 14:23:15 and the camera might capture a snapshot at 14:23:47 - same physical location, 32 seconds apart in the data.

I didn't anticipate this. My first version used strict timestamp matching and showed almost no snapshots. I kept checking whether snapshots were even loading until I printed both arrays side by side and saw they were consistently 30-90 seconds apart on every trip. The data was there. The matching logic was wrong.

The fix was an expanding time window:

```typescript
attachSnapshotsToTripPoints(tripPoints, snapshots): void {
  for (const point of tripPoints) {
    const pointTime = this.calculateTripPointDate(point);
    for (let windowMinutes = 1; windowMinutes <= 10; windowMinutes++) {
      const matched = snapshots.filter(snap => {
        const snapTime = new Date(snap.timestamp);
        const diff = Math.abs(pointTime.getTime() - snapTime.getTime()) / 60000;
        return diff <= windowMinutes;
      });
      if (matched.length > 0) {
        point.snapshots = matched;
        point.channels = Object.keys(matched[0])
          .filter(k => k.startsWith('chn'));
        break;
      }
    }
  }
}
```

Start at 1 minute, expand to 10. A snapshot within 1 minute is a strong match. One 8 minutes away is weaker but more useful than nothing. The break on first match means we always use the tightest available window. The channels extraction pulls camera channel identifiers so the operator can switch between front, rear, left and right camera views at each trip point.
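Extracted as a pure function over a single trip point, the tightest-window-wins behaviour is easy to check - this is a simplified restatement of the method above, with the Date arithmetic spelled out:

```typescript
// Simplified, single-point version of the expanding-window match.
interface Snap {
  timestamp: string;
}

function matchSnapshots(
  pointTime: Date,
  snapshots: Snap[],
  maxWindowMinutes = 10
): Snap[] {
  for (let windowMinutes = 1; windowMinutes <= maxWindowMinutes; windowMinutes++) {
    const matched = snapshots.filter((snap) => {
      const diffMinutes =
        Math.abs(pointTime.getTime() - new Date(snap.timestamp).getTime()) / 60000;
      return diffMinutes <= windowMinutes;
    });
    if (matched.length > 0) return matched; // tightest window wins
  }
  return [];
}
```

A snapshot 30 seconds away wins the point outright; only when nothing matches inside a window does the search widen.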

The polyline is cyan - #00FFFF - which contrasts well against the satellite imagery fleet operators prefer over the road view:

```typescript
drawTripPolyline(points): void {
  const path = points.map(p => ({
    lat: parseFloat(p.latitude),
    lng: parseFloat(p.longitude),
  }));
  this.tripPolyline = new google.maps.Polyline({
    path,
    geodesic: true,
    strokeColor: '#00FFFF',
    strokeOpacity: 0.8,
    strokeWeight: 3,
    map: this.map,
  });
}
```

Real-time polling in REST mode - zoom-aware intervals

When using the REST data source the map polls for updated positions since REST doesn't push. I made the polling interval dynamic based on zoom:

```typescript
startRealtimeCoordinatePolling(): void {
  if (this.getVehicleDataSourceValue() === 'firebase') return;
  const zoomLevel = this.map.getZoom();
  const interval = zoomLevel > 12 ? 10000 : 30000;
  this.pollingSubscription = timer(0, interval).pipe(
    mergeMap(() => this.updateVehiclesInBounds(), 5)
  ).subscribe();
}
```

Overview zoom updates every 30 seconds. Zoomed in updates every 10. The mergeMap(..., 5) concurrency cap prevents request pile-up - at most 5 position requests in-flight simultaneously regardless of how slow the server responds.

At overview zoom a truck moving 500 metres in 30 seconds is visually indistinguishable. At close zoom that movement is immediately obvious. The interval adapts to when freshness actually matters rather than hammering the server uniformly.

Switching between trip mode and real-time mode

The maps page operates in two mutually exclusive modes. I tested what happens when both are active simultaneously exactly once, by accident while debugging. Markers jumped between current GPS coordinates and 6-month-old trip data. The transition is now explicit and guarded:

```typescript
async getTracking(params: TripReplayParams) {
  this.unsubscribeFromDeviceSnapshots();
  this.follow = false;
  try {
    const result = await this.tripReplayService.loadTripReplay(params);
    const facade = this.getActiveTripReplayFacade();
    await facade.renderTripReplay({ tripData: result.tripData, ... });
  } catch (error) {
    // Handle failure
  }
}

cancelTrips() {
  this.mapRenderingService.resumeRealtimeTracking();
  this.subscribeToDeviceSnapshots();
}
```

unsubscribeFromDeviceSnapshots() before entering trip mode. subscribeToDeviceSnapshots() when leaving it. The explicit pairing is the only thing standing between a clean user experience and flickering markers.

What the maps page ended up being

The feature I thought would take a week took significantly longer. The monolith I built first taught me more about what the feature needed than any planning would have - I just paid a high maintenance price for that knowledge while it lived in one file.

After the decomposition a developer can open TripReplayFacade and understand trip rendering without touching MapRenderingService. A bug in snapshot alignment doesn't require understanding how the spiderfier works. The maps page is still the most complex part of the app - it has to be, given what it does - but it's complex in a manageable way rather than a terrifying one. That distinction took me longer to appreciate than it should have.

This article is part of a series on building a fleet telematics platform.

Tech used in this article

  • TypeScript
  • Angular
  • Firebase