How We Built an EV Charging Platform for 5,000+ Stations

06 Apr 2026

Most case studies read like press releases. This one doesn't. Here's the real story behind the architecture decisions, OCPP headaches, and scaling problems we solved while building an EV charging management platform that now handles over 5,000 stations across Europe.

We've been building EV charging infrastructure software at MTechZilla for over four years now.

What started as a single client engagement with a Swiss eMobility platform turned into one of the deepest technical investments we've made as an agency. The platform now manages over 5,000 charging stations, handles real-time telemetry from thousands of concurrent connections, and processes payments across multiple European markets.

This is the technical story of how we built it. Not the sanitised version. The real one, including the parts where we got things wrong.

The Problem We Were Solving

When the client came to us, they had a clear problem and a tight timeline. They needed a Charging Station Management System (CSMS) that could onboard hardware from multiple manufacturers, manage charging sessions in real time, handle payments, and give operators a dashboard to monitor their entire network.

That sounds simple enough until you dig into the constraints.

The platform needed to support OCPP - the Open Charge Point Protocol - which is the industry standard for how chargers talk to backend systems. The client's network included hardware from multiple vendors, each with its own interpretation of the OCPP spec.

The system had to handle real-time WebSocket connections from thousands of chargers simultaneously, with sub-second response times for session authorization. And it needed to work across multiple European countries, each with different payment regulations, energy pricing models, and grid requirements.

They didn't want a white-label product. They wanted a custom platform they owned and could evolve.

Why We Chose Node.js, React, and AWS

We get asked about our tech stack choice on this project a lot, so here's the reasoning.

Node.js for the backend

It was a deliberate choice, not a default one. OCPP communication runs over WebSockets - persistent, bidirectional connections between each charger and the server. Node's event-driven, non-blocking architecture handles thousands of concurrent WebSocket connections more efficiently than thread-per-connection models.

When you're managing 5,000+ chargers that each maintain a persistent connection and send status updates every few seconds, that matters. We also needed fast JSON parsing since OCPP 1.6J uses JSON over WebSocket, and Node handles that natively without serialisation overhead.
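To make that concrete, an OCPP 1.6J frame is just a JSON array on the wire, so a gateway can parse and dispatch it with almost no ceremony. Here's a minimal sketch of how a CALL frame gets parsed - the type and function names are ours, not from the spec:

```typescript
// OCPP 1.6J wire format: a CALL frame is [2, uniqueId, action, payload].
// CALLRESULT is [3, uniqueId, payload]; CALLERROR is [4, ...].
type OcppCall = {
  messageTypeId: 2;
  uniqueId: string;
  action: string;
  payload: Record<string, unknown>;
};

function parseCall(raw: string): OcppCall {
  const frame: unknown = JSON.parse(raw);
  if (!Array.isArray(frame) || frame[0] !== 2 || frame.length !== 4) {
    throw new Error(`not a well-formed CALL frame: ${raw}`);
  }
  const [, uniqueId, action, payload] = frame;
  if (typeof uniqueId !== "string" || typeof action !== "string") {
    throw new Error("CALL frame has non-string uniqueId or action");
  }
  return { messageTypeId: 2, uniqueId, action, payload };
}
```

A charger's heartbeat, for example, arrives as `[2,"19223201","Heartbeat",{}]` and dispatches straight off the `action` field.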

React for the operator dashboard and driver-facing interfaces

The operator dashboard needed to display real-time charger status across the entire network - thousands of stations, each with live status updates, session data, and alerts. React's component model and state management made it possible to build dense, data-heavy interfaces that update in real time without re-rendering the entire page.

We used server-sent events on the frontend so operators could see a charger go from "Available" to "Charging" the moment it happened.

AWS for infrastructure

We chose AWS for its managed WebSocket support through API Gateway, the ability to scale compute independently of storage, and the European region availability that mattered for data residency.

We ran the core OCPP gateway on EC2 instances behind a Network Load Balancer - not serverless - because WebSocket connections are long-lived and don't map well to Lambda's execution model. Supporting services like session processing, billing, and reporting ran as containerized microservices on ECS.

OCPP Integration: What the Docs Don't Tell You

OCPP looks straightforward on paper. The spec defines message types - BootNotification, Heartbeat, StartTransaction, StopTransaction, StatusNotification - and you implement handlers for each one. In theory, any OCPP-compliant charger should work with any OCPP-compliant backend.

In practice, it's a mess.

Every manufacturer interprets the spec differently. The OCPP specification leaves room for ambiguity, and manufacturers fill that ambiguity with their own assumptions. One charger brand would send StatusNotification before StartTransaction. Another would send them in the opposite order. A third would sometimes skip StatusNotification entirely during fast session transitions.

Our code had to handle all three behaviors and produce consistent session records regardless.

OCPP 1.6 and 2.0.1 coexist in the wild, and they're not compatible. Most deployed chargers still run OCPP 1.6. Newer hardware ships with 2.0.1, which is a fundamentally different protocol with different message structures, security models, and functional blocks.

We had to build and maintain two parallel protocol handlers, with a normalisation layer that converted both into a unified internal event model. The alternative - forcing all chargers onto one version - wasn't realistic given the hardware mix in the client's network.
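The shape of that normalisation layer is easier to show than to describe. In this simplified sketch, both handlers converge on one internal status event; the payload field names follow the two specs (1.6's StatusNotification carries `connectorId` and `status`, while 2.0.1's StatusNotificationRequest scopes the connector under an EVSE and calls it `connectorStatus`), but the unified event shape is ours, and details like EVSE scoping are flattened away:

```typescript
// Unified internal event that both protocol handlers normalise into.
type ChargerStatusEvent = {
  chargerId: string;
  connectorId: number;
  status: string;    // e.g. "Available", "Charging", "Faulted"
  timestamp: string; // ISO 8601
};

// OCPP 1.6: StatusNotification carries connectorId + status (+ optional timestamp).
function fromOcpp16(
  chargerId: string,
  p: { connectorId: number; status: string; timestamp?: string }
): ChargerStatusEvent {
  return {
    chargerId,
    connectorId: p.connectorId,
    status: p.status,
    timestamp: p.timestamp ?? new Date().toISOString(),
  };
}

// OCPP 2.0.1: StatusNotificationRequest scopes the connector under an EVSE
// and renames the field; we flatten that into the same internal event.
function fromOcpp201(
  chargerId: string,
  p: { evseId: number; connectorId: number; connectorStatus: string; timestamp: string }
): ChargerStatusEvent {
  return {
    chargerId,
    connectorId: p.connectorId,
    status: p.connectorStatus,
    timestamp: p.timestamp,
  };
}
```

Everything downstream - session logic, dashboards, billing - consumes `ChargerStatusEvent` and never needs to know which protocol version the charger speaks.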

Firmware bugs are your problem, not the manufacturer's. We encountered chargers that would drop WebSocket connections silently under high load, chargers that sent malformed JSON payloads when battery temperature hit certain thresholds, and chargers that would enter a stuck state after a failed payment authorization and refuse to reset without a manual reboot command.

None of this was in any spec document. We built a defensive protocol layer that validated every incoming message, handled malformed payloads gracefully, and implemented automatic reconnection logic with exponential backoff.
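The backoff piece is small but matters at fleet scale: if thousands of chargers reconnect on a fixed timer after an outage, they stampede the gateway. A sketch of the delay calculation we mean - the base, cap, and jitter strategy here are illustrative, not our exact production values:

```typescript
// Exponential backoff with "full jitter": the ceiling doubles per attempt up
// to a cap, and a random fraction of it is taken so reconnecting chargers
// spread out instead of hitting the gateway in lockstep.
function backoffDelayMs(
  attempt: number,
  baseMs = 1000,
  capMs = 60_000,
  rand: () => number = Math.random
): number {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(rand() * ceiling);
}
```

Injecting `rand` keeps the function deterministic under test while staying random in production.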

We also built a remote diagnostics system that let operators trigger firmware resets and configuration updates without sending a technician.

Timing and ordering are unreliable. OCPP messages arrive over WebSocket, which guarantees ordering within a single connection. But chargers sometimes reconnect mid-session, which creates a new connection with a new ordering context.

We had to implement session reconciliation logic that could reconstruct a complete session from out-of-order or duplicate messages, handling edge cases like a charger reporting StopTransaction before the server had processed StartTransaction because the start message was delayed on a flaky cellular connection.
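The core of that reconciliation logic is a buffer keyed by transaction ID that tolerates duplicates and either arrival order, and only emits a session once both halves exist. A minimal sketch, with our own type names and the persistence and timeout handling stripped out:

```typescript
type TxEvent =
  | { kind: "start"; txId: number; meterStart: number; at: string }
  | { kind: "stop"; txId: number; meterStop: number; at: string };

type Session = { txId: number; energyWh: number; startedAt: string; endedAt: string };

// Buffers start/stop events per transaction, ignoring duplicates and
// tolerating stop-before-start ordering; emits a complete session record
// once both halves have arrived (idempotently, if duplicates trail in).
class SessionReconciler {
  private starts = new Map<number, { meterStart: number; at: string }>();
  private stops = new Map<number, { meterStop: number; at: string }>();

  ingest(e: TxEvent): Session | null {
    // First writer wins, so a replayed duplicate can't corrupt the record.
    if (e.kind === "start" && !this.starts.has(e.txId)) {
      this.starts.set(e.txId, { meterStart: e.meterStart, at: e.at });
    } else if (e.kind === "stop" && !this.stops.has(e.txId)) {
      this.stops.set(e.txId, { meterStop: e.meterStop, at: e.at });
    }
    const s = this.starts.get(e.txId);
    const p = this.stops.get(e.txId);
    if (!s || !p) return null; // still waiting for the other half
    return { txId: e.txId, energyWh: p.meterStop - s.meterStart, startedAt: s.at, endedAt: p.at };
  }
}
```

The real system also has to expire orphaned halves and persist the buffer across restarts, but the first-writer-wins pattern is what makes the delayed-StartTransaction case harmless.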

Scaling Real-Time Monitoring to 5,000+ Chargers

When we started, the network had a few hundred chargers. The architecture that worked at 300 chargers started showing cracks at 1,500 and would have fallen over entirely at 5,000.

The WebSocket connection problem

Each charger maintains a persistent WebSocket connection to the backend. At 5,000 chargers sending heartbeats every 30 seconds plus status updates, transaction messages, and meter values, you're handling tens of thousands of messages per minute.

A single Node.js instance can handle a surprising number of WebSocket connections - we tested up to around 10,000 on a well-provisioned instance - but you can't put all your eggs in one process.

We implemented connection distribution across multiple gateway instances behind a Network Load Balancer, with sticky sessions based on charger identity. A Redis-backed pub/sub layer allowed any service in the cluster to send a command to any charger, regardless of which gateway instance held the connection.
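The routing idea underneath that pub/sub layer is simple: track which gateway owns each charger's connection, and deliver commands via the owning gateway's channel. In this sketch the Redis channels are replaced by an in-process handler map so the logic is self-contained; the class and method names are ours:

```typescript
type CommandHandler = (chargerId: string, command: object) => void;

// Routes a command to whichever gateway instance currently holds the
// charger's WebSocket. In production the handler map is Redis pub/sub
// (one channel per gateway instance); here it's in-process to show the idea.
class CommandRouter {
  private owner = new Map<string, string>();            // chargerId -> gatewayId
  private gateways = new Map<string, CommandHandler>(); // gatewayId -> subscriber

  registerGateway(gatewayId: string, handler: CommandHandler): void {
    this.gateways.set(gatewayId, handler);
  }

  // Called by a gateway when a charger (re)connects to it.
  claim(chargerId: string, gatewayId: string): void {
    this.owner.set(chargerId, gatewayId);
  }

  // Any service in the cluster can call this without knowing the topology.
  send(chargerId: string, command: object): boolean {
    const gatewayId = this.owner.get(chargerId);
    const handler = gatewayId ? this.gateways.get(gatewayId) : undefined;
    if (!handler) return false; // charger offline or gateway gone
    handler(chargerId, command);
    return true;
  }
}
```

Because ownership is re-claimed on every reconnect, a charger that flaps between gateways stays reachable without any service needing to know the cluster topology.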

The database wasn't the bottleneck we expected

We assumed PostgreSQL would be the first thing to choke under write load. It wasn't. The real bottleneck was our initial approach to session state management, where we were hitting the database on every single OCPP message to update session state.

The fix was an in-memory state machine for active sessions, with periodic batch writes to Postgres and immediate writes only for critical state transitions like transaction start and stop. This dropped our database write load by roughly 80% and made the system dramatically more responsive.

Real-time dashboard updates at scale required a different approach

At a few hundred chargers, we could push every status change to every connected dashboard.

At 5,000, that's a firehose. We moved to a subscription model where each operator's dashboard subscribes only to the chargers they manage, with server-side filtering so we're not shipping irrelevant updates across the wire.

For the network-wide overview, we switched from individual charger updates to aggregated statistics refreshed every few seconds, which gives operators the same at-a-glance picture without the bandwidth cost.
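The aggregation itself is nothing exotic - it's a fold over current statuses into counts, recomputed on an interval instead of streamed per charger. A minimal sketch:

```typescript
// Network-wide overview: instead of streaming every individual status
// change to every dashboard, reduce the fleet to aggregate counts and
// refresh them every few seconds.
function aggregateStatuses(statuses: Iterable<string>): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const s of statuses) {
    counts[s] = (counts[s] ?? 0) + 1;
  }
  return counts;
}
```

At 5,000 chargers this turns thousands of per-charger updates per refresh into one small payload per operator.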

Meter values were the sleeper scaling challenge

Chargers send energy meter readings periodically during a session - typically every 15 to 60 seconds. At 5,000 chargers with an average of maybe 20% active at any given time, that's a thousand sessions each generating a meter reading every 15-60 seconds.

Over months, this produces an enormous volume of time-series data. We moved meter values out of our primary Postgres instance into a time-series optimized store with automatic downsampling for historical data.

Recent meter values stayed at full resolution for billing accuracy. Anything older than 90 days got downsampled to one-minute intervals. Anything older than a year got downsampled further.
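The downsampling step is worth showing because the choice of aggregate matters: OCPP meter values are cumulative register readings, so keeping the last reading per bucket preserves energy totals, where averaging would not. A sketch under that assumption:

```typescript
type Reading = { atMs: number; wh: number };

// Downsample cumulative meter readings into fixed time buckets, keeping the
// last reading in each bucket. Because the meter value is a running total,
// "last per bucket" preserves energy deltas across the downsampled series.
function downsample(readings: Reading[], bucketMs: number): Reading[] {
  const byBucket = new Map<number, Reading>();
  for (const r of [...readings].sort((a, b) => a.atMs - b.atMs)) {
    byBucket.set(Math.floor(r.atMs / bucketMs), r);
  }
  return [...byBucket.values()];
}
```

In production this runs inside the time-series store's own retention policies rather than in application code, but the invariant is the same.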

Payments Were Harder Than the Protocol

We expected OCPP to be the hardest part of this project. It wasn't. Payments were.

EV charging payments sound simple - someone plugs in, charges their car, and pays for the energy consumed.

But the payment flow has to account for pre-authorization (placing a hold before the session starts, then capturing the actual amount after), partial charges when a session ends prematurely, multiple pricing models (per kWh, per minute, flat fee, or combinations), roaming between networks where a user from one operator charges on another operator's station, and VAT calculations that vary by country.
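To make the combined pricing models concrete, here's a sketch of a session cost calculation under one illustrative tariff shape - real tariffs add time-of-day bands, idle fees, and per-country VAT on top:

```typescript
// Illustrative combined tariff: any of the three components may be zero.
type Tariff = {
  perKwh: number;    // energy-based component
  perMinute: number; // time-based component
  flatFee: number;   // per-session component
};

// Cost of a completed session under a combined tariff, rounded to cents.
function sessionCost(energyKwh: number, durationMin: number, t: Tariff): number {
  const raw = energyKwh * t.perKwh + durationMin * t.perMinute + t.flatFee;
  return Math.round(raw * 100) / 100;
}
```

So a 30-minute, 10 kWh session at €0.30/kWh plus €0.05/min with a €1.00 session fee comes to €5.50 - and the pre-authorization hold has to be placed before any of these inputs are known.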

We also had to handle the OCPI (Open Charge Point Interface) protocol for roaming interoperability - letting drivers from partner networks use the client's chargers and vice versa.

OCPI is a separate protocol from OCPP with its own credential exchange, tariff structures, and session CDR (Charge Detail Record) formats. Implementing OCPI on top of OCPP meant building a translation layer that could map internal session data into OCPI-compliant CDRs and handle tariff lookups from external sources in real time.
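The flavour of that translation layer looks roughly like this. The sketch is deliberately simplified: the internal session type is ours, and a real OCPI 2.2.1 CDR object carries many more required fields (`cdr_token`, `charging_periods`, tariff references, and so on):

```typescript
// Our internal session record (shape is illustrative).
type InternalSession = {
  id: string;
  chargerId: string;
  startedAt: string; // ISO 8601
  endedAt: string;
  energyKwh: number;
  totalCost: number;
  currency: string;
};

// A heavily simplified OCPI-style CDR; the real object has many more
// required fields and nested structures.
type Cdr = {
  id: string;
  start_date_time: string;
  end_date_time: string;
  total_energy: number;
  total_cost: { excl_vat: number };
  currency: string;
};

function toCdr(s: InternalSession): Cdr {
  return {
    id: s.id,
    start_date_time: s.startedAt,
    end_date_time: s.endedAt,
    total_energy: s.energyKwh,
    total_cost: { excl_vat: s.totalCost },
    currency: s.currency,
  };
}
```

The hard part isn't the field mapping - it's that the tariff applied to the CDR belongs to the roaming partner and has to be looked up and matched against what was advertised when the session started.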

The payment edge cases were relentless.

What happens when a charger loses connectivity mid-session and the pre-auth hold expires before the session ends? What about sessions where the car stops drawing power but the cable stays plugged in - do you charge for idle time? What about currency conversion for cross-border roaming? Every edge case we solved revealed two more.

Results

Here are the numbers, but we want to be honest about what they represent: these are production metrics from a system that took years to get right, not something we shipped in the first quarter.

The platform manages over 5,000 charging stations across multiple European markets. Charger availability (the percentage of time a station is operational and ready to charge) averages above 98%, which is meaningfully above the industry average. Session authorization - the time between a driver tapping their card and the charger starting to deliver power - runs under two seconds at the 95th percentile.

The system handles peak loads during commute hours without degradation, processing thousands of concurrent sessions.

The operator dashboard handles real-time monitoring across the full network with sub-second update latency for individual charger status changes.

Billing reconciliation runs nightly with automated anomaly detection that flags sessions where the charged amount doesn't match expected meter readings within tolerance.
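The anomaly check at the heart of that reconciliation is straightforward: recompute the expected charge from the meter readings and the tariff, and flag any session that diverges beyond a tolerance. A sketch - the relative tolerance and its floor are illustrative values, not our production thresholds:

```typescript
// Flag a session whose billed amount diverges from what the meter readings
// imply by more than a relative tolerance (default 5%). The small floor on
// the comparison base keeps near-zero sessions from dividing away the check.
function isAnomalous(
  billedAmount: number,
  meteredKwh: number,
  perKwh: number,
  tolerance = 0.05
): boolean {
  const expected = meteredKwh * perKwh;
  return Math.abs(billedAmount - expected) > tolerance * Math.max(expected, 0.01);
}
```

Sessions that trip the check go to a human review queue rather than being auto-corrected, since the discrepancy is as likely to be a firmware meter bug as a billing bug.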

What We'd Do Differently

If we were starting this project today, there are a few things we'd change.

We'd start with OCPP 2.0.1 as the primary protocol from day one and treat 1.6 as a compatibility layer rather than the other way around. We built 1.6 first because that's what most chargers supported at the time. But 2.0.1 has a better security model, better device management, and a cleaner message structure.

Now that OCPP 2.0.1 has been accepted as IEC standard 63584 and 2.1 was released in January 2025 with V2X support, the migration path is clear. Building around the newer spec from the start would have saved us months of refactoring.

We'd invest in a proper protocol testing harness earlier. We built one eventually, but for the first year we were testing against real chargers only.

A charger simulator that could replay manufacturer-specific quirks, inject malformed messages, and simulate network failures would have caught issues weeks before they hit production.

We'd separate the OCPP gateway from the business logic more aggressively. Our initial architecture had too much session logic in the protocol handling layer.

When we needed to change business rules - like how we calculated idle fees - it required touching code that also handled raw WebSocket connections. The cleaner separation we have now, where the gateway produces normalized events and the business logic consumes them, should have been the design from the start.

If You're Building in This Space

EV charging infrastructure software is one of those domains where the complexity is invisible from the outside. It looks like "connect to charger, start session, charge money." Once you're inside it, you're dealing with unreliable hardware, ambiguous protocols, real-time systems at scale, cross-border payments, and regulatory requirements that vary by country.

We've spent four years learning these lessons so our clients don't have to learn them the expensive way. If you're building a CSMS, a fleet charging management system, or any software that needs to talk OCPP, we've been through the hard parts already.

Take a look at our EV charging platform work or our EV & eMobility services page for more on what we've built. Or if you want to talk architecture, reach out directly - we're happy to get into specifics.

MTechZilla is a custom software development agency with over four years of deep experience in EV charging infrastructure, OCPP, OCPI, and real-time charging network management. Learn more about our team on the Life at MTechZilla page.