
As electric vehicles continue to redefine the automotive landscape, the intelligence behind their batteries has become a critical differentiator. In this interview, Anna Fedorova, Senior Battery Algorithms Engineer at Iveco Group, shares her perspective on how battery algorithms have evolved from rigid, rule-based systems into adaptive, data-driven intelligence at the core of modern Battery Management Systems (BMS).
Reflecting on the early days, Anna highlights how static lookup tables and reactive safety logic limited performance, forcing conservative safety buffers and leaving much of the battery’s true potential untapped. Today, advances in embedded computing, state estimation, and cloud connectivity have transformed BMS into dynamic systems capable of real-time decision-making.
Looking ahead to VECS 2026 next week, she explores how technologies such as physics-informed AI, digital twins, and predictive analytics are set to push battery performance, safety, and longevity even further – while also addressing the complex engineering challenges that remain on the road to fully intelligent, scalable battery systems.
Battery algorithms have evolved significantly over the years. Looking back, what were the key limitations of the early rule-based battery algorithms, and how did they shape the foundations of today’s Battery Management Systems?
The early days of Battery Management Systems (BMS) were defined by determinism. We treated the battery like a simple tank of fuel when, in reality, it is a complex, breathing electrochemical reactor.
Early algorithms relied almost exclusively on static Look-Up Tables (LUTs). These were essentially “cheat sheets” created in a lab under perfect conditions. They were snapshots in time. A LUT created for a brand-new cell is functionally a lie by the time that cell reaches 500 cycles. Because these rules couldn’t adapt to the non-linear nature of aging, engineers had to build in massive “safety buffers”. This meant you were often carrying around 20% more battery than you could actually use, simply because the algorithm wasn’t smart enough to trust the cell’s edges.
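To make the limitation concrete, here is a minimal sketch of what such a rule-based estimator amounted to: a fixed, lab-measured table interpolated at runtime. All values below are illustrative, not from any production system.

```python
import numpy as np

# Illustrative only: a static open-circuit-voltage (OCV) to state-of-charge
# (SOC) lookup table, measured once in the lab for a fresh cell.
OCV_V = np.array([3.00, 3.30, 3.55, 3.70, 3.85, 4.00, 4.20])   # volts
SOC_PCT = np.array([0.0, 10.0, 30.0, 50.0, 70.0, 90.0, 100.0])  # percent

def soc_from_ocv(ocv_volts: float) -> float:
    """Classic rule-based estimate: interpolate a fixed lab-measured table.

    The table never changes, so once the cell ages (resistance grows, the
    OCV curve shifts), every answer it gives is systematically wrong.
    """
    return float(np.interp(ocv_volts, OCV_V, SOC_PCT))

print(soc_from_ocv(3.78))  # a fresh-cell answer; an aged cell would differ
```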
Early safety logic was purely reactive. It operated on a “Threshold and Trip” basis: if the voltage exceeded a limit, the contactors opened. This approach ignored the rate of change and the thermal lag of the system. By the time a rule-based system detected a thermal runaway event via a simple temperature probe, the internal chemical reaction was often already past the point of no return. This shaped our modern obsession with State of Power (SOP). We realized we couldn’t just wait for a limit to be hit; we had to calculate, in real time, the maximum current the battery could accept right now without violating a future limit.
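The contrast can be sketched in a few lines. The first function is the old reactive logic; the second is the simplest possible SOP idea, assuming a purely ohmic cell model with hypothetical parameters.

```python
def threshold_and_trip(v_cell: float, v_max: float = 4.2) -> bool:
    """Early reactive logic: open the contactors only after the limit is hit."""
    return v_cell > v_max  # True -> trip; no notion of rate of change

def max_charge_current(v_ocv: float, r_internal_ohm: float,
                       v_max: float = 4.2) -> float:
    """Minimal State-of-Power idea: the largest charge current that keeps the
    terminal voltage at or below v_max, given v = v_ocv + i * r_internal.
    A production SOP also projects the limit over a horizon (e.g. 10 s)."""
    return max(0.0, (v_max - v_ocv) / r_internal_ohm)

# Example: a warm, mid-SOC cell tolerates far more current than a cold one,
# whose internal resistance is several times higher.
print(max_charge_current(v_ocv=3.8, r_internal_ohm=0.002))  # warm: ~200 A
print(max_charge_current(v_ocv=3.8, r_internal_ohm=0.010))  # cold: ~40 A
```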
The failures of these early “If-Then” structures forced the industry to move toward State Estimation and Model-Based Control.
Because we couldn’t “see” inside the battery to measure the lithium concentration in the anode, we had to invent the math to infer it. This led to the integration of Kalman Filters and Observers. We stopped asking “What is the voltage?” and started asking “Based on the voltage and current I see, what must the internal state of the chemistry be?”
The limitations of static tables paved the way for the Equivalent Circuit Model (ECM). We began representing the battery as a network of resistors and capacitors that evolve over time. This allowed the BMS to become a “living” entity that updates its own parameters as the battery ages, effectively maintaining a “Digital Twin” of the hardware.
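As a rough illustration, a first-order Thevenin ECM can be simulated in a handful of lines. The parameter values here are hypothetical, and an adaptive BMS would re-identify R0, R1, and C1 online as the cell ages.

```python
# A minimal first-order Thevenin equivalent-circuit model (ECM): one series
# resistor R0 plus one RC pair (R1, C1). Parameter values are hypothetical.
R0, R1, C1 = 0.0015, 0.0010, 2000.0  # ohm, ohm, farad
Q_AH = 100.0                          # nominal capacity, amp-hours

def ocv(soc: float) -> float:
    """Placeholder OCV curve; a real model uses a chemistry-specific fit."""
    return 3.0 + 1.2 * soc

def step(soc: float, v_rc: float, i_amps: float, dt: float):
    """Advance the ECM one time step. i_amps > 0 means discharge."""
    soc -= i_amps * dt / (Q_AH * 3600.0)            # coulomb counting
    v_rc += dt * (-v_rc / (R1 * C1) + i_amps / C1)  # RC polarization voltage
    v_terminal = ocv(soc) - i_amps * R0 - v_rc
    return soc, v_rc, v_terminal

soc, v_rc = 0.8, 0.0
for _ in range(600):  # one minute of a 200 A discharge at 0.1 s steps
    soc, v_rc, v_t = step(soc, v_rc, 200.0, 0.1)
print(f"SOC={soc:.3f}, terminal voltage={v_t:.3f} V")
```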
Today’s foundational concepts, like State of Function (SOF), exist because early rule-based systems were too conservative. We learned that the “foundation” of a good BMS isn’t a set of hard limits, but a dynamic, multi-dimensional envelope that balances fast charging, power delivery, and cycle life in real time.
Modern BMS rely heavily on model-based and data-driven approaches. What are the most important technological advances that have enabled today’s intelligent battery algorithms to improve performance, safety, and battery lifetime?
The shift from “basic monitoring” to “intelligent management” was driven by three specific technological convergences: the leap in embedded compute power, the refinement of State-of-X (SOX) Estimation, and the birth of Cloud-to-Vehicle (C2V) architectures.
In the early days, we relied on “counting” (Coulomb Counting) and “looking up” (Static Tables). Today, the most significant advance is our move toward Recursive State Observers, specifically Extended and Unscented Kalman Filters (EKF/UKF). We are no longer just measuring voltage; we are using that voltage to “correct” a digital simulation of the battery’s internal chemical state in real time. This has virtually eliminated “SOC drift”. It allows the algorithm to understand the difference between a voltage drop caused by a heavy acceleration load and a voltage drop caused by a depleted cell.
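A one-state Extended Kalman Filter captures the essence of this predict-and-correct loop. The sketch below makes strong simplifications (a linear OCV curve, a purely ohmic cell, hand-picked noise variances), so it illustrates the structure rather than a production estimator.

```python
# A one-state Extended Kalman Filter for SOC, under strong simplifications:
# linear OCV curve, purely ohmic cell, hand-picked noise variances.
Q_AH, R0 = 100.0, 0.0015            # capacity (Ah), series resistance (ohm)
Q_PROC, R_MEAS = 1e-8, 1e-4         # process / measurement noise variances

def ocv(soc):        return 3.0 + 1.2 * soc   # placeholder OCV(SOC)
def docv_dsoc(soc):  return 1.2               # its slope, d(OCV)/d(SOC)

def ekf_step(soc_est, p_cov, i_amps, v_meas, dt):
    """One predict/correct cycle. i_amps > 0 means discharge."""
    # Predict: coulomb counting is the process model (state Jacobian F = 1).
    soc_pred = soc_est - i_amps * dt / (Q_AH * 3600.0)
    p_pred = p_cov + Q_PROC
    # Correct: the voltage residual pulls the estimate back toward reality.
    v_pred = ocv(soc_pred) - i_amps * R0
    h = docv_dsoc(soc_pred)                       # measurement Jacobian
    k = p_pred * h / (h * p_pred * h + R_MEAS)    # Kalman gain
    soc_est = soc_pred + k * (v_meas - v_pred)
    p_cov = (1.0 - k * h) * p_pred
    return soc_est, p_cov
```

The key point is the correction step: the voltage residual continuously pulls the coulomb-counted SOC back toward reality, which is what eliminates the drift.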
We’ve moved from 8-bit or 16-bit microcontrollers to automotive-grade 32-bit and 64-bit processors with dedicated floating-point units. This hardware allows us to move beyond simple electrical models and start implementing Reduced-Order Electrochemical Models (ROMs). Instead of modeling a battery as a resistor, we can now approximate the actual lithium-ion concentration gradients and the growth of the Solid Electrolyte Interphase (SEI) layer. This is the “holy grail” of battery life. By calculating the internal “stress” on the anode in real time, the algorithm can prevent lithium plating during fast charging, directly extending the battery’s life by years without needing to be overly conservative with power limits.
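The plating-prevention idea reduces to keeping the estimated anode potential safely above 0 V vs. Li/Li+ while charging. The toy guard below assumes hypothetical anode-side parameters; in a real BMS they would come from the reduced-order model, updated with temperature, SOC, and age.

```python
def plating_safe_charge_limit(u_anode_eq_v: float, r_anode_ohm: float,
                              margin_v: float = 0.02) -> float:
    """Toy lithium-plating guard. Charging pulls the anode potential down by
    roughly i * r_anode from its equilibrium value u_anode_eq_v (vs. Li/Li+);
    plating becomes possible near 0 V, so keep a safety margin above it.
    """
    return max(0.0, (u_anode_eq_v - margin_v) / r_anode_ohm)

# A cold or nearly full cell has a low equilibrium anode potential and high
# resistance, so the permitted fast-charge current collapses automatically.
print(plating_safe_charge_limit(u_anode_eq_v=0.10, r_anode_ohm=0.0004))  # 200 A
print(plating_safe_charge_limit(u_anode_eq_v=0.05, r_anode_ohm=0.0010))  # 30 A
```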
The “Intelligence” of a modern battery no longer lives solely within the car; it lives in the cloud. High-speed telematics and 5G connectivity have enabled the Cloud-Based Digital Twin. We can now stream high-frequency data from a single pack and compare it against the performance of 100,000 other packs in the field. This allows for proactive safety and health. We use Machine Learning in the cloud to identify “statistical outliers”, i.e., cells that are aging slightly faster than the fleet average, months before they would ever trigger a traditional voltage-based fault code. We then push updated parameters back to that specific vehicle via an Over-the-Air (OTA) update to optimize its remaining life.
Why does this matter for performance and safety?
We’ve moved to dynamic operating envelopes. Instead of a “hard” power limit, the BMS calculates a sliding scale of what the battery can handle based on its current internal resistance and temperature. This is how we achieve 15–25 minute fast charging without a “one-size-fits-all” restriction.
We are moving from “Fault Detection” to “Fault Prediction”. By analyzing micro-variations in internal resistance over thousands of cycles, our algorithms can flag a potential internal short or dendritic growth long before a thermal event occurs.
We now practice Degradation-Conscious Control. The algorithms can subtly shift the usable State of Charge window as the battery ages, ensuring that the driver sees consistent range and performance even as the chemistry naturally degrades.
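A toy version of that window-shifting logic, under the assumption that the pack ships with a deliberate capacity reserve (all fractions below are hypothetical):

```python
def usable_soc_window(soh: float, reserve_frac: float = 0.10,
                      target_frac_of_nominal: float = 0.85):
    """Toy degradation-conscious window: widen the usable SOC band as state
    of health (soh, 1.0 = new) fades so the energy the driver sees stays
    constant, but never shrink the engineering reserve below reserve_frac."""
    width = min(target_frac_of_nominal / soh, 1.0 - reserve_frac)
    slack = 1.0 - width                      # unused SOC, split top/bottom
    return slack / 2.0, 1.0 - slack / 2.0    # (lower, upper) SOC bounds

print(usable_soc_window(soh=1.00))  # new pack:  (0.075, 0.925)
print(usable_soc_window(soh=0.90))  # aged pack: (0.050, 0.950), same range
```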
Data has become central to battery optimization. How are real-world vehicle data and machine learning currently being used to enhance battery monitoring, prediction, and control?
In the modern BMS landscape, data has moved from being a “diagnostic byproduct” to the “primary architect” of our algorithms. We are using a combination of field data and machine learning to close the loop between how a battery should work and how it is actually used.
One of the most fascinating insights we’ve gained from data analytics relates to the Human-Machine Interface (HMI). By analyzing fleet data from light-duty vehicles, we’ve identified a clear “psychological barrier” in battery discharge. Much like mobile phone users, EV drivers rarely “hypermile” down to 0% or even 20% State of Charge (SOC). Instead, the vast majority of users experience range anxiety or a “safety reflex” that leads them to plug in at 40% or 50%. This discovery fundamentally changes how we optimize the State of Health (SOH). Since the battery spends a disproportionate amount of its life in the “high-SOC” bracket (40% to 100%), we have shifted our degradation models to focus on the chemical stresses unique to that range, rather than the deep-discharge stresses we used to prioritize in lab testing.
We are now using Machine Learning (ML) to perform “Differential Analysis” across thousands of identical battery packs. By streaming data to the cloud, we can create a statistical “baseline” for a specific cell chemistry. If one vehicle’s pack shows a slightly higher rate of internal resistance growth or a subtle voltage divergence during a 40%-50% charge cycle compared to the rest of the fleet, the ML model flags it as an outlier. This allows us to transition from “Scheduled Maintenance” to “Condition-Based Monitoring”, identifying potential cell defects or sensor drifts long before they trigger a traditional dashboard warning light.
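In its simplest statistical form, this differential analysis is just z-scoring one health feature against the fleet. The sketch below uses simulated resistance-growth rates and a plain mean/standard-deviation baseline; a production pipeline would use robust statistics and far richer features.

```python
import numpy as np

# Simulated fleet: yearly internal-resistance growth rates for 10,000 packs,
# with one quietly degrading outlier planted at index 42.
rng = np.random.default_rng(0)
fleet_r_growth = rng.normal(loc=1.5, scale=0.2, size=10_000)  # percent/year
fleet_r_growth[42] = 3.4

def flag_outliers(growth_rates: np.ndarray, z_limit: float = 4.0) -> np.ndarray:
    """Return indices of packs whose |z-score| exceeds z_limit."""
    z = (growth_rates - growth_rates.mean()) / growth_rates.std()
    return np.flatnonzero(np.abs(z) > z_limit)

print(flag_outliers(fleet_r_growth))  # -> [42], long before a fault code
```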
Real-world data is also being used to train “Virtual Sensors” that replace expensive or impossible-to-place physical hardware. We use estimation algorithms to infer “hidden” internal temperatures at the center of a jellyroll, which a physical thermistor cannot reach. By combining these estimates with our physics-based models, we can push the battery closer to its true physical limits during fast charging without risking safety. This “Hybrid Modeling” approach uses data to fill the gaps where traditional physics equations become too computationally expensive to solve in real time.
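A minimal virtual temperature sensor can be built from a two-node lumped thermal model driven by the measured surface temperature. Everything below (resistances, heat capacity, currents) is illustrative; a real hybrid model fuses this physics with data-driven corrections.

```python
def core_temp_step(t_core: float, t_surf_meas: float, i_amps: float,
                   dt: float, r_int: float = 0.0015,
                   r_th: float = 0.5, c_core: float = 180.0) -> float:
    """One step of a toy virtual sensor for jellyroll core temperature.

    Two-node lumped thermal model: ohmic heat is generated in the core and
    leaks to the measured surface through thermal resistance r_th (K/W);
    c_core (J/K) is the core's heat capacity. All values are hypothetical.
    """
    p_loss = i_amps ** 2 * r_int                        # ohmic heating, watts
    dT = (p_loss - (t_core - t_surf_meas) / r_th) / c_core
    return t_core + dt * dT

t_core = 25.0
for _ in range(3000):  # five minutes of a 150 A fast charge, 0.1 s steps
    t_core = core_temp_step(t_core, t_surf_meas=30.0, i_amps=150.0, dt=0.1)
print(f"estimated core temperature: {t_core:.1f} °C")  # well above the skin
```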
Finally, data allows us to move away from “one-size-fits-all” control logic. Machine Learning can categorize a driver’s profile: are they a “performance” driver in a cold climate or a “commuter” in a temperate zone? The BMS can then dynamically tune its control parameters. For the 40%-charging “phone-style” user, the algorithm might prioritize cooling strategies that minimize high-voltage calendar aging, whereas for a long-distance driver, it might prioritize maximizing usable energy density.
Looking ahead, technologies such as AI, digital twins, and predictive analytics are gaining traction. Which of these developments do you believe will have the biggest impact on next-generation battery management systems?
While all these technologies are transformative, I believe the biggest impact will come from the integration of Physics-Informed AI within a Digital Twin framework. We are moving away from “Black Box” AI (which can be unpredictable) toward a system that understands the laws of electrochemistry while leveraging the speed of machine learning.
Pure AI can find patterns, but it doesn’t “know” that a battery can’t have negative resistance. The next generation of algorithms will use Physics-Informed Neural Networks. These models use the underlying differential equations of lithium-ion diffusion as a constraint for the AI. This means the algorithm can predict rare “edge-case” failures (like internal short circuits or rapid lithium plating) with much higher reliability than a standard data-driven model. It gives us the “why” behind the “what”, which is critical for automotive safety certification.
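The core mechanism can be shown with that very example: add a physics penalty to the training loss so the network is punished whenever it predicts a negative internal resistance. This is a deliberately minimal sketch in PyTorch; full physics-informed networks penalize the residual of the governing diffusion equations themselves.

```python
import torch
import torch.nn as nn

# Toy model: features (e.g. SOC, temperature, age) -> internal resistance.
model = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))

def physics_informed_loss(features: torch.Tensor, r_measured: torch.Tensor,
                          lam: float = 10.0) -> torch.Tensor:
    """Data-fit loss plus a penalty for physically impossible predictions."""
    r_pred = model(features)
    data_loss = nn.functional.mse_loss(r_pred, r_measured)
    physics_penalty = torch.relu(-r_pred).mean()  # punish R < 0 predictions
    return data_loss + lam * physics_penalty

x = torch.rand(64, 3)                      # dummy training batch
y = 0.001 + 0.004 * torch.rand(64, 1)      # dummy resistance labels (ohm)
loss = physics_informed_loss(x, y)
loss.backward()  # the constraint shapes the gradients, not just the data
```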
The Digital Twin will evolve from a simple simulation into a persistent, cloud-based identity for every individual battery pack. By maintaining a Digital Twin from the moment a cell leaves the factory, we can track its “stress history”. If our data shows the “40% charging barrier” (the psychological behavior we discussed earlier), the Digital Twin adjusts the battery’s expected life trajectory in real-time. This allows for precision charging, tailoring the current profile not just to the battery type, but to that specific, aging pack’s internal health.
Predictive analytics will shift the BMS from a passive monitor to an active optimizer. Imagine a BMS that predicts a cooling system degradation three weeks before it happens. It can proactively adjust the thermal management strategy or the power limits to prevent the battery from ever entering a high-stress thermal zone. This “proactive mitigation” will be the key to achieving the “million-mile battery”.
If I had to pinpoint the single most significant result of these advancements, it is the elimination of the “Safety Oversizing” penalty. Currently, we oversize battery packs and limit their performance because we don’t perfectly understand their internal state at Year 8 or Year 10. By using AI and Digital Twins to gain “X-ray vision” into the cell’s internal chemistry throughout its life, we can use smaller packs more aggressively because we can precisely manage their limits; we can provide a “Birth-to-Death” data certificate that proves a battery is still healthy enough for grid storage after its life in a vehicle; and we can safely push the boundaries of C-rates by knowing exactly when the anode is at risk of plating.
From an automotive industry perspective, what are the biggest challenges engineers still face when developing smarter and more adaptive battery algorithms for future electric vehicles?
While our lab simulations and cloud-based AI models look incredible on paper, translating them into a million-vehicle fleet introduces four massive, non-trivial challenges: the computational gap, “Black Box” safety validation, sparse and noisy real-world data, and the problem of scale and environment.
We can run a high-fidelity electrochemical model on a powerful workstation in seconds, but a vehicle’s battery management unit is not a supercomputer. We are constantly fighting a war between model accuracy and computational cost. If an algorithm is too complex, it drains the very battery it’s trying to save or requires expensive, high-spec automotive chips that drive up the vehicle’s price. The real “art” right now is model order reduction: taking a complex set of partial differential equations and stripping them down to their mathematical essence so they can run on a low-power microcontroller in real time without losing the “physics” of the battery.
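One classical flavor of this is modal truncation, sketched below: discretize a 1-D diffusion equation (a stand-in for solid-phase lithium transport) into a 100-state linear system, then keep only its few slowest eigenmodes. The numbers are illustrative; real battery ROMs use more sophisticated projections, but the compression principle is the same.

```python
import numpy as np

# Finite-difference discretization of 1-D diffusion: x' = A x, 100 states.
N, D, dx = 100, 1e-2, 1.0 / 100          # grid size, diffusivity, spacing
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) * D / dx**2

eigvals, eigvecs = np.linalg.eigh(A)     # symmetric -> real eigenmodes
k = 4                                    # modes kept in the reduced model
idx = np.argsort(-eigvals)[:k]           # slowest (least negative) modes
Phi = eigvecs[:, idx]                    # projection basis, N x k
A_red = np.diag(eigvals[idx])            # reduced dynamics: z' = A_red z

# The full state is recovered as x ~= Phi @ z: a 100-ODE model collapses to
# 4 decoupled first-order ODEs - cheap enough for a low-power BMS processor.
print(A.shape, "->", A_red.shape)
```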
Automotive engineering is built on the foundation of Functional Safety. Every line of code must be predictable and testable. Machine Learning and adaptive algorithms are, by definition, “evolving”. How do you certify a safety-critical system that might change its behavior six months after the car leaves the factory? We face a massive challenge in creating “Explainable AI”. Regulators and safety engineers are wary of “Black Box” models. We have to prove that even if the algorithm “learns” and adapts, it will never violate the fundamental safety limits of the cell. We are essentially trying to build a “straitjacket” for AI that allows it to be smart but prevents it from being unpredictable.
As we discussed with the 40%-50% charging behavior, humans are the most unpredictable variable in our equations. Real-world data is “noisy” and “sparse”. If a user only ever charges their car from 40% to 60%, the algorithm never sees the top or bottom of the voltage curve. It’s like trying to finish a puzzle with half the pieces missing. We have to develop algorithms that are robust enough to maintain accuracy even when the “excitation” of the system is low. We can’t force a customer to drive their car to 5% just so the BMS can “re-learn” its capacity. Building algorithms that can accurately estimate health from tiny, “boring” snapshots of data is the current frontier of the industry.
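One widely used trick for exactly this situation: if the pack rests long enough before and after even a small charge, the relaxed voltages anchor the SOC at both ends via the OCV curve, and capacity falls out of the charge counted in between. A hypothetical-numbers sketch:

```python
import numpy as np

# Illustrative OCV(SOC) anchor points for a fresh cell of this chemistry.
OCV_V = np.array([3.00, 3.30, 3.55, 3.70, 3.85, 4.00, 4.20])
SOC = np.array([0.00, 0.10, 0.30, 0.50, 0.70, 0.90, 1.00])

def capacity_from_partial_charge(v_rest_before: float, v_rest_after: float,
                                 charge_counted_ah: float) -> float:
    """Q = (amp-hours charged) / (SOC gained), with SOC anchored by OCV
    at two fully relaxed rest points before and after the charge."""
    soc_before = np.interp(v_rest_before, OCV_V, SOC)
    soc_after = np.interp(v_rest_after, OCV_V, SOC)
    return charge_counted_ah / (soc_after - soc_before)

# 19 Ah flowed in while the pack went from ~40% to ~60% SOC:
print(capacity_from_partial_charge(3.625, 3.775, 19.0))  # ~95 Ah, SOH ~95%
```

The catch, and part of why this remains a frontier problem, is that the narrower the SOC window, the more a tiny OCV or current-sensing error is amplified in the resulting capacity estimate.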
Finally, there is the challenge of scale and environment. An algorithm tuned for a light-duty vehicle in the temperate climate of California may behave entirely differently in a heavy-duty truck in a Canadian winter. Lithium-ion dynamics change drastically at sub-zero temperatures. We are struggling to create “Universal Algorithms”. Currently, a lot of engineering time is still spent on “calibration”: manually tweaking parameters for different cell chemistries and pack architectures. Moving toward a truly “chemistry-agnostic” BMS that can self-calibrate to any cell is the ultimate goal, but we aren’t there yet.
The biggest challenge isn’t just making the algorithm “smarter”; it’s making it scalable, certifiable, and efficient. We have the brains (the AI); now we need to build the “nervous system” that can handle the messy, unpredictable reality of the open road.
The opinions shared in the interview are Anna Fedorova’s personal views and should not be interpreted as representing the company.
