
ECE5720: Battery Management and Control 3–1

Battery State Estimation

3.1: Preliminary definitions

■ A battery management system must estimate quantities that

! Describe the present battery pack condition, but

! May not be directly measured.

■ States are quantities that change quickly (e.g., state-of-charge, diffusion voltage, hysteresis voltage).
■ Parameters are quantities that change slowly (e.g., cell capacities, resistances, aging effects).
■ These quantities are typically updated in a program loop that looks something like:

[Figure: BMS program loop. Key on: initialize. Then, once each measurement interval while the pack is active: measure voltage, current, temperature → estimate state of charge (SOC) → estimate state of health (SOH) → balance cells → compute power limits. Key off: store data.]

■ This chapter considers battery state estimation; the next chapter considers battery health estimation.

Lecture notes prepared by Dr. Gregory L. Plett. Copyright © 2013, 2015, Gregory L. Plett
State-of-charge (SOC) estimation

■ An estimate of all battery-pack cells’ SOC is an important input to balancing, energy, and power calculations.
■ While we might be interested in estimating the entire battery-model state, we first focus on estimating state-of-charge only.

! We’ll see some simple methods that lack robustness.

! Then, we examine methods that estimate the entire battery-model state, enabling some more advanced applications.
■ Recall, SOC is something like a dashboard fuel gauge that reports a
value from “Empty” (0 %) to “Full” (100 %).
■ But, while there exist sensors to accurately measure a gasoline level
in a tank, there is (presently) no sensor available to measure SOC.
■ Further, accurate SOC estimates provide the following benefits:

Longevity: If a gas tank is over-filled or run empty, the tank is fine.
■ However, over-charging or over-discharging a battery cell may cause permanent damage and result in reduced lifetime.
■ An accurate SOC estimate may be used to avoid harming cells by not permitting current to be passed that would cause damage.

Performance: Without a good SOC estimator, one must be overly conservative when using the battery pack to avoid over/undercharge due to trusting the poor estimate.
■ With a good estimate, especially one with known error bounds, one can aggressively use the entire pack capacity.


Reliability: Poor estimators behave differently for different use profiles.
■ A good SOC estimator is consistent and dependable for any driving profile, enhancing overall power-system reliability.

Density: Accurate SOC and battery state information allows the battery pack to be used aggressively within the design limits, so the pack does not need to be over-engineered.
■ This allows smaller, lighter battery packs.

Economy: Smaller battery systems cost less. Warranty service on a reliable system costs less.

A careful definition of state-of-charge

■ Chapter 1 introduced an electrochemical definition of state-of-charge.
■ We defined the present lithium concentration stoichiometry as $\theta = c_{s,\text{avg}}/c_{s,\text{max}}$.
■ This stoichiometry is intended to remain between $\theta_{0\%}$ and $\theta_{100\%}$.
■ Then, cell SOC is computed as $z_k = (\theta_k - \theta_{0\%})/(\theta_{100\%} - \theta_{0\%})$.
■ The issue addressed here is that there is (presently) no direct way to measure the concentrations that would allow us to calculate the SOC.
■ So, we must infer or estimate the SOC using measurements of only cell terminal voltage and cell current.
■ We’ve already noticed that while cell OCV is closely related to SOC, the terminal voltage is a poor predictor of OCV unless the cell is in electrochemical equilibrium (and hysteresis is negligible).
■ So, how can we know the true SOC to evaluate our estimators? How
can we know true SOC for any other purpose?

KEY POINT: We are aided by some definitions that can calibrate our tests.

DEFINITION: A cell is fully charged when its open circuit voltage (OCV) reaches $v_h(T)$, a manufacturer-specified voltage that may be a function of temperature $T$.
■ e.g., $v_h(25\,^\circ\mathrm{C}) = 4.2\,\mathrm{V}$ for LMO; $v_h(25\,^\circ\mathrm{C}) = 3.6\,\mathrm{V}$ for LFP.
■ A common method to bring a cell to a fully charged state is to execute a constant-current charge profile until the terminal voltage is equal to $v_h(T)$, followed by a constant-voltage profile until the charging current becomes infinitesimal.
■ We define the SOC of a fully charged cell to be 100 %.

DEFINITION: A cell is fully discharged when its OCV reaches $v_l(T)$, a manufacturer-specified voltage that may be a function of temperature.
■ e.g., $v_l(25\,^\circ\mathrm{C}) = 3.0\,\mathrm{V}$ for LMO; $v_l(25\,^\circ\mathrm{C}) = 2.0\,\mathrm{V}$ for LFP.
■ A cell may be fully discharged by executing a constant-current discharge profile until its terminal voltage is equal to $v_l(T)$, followed by a constant-voltage profile until the discharge current becomes infinitesimal.
■ We define the SOC of a fully discharged cell to be 0 %.

DEFINITION: The total capacity $Q$ of a cell is the quantity of charge removed from a cell as it is brought from a fully charged state to a fully discharged state.

■ While the SI unit for charge is coulombs (C), it is more common in
practice to use units of ampere hours (Ah) or milliampere hours
(mAh) to measure the total capacity of a battery cell.
■ The total capacity of a cell is not a fixed quantity: it generally
decays slowly over time as the cell degrades.

DEFINITION: The discharge capacity $Q_{[\text{rate}]}$ of a cell is the quantity of charge removed from a cell as it is discharged at a constant rate from a fully charged state until its loaded terminal voltage reaches $v_l(T)$.
■ Because the discharge capacity is determined based on loaded terminal voltage rather than open circuit voltage, it is strongly dependent on the cell’s internal resistance, which itself is a function of rate and temperature.
■ Hence, the discharge capacity of a cell is rate dependent and temperature dependent.
■ Because of the resistive $i(t)\,R_0$ drop, discharge capacity is less than total capacity unless the discharge rate is infinitesimal.
■ Likewise, the SOC of the cell is nonzero when the terminal voltage reaches $v_l(T)$ at a non-infinitesimal rate.
■ The discharge capacity of a cell at a particular rate and temperature is not a fixed quantity: it also generally decays slowly over time as the cell degrades.

DEFINITION: The nominal capacity $Q_\text{nom}$ of a cell is a manufacturer-specified quantity that is intended to be representative of the 1C-rate discharge capacity $Q_{1C}$ of a particular manufactured lot of cells at room temperature, $25\,^\circ\mathrm{C}$.

■ The nominal capacity is a constant value.
■ Since the nominal capacity is representative of a lot of cells and the discharge capacity is representative of a single individual cell, $Q_\text{nom} \neq Q_{1C}$ in general, even at beginning of life.
■ Also, since $Q_\text{nom}$ is representative of a discharge capacity and not a total capacity, $Q_\text{nom} \neq Q$.

DEFINITION: The residual capacity of a cell is the quantity of charge that would be removed from a cell if it were brought from its present state to a fully discharged state.

DEFINITION: The state-of-charge of the cell is the ratio of the residual capacity to the total capacity of the cell.

■ These definitions are consistent with the relationships
$$ z(t) = z(0) - \frac{1}{Q}\int_0^t \eta(t)\, i(t)\, dt, \qquad z_{k+1} = z_k - \eta_k\, i_k\, \Delta t / Q, $$
that we have already seen.
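In code, the discrete-time relationship above is a one-line update per sample. The numbers below (a hypothetical 1 Ah cell, unit coulombic efficiency, constant 1 A discharge) are purely illustrative:

```python
# SOC recursion z[k+1] = z[k] - eta[k] * i[k] * dt / Q  (discharge current positive)
Q = 3600.0              # total capacity in ampere-seconds (a hypothetical 1 Ah cell)
dt = 1.0                # sample period, seconds
eta = 1.0               # coulombic efficiency (taken as 1.0 on discharge)
z = 1.0                 # start fully charged (100 %)

current = [1.0] * 1800  # constant 1 A discharge for 1800 s (illustrative profile)
for i_k in current:
    z -= eta * i_k * dt / Q
# removing 1800 A·s from a 3600 A·s cell leaves z = 0.5 (50 % SOC)
```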

3.2: Some approaches to estimate state of charge

Poor, voltage-based methods to estimate SOC

■ Measure cell terminal voltage under load $v(t)$ and look up on “SOC versus OCV” curve

! Misses effects of $i(t)\,R_0$ losses, diffusion voltages, hysteresis

! Wide flat areas of OCV curve dilute accuracy of estimate

■ The “Tino” method assumes a cell model $v(t) = \mathrm{OCV}(z(t)) - i(t)\,R_0$ and then looks up $v(t) + i(t)\,R_0$ on the “SOC versus OCV” curve

! Better, but still misses diffusion voltages, hysteresis
[Figure: True SOC and voltage-based (Tino) estimate — SOC and estimate (%) vs. time (min).]

■ Example shows that the Tino estimate is very noisy.
■ Filtering helps, but adds delay, which must be accounted for.
■ Hysteresis is another complicating factor.
■ Even though its estimates are noisy, we’ll find an application for the Tino method in the next chapter of notes.
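A minimal sketch of the Tino lookup follows. The monotonic OCV curve and the resistance `R0` are invented placeholders (a real implementation would use measured OCV-vs-SOC data for the cell):

```python
import numpy as np

# Hypothetical monotonic OCV curve sampled on an SOC grid (illustrative only)
soc_grid = np.linspace(0.0, 1.0, 101)
ocv_grid = 3.0 + 1.2 * soc_grid      # toy linear OCV(z): 3.0 V empty, 4.2 V full
R0 = 0.01                            # assumed ohmic resistance, ohms

def tino_soc(v_terminal, i_current):
    """Look up v + i*R0 on the (monotonic) OCV curve; discharge current positive."""
    ocv_est = v_terminal + i_current * R0
    return float(np.interp(ocv_est, ocv_grid, soc_grid))

# Cell at true SOC 0.5 discharged at 2 A: terminal voltage sags by i*R0 ...
v_loaded = (3.0 + 1.2 * 0.5) - 2.0 * R0
z_hat = tino_soc(v_loaded, 2.0)      # ... but the lookup compensates for the sag
```

Diffusion voltages and hysteresis are not modeled here, which is precisely why the method is noisy in practice.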

Poor, current-based method to estimate SOC

■ Coulomb counting keeps track of charge into and out of cells via
$$ \hat{z}(t) = \hat{z}(0) - \frac{1}{\hat{Q}}\int_0^t i_\text{meas}(\tau)\, d\tau, $$
where
$$ i_\text{meas}(t) = i_\text{true}(t) + i_\text{noise}(t) + i_\text{bias}(t) + i_\text{nonlin}(t) + i_\text{sd}(t) + i_\text{leakage}(t). $$


! Okay for short periods of operation when initial conditions are known or can be frequently “reset”;

! Subject to drift due to current-sensor fluctuations, current-sensor bias, incorrect capacity estimate, other losses.

■ Uncertainty/error bounds grow over time, increasing without bound until estimate is “reset”.
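The drift can be demonstrated with a short simulation; the 50 mA bias and 10 mA noise figures are hypothetical sensor characteristics, not values from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)
Q = 3600.0                      # capacity in A·s, assumed known exactly here
dt = 1.0
i_true = np.zeros(3600)         # cell rests for one hour: true SOC should not move
i_meas = i_true + 0.05 + 0.01 * rng.standard_normal(i_true.size)  # bias + noise

z_true, z_hat = 0.5, 0.5
for k in range(i_true.size):
    z_true -= i_true[k] * dt / Q
    z_hat -= i_meas[k] * dt / Q

drift = z_true - z_hat          # the 50 mA bias alone costs about 5 % SOC per hour
```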

Model-based state estimation

■ An alternative to a voltage-only method or a current-only method is to somehow combine the approaches.
■ Model-based state estimators implement algorithms that use sensor measurements to infer the internal hidden state of a dynamic system.

[Figure: Model-based state estimation. The input drives both the true system (perturbed by process noise, producing a measured output corrupted by sensor noise) and a system model (producing a predicted output and a state estimate).]

! A mathematical model of the system is assumed known.
! Same input propagated through true system and model.
! Measured and predicted outputs compared; error used to update model’s estimate of the true state:
◆ Output error due to: state, measurement, model errors;
◆ Update must be done carefully to account for all of these.

■ Under some specific conditions, the Kalman filter (a special case of sequential probabilistic inference) gives the optimal state estimate.
■ We will look at the linear Kalman filter and some of its variants throughout the remainder of this chapter.

Sequential probabilistic inference

■ We start by assuming a general, possibly nonlinear, model
$$ x_k = f(x_{k-1}, u_{k-1}, w_{k-1}) $$
$$ y_k = h(x_k, u_k, v_k), $$
where $u_k$ is a known (deterministic/measured) input signal, $w_k$ is a process-noise random input, and $v_k$ is a sensor-noise random input.
■ We note that $f(\cdot)$ and $h(\cdot)$ may be time-varying, but we generally omit the time dependency from the notation for ease of understanding.

SEQUENTIAL PROBABILISTIC INFERENCE: Estimate the present state $x_k$ of a dynamic system using all measurements $\mathbb{Y}_k = \{y_0, y_1, \ldots, y_k\}$.

[Figure: Hidden Markov structure. Observed measurements $y_{k-2}, y_{k-1}, y_k$ are linked to unobserved states $x_{k-2}, x_{k-1}, x_k$ by the output density $f_{Y|X}(y_k \mid x_k)$; states evolve via the transition density $f_{X|X}(x_k \mid x_{k-1})$.]

■ The observations allow us to “peek” at what is happening in the true system. Based on observations and our model, we estimate the state.
■ However, process-noise and sensor-noise randomness cause us never to be able to compute the state exactly.
■ So, to be able to talk about the sequential-probabilistic-inference solution, we first must look at some topics in vector random variables and random processes.

3.3: Review of probability
■ By definition, noise is not deterministic—it is random in some sense.
■ So, to discuss the impact of noise on the system dynamics, we must
understand “random variables” (RVs).
! Cannot predict exactly what we will get each time we measure or
sample the random variable, but
! We can characterize the probability of each sample value by the
“probability density function” (pdf).
■ For a brief review, define random vector $X$ and sample vector $x_0$ as
$$ X = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{bmatrix}, \qquad x_0 = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, $$
where $X_1$ through $X_n$ are scalar random variables and $x_1$ through $x_n$ are scalar constants.
■ $X$ is described by the (scalar function) joint pdf $f_X(x)$ of vector $X$.

! $f_X(x_0)$ means $f_X(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n)$.
! That is, $f_X(x_0)\, dx_1\, dx_2 \cdots dx_n$ is the probability that $X$ is between $x_0$ and $x_0 + dx$.
■ Properties of joint pdf $f_X(x)$:
1. $f_X(x) \geq 0 \;\; \forall\, x$.
2. $\displaystyle \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f_X(x)\, dx_1\, dx_2 \cdots dx_n = 1$.
3. $\displaystyle \bar{x} = \mathbb{E}[X] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} x\, f_X(x)\, dx_1\, dx_2 \cdots dx_n$.

4. Correlation matrix:
$$ \Sigma_X = \mathbb{E}[X X^T] \;\text{(outer product)} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} x x^T f_X(x)\, dx_1\, dx_2 \cdots dx_n. $$
5. Covariance matrix: Define $\tilde{X} = X - \bar{x}$. Then,
$$ \Sigma_{\tilde{X}} = \mathbb{E}[(X - \bar{x})(X - \bar{x})^T] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} (x - \bar{x})(x - \bar{x})^T f_X(x)\, dx_1\, dx_2 \cdots dx_n. $$
$\Sigma_{\tilde{X}}$ is symmetric and positive-semi-definite (psd). This means $y^T \Sigma_{\tilde{X}}\, y \geq 0 \;\; \forall\, y$.
■ Notice that correlation = covariance for zero-mean random vectors.
■ The covariance entries have specific meaning:
$$ (\Sigma_{\tilde{X}})_{ii} = \sigma_{X_i}^2, \qquad (\Sigma_{\tilde{X}})_{ij} = \rho_{ij}\, \sigma_{X_i} \sigma_{X_j} = (\Sigma_{\tilde{X}})_{ji}. $$

! The diagonal entries are the variances of each vector component;
! The correlation coefficient $\rho_{ij}$ is a measure of linear dependence between $X_i$ and $X_j$; $|\rho_{ij}| \leq 1$.
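The structure of the covariance matrix is easy to verify numerically. The linear relationship between the two RVs below is invented so that the theoretical correlation coefficient is 0.8:

```python
import numpy as np

rng = np.random.default_rng(1)
# X2 = 0.8*X1 + 0.6*noise gives var(X1) = var(X2) = 1 and rho_12 = 0.8
x1 = rng.standard_normal(100_000)
x2 = 0.8 * x1 + 0.6 * rng.standard_normal(100_000)

Sigma = np.cov(np.vstack([x1, x2]))                    # sample covariance matrix
rho = Sigma[0, 1] / np.sqrt(Sigma[0, 0] * Sigma[1, 1])
# diagonal entries are the variances; off-diagonals equal rho*sigma_i*sigma_j
```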

■ There is an infinite variety of pdfs.
■ However, we use only the (multivariable) Gaussian pdf when defining the Kalman filter.
■ All noises and the state vector itself are assumed to be Gaussian random vectors.

■ The Gaussian or normal pdf is defined as, where we say $X \sim \mathcal{N}(\bar{x}, \Sigma_{\tilde{X}})$,
$$ f_X(x) = \frac{1}{(2\pi)^{n/2}\, |\Sigma_{\tilde{X}}|^{1/2}} \exp\!\left( -\frac{1}{2} (x - \bar{x})^T \Sigma_{\tilde{X}}^{-1} (x - \bar{x}) \right); $$
$|\Sigma_{\tilde{X}}| = \det(\Sigma_{\tilde{X}})$; $\Sigma_{\tilde{X}}^{-1}$ requires positive-definite $\Sigma_{\tilde{X}}$.

■ Contours of constant $f_X(x)$ are hyper-ellipsoids, centered at $\bar{x}$, with directions governed by $\Sigma_{\tilde{X}}$.
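A direct transcription of the pdf formula (the mean and covariance below are arbitrary illustrative values):

```python
import numpy as np

def gaussian_pdf(x, mean, Sigma):
    """Evaluate the multivariate normal pdf N(mean, Sigma) at the point x."""
    n = mean.size
    d = x - mean
    norm = np.sqrt((2.0 * np.pi) ** n * np.linalg.det(Sigma))
    return float(np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)) / norm)

mean = np.zeros(2)
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])                  # must be positive definite
p_center = gaussian_pdf(mean, mean, Sigma)      # pdf maximum, attained at x = mean
p_off = gaussian_pdf(np.array([1.0, 1.0]), mean, Sigma)
```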

Properties of jointly-distributed RVs

INDEPENDENCE: Iff jointly-distributed RVs are independent, then
$$ f_X(x_1, x_2, \ldots, x_n) = f_{X_1}(x_1)\, f_{X_2}(x_2) \cdots f_{X_n}(x_n). $$
■ Joint distribution can be split up into the product of individual distributions for each RV.

! “The particular value of the random variable $X_1$ has no impact on what value we would obtain for the random variable $X_2$.”

UNCORRELATED: Two jointly-distributed RVs $X_1$ and $X_2$ are uncorrelated if their second moments are finite and
$$ \operatorname{cov}(X_1, X_2) = \mathbb{E}[(X_1 - \bar{x}_1)(X_2 - \bar{x}_2)] = 0, $$
which implies $\rho_{12} = 0$.

■ Uncorrelated means that there is no linear relationship between RVs.

MAIN POINT #1: If jointly-distributed RVs $X_1$ and $X_2$ are independent then they must also be uncorrelated. Independence implies uncorrelation. However, uncorrelated RVs are not necessarily independent.

MAIN POINT #2: If jointly normally distributed RVs are uncorrelated, then they are independent. This is a (very) special case.

MAIN POINT #3: We can define a conditional pdf
$$ f_{X|Y}(x \mid y) = \frac{f_{X,Y}(x, y)}{f_Y(y)} $$
as the probability that $X = x$ given that $Y = y$ has happened.

NOTE: The marginal probability $f_Y(y)$ may be calculated as
$$ f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx. $$
For each $y$, integrate out the effect of $X$.

DIRECT EXTENSION:
$$ f_{X,Y}(x, y) = f_{X|Y}(x \mid y)\, f_Y(y) = f_{Y|X}(y \mid x)\, f_X(x). $$
Therefore,
$$ f_{X|Y}(x \mid y) = \frac{f_{Y|X}(y \mid x)\, f_X(x)}{f_Y(y)}. $$
■ This is known as Bayes’ rule. It relates the posterior probability to the prior probability.
■ It forms a key step in the Kalman filter derivation.

MAIN POINT #4: We can define conditional expectation as what we expect the value of $X$ to be given that $Y = y$ has happened:
$$ \mathbb{E}[X = x \mid Y = y] = \mathbb{E}[X \mid Y] = \int_{-\infty}^{\infty} x\, f_{X|Y}(x \mid Y)\, dx. $$
■ Note: Conditional expectation is critical. The Kalman filter is an algorithm to compute $\mathbb{E}[x_k \mid \mathbb{Y}_k]$, where we define $\mathbb{Y}_k$ later.

MAIN POINT #5: Central Limit Theorem.

■ If $Y = \sum_i X_i$ and the $X_i$ are independent and identically distributed (IID), and the $X_i$ have finite mean and variance, then $Y$ will be approximately normally distributed.
■ The approximation improves as the number of summed RVs gets large.
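The theorem is easy to demonstrate numerically; the choice of 30 uniform terms and 200 000 samples below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
# Sum 30 IID uniform(0,1) RVs; by the CLT the sum is approximately normal
sums = rng.random((200_000, 30)).sum(axis=1)

mean_est = sums.mean()    # theory: 30 * 1/2  = 15
var_est = sums.var()      # theory: 30 * 1/12 = 2.5
# for a normal RV, about 68.3 % of samples fall within one standard deviation
frac_1sigma = np.mean(np.abs(sums - mean_est) < np.sqrt(var_est))
```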
■ Since the state of our dynamic system adds up the effects of lots of independent random inputs, it is reasonable to assume that the distribution of the state tends to the normal distribution.
■ This leads to the key assumptions for the derivation of the Kalman filter, as we will see:

! We will assume that the state $x_k$ is a normally distributed random vector;
! We will assume that the process noise $w_k$ is a normally distributed random vector;
! We will assume that the sensor noise $v_k$ is a normally distributed random vector;
! We will assume that $w_k$ and $v_k$ are uncorrelated with each other.

■ Even when these assumptions are broken in practice, the Kalman filter works quite well.
■ Exceptions tend to be with very highly nonlinear systems, for which particle filters must sometimes be employed to get good estimates.

MAIN POINT #6: A linear combination of Gaussian RVs results in a Gaussian RV.

3.4: Overview of vector random (stochastic) processes

■ A stochastic or random process is a family of random vectors indexed by a parameter set (“time” in our case).

! For example, we might refer to a random process $X_k$ for generic $k$.
! The value of the random process at any specific time $k = m$ is a random variable $X_m$.

■ Usually assume stationarity.

! The statistics (i.e., pdf) of the RV are time-shift invariant.
! Therefore, $\mathbb{E}[X_k] = \bar{x}$ for all $k$ and $\mathbb{E}[X_{k_1} X_{k_2}^T] = R_X(k_1 - k_2)$.

Properties and important points

1. Autocorrelation: $R_X(k_1, k_2) = \mathbb{E}[X_{k_1} X_{k_2}^T]$. If stationary,
$$ R_X(\tau) = \mathbb{E}[X_k X_{k+\tau}^T]. $$
■ Provides a measure of correlation between elements of the process having time displacement $\tau$.
■ $R_X(0) = \sigma_X^2$ for zero-mean $X$.
■ $R_X(0)$ is always the maximum value of $R_X(\tau)$.

2. Autocovariance: $C_X(k_1, k_2) = \mathbb{E}[(X_{k_1} - \mathbb{E}[X_{k_1}])(X_{k_2} - \mathbb{E}[X_{k_2}])^T]$. If stationary,
$$ C_X(\tau) = \mathbb{E}[(X_k - \bar{x})(X_{k+\tau} - \bar{x})^T]. $$

3. White noise: Some processes have a unique autocorrelation:
(a) Zero mean,
(b) $R_X(\tau) = \mathbb{E}[X_k X_{k+\tau}^T] = S_X\, \delta(\tau)$, where $\delta(\tau)$ is the Dirac delta; $\delta(\tau) = 0 \;\; \forall\, \tau \neq 0$.
■ Therefore, the process is uncorrelated in time.
■ Clearly an abstraction, but proves to be a very useful one.

[Figure: Sample realizations of white noise and of correlated noise, value vs. time.]

4. Shaping filters: We will assume that the noise inputs to the dynamic systems are white (Gaussian) processes.
■ Pretty limiting assumption, but one that can be easily fixed ➠ Can use a second linear system to “shape” the noise as desired.

[Figure: Previous picture: white noise $w_k$ drives $G(z)$, producing $y_k$. New picture: white noise $w_{2,k}$ drives shaping filter $H(z)$; its shaped output $w_{1,k}$ drives $G(z)$, producing $y_k$.]

■ Therefore, we can drive our linear system with noise that has desired characteristics by introducing a shaping filter $H(z)$ that itself is driven by white noise.
■ The combined system $G(z)H(z)$ looks exactly the same as before, but the system $G(z)$ is not driven by pure white noise any more.

■ Analysis augments original system model with filter states. Original system has
$$ x_{k+1} = A x_k + B_w w_{1,k} $$
$$ y_k = C x_k. $$
■ Shaping filter with white input and desired output statistics has
$$ x_{s,k+1} = A_s x_{s,k} + B_s w_{2,k} $$
$$ w_{1,k} = C_s x_{s,k}. $$
■ Combine into one system:
$$ \begin{bmatrix} x_{k+1} \\ x_{s,k+1} \end{bmatrix} = \begin{bmatrix} A & B_w C_s \\ 0 & A_s \end{bmatrix} \begin{bmatrix} x_k \\ x_{s,k} \end{bmatrix} + \begin{bmatrix} 0 \\ B_s \end{bmatrix} w_{2,k} $$
$$ y_k = \begin{bmatrix} C & 0 \end{bmatrix} \begin{bmatrix} x_k \\ x_{s,k} \end{bmatrix}. $$
■ Augmented system is just a larger-order system driven by white noise.
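Building the augmented matrices is a direct transcription of the block equation above; the scalar system and filter matrices below are toy values chosen for illustration:

```python
import numpy as np

# Original system: x[k+1] = A x[k] + Bw w1[k],  y[k] = C x[k]
A = np.array([[0.9]]); Bw = np.array([[1.0]]); C = np.array([[1.0]])
# Shaping filter: xs[k+1] = As xs[k] + Bs w2[k],  w1[k] = Cs xs[k]
As = np.array([[0.5]]); Bs = np.array([[1.0]]); Cs = np.array([[1.0]])

# Augmented system, driven only by the white noise w2
A_aug = np.block([[A, Bw @ Cs],
                  [np.zeros((1, 1)), As]])
B_aug = np.vstack([np.zeros((1, 1)), Bs])
C_aug = np.hstack([C, np.zeros((1, 1))])
```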
5. Gaussian processes: We will work with Gaussian noises to a large extent, which are uniquely defined by the first and second central moments of the statistics ➠ Gaussian assumption not essential.
■ Our filters will always track only the first two moments.

NOTATION: Until now, we have always used capital letters for random variables. The state of a system driven by a random process is a random vector, so we could now call it $X_k$. However, it is more common to retain the standard notation $x_k$ and understand from the context that we are discussing an RV.

3.5: Sequential-probabilistic-inference solution

■ In the notation that follows,

! The superscript “$-$” indicates a predicted quantity based only on past measurements.
! The superscript “$+$” indicates an estimated quantity based on both past and present measurements.
! The decoration “$\hat{\ }$” indicates a predicted or estimated quantity.
! The decoration “$\tilde{\ }$” indicates an error: the difference between a true and predicted or estimated quantity.
! The symbol “$\Sigma$” is used to denote the correlation between the two arguments in its subscript (autocorrelation if only one is given): $\Sigma_{xy} = \mathbb{E}[x y^T]$ and $\Sigma_x = \mathbb{E}[x x^T]$.
! Furthermore, if the arguments are zero mean (as they often are in the quantities we talk about), then this represents covariance:
$$ \Sigma_{\tilde{x}\tilde{y}} = \mathbb{E}[\tilde{x} \tilde{y}^T] = \mathbb{E}[(\tilde{x} - \mathbb{E}[\tilde{x}])(\tilde{y} - \mathbb{E}[\tilde{y}])^T], $$
for zero-mean $\tilde{x}$ and $\tilde{y}$.
■ We choose to find a state estimate that minimizes the “mean-squared error”
$$ \hat{x}_k^{\text{MMSE}}(\mathbb{Y}_k) = \arg\min_{\hat{x}_k^+} \mathbb{E}\!\left[ \big\| x_k - \hat{x}_k^+ \big\|_2^2 \mid \mathbb{Y}_k \right] $$
$$ = \arg\min_{\hat{x}_k^+} \mathbb{E}\!\left[ (x_k - \hat{x}_k^+)^T (x_k - \hat{x}_k^+) \mid \mathbb{Y}_k \right] $$
$$ = \arg\min_{\hat{x}_k^+} \mathbb{E}\!\left[ x_k^T x_k - 2 x_k^T \hat{x}_k^+ + (\hat{x}_k^+)^T \hat{x}_k^+ \mid \mathbb{Y}_k \right]. $$

■ We solve for $\hat{x}_k^+$ by differentiating the cost function and setting the result to zero:
$$ 0 = \frac{d}{d\hat{x}_k^+}\, \mathbb{E}\!\left[ x_k^T x_k - 2 x_k^T \hat{x}_k^+ + (\hat{x}_k^+)^T \hat{x}_k^+ \mid \mathbb{Y}_k \right]. $$
■ To do so, note the following identities from vector calculus:
$$ \frac{d}{dX} Y^T X = Y, \qquad \frac{d}{dX} X^T Y = Y, \qquad \frac{d}{dX} X^T A X = (A + A^T) X. $$
■ Then,
$$ 0 = \mathbb{E}\!\left[ -2(x_k - \hat{x}_k^+) \mid \mathbb{Y}_k \right] = 2\hat{x}_k^+ - 2\,\mathbb{E}\!\left[ x_k \mid \mathbb{Y}_k \right] $$
$$ \hat{x}_k^+ = \mathbb{E}\!\left[ x_k \mid \mathbb{Y}_k \right]. $$

■ We proceed by assuming that all RVs have a Gaussian distribution. We will find that the result has a predict/correct mechanism.
■ So, with malice aforethought, we define prediction error $\tilde{x}_k^- = x_k - \hat{x}_k^-$, where $\hat{x}_k^- = \mathbb{E}[x_k \mid \mathbb{Y}_{k-1}]$.

! Error is always “truth minus prediction” or “truth minus estimate.”
! We can’t compute error in practice, since truth value is not known.
! But, we can prove statistical results using this definition that give an algorithm for estimating the truth using measurable values.

■ Also, define the measurement innovation (what is new or unexpected in the measurement) as $\tilde{y}_k = y_k - \hat{y}_k$, where $\hat{y}_k = \mathbb{E}[y_k \mid \mathbb{Y}_{k-1}]$.
■ Both $\tilde{x}_k^-$ and $\tilde{y}_k$ can be shown to be zero mean using the method of iterated expectation: $\mathbb{E}\big[\mathbb{E}[X \mid Y]\big] = \mathbb{E}[X]$.
$$ \mathbb{E}[\tilde{x}_k^-] = \mathbb{E}[x_k] - \mathbb{E}\big[\mathbb{E}[x_k \mid \mathbb{Y}_{k-1}]\big] = \mathbb{E}[x_k] - \mathbb{E}[x_k] = 0 $$
$$ \mathbb{E}[\tilde{y}_k] = \mathbb{E}[y_k] - \mathbb{E}\big[\mathbb{E}[y_k \mid \mathbb{Y}_{k-1}]\big] = \mathbb{E}[y_k] - \mathbb{E}[y_k] = 0. $$

■ Note also that $\tilde{x}_k^-$ is uncorrelated with past measurements, as they have already been incorporated into $\hat{x}_k^-$:
$$ \mathbb{E}[\tilde{x}_k^- \mid \mathbb{Y}_{k-1}] = \mathbb{E}\big[x_k - \mathbb{E}[x_k \mid \mathbb{Y}_{k-1}] \mid \mathbb{Y}_{k-1}\big] = 0 = \mathbb{E}[\tilde{x}_k^-]. $$
■ Therefore, on one hand we can write the relationship
$$ \mathbb{E}[\tilde{x}_k^- \mid \mathbb{Y}_k] = \underbrace{\mathbb{E}[x_k \mid \mathbb{Y}_k]}_{\hat{x}_k^+} - \underbrace{\mathbb{E}[\hat{x}_k^- \mid \mathbb{Y}_k]}_{\hat{x}_k^-}. $$
■ On the other hand, we can write
$$ \mathbb{E}[\tilde{x}_k^- \mid \mathbb{Y}_k] = \mathbb{E}[\tilde{x}_k^- \mid \mathbb{Y}_{k-1}, y_k] = \mathbb{E}[\tilde{x}_k^- \mid y_k]. $$
■ So,
$$ \hat{x}_k^+ = \hat{x}_k^- + \mathbb{E}[\tilde{x}_k^- \mid y_k], $$
which is a predict/correct sequence of steps, as promised.
■ But, what is $\mathbb{E}[\tilde{x}_k^- \mid y_k]$? We can show that, generically, when $x$ and $y$ are jointly Gaussian distributed,
$$ \mathbb{E}[x \mid y] = \mathbb{E}[x] + \Sigma_{\tilde{x}\tilde{y}}\, \Sigma_{\tilde{y}}^{-1} \big( y - \mathbb{E}[y] \big). $$
■ Applying this to our problem, when $y_k = \tilde{y}_k + \hat{y}_k$, we get
$$ \mathbb{E}[\tilde{x}_k^- \mid y_k] = \mathbb{E}[\tilde{x}_k^-] + \Sigma_{\tilde{x}\tilde{y},k}^-\, \Sigma_{\tilde{y},k}^{-1} \big( y_k - \mathbb{E}[y_k] \big) $$
$$ = \mathbb{E}[\tilde{x}_k^-] + \Sigma_{\tilde{x}\tilde{y},k}^-\, \Sigma_{\tilde{y},k}^{-1} \big( \tilde{y}_k + \hat{y}_k - \mathbb{E}[\tilde{y}_k + \hat{y}_k] \big) $$
$$ = 0 + \Sigma_{\tilde{x}\tilde{y},k}^-\, \Sigma_{\tilde{y},k}^{-1} \big( \tilde{y}_k + \hat{y}_k - (0 + \hat{y}_k) \big) $$
$$ = \underbrace{\Sigma_{\tilde{x}\tilde{y},k}^-\, \Sigma_{\tilde{y},k}^{-1}}_{L_k}\, \tilde{y}_k. $$
■ Putting all of the pieces together, we get the general update equation:
$$ \hat{x}_k^+ = \hat{x}_k^- + L_k \tilde{y}_k. $$
■ Note that the updated covariance $\Sigma_{\tilde{x},k}^+$ is a function of $L_k$, and may be computed as
$$ \Sigma_{\tilde{x},k}^+ = \mathbb{E}\big[ (x_k - \hat{x}_k^+)(x_k - \hat{x}_k^+)^T \big] $$
$$ = \mathbb{E}\big[ \{ (x_k - \hat{x}_k^-) - L_k \tilde{y}_k \} \{ (x_k - \hat{x}_k^-) - L_k \tilde{y}_k \}^T \big] $$
$$ = \mathbb{E}\big[ \{ \tilde{x}_k^- - L_k \tilde{y}_k \} \{ \tilde{x}_k^- - L_k \tilde{y}_k \}^T \big] $$
$$ = \Sigma_{\tilde{x},k}^- - L_k \underbrace{\mathbb{E}\big[\tilde{y}_k (\tilde{x}_k^-)^T\big]}_{\Sigma_{\tilde{y},k} L_k^T} - \underbrace{\mathbb{E}\big[\tilde{x}_k^- \tilde{y}_k^T\big]}_{L_k \Sigma_{\tilde{y},k}}\, L_k^T + L_k \Sigma_{\tilde{y},k} L_k^T $$
$$ = \Sigma_{\tilde{x},k}^- - L_k \Sigma_{\tilde{y},k} L_k^T. $$

Forest and trees

■ To be perfectly clear, the output of this process has two parts:
1. The state estimate. At the end of every iteration, we have computed our best guess of the present state value, which is $\hat{x}_k^+$.
2. The covariance estimate. The covariance matrix $\Sigma_{\tilde{x},k}^+$ gives the uncertainty of $\hat{x}_k^+$, and can be used to compute error bounds.

■ Summarizing, the generic Gaussian sequential-probabilistic-inference recursion becomes:
$$ \hat{x}_k^+ = \hat{x}_k^- + L_k (y_k - \hat{y}_k) = \hat{x}_k^- + L_k \tilde{y}_k $$
$$ \Sigma_{\tilde{x},k}^+ = \Sigma_{\tilde{x},k}^- - L_k \Sigma_{\tilde{y},k} L_k^T, $$
where
$$ \hat{x}_k^- = \mathbb{E}[x_k \mid \mathbb{Y}_{k-1}], \qquad \Sigma_{\tilde{x},k}^- = \mathbb{E}\big[(x_k - \hat{x}_k^-)(x_k - \hat{x}_k^-)^T\big] = \mathbb{E}\big[(\tilde{x}_k^-)(\tilde{x}_k^-)^T\big] $$
$$ \hat{x}_k^+ = \mathbb{E}[x_k \mid \mathbb{Y}_k], \qquad \Sigma_{\tilde{x},k}^+ = \mathbb{E}\big[(x_k - \hat{x}_k^+)(x_k - \hat{x}_k^+)^T\big] = \mathbb{E}\big[(\tilde{x}_k^+)(\tilde{x}_k^+)^T\big] $$
$$ \hat{y}_k = \mathbb{E}[y_k \mid \mathbb{Y}_{k-1}], \qquad \Sigma_{\tilde{y},k} = \mathbb{E}\big[(y_k - \hat{y}_k)(y_k - \hat{y}_k)^T\big] = \mathbb{E}\big[(\tilde{y}_k)(\tilde{y}_k)^T\big] $$
$$ L_k = \mathbb{E}\big[(x_k - \hat{x}_k^-)(y_k - \hat{y}_k)^T\big]\, \Sigma_{\tilde{y},k}^{-1} = \Sigma_{\tilde{x}\tilde{y},k}^-\, \Sigma_{\tilde{y},k}^{-1}. $$
■ Note that this is a linear recursion, even if the system is nonlinear(!)
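The recursion can be written as one small function that is agnostic to how the moments themselves are computed (each specific filter supplies those). The scalar numbers in the sanity check are invented:

```python
import numpy as np

def gaussian_spi_update(x_pred, Sigma_x_pred, y_pred, Sigma_y, Sigma_xy, y_meas):
    """One measurement update of the generic Gaussian recursion:
    L = Sigma_xy Sigma_y^{-1};  x+ = x- + L (y - yhat);  S+ = S- - L Sigma_y L^T."""
    L = Sigma_xy @ np.linalg.inv(Sigma_y)
    x_post = x_pred + L @ (y_meas - y_pred)
    Sigma_post = Sigma_x_pred - L @ Sigma_y @ L.T
    return x_post, Sigma_post

# Scalar sanity check: equally uncertain prediction and measurement average out
x_post, S_post = gaussian_spi_update(
    np.array([0.0]), np.array([[1.0]]),   # x-, Sigma_x-
    np.array([0.0]), np.array([[2.0]]),   # yhat, Sigma_y
    np.array([[1.0]]),                    # Sigma_xy
    np.array([1.0]))                      # measured y
# gives x+ = 0.5 with reduced covariance S+ = 0.5
```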

3.6: The six-step process

■ Generic Gaussian probabilistic inference solution can be divided into two main steps, each having three sub-steps.

General step 1a: State prediction time update.
■ Each time step, compute an updated prediction of the present value of $x_k$, based on prior information and the system model:
$$ \hat{x}_k^- = \mathbb{E}[x_k \mid \mathbb{Y}_{k-1}] = \mathbb{E}[f(x_{k-1}, u_{k-1}, w_{k-1}) \mid \mathbb{Y}_{k-1}]. $$

General step 1b: Error covariance time update.
■ Determine the predicted state-estimate error covariance matrix $\Sigma_{\tilde{x},k}^-$ based on prior information and the system model.
■ We compute $\Sigma_{\tilde{x},k}^- = \mathbb{E}\big[(\tilde{x}_k^-)(\tilde{x}_k^-)^T\big]$, where $\tilde{x}_k^- = x_k - \hat{x}_k^-$.

General step 1c: Predict system output $y_k$.
■ Predict the system’s output using prior information:
$$ \hat{y}_k = \mathbb{E}[y_k \mid \mathbb{Y}_{k-1}] = \mathbb{E}[h(x_k, u_k, v_k) \mid \mathbb{Y}_{k-1}]. $$

General step 2a: Estimator gain matrix $L_k$.
■ Compute the estimator gain matrix $L_k$ by evaluating $L_k = \Sigma_{\tilde{x}\tilde{y},k}^-\, \Sigma_{\tilde{y},k}^{-1}$.

General step 2b: State estimate measurement update.
■ Compute the posterior state estimate by updating the prediction using the gain $L_k$ and the innovation $y_k - \hat{y}_k$:
$$ \hat{x}_k^+ = \hat{x}_k^- + L_k (y_k - \hat{y}_k). $$

General step 2c: Error covariance measurement update.

■ Compute the posterior error covariance matrix
$$ \Sigma_{\tilde{x},k}^+ = \mathbb{E}\big[(\tilde{x}_k^+)(\tilde{x}_k^+)^T\big] = \Sigma_{\tilde{x},k}^- - L_k \Sigma_{\tilde{y},k} L_k^T. $$

KEY POINT: The estimator output comprises the state estimate $\hat{x}_k^+$ and error covariance estimate $\Sigma_{\tilde{x},k}^+$.

■ That is, we have high confidence that the truth lies within
$$ \hat{x}_k^+ \pm 3\sqrt{\operatorname{diag}\big(\Sigma_{\tilde{x},k}^+\big)}. $$
■ The estimator then waits until the next sample interval, updates $k$, and proceeds to step 1a.
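Extracting these bounds from the estimator outputs takes one line per bound; the estimate and covariance below are illustrative numbers:

```python
import numpy as np

x_hat = np.array([0.7, 0.1])                   # posterior state estimate
Sigma = np.array([[4e-4, 0.0],
                  [0.0, 1e-2]])                # posterior error covariance

three_sigma = 3.0 * np.sqrt(np.diag(Sigma))    # per-state three-sigma bound
lower, upper = x_hat - three_sigma, x_hat + three_sigma
```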

Step 1a: State estimate time update
$$ \hat{x}_k^- = \mathbb{E}[x_k \mid \mathbb{Y}_{k-1}] = \mathbb{E}[f(x_{k-1}, u_{k-1}, w_{k-1}) \mid \mathbb{Y}_{k-1}]. $$
Step 1b: Error covariance time update
$$ \Sigma_{\tilde{x},k}^- = \mathbb{E}\big[(\tilde{x}_k^-)(\tilde{x}_k^-)^T\big] = \mathbb{E}\big[(x_k - \hat{x}_k^-)(x_k - \hat{x}_k^-)^T\big]. $$
Step 1c: Estimate system output
$$ \hat{y}_k = \mathbb{E}[y_k \mid \mathbb{Y}_{k-1}] = \mathbb{E}[h(x_k, u_k, v_k) \mid \mathbb{Y}_{k-1}]. $$
Step 2a: Estimator gain matrix
$$ L_k = \Sigma_{\tilde{x}\tilde{y},k}^-\, \Sigma_{\tilde{y},k}^{-1}. $$
Step 2b: State estimate measurement update
$$ \hat{x}_k^+ = \hat{x}_k^- + L_k (y_k - \hat{y}_k). $$
Step 2c: Covariance estimate measurement update
$$ \Sigma_{\tilde{x},k}^+ = \Sigma_{\tilde{x},k}^- - L_k \Sigma_{\tilde{y},k} L_k^T. $$
(Steps 1a–1c form the prediction; steps 2a–2c form the correction.)

Optimal application to linear systems: The Kalman filter

■ In the next section, we take the general solution and apply it to the specific case where the system dynamics are linear.
■ Linear systems have the desirable property that all pdfs do in fact remain Gaussian if the stochastic inputs are Gaussian, so the assumptions made in deriving the filter steps hold exactly.
■ The linear Kalman filter assumes that the system being modeled can be represented in the “state-space” form
$$ x_k = A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + w_{k-1} $$
$$ y_k = C_k x_k + D_k u_k + v_k. $$
■ We assume that $w_k$ and $v_k$ are mutually uncorrelated white Gaussian random processes, with zero mean and covariance matrices with known value:
$$ \mathbb{E}[w_n w_k^T] = \begin{cases} \Sigma_{\tilde{w}}, & n = k; \\ 0, & n \neq k; \end{cases} \qquad \mathbb{E}[v_n v_k^T] = \begin{cases} \Sigma_{\tilde{v}}, & n = k; \\ 0, & n \neq k, \end{cases} $$
and $\mathbb{E}[w_k x_0^T] = 0$ for all $k$.
■ The assumptions on the noise processes $w_k$ and $v_k$ and on the linearity of system dynamics are rarely (never) met in practice, but the consensus of the literature and practice is that the method still works very well.
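A short simulation of this state-space form (toy scalar matrices, hypothetical noise covariances) generates the kind of measurement sequence such a filter processes:

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.95]]); B = np.array([[1.0]])   # illustrative, time-invariant matrices
C = np.array([[1.0]]);  D = np.array([[0.0]])
Sigma_w = np.array([[1e-2]])                    # process-noise covariance (assumed)
Sigma_v = np.array([[1e-1]])                    # sensor-noise covariance (assumed)

x = np.zeros(1)
ys = []
for k in range(500):
    u = np.array([0.0])                         # unforced for simplicity
    w = rng.multivariate_normal(np.zeros(1), Sigma_w)
    v = rng.multivariate_normal(np.zeros(1), Sigma_v)
    x = A @ x + B @ u + w
    ys.append(float((C @ x + D @ u + v)[0]))
```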

3.7: Deriving the linear Kalman filter

■ We now apply the general solution to the linear case, and derive the linear Kalman filter.
■ An attempt to aid intuition is also given as we proceed.

KF step 1a: State estimate time update.
■ Here, we compute the predicted state
$$ \hat{x}_k^- = \mathbb{E}[f(x_{k-1}, u_{k-1}, w_{k-1}) \mid \mathbb{Y}_{k-1}] $$
$$ = \mathbb{E}[A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + w_{k-1} \mid \mathbb{Y}_{k-1}] $$
$$ = \mathbb{E}[A_{k-1} x_{k-1} \mid \mathbb{Y}_{k-1}] + \mathbb{E}[B_{k-1} u_{k-1} \mid \mathbb{Y}_{k-1}] + \mathbb{E}[w_{k-1} \mid \mathbb{Y}_{k-1}] $$
$$ = A_{k-1} \hat{x}_{k-1}^+ + B_{k-1} u_{k-1}, $$
by the linearity of expectation, noting that $w_{k-1}$ is zero-mean.

INTUITION: When predicting the present state given only past measurements, the best we can do is to use the most recent state estimate and system model, propagating the mean forward in time.

KF step 1b: Error covariance time update.
■ First, we note that the prediction error is $\tilde{x}_k^- = x_k - \hat{x}_k^-$, so
$$ \tilde{x}_k^- = x_k - \hat{x}_k^- = A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + w_{k-1} - A_{k-1} \hat{x}_{k-1}^+ - B_{k-1} u_{k-1} = A_{k-1} \tilde{x}_{k-1}^+ + w_{k-1}. $$
■ Therefore, the covariance of the prediction error is
$$ \Sigma_{\tilde{x},k}^- = \mathbb{E}\big[(\tilde{x}_k^-)(\tilde{x}_k^-)^T\big] $$
$$ = \mathbb{E}\big[(A_{k-1} \tilde{x}_{k-1}^+ + w_{k-1})(A_{k-1} \tilde{x}_{k-1}^+ + w_{k-1})^T\big] $$
$$ = \mathbb{E}\big[ A_{k-1} \tilde{x}_{k-1}^+ (\tilde{x}_{k-1}^+)^T A_{k-1}^T + w_{k-1} (\tilde{x}_{k-1}^+)^T A_{k-1}^T + A_{k-1} \tilde{x}_{k-1}^+ w_{k-1}^T + w_{k-1} w_{k-1}^T \big] $$
$$ = A_{k-1} \Sigma_{\tilde{x},k-1}^+ A_{k-1}^T + \Sigma_{\tilde{w}}. $$
■ The cross terms drop out of the final result since the white process noise $w_{k-1}$ is not correlated with the state at time $k-1$.

INTUITION: When estimating the error covariance of the state prediction,
! The best we can do is to use the most recent covariance estimate and propagate it forward in time.
! For stable systems, $A_{k-1} \Sigma_{\tilde{x},k-1}^+ A_{k-1}^T$ is contractive, meaning that the covariance gets “smaller.” The state of stable systems always decays toward zero in the absence of input, or toward a known trajectory if $u_k \neq 0$. As time goes on, this term tells us that we tend to get more and more certain of the state estimate.
! On the other hand, $\Sigma_{\tilde{w}}$ adds to the covariance. Unmeasured inputs add uncertainty to our estimate because they perturb the trajectory away from the known trajectory based only on $u_k$.
KF step 1c: Predict system output.
■ We predict the system output as

    ŷ_k = E[ h(x_k, u_k, v_k) | Y_{k−1} ]
        = E[ C_k x_k + D_k u_k + v_k | Y_{k−1} ]
        = E[ C_k x_k | Y_{k−1} ] + E[ D_k u_k | Y_{k−1} ] + E[ v_k | Y_{k−1} ]
        = C_k x̂_k^− + D_k u_k,

  since v_k is zero-mean.
INTUITION: ŷ_k is our best guess of the system output, given only past
measurements.

! The best we can do is to predict the output given the output
  equation of the system model and our best guess of the system
  state at the present time.
KF step 2a: Estimator (Kalman) gain matrix.
■ To compute the Kalman gain matrix, we first need to compute several
  covariance matrices: L_k = Σ_x̃ỹ,k^− Σ_ỹ,k^{−1}. We first find Σ_ỹ,k:

    ỹ_k = y_k − ŷ_k
        = C_k x_k + D_k u_k + v_k − C_k x̂_k^− − D_k u_k
        = C_k x̃_k^− + v_k

    Σ_ỹ,k = E[ (C_k x̃_k^− + v_k)(C_k x̃_k^− + v_k)^T ]
          = E[ C_k x̃_k^− (x̃_k^−)^T C_k^T + v_k (x̃_k^−)^T C_k^T
               + C_k x̃_k^− v_k^T + v_k v_k^T ]
          = C_k Σ_x̃,k^− C_k^T + Σ_ṽ.

■ Again, the cross terms are zero since v_k is uncorrelated with x̃_k^−.
■ Similarly,

    E[ x̃_k^− ỹ_k^T ] = E[ x̃_k^− (C_k x̃_k^− + v_k)^T ]
                    = E[ x̃_k^− (x̃_k^−)^T C_k^T + x̃_k^− v_k^T ]
                    = Σ_x̃,k^− C_k^T.

■ Combining,

    L_k = Σ_x̃,k^− C_k^T [ C_k Σ_x̃,k^− C_k^T + Σ_ṽ ]^{−1}.

INTUITION: Note that the computation of Lk is the most critical aspect


of Kalman filtering that distinguishes it from a number of other
estimation methods.
! The whole reason for calculating covariance matrices is to be able
to update Lk .
! Lk is time-varying. It adapts to give the best update to the state
estimate based on present conditions.
! Recall that we use L_k in the equation x̂_k^+ = x̂_k^− + L_k (y_k − ŷ_k).
! The first component of L_k, Σ_x̃ỹ,k^−, indicates the relative need for
  correction to x̂_k and how well individual states within x̂_k are
  coupled to the measurements.
! We see this clearly in Σ_x̃ỹ,k^− = Σ_x̃,k^− C_k^T.
! Σ_x̃,k^− tells us about state uncertainty at the present time, which we
  hope to reduce as much as possible.
  ◆ A large entry in Σ_x̃,k^− means that the corresponding state is very
    uncertain and therefore would benefit from a large update.
  ◆ A small entry in Σ_x̃,k^− means that the corresponding state is very
    well known already and does not need as large an update.
! The C_k^T term gives the coupling between state and output.

◆ Entries that are zero indicate that a particular state has no direct
influence on a particular output and therefore an output
prediction error should not directly update that state.
◆ Entries that are large indicate that a particular state is highly
coupled to an output so has a large contribution to any
measured output prediction error; therefore, that state would
benefit from a large update.
! Σ_ỹ,k tells us how certain we are that the measurement is reliable.
  ◆ If Σ_ỹ,k is “large,” we want small, slow updates.
  ◆ If Σ_ỹ,k is “small,” we want big updates.
  ◆ This explains why we divide the Kalman gain matrix by Σ_ỹ,k.
! The form of Σ_ỹ,k can also be explained.
  ◆ The C_k Σ_x̃,k^− C_k^T part indicates how error in the state contributes to
    error in the output estimate.
  ◆ The Σ_ṽ term indicates the uncertainty in the sensor reading due
    to sensor noise.
  ◆ Since sensor noise is assumed independent of the state, the
    uncertainty in ỹ_k = y_k − ŷ_k adds the uncertainty in y_k to the
    uncertainty in ŷ_k.
KF step 2b: State estimate measurement update.
■ This step computes the a posteriori state estimate by updating the a
  priori estimate using the estimator gain and the output prediction
  error y_k − ŷ_k:

    x̂_k^+ = x̂_k^− + L_k (y_k − ŷ_k).
INTUITION: The variable yOk is what we expect the measurement to be,
based on our state estimate at the moment.
! Therefore, yk " yOk is what is unexpected or new in the
measurement.
! We call this term the innovation. The innovation can be due to a
bad system model, state error, or sensor noise.
! So, we want to use this new information to update the state, but
must be careful to weight it according to the value of the
information it contains.

! Lk is the optimal blending factor, as we have already discussed.

KF step 2c: Error covariance measurement update.


■ Finally, we update the error covariance matrix.

    Σ_x̃,k^+ = Σ_x̃,k^− − L_k Σ_ỹ,k L_k^T
            = Σ_x̃,k^− − L_k Σ_ỹ,k [ Σ_ỹ,k^{−1} ]^T (Σ_x̃ỹ,k^−)^T
            = Σ_x̃,k^− − L_k (Σ_x̃ỹ,k^−)^T
            = Σ_x̃,k^− − L_k C_k Σ_x̃,k^−
            = (I − L_k C_k) Σ_x̃,k^−,

  using the symmetry of Σ_ỹ,k and Σ_x̃,k^− in the middle steps.

INTUITION: The measurement update has decreased our uncertainty
in the state estimate.

! A covariance matrix is positive semi-definite, and L_k Σ_ỹ,k L_k^T is also
  a positive-semi-definite form, and we are subtracting this from the
  predicted-state covariance matrix; therefore, the resulting
  covariance is “lower” than the pre-measurement covariance.

COMMENT: If a measurement is missed for some reason, then skip steps
2a–c for that iteration. That is, set L_k = 0, x̂_k^+ = x̂_k^−, and
Σ_x̃,k^+ = Σ_x̃,k^−.

KEY POINT: Repeating from before, recall that the estimator output
comprises the state estimate x̂_k^+ and error covariance estimate Σ_x̃,k^+.

■ That is, we have high confidence that the truth lies within
  x̂_k^+ ± 3 √( diag( Σ_x̃,k^+ ) ).

3.8: Visualizing the Kalman filter

■ The Kalman filter equations naturally form the following recursion:

  Initialization: x̂_0^+, Σ_x̃,0^+; then, for each time sample k:

  Prediction (using measured u_{k−1}, u_k):
    1a: x̂_k^− = A_{k−1} x̂_{k−1}^+ + B_{k−1} u_{k−1}
    1b: Σ_x̃,k^− = A_{k−1} Σ_x̃,k−1^+ A_{k−1}^T + Σ_w̃
    1c: ŷ_k = C_k x̂_k^− + D_k u_k

  Correction (using measured y_k):
    2a: L_k = Σ_x̃,k^− C_k^T [ C_k Σ_x̃,k^− C_k^T + Σ_ṽ ]^{−1}
    2b: x̂_k^+ = x̂_k^− + L_k (y_k − ŷ_k)
    2c: Σ_x̃,k^+ = (I − L_k C_k) Σ_x̃,k^−

  Then increment k and repeat at the next time sample.
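The recursion above can be sketched as a single function. The following is a minimal Python illustration for the scalar case (my own sketch, not code from the notes; the matrix case simply replaces the scalar products with matrix operations):

```python
def kf_step(xhat, SigmaX, u_prev, u, y, A, B, C, D, SigmaW, SigmaV):
    """One linear Kalman-filter recursion for a scalar system.

    xhat, SigmaX : posterior state estimate and covariance from time k-1
    u_prev, u    : inputs at times k-1 and k;  y : measurement at time k
    Returns the posterior estimate and covariance at time k.
    """
    # Step 1a: state-estimate time update
    xhat = A * xhat + B * u_prev
    # Step 1b: error-covariance time update
    SigmaX = A * SigmaX * A + SigmaW
    # Step 1c: output prediction
    yhat = C * xhat + D * u
    # Step 2a: Kalman gain
    SigmaY = C * SigmaX * C + SigmaV
    L = SigmaX * C / SigmaY
    # Step 2b: state-estimate measurement update
    xhat = xhat + L * (y - yhat)
    # Step 2c: error-covariance measurement update
    SigmaX = (1 - L * C) * SigmaX
    return xhat, SigmaX
```

For example, with A = C = 1, Σ_w̃ = Σ_ṽ = 1, and prior covariance 1, one step propagates the covariance to 2 and then reduces it to 2/3 after the measurement.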

■ “Simple” to perform on a digital computer. However, note that our cell


models are nonlinear, so we cannot apply the (linear) Kalman filter to
them directly.
■ To demonstrate the KF steps, we'll develop and use a crude cell model

    z_{k+1} = 1 · z_k − (1/(3600 · Q)) i_k
    volt_k = 3.5 + 0.7 · z_k − R_0 i_k.

  [Figure: “OCV versus SOC for four cells at 25 °C” — open-circuit
  voltage (V) versus state of charge (%).]

■ Notice that we have linearized the OCV relationship, and omitted the
diffusion and hysteresis voltages.
■ This model still isn’t linear because of the “3.5” in the output equation,
so we debias the measurement via yk D voltk " 3:5 and use the model
1
´kC1 D 1 $ ´k " ik
3600 $ Q
yk D 0:7 # ´k " R0ik .
■ Define state xk ( ´k and input uk ( ik .
■ For the sake of example, we will use Q D 10000=3600 and R0 D 0:01.
■ This yields a state-space description with A D 1, B D "1 # 10"4,
C D 0:7, and D D "0:01. We also model †we D 10"5, and †vQ D 0:1.
■ We assume no initial uncertainty so xO 0C D 0:5 and †C
Q D 0.
x;0

Iteration 1: (let i_0 = 1, i_1 = 0.5, and v_1 = 3.85)

    x̂_k^− = A_{k−1} x̂_{k−1}^+ + B_{k−1} u_{k−1}:
        x̂_1^− = 1 × 0.5 − 10⁻⁴ × 1 = 0.4999
    Σ_x̃,k^− = A_{k−1} Σ_x̃,k−1^+ A_{k−1}^T + Σ_w̃:
        Σ_x̃,1^− = 1 × 0 × 1 + 10⁻⁵ = 10⁻⁵
    ŷ_k = C_k x̂_k^− + D_k u_k:
        ŷ_1 = 0.7 × 0.4999 − 0.01 × 0.5 = 0.34493
    L_k = Σ_x̃,k^− C_k^T [C_k Σ_x̃,k^− C_k^T + Σ_ṽ]^{−1}:
        L_1 = 10⁻⁵ × 0.7 × [0.7² × 10⁻⁵ + 0.1]^{−1} = 6.99966 × 10⁻⁵
    x̂_k^+ = x̂_k^− + L_k (y_k − ŷ_k), where y_1 = 3.85 − 3.5:
        x̂_1^+ = 0.4999 + 6.99966 × 10⁻⁵ × (0.35 − 0.34493) = 0.4999004
    Σ_x̃,k^+ = (I − L_k C_k) Σ_x̃,k^−:
        Σ_x̃,1^+ = (1 − 6.99966 × 10⁻⁵ × 0.7) × 10⁻⁵ = 9.9995 × 10⁻⁶

■ Output: ẑ = 0.4999 ± 3 √(9.9995 × 10⁻⁶) = 0.4999 ± 0.0094866.
Iteration 2: (let i_1 = 0.5, i_2 = 0.25, and v_2 = 3.84; same equations as
iteration 1)

    x̂_2^− = 0.4999004 − 10⁻⁴ × 0.5 = 0.49985
    Σ_x̃,2^− = 9.9995 × 10⁻⁶ + 10⁻⁵ = 1.99995 × 10⁻⁵
    ŷ_2 = 0.7 × 0.49985 − 0.01 × 0.25 = 0.347395
    L_2 = 1.99995 × 10⁻⁵ × 0.7 × [1.99995 × 10⁻⁵ × 0.7² + 0.1]^{−1} = 0.00013998
    x̂_2^+ = 0.49985 + 0.00013998 × (0.34 − 0.347395) = 0.499849,
        where y_2 = 3.84 − 3.5
    Σ_x̃,2^+ = (1 − 0.00013998 × 0.7) × 1.99995 × 10⁻⁵ = 1.99976 × 10⁻⁵

■ Output: ẑ = 0.4998 ± 3 √(1.99976 × 10⁻⁵) = 0.4998 ± 0.013416.
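The two hand iterations above are easy to verify numerically. Here is a short Python transcription of the same arithmetic (a sketch using the example's values from the notes):

```python
# Crude-cell-model parameters from the example
A, B, C, D = 1.0, -1e-4, 0.7, -0.01
SigmaW, SigmaV = 1e-5, 0.1

xhat, SigmaX = 0.5, 0.0          # initial estimate and covariance
currents = [1.0, 0.5, 0.25]      # i0, i1, i2
voltages = [3.85, 3.84]          # v1, v2

history = []
for k in range(2):
    xhat = A * xhat + B * currents[k]         # step 1a
    SigmaX = A * SigmaX * A + SigmaW          # step 1b
    yhat = C * xhat + D * currents[k + 1]     # step 1c
    SigmaY = C * SigmaX * C + SigmaV          # step 2a
    L = SigmaX * C / SigmaY
    xhat += L * ((voltages[k] - 3.5) - yhat)  # step 2b (debiased y)
    SigmaX = (1 - L * C) * SigmaX             # step 2c
    history.append((xhat, SigmaX))
```

Running this reproduces the values computed by hand above to the printed precision.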
■ Note that covariance (uncertainty) converges, but it can take time:

    Σ_x̃,1^− = 1 × 10⁻⁵,   Σ_x̃,2^− = 1.99995 × 10⁻⁵,   Σ_x̃,3^− = 2.99976 × 10⁻⁵
    Σ_x̃,0^+ = 0,   Σ_x̃,1^+ = 9.99951 × 10⁻⁶,   Σ_x̃,2^+ = 1.99976 × 10⁻⁵,
    Σ_x̃,3^+ = 2.99931 × 10⁻⁵

■ Covariance increases during propagation and is then reduced by


each measurement.
■ Covariance still oscillates at steady state between Σ_x̃,ss^− and Σ_x̃,ss^+.
■ Estimation-error bounds are ±3 √(Σ_x̃,k^+) for 99 % assurance of the
  accuracy of the estimate.
■ The plots below show a sample of the Kalman filter operating. We
shall look at how to write code to evaluate this example shortly.

[Figure: “Kalman filter in action” — true state, estimate, and error
versus iteration (left); error covariances Σ_x̃,k^− and Σ_x̃,k^+ versus
iteration, oscillating at steady state (right).]

■ Note that the Kalman filter does not perform especially well here,
  since Σ_ṽ is quite large.
■ However, these are the best-possible results, since the KF is the
  optimal MMSE estimator.

3.9: Matlab code for the Kalman filter steps

■ It is straightforward to convert the Kalman filter steps to MATLAB.
  However, great care must be taken to ensure that all “k” and “k + 1”
  indices (etc.) are kept synchronized.
■ In the example below, we simulate the true system and implement a
  Kalman filter on its input–output data.
■ To simulate the true system, we must be able to create non-zero-
  mean Gaussian noise with covariance Σ_Y. How to do so?
■ That is, we want Y ~ N(ȳ, Σ_Y), but randn.m returns X ~ N(0, I).
■ Try y = ȳ + A^T x, where A is square with the same dimension as Σ_Y
  and A^T A = Σ_Y. (A is the Cholesky decomposition of the
  positive-definite symmetric matrix Σ_Y.)
ybar = [1; 2];
covar = [1, 0.5; 0.5, 1];
A = chol(covar);
x = randn([2, 1]);
y = ybar + A'*x;

[Figure: scatter plot of 5000 samples with mean [1; 2] and covariance
[1, 0.5; 0.5, 1].]

■ When Σ_Y is non-positive definite (but also non-negative definite),
  use the LDL decomposition instead:

[L,D] = ldl(covar);
x = randn([2,5000]);
y = ybar(:,ones([1 5000])) + (L*sqrt(D))*x;
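The claim that y = ȳ + Aᵀx has covariance AᵀA = Σ_Y can be checked by hand for the 2 × 2 case. Below is a pure-Python sketch (function names are mine, not from the notes) that builds the same upper-triangular Cholesky factor MATLAB's chol() would return and draws one correlated sample:

```python
import math
import random

def chol2(s11, s12, s22):
    """Cholesky factor A (upper-triangular) of [[s11, s12], [s12, s22]],
    so that A' A equals the covariance, matching MATLAB's chol()."""
    a11 = math.sqrt(s11)
    a12 = s12 / a11
    a22 = math.sqrt(s22 - a12 * a12)
    return a11, a12, a22   # A = [[a11, a12], [0, a22]]

def sample(ybar, A, rng):
    """Draw y = ybar + A' x with x ~ N(0, I)."""
    x1, x2 = rng.gauss(0, 1), rng.gauss(0, 1)
    a11, a12, a22 = A
    # A' x = [a11*x1, a12*x1 + a22*x2]
    return ybar[0] + a11 * x1, ybar[1] + a12 * x1 + a22 * x2
```

Multiplying AᵀA back out gives a11² = s11, a11·a12 = s12, and a12² + a22² = s22, confirming the construction.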

■ The code below is in MATLAB. Coding a KF in another language is no


more challenging, except that you will need to write (or find) code to
do the matrix manipulations.
% Initialize simulation variables
SigmaW = 1; % Process noise covariance

SigmaV = 1; % Sensor noise covariance
A = 1; B = 1; C = 1; D = 0; % Plant definition matrices
maxIter = 40;

xtrue = 0; % Initialize true system initial state


xhat = 0; % Initialize Kalman filter initial estimate
SigmaX = 0; % Initialize Kalman filter covariance
u = 0; % Unknown initial driving input: assume zero

% Reserve storage for variables we want to plot/evaluate


xstore = zeros(length(xtrue),maxIter+1); xstore(:,1) = xtrue;
xhatstore = zeros(length(xhat),maxIter);
SigmaXstore = zeros(length(xhat)^2,maxIter);

for k = 1:maxIter,
% KF Step 1a: State estimate time update
xhat = A*xhat + B*u; % use prior value of "u"

% KF Step 1b: Error covariance time update


SigmaX = A*SigmaX*A' + SigmaW;

% [Implied operation of system in background, with


% input signal u, and output signal z]
u = 0.5*randn(1) + cos(k/pi); % for example...
w = chol(SigmaW)'*randn(length(xtrue));
v = chol(SigmaV)'*randn(length(C*xtrue));
ytrue = C*xtrue + D*u + v; % z is based on present x and u
xtrue = A*xtrue + B*u + w; % future x is based on present u

% KF Step 1c: Estimate system output


yhat = C*xhat + D*u;

% KF Step 2a: Compute Kalman gain matrix


SigmaY = C*SigmaX*C' + SigmaV;
L = SigmaX*C'/SigmaY;

% KF Step 2b: State estimate measurement update


xhat = xhat + L*(ytrue - yhat);

% KF Step 2c: Error covariance measurement update


SigmaX = SigmaX - L*SigmaY*L';

% [Store information for evaluation/plotting purposes]
xstore(:,k+1) = xtrue; xhatstore(:,k) = xhat;
SigmaXstore(:,k) = SigmaX(:);
end

figure(1); clf;
plot(0:maxIter-1,xstore(1:maxIter)','k-',...
0:maxIter-1,xhatstore','b--', ...
0:maxIter-1,xhatstore'+3*sqrt(SigmaXstore)','m-.',...
0:maxIter-1,xhatstore'-3*sqrt(SigmaXstore)','m-.'); grid;
legend('true','estimate','bounds');
title('Kalman filter in action');
xlabel('Iteration'); ylabel('State');

figure(2); clf;
plot(0:maxIter-1,xstore(1:maxIter)'-xhatstore','b-',...
0:maxIter-1,3*sqrt(SigmaXstore)','m--',...
0:maxIter-1,-3*sqrt(SigmaXstore)','m--'); grid;
legend('Error','bounds',0); title('Error with bounds');
xlabel('Iteration'); ylabel('Estimation Error');

■ The plots below show a sample of the Kalman filter operating for an
  example where

    x_k = x_{k−1} + u_k + w_k
    y_k = x_k + v_k,

  and Σ_w̃ = Σ_ṽ = 1.
[Figure: “Kalman filter in action” — true state, estimate, and bounds
versus iteration (left); “Error with bounds” — estimation error with
±3σ bounds versus iteration (right).]
3.10: Practical considerations

Improving numeric robustness


■ Within the filter, the covariance matrices Σ_x̃,k^− and Σ_x̃,k^+ must remain

1. Symmetric, and
2. Positive definite (all eigenvalues strictly positive).
■ It is possible for both conditions to be violated due to round-off errors
in a computer implementation.
■ We wish to find ways to limit or eliminate these problems.

Dealing with loss of symmetry


■ A covariance matrix can become unsymmetric or non-positive
  definite only through the time-update or measurement-update
  equations of the filter.
■ Consider first the time update equation:

    Σ_x̃,k^− = A Σ_x̃,k−1^+ A^T + Σ_w̃.
! Because we are adding two positive-definite quantities together,
the result must be positive definite.
! A “suitable implementation” of the products of the matrices will
avoid loss of symmetry in the final result.
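One common safeguard in such a “suitable implementation” is simply to re-symmetrize the covariance after each update. A hedged Python illustration for a small matrix stored as nested lists (my own sketch; the notes do not prescribe this exact form):

```python
def symmetrize(P):
    """Replace P by (P + P^T)/2 to remove round-off asymmetry.

    The result is exactly symmetric in floating point, because the
    (i, j) and (j, i) entries are computed from the same two numbers.
    """
    n = len(P)
    return [[0.5 * (P[i][j] + P[j][i]) for j in range(n)] for i in range(n)]
```

This does not change a matrix that is already symmetric, and it removes any tiny asymmetry introduced by the matrix products.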
■ Consider next the measurement update equation:

    Σ_x̃,k^+ = Σ_x̃,k^− − L_k C_k Σ_x̃,k^−.

■ Theoretically, the result is positive definite, but due to the subtraction


operation it is possible for round-off errors in an implementation to
result in a non-positive-definite solution.
■ A better solution is the Joseph-form covariance update,

    Σ_x̃,k^+ = [I − L_k C_k] Σ_x̃,k^− [I − L_k C_k]^T + L_k Σ_ṽ L_k^T.

! This may be proven correct via

    Σ_x̃,k^+ = [I − L_k C_k] Σ_x̃,k^− [I − L_k C_k]^T + L_k Σ_ṽ L_k^T
            = Σ_x̃,k^− − L_k C_k Σ_x̃,k^− − Σ_x̃,k^− C_k^T L_k^T
              + L_k C_k Σ_x̃,k^− C_k^T L_k^T + L_k Σ_ṽ L_k^T
            = Σ_x̃,k^− − L_k C_k Σ_x̃,k^− − Σ_x̃,k^− C_k^T L_k^T
              + L_k ( C_k Σ_x̃,k^− C_k^T + Σ_ṽ ) L_k^T
            = Σ_x̃,k^− − L_k C_k Σ_x̃,k^− − Σ_x̃,k^− C_k^T L_k^T + L_k Σ_ỹ,k L_k^T
            = Σ_x̃,k^− − L_k C_k Σ_x̃,k^− − Σ_x̃,k^− C_k^T L_k^T
              + Σ_x̃,k^− C_k^T Σ_ỹ,k^{−1} Σ_ỹ,k L_k^T
            = Σ_x̃,k^− − L_k C_k Σ_x̃,k^−.

■ Because the subtraction occurs in the “squared” term, this form


guarantees a positive definite result.
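For a scalar system, the algebraic equivalence of the Joseph form and (I − L_k C_k)Σ_x̃,k^− at the optimal gain, and the positivity of the Joseph form even at a perturbed gain, are easy to confirm numerically. A small Python check (a sketch, not the notes' code):

```python
def joseph_update(SigmaX, L, C, SigmaV):
    """Joseph-form covariance measurement update (scalar case)."""
    return (1 - L * C) * SigmaX * (1 - L * C) + L * SigmaV * L

def standard_update(SigmaX, L, C):
    """Standard covariance measurement update (scalar case)."""
    return (1 - L * C) * SigmaX

# With the optimal gain the two forms agree; the Joseph form stays
# positive even when L is (numerically) perturbed away from optimal.
SigmaX, C, SigmaV = 2.0, 0.7, 0.5
L = SigmaX * C / (C * SigmaX * C + SigmaV)
```

Evaluating `joseph_update(2.0, 5.0, 0.7, 0.5)` with a deliberately wrong gain still returns a positive value, whereas the standard form can go negative under round-off.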
■ If we still end up with a negative definite matrix (numerics), we can
replace it by the nearest symmetric positive semidefinite matrix.1
■ Omitting the details, the procedure is:

! Calculate the singular-value decomposition: Σ = U S V^T.
! Compute H = V S V^T.
! Replace Σ with (Σ + Σ^T + H + H^T)/4.

■ However, there are still improvements that may be made. We can:


! Reduce the computational requirements of the Joseph form,
! Increase the numeric precision of the computations.
! ECE5550 goes into more detail, incl. “square-root” Kalman filters.
¹ Nicholas J. Higham, “Computing a Nearest Symmetric Positive Semidefinite Matrix,” Linear Algebra and its Applications, vol. 103, pp. 103–118, 1988.

Measurement validation gating
■ Sometimes the systems for which we would like a state estimate have
sensors with intermittent faults.
■ We would like to detect faulty measurements and discard them (the
time update steps of the KF are still implemented, but the
measurement update steps are skipped).
■ The Kalman filter provides an elegant theoretical means to
  accomplish this goal. Note:

! The measurement covariance matrix is Σ_ỹ,k = C_k Σ_x̃,k^− C_k^T + Σ_ṽ;
! The measurement prediction itself is ŷ_k = C_k x̂_k^− + D_k u_k;
! The innovation is ỹ_k = y_k − ŷ_k.

■ A measurement validation gate can be set up around the
  measurement using the normalized estimation error squared (NEES)

    e_k² = ỹ_k^T Σ_ỹ,k^{−1} ỹ_k.
■ The NEES e_k² varies as a Chi-squared distribution with m degrees of
  freedom, where m is the dimension of y_k.
■ If e_k² is outside of the bounding values from the Chi-squared
  distribution for a desired confidence level, then the measurement is
  discarded. (See appendix for the Chi-squared test.)
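For a scalar measurement, the validation gate reduces to a one-line test against a χ² critical value; a Python sketch (my code; the 3.841 threshold is the standard 95 % point of the χ² distribution with one degree of freedom):

```python
def validate_measurement(y, yhat, SigmaY, threshold=3.841):
    """Return True if the innovation passes the chi-squared gate.

    For scalar y, the NEES is e2 = (y - yhat)^2 / SigmaY; 3.841 is the
    95% critical value of chi-squared with 1 degree of freedom.
    """
    e2 = (y - yhat) ** 2 / SigmaY
    return e2 <= threshold
```

A measurement failing the gate would be discarded: the KF time-update steps still run, but steps 2a–c are skipped for that sample.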
■ Note: If a number of measurements are discarded in a short time
  interval, it may be that the sensor has truly failed, or that the state
  estimate and its covariance have gotten “lost.”
■ It is sometimes helpful to “bump up” the covariance Σ_x̃,k^±, which
  simulates additional process noise, to help the Kalman filter to
  reacquire.
Nonlinear Kalman filters
■ Our cell models are nonlinear, so the standard Kalman-filter recursion
does not apply directly.
■ We now generalize to the nonlinear case, with system dynamics
  described as

    x_k = f(x_{k−1}, u_{k−1}, w_{k−1})
    y_k = h(x_k, u_k, v_k),
where uk is a known (deterministic/measured) input signal, wk is a
process-noise random input, and vk is a sensor-noise random input.
■ We note that f(·) and h(·) may be time-varying, but we generally omit
the time dependency from the notation for ease of understanding.
■ There are three basic generalizations to KF to estimate the state of a
nonlinear system
! Extended Kalman filter (EKF): Analytic linearization of the model at
each point in time. Problematic, but still popular.
! Sigma-point (Unscented) Kalman filter (SPKF/UKF): Statistical/
empirical linearization of the model at each point in time. Much
better than EKF, at same computational complexity.
! Particle filters: The most precise, but often thousands of times
more computations required than either EKF/SPKF. Does not
assume Gaussian distributions but approximates distributions via
histograms and uses Monte-Carlo integration techniques to find
probabilities, expectations, and uncertainties.
■ In this section, we present the EKF and SPKF. Particle filters are
beyond the scope of this course.
3.11: The extended Kalman filter (EKF)
■ The EKF makes two simplifying assumptions when adapting the
  general sequential inference equations to a nonlinear system:
! When computing estimates of the output of a nonlinear function,
  EKF assumes E[fn(x)] ≈ fn(E[x]), which is not true in general;
! When computing covariance estimates, EKF uses a Taylor-series
  expansion to linearize the system equations around the present
  operating point.
■ Here, we will show how to apply these approximations and
assumptions to derive the EKF equations from the general six steps.
EKF step 1a: State prediction time update.
■ The state prediction step is approximated as

    x̂_k^− = E[ f(x_{k−1}, u_{k−1}, w_{k−1}) | Y_{k−1} ]
          ≈ f(x̂_{k−1}^+, u_{k−1}, w̄_{k−1}),

  where w̄_{k−1} = E[w_{k−1}]. (Often, w̄_{k−1} = 0.)
■ That is, we approximate the expected value of the new state by
  assuming that it is reasonable to simply propagate x̂_{k−1}^+ and w̄_{k−1}
  through the state equation.
EKF step 1b: Error covariance time update.
■ The covariance prediction step is accomplished by first making an
  approximation for x̃_k^−:

    x̃_k^− = x_k − x̂_k^−
          = f(x_{k−1}, u_{k−1}, w_{k−1}) − f(x̂_{k−1}^+, u_{k−1}, w̄_{k−1}).

■ The first term is expanded as a Taylor series around the prior
  operating “point,” which is the set of values {x̂_{k−1}^+, u_{k−1}, w̄_{k−1}}:

    x_k ≈ f(x̂_{k−1}^+, u_{k−1}, w̄_{k−1})
          + (df(x_{k−1}, u_{k−1}, w_{k−1})/dx_{k−1})|_{x_{k−1} = x̂_{k−1}^+} (x_{k−1} − x̂_{k−1}^+)
          + (df(x_{k−1}, u_{k−1}, w_{k−1})/dw_{k−1})|_{w_{k−1} = w̄_{k−1}} (w_{k−1} − w̄_{k−1}),

  where the first derivative matrix is defined as Â_{k−1} and the second
  as B̂_{k−1}.
■ This gives x̃_k^− ≈ Â_{k−1} x̃_{k−1}^+ + B̂_{k−1} w̃_{k−1}.
■ Substituting this to find the prediction-error covariance:

    Σ_x̃,k^− = E[ (x̃_k^−)(x̃_k^−)^T ]
            ≈ Â_{k−1} Σ_x̃,k−1^+ Â_{k−1}^T + B̂_{k−1} Σ_w̃ B̂_{k−1}^T.

■ Note, by the chain rule of total differentials,

    df(x_{k−1}, u_{k−1}, w_{k−1})/dx_{k−1}
        = ∂f/∂x_{k−1} + (∂f/∂u_{k−1})(du_{k−1}/dx_{k−1}) + (∂f/∂w_{k−1})(dw_{k−1}/dx_{k−1})
        = ∂f/∂x_{k−1},

  since du_{k−1}/dx_{k−1} = 0 and dw_{k−1}/dx_{k−1} = 0.
■ Similarly,

    df(x_{k−1}, u_{k−1}, w_{k−1})/dw_{k−1} = ∂f(x_{k−1}, u_{k−1}, w_{k−1})/∂w_{k−1}.
■ The distinction between the total differential and the partial differential
is not critical at this point, but will be in the next section of notes when
we look at parameter estimation using extended Kalman filters.
EKF step 1c: Output estimate.
■ The system output is estimated to be

    ŷ_k = E[ h(x_k, u_k, v_k) | Y_{k−1} ]
        ≈ h(x̂_k^−, u_k, v̄_k),

  where v̄_k = E[v_k].
■ That is, it is assumed that propagating x̂_k^− and the mean sensor noise
  is the best approximation to estimating the output.
EKF step 2a: Estimator gain matrix.
■ The output prediction error may then be approximated as

    ỹ_k = y_k − ŷ_k = h(x_k, u_k, v_k) − h(x̂_k^−, u_k, v̄_k),

  using again a Taylor-series expansion on the first term:

    y_k ≈ h(x̂_k^−, u_k, v̄_k)
          + (dh(x_k, u_k, v_k)/dx_k)|_{x_k = x̂_k^−} (x_k − x̂_k^−)
          + (dh(x_k, u_k, v_k)/dv_k)|_{v_k = v̄_k} (v_k − v̄_k),

  where the first derivative matrix is defined as Ĉ_k and the second
  as D̂_k.
■ Note, much like we saw in Step 1b,

    dh(x_k, u_k, v_k)/dx_k = ∂h(x_k, u_k, v_k)/∂x_k
    dh(x_k, u_k, v_k)/dv_k = ∂h(x_k, u_k, v_k)/∂v_k.

■ From this, we can compute such necessary quantities as

    Σ_ỹ,k ≈ Ĉ_k Σ_x̃,k^− Ĉ_k^T + D̂_k Σ_ṽ D̂_k^T
    Σ_x̃ỹ,k^− ≈ E[ (x̃_k^−)(Ĉ_k x̃_k^− + D̂_k ṽ_k)^T ] = Σ_x̃,k^− Ĉ_k^T.

■ These terms may be combined to get the Kalman gain

    L_k = Σ_x̃,k^− Ĉ_k^T [ Ĉ_k Σ_x̃,k^− Ĉ_k^T + D̂_k Σ_ṽ D̂_k^T ]^{−1}.

EKF step 2b: State estimate measurement update.


■ This step computes the posterior state estimate by updating the
  prediction using the estimator gain and the innovation y_k − ŷ_k:

    x̂_k^+ = x̂_k^− + L_k (y_k − ŷ_k).

EKF step 2c: Error covariance measurement update.


■ Finally, the updated covariance is computed as

    Σ_x̃,k^+ = Σ_x̃,k^− − L_k Σ_ỹ,k L_k^T.

3.12: An EKF example, with code

■ Consider an example of EKF in action, with the following dynamics:

    x_{k+1} = f(x_k, u_k, w_k) = sqrt(5 + x_k) + w_k
    y_k = h(x_k, u_k, v_k) = x_k³ + v_k,

  with Σ_w̃ = 1 and Σ_ṽ = 2.
■ To implement the EKF, we must determine Â_k, B̂_k, Ĉ_k, and D̂_k:

    Â_k = ∂f(x_k, u_k, w_k)/∂x_k |_{x_k = x̂_k^+} = 1 / ( 2 sqrt(5 + x̂_k^+) )
    B̂_k = ∂f(x_k, u_k, w_k)/∂w_k |_{w_k = w̄_k} = 1
    Ĉ_k = ∂h(x_k, u_k, v_k)/∂x_k |_{x_k = x̂_k^−} = 3 (x̂_k^−)²
    D̂_k = ∂h(x_k, u_k, v_k)/∂v_k |_{v_k = v̄_k} = 1.
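The analytic Jacobians above can be spot-checked against finite differences. A small Python sketch for this example system (my code, not the notes'):

```python
import math

def f(x, w=0.0):
    """State equation: x_{k+1} = sqrt(5 + x_k) + w_k."""
    return math.sqrt(5.0 + x) + w

def h(x, v=0.0):
    """Output equation: y_k = x_k^3 + v_k."""
    return x ** 3 + v

def jacobians(xhat_plus, xhat_minus):
    """Analytic EKF Jacobians Ahat, Bhat, Chat, Dhat for this system."""
    Ahat = 0.5 / math.sqrt(5.0 + xhat_plus)
    Bhat = 1.0
    Chat = 3.0 * xhat_minus ** 2
    Dhat = 1.0
    return Ahat, Bhat, Chat, Dhat
```

A central difference (f(x + δ) − f(x − δ))/(2δ) agrees with Â_k to within O(δ²), which is a useful sanity check whenever hand-derived Jacobians are coded up.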

■ The following is some sample code to implement an EKF.

! Note that the steps for calculating the plant and the Â, B̂, Ĉ, and
  D̂ matrices will depend on the nonlinear system underlying the
  estimation problem.
% Initialize simulation variables
SigmaW = 1; % Process noise covariance
SigmaV = 2; % Sensor noise covariance
maxIter = 40;

xtrue = 2 + randn(1); % Initialize true system initial state

xhat = 2; % Initialize Kalman filter initial estimate
SigmaX = 1; % Initialize Kalman filter covariance
u = 0; % Unknown initial driving input: assume zero

% Reserve storage for variables we might want to plot/evaluate


xstore = zeros(maxIter+1,length(xtrue)); xstore(1,:) = xtrue;
xhatstore = zeros(maxIter,length(xhat));
SigmaXstore = zeros(maxIter,length(xhat)^2);

for k = 1:maxIter,
% EKF Step 0: Compute Ahat, Bhat
% Note: For this example, x(k+1) = sqrt(5+x(k)) + w(k)
Ahat = 0.5/sqrt(5+xhat); Bhat = 1;

% EKF Step 1a: State estimate time update


% Note: You need to insert your system's f(...) equation here
xhat = sqrt(5+xhat);

% EKF Step 1b: Error covariance time update


SigmaX = Ahat*SigmaX*Ahat' + Bhat*SigmaW*Bhat';

% [Implied operation of system in background, with


% input signal u, and output signal y]
w = chol(SigmaW)'*randn(1);
v = chol(SigmaV)'*randn(1);
ytrue = xtrue^3 + v; % y is based on present x and u
xtrue = sqrt(5+xtrue) + w; % future x is based on present u

% EKF Step 1c: Estimate system output


% Note: You need to insert your system's h(...) equation here
Chat = 3*xhat^2; Dhat = 1;
yhat = xhat^3;

% EKF Step 2a: Compute Kalman gain matrix


SigmaY = Chat*SigmaX*Chat' + Dhat*SigmaV*Dhat';
L = SigmaX*Chat'/SigmaY;

% EKF Step 2b: State estimate measurement update


xhat = xhat + L*(ytrue - yhat);
xhat = max(-5,xhat); % don't get square root of negative xhat!

% EKF Step 2c: Error covariance measurement update

SigmaX = SigmaX - L*SigmaY*L';
[~,S,V] = svd(SigmaX);
HH = V*S*V';
SigmaX = (SigmaX + SigmaX' + HH + HH')/4; % Help to keep robust

% [Store information for evaluation/plotting purposes]


xstore(k+1,:) = xtrue; xhatstore(k,:) = xhat;
SigmaXstore(k,:) = (SigmaX(:))';
end;

figure(1); clf; t = 0:maxIter-1;


plot(t,xstore(1:maxIter),'k-',t,xhatstore,'b--', ...
t,xhatstore+3*sqrt(SigmaXstore),'m-.',...
t,xhatstore-3*sqrt(SigmaXstore),'m-.'); grid;
legend('true','estimate','bounds');
xlabel('Iteration'); ylabel('State');
title('Extended Kalman filter in action');

figure(2); clf;
plot(t,xstore(1:maxIter)-xhatstore,'b-',t, ...
3*sqrt(SigmaXstore),'m--',t,-3*sqrt(SigmaXstore),'m--');
grid; legend('Error','bounds',0);
title('EKF Error with bounds');
xlabel('Iteration'); ylabel('Estimation error');

[Figure: “Extended Kalman filter in action” — truth, EKF estimate, and
EKF bounds versus iteration (left); “EKF error with bounds” — EKF error
with ±3σ bounds versus iteration (right).]

3.13: Preparing to implement EKF on ESC model

■ To implement the EKF for battery-cell state estimation using the ESC
  model, we must know the Â_k, B̂_k, Ĉ_k, and D̂_k matrices. We first
  examine the components of the state equation to find Â_k and B̂_k.
■ Suppose that the process noise models current-sensor measurement
  error. That is, the true cell current is i_k + w_k, but we measure i_k only.
■ Also assume we can simplify the model with η_k = 1, and let the
  adaptivity of the EKF handle the small error this assumption introduces.
■ Then, the state-of-charge equation can be written as

    z_{k+1} = z_k − (Δt/Q)(i_k + w_k).

■ The two derivatives that we need for this term are:

    ∂z_{k+1}/∂z_k |_{z_k = ẑ_k^+} = 1,  and  ∂z_{k+1}/∂w_k |_{w_k = w̄} = −Δt/Q,

  remembering that Q is measured in ampere-seconds.


■ If α_j = exp(−Δt/(R_j C_j)), then the resistor-currents state equation
  can be written as

    i_{R,k+1} = A_RC i_{R,k} + B_RC (i_k + w_k),

  where A_RC = diag(α_1, α_2, …) and B_RC = [1 − α_1, 1 − α_2, …]^T.
■ The two derivatives can be found to be

    ∂i_{R,k+1}/∂i_{R,k} |_{i_{R,k} = î_{R,k}^+} = A_RC,  and
    ∂i_{R,k+1}/∂w_k |_{w_k = w̄} = B_RC.
■ If A_H,k = exp( −| (i_k + w_k) γ Δt / Q | ), then the hysteresis state
  equation is

    h_{k+1} = A_H,k h_k + (A_H,k − 1) sgn(i_k + w_k).

■ Taking the partial with respect to the state and evaluating at the
  setpoint (noting that w_k = w̄ is a member of the setpoint),

    ∂h_{k+1}/∂h_k |_{h_k = ĥ_k^+, w_k = w̄} = exp( −| (i_k + w̄_k) γ Δt / Q | ) = Ā_H,k.

■ Next, we must find ∂h_{k+1}/∂w_k. However, the absolute-value and sign
  functions are not differentiable at i_k + w_k = 0. Ignoring this detail,

! If we assume that i_k + w_k > 0,

    ∂h_{k+1}/∂w_k = −| γΔt/Q | exp( −| γΔt/Q | |i_k + w_k| ) (1 + h_k).

! If we assume that i_k + w_k < 0,

    ∂h_{k+1}/∂w_k = −| γΔt/Q | exp( −| γΔt/Q | |i_k + w_k| ) (1 − h_k).

■ Overall, evaluating at the Taylor-series linearization setpoint,

    ∂h_{k+1}/∂w_k |_{h_k = ĥ_k^+, w_k = w̄} = −| γΔt/Q | Ā_H,k ( 1 + sgn(i_k + w̄) ĥ_k^+ ).

■ The zero-state hysteresis equation is defined as

    s_{k+1} = sgn(i_k + w_k),  if |i_k + w_k| > 0;
    s_{k+1} = s_k,             otherwise.

■ If we consider i_k + w_k = 0 to be a zero-probability event, then

    ∂s_{k+1}/∂s_k = 0,  and  ∂s_{k+1}/∂w_k = 0.
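Collecting these derivatives, the EKF's Â_k and B̂_k for the ESC state equation can be assembled in code. The Python sketch below uses my own variable names, one RC branch, and the notes' assumption η_k = 1; the γ value used in the test is hypothetical, chosen purely for illustration:

```python
import math

def esc_state_jacobians(ik, hk, dt, Q, gamma, RC, wbar=0.0):
    """Jacobians of the ESC state equation, state order [z, iR, h, s].

    Returns the diagonal of Ahat = d x_{k+1} / d x_k and the vector
    Bhat = d x_{k+1} / d w_k, evaluated at the linearization setpoint
    (hk is the hysteresis-state estimate, wbar the process-noise mean).
    """
    alpha = math.exp(-dt / RC)                          # RC-branch factor
    AH = math.exp(-abs((ik + wbar) * gamma * dt / Q))   # hysteresis factor
    sgn = (ik + wbar > 0) - (ik + wbar < 0)
    Ahat = [1.0, alpha, AH, 0.0]                        # states uncoupled, so A is diagonal
    Bhat = [-dt / Q,                                    # dz/dw
            1.0 - alpha,                                # diR/dw
            -abs(gamma * dt / Q) * AH * (1.0 + sgn * hk),  # dh/dw
            0.0]                                        # ds/dw
    return Ahat, Bhat
```

Each entry mirrors one of the partial derivatives derived above; in a full implementation these entries would populate the diagonal blocks of Â_k and the stacked rows of B̂_k.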

■ We now look at the components that determine Ĉ_k and D̂_k.
■ The ESC-model output equation is

    y_k = OCV(z_k) + M_0 s_k + M h_k − Σ_j R_j i_{R_j,k} − R_0 i_k + v_k,

  no longer considering i_k to have w_k noise added to it (this would add
  correlation between process noise and sensor noise).
■ We have

    ∂y_k/∂s_k = M_0,   ∂y_k/∂h_k = M,   ∂y_k/∂i_{R_j,k} = −R_j,   ∂y_k/∂v_k = 1.

■ We also require

    ∂y_k/∂z_k |_{z_k = ẑ_k^−} = ∂OCV(z_k)/∂z_k |_{z_k = ẑ_k^−},

  which can be approximated from open-circuit-voltage data. If SOC is
  a vector of evenly-spaced SOC points with corresponding OCV,
% Find dOCV/dz at SOC = z from {SOC,OCV} data
function dOCVz = dOCVfromSOC(SOC,OCV,z)
dZ = SOC(2) - SOC(1); % Find spacing of SOC vector
dUdZ = diff(OCV)/dZ; % Scaled forward finite difference
dOCV = ([dUdZ(1) dUdZ] + [dUdZ dUdZ(end)])/2; % Avg of fwd/bkwd diffs
dOCVz = interp1(SOC,dOCV,z); % Could make more efficient than this...
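The same finite-difference scheme as dOCVfromSOC.m can be written in Python for reference (a sketch of mine; note that a linear OCV curve should give back a constant slope):

```python
def docv_from_soc(soc, ocv, z):
    """Approximate dOCV/dSOC at SOC = z from tabulated {soc, ocv} data.

    Mirrors the MATLAB dOCVfromSOC above: scaled forward differences,
    averaged with their backward counterparts, then linearly interpolated.
    Assumes soc is evenly spaced and ascending.
    """
    dz = soc[1] - soc[0]
    dudz = [(ocv[i + 1] - ocv[i]) / dz for i in range(len(ocv) - 1)]
    fwd = [dudz[0]] + dudz           # pad front, like [dUdZ(1) dUdZ]
    bkd = dudz + [dudz[-1]]          # pad back, like [dUdZ dUdZ(end)]
    docv = [(a + b) / 2 for a, b in zip(fwd, bkd)]
    # linear interpolation of docv at z (like interp1)
    i = min(max(int((z - soc[0]) / dz), 0), len(soc) - 2)
    t = (z - soc[i]) / dz
    return (1 - t) * docv[i] + t * docv[i + 1]
```

With the crude model's linear OCV relationship, OCV(z) = 3.5 + 0.7z, this returns 0.7 at any interior SOC point, as expected.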

■ The figure [“Empirical OCV derivative for six cells” — ∂OCV/∂SOC (V)
  versus state of charge (%)] shows empirical OCV-derivative
  relationships for six different lithium-ion cells.
■ There is a little noise, which could be filtered (with a zero-phase
  filter!), but it's not really necessary to do so.

3.14: Implementing EKF on ESC model

■ We refactor the code to better represent a real BMS implementation.


■ There is an initialization routine (initEKF.m), called once at startup.
■ An update routine (iterEKF.m) is called every sample interval.
■ “Wrapper” code, which coordinates the entire simulation process, is:
load CellModel % loads "model" of cell

% Load cell-test data. Contains variable "DYNData" of which the field


% "script1" is of interest. It has sub-fields time, current, voltage, soc.
load('Cell_DYN_P25'); % loads data from cell test
T = 25; % Test temperature

time = DYNData.script1.time(:); deltat = time(2)-time(1);


time = time-time(1); % start time at 0
current = DYNData.script1.current(:); % discharge > 0; charge < 0.
voltage = DYNData.script1.voltage(:);
soc = DYNData.script1.soc(:);

% Reserve storage for computed results, for plotting


sochat = zeros(size(soc));
socbound = zeros(size(soc));

% Covariance values
SigmaX0 = diag([1e-3 1e-3 1e-2]); % uncertainty of initial state
SigmaV = 2e-1; % uncertainty of voltage sensor, output equation
SigmaW = 1e1; % uncertainty of current sensor, state equation

% Create ekfData structure and initialize variables using first


% voltage measurement and first temperature measurement
ekfData = initEKF(voltage(1),T,SigmaX0,SigmaV,SigmaW,model);

% Now, enter loop for remainder of time, where we update the EKF
% once per sample interval
hwait = waitbar(0,'Computing...');
for k = 1:length(voltage),
vk = voltage(k); % "measure" voltage
ik = current(k); % "measure" current

Tk = T; % "measure" temperature

% Update SOC (and other model states)


[sochat(k),socbound(k),ekfData] = iterEKF(vk,ik,Tk,deltat,ekfData);
% update waitbar periodically, but not too often (slow procedure)
if mod(k,1000)==0, waitbar(k/length(current),hwait); end;
end
close(hwait);

figure(1); clf; plot(time/60,100*sochat,time/60,100*soc); hold on


h = plot([time/60; NaN; time/60],...
[100*(sochat+socbound); NaN; 100*(sochat-socbound)]);
title('SOC estimation using EKF');
xlabel('Time (min)'); ylabel('SOC (%)');
legend('Estimate','Truth','Bounds'); grid on

fprintf('RMS SOC estimation error = %g%%\n',...


sqrt(mean((100*(soc-sochat)).^2)));

figure(2); clf; plot(time/60,100*(soc-sochat)); hold on


h = plot([time/60; NaN; time/60],[100*socbound; NaN; -100*socbound]);
title('SOC estimation errors using EKF');
xlabel('Time (min)'); ylabel('SOC error (%)'); ylim([-4 4]);
legend('Estimation error','Bounds'); grid on
print -deps2c EKF2.eps

ind = find(abs(soc-sochat)>socbound);
fprintf('Percent of time error outside bounds = %g%%\n',...
length(ind)/length(soc)*100);

■ The initialization code is:


function ekfData = initEKF(v0,T0,SigmaX0,SigmaV,SigmaW,model)

% Initial state description


ir0 = 0; ekfData.irInd = 1;
hk0 = 0; ekfData.hkInd = 2;
SOC0 = SOCfromOCVtemp(v0,T0,model); ekfData.zkInd = 3;
ekfData.xhat = [ir0 hk0 SOC0]'; % initial state

% Covariance values
ekfData.SigmaX = SigmaX0;

ekfData.SigmaV = SigmaV;
ekfData.SigmaW = SigmaW;
ekfData.Qbump = 5;

% previous value of current


ekfData.priorI = 0;
ekfData.signIk = 0;

% store model data structure too


ekfData.model = model;
end

■ The iteration code is:


function [zk,zkbnd,ekfData] = iterEKF(vk,ik,Tk,deltat,ekfData)
model = ekfData.model;
% Load the cell model parameters
Q = getParamESC('QParam',Tk,model);
G = getParamESC('GParam',Tk,model);
M = getParamESC('MParam',Tk,model);
M0 = getParamESC('M0Param',Tk,model);
RC = exp(-deltat./abs(getParamESC('RCParam',Tk,model)))';
R = getParamESC('RParam',Tk,model)';
R0 = getParamESC('R0Param',Tk,model);
eta = getParamESC('etaParam',Tk,model);
if ik<0, ik=ik*eta; end;

% Get data stored in ekfData structure


I = ekfData.priorI;
SigmaX = ekfData.SigmaX;
SigmaV = ekfData.SigmaV;
SigmaW = ekfData.SigmaW;
xhat = ekfData.xhat;
irInd = ekfData.irInd;
hkInd = ekfData.hkInd;
zkInd = ekfData.zkInd;
if abs(ik)>Q/100, ekfData.signIk = sign(ik); end;
signIk = ekfData.signIk;

% EKF Step 0: Compute Ahat[k-1], Bhat[k-1]


nx = length(xhat); Ahat = zeros(nx,nx); Bhat = zeros(nx,1);
Ahat(zkInd,zkInd) = 1; Bhat(zkInd) = -deltat/(3600*Q);

Ahat(irInd,irInd) = diag(RC); Bhat(irInd) = 1-RC(:);
Ah = exp(-abs(I*G*deltat/(3600*Q))); % hysteresis factor
Ahat(hkInd,hkInd) = Ah;
B = [Bhat, 0*Bhat];
Bhat(hkInd) = -abs(G*deltat/(3600*Q))*Ah*(1+sign(I)*xhat(hkInd));
B(hkInd,2) = Ah-1; % hysteresis drives toward -sgn(i): h[k+1] = Ah*h[k] + (Ah-1)*sgn(i)

% Step 1a: State estimate time update


xhat = Ahat*xhat + B*[I; sign(I)];

% Step 1b: Error covariance time update


% sigmaminus(k) = Ahat(k-1)*sigmaplus(k-1)*Ahat(k-1)' + ...
% Bhat(k-1)*sigmawtilde*Bhat(k-1)'
SigmaX = Ahat*SigmaX*Ahat' + Bhat*SigmaW*Bhat';

% Step 1c: Output estimate


yhat = OCVfromSOCtemp(xhat(zkInd),Tk,model) + M0*signIk + ...
M*xhat(hkInd) - R*xhat(irInd) - R0*ik;

% Step 2a: Estimator gain matrix


Chat = zeros(1,nx);
Chat(zkInd) = dOCVfromSOCtemp(xhat(zkInd),Tk,model);
Chat(hkInd) = M;
Chat(irInd) = -R;
Dhat = 1;
SigmaY = Chat*SigmaX*Chat' + Dhat*SigmaV*Dhat';
L = SigmaX*Chat'/SigmaY;

% Step 2b: State estimate measurement update


r = vk - yhat; % residual. Use to check for sensor errors...
if r^2 > 100*SigmaY, L(:)=0.0; end
xhat = xhat + L*r;
xhat(hkInd) = min(1,max(-1,xhat(hkInd))); % Help maintain robustness
xhat(zkInd) = min(1.05,max(-0.05,xhat(zkInd)));

% Step 2c: Error covariance measurement update


SigmaX = SigmaX - L*SigmaY*L';
% % Q-bump code
if r^2 > 4*SigmaY, % bad voltage estimate by 2 std. devs, bump Q
fprintf('Bumping SigmaX\n');
SigmaX(zkInd,zkInd) = SigmaX(zkInd,zkInd)*ekfData.Qbump;
end

[~,S,V] = svd(SigmaX);
HH = V*S*V';
SigmaX = (SigmaX + SigmaX' + HH + HH')/4; % Help maintain robustness

% Save data in ekfData structure for next time...


ekfData.priorI = ik;
ekfData.SigmaX = SigmaX;
ekfData.xhat = xhat;
zk = xhat(zkInd);
zkbnd = 3*sqrt(SigmaX(zkInd,zkInd));
end

■ For the following example, the EKF was executed for a test having
dynamic profiles from 100 % SOC down to around 10 % SOC.

! RMS SOC estimation error = 1.20 %.

! Percent of time error outside bounds = 23.79 %.


(Figures: "SOC estimation using EKF" plots SOC (%) against time (min), showing estimate, truth, and bounds; "SOC estimation errors using EKF" plots SOC error (%) against time (min), showing estimation error and bounds.)

3.15: Problems with EKF, improved with sigma-point methods

■ The EKF is the best known and most used nonlinear Kalman filter.
■ However, it has serious flaws that can be remedied fairly easily.

ISSUE: How input mean and covariance are propagated through a static
nonlinear function to create output mean and covariance estimates.
■ Recall that the EKF, when computing mean estimates in Steps 1a
and 1c, makes the simplification E[fn(x)] ≈ fn(E[x]).

! This is not true in general, and not necessarily even close to true
(depending on "how nonlinear" the function fn(·) is).
■ Also, in EKF Steps 1b and 2a, a Taylor-series expansion is performed
as part of the calculation of output-variable covariance.

! Nonlinear terms are dropped, resulting in a loss of accuracy.

■ A simple one-dimensional example illustrates these two effects.
Consider the figure ("EKF mean, variance approximation"), which plots
function output value against function input value and shows the
nonlinear function, the EKF function approximation, the input pdf, the
true output pdf, and the EKF output pdf.
■ The nonlinear function is drawn, and the input random-variable
PDF is shown on the horizontal axis, with mean 1.05.
1.2 1.4 1.6

■ The straight dotted line is the linearized approximation used by the


EKF to find the output mean and covariance.
■ The EKF-approximated PDF is compared to a Gaussian PDF having the
same mean and variance as the true data on the vertical axis.
■ We notice significant differences between the means and variances:
EKF approach is not producing an accurate estimate of either.
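The mean error is easy to reproduce numerically. In the sketch below (Python; f(x) = x² is an arbitrary stand-in nonlinearity, not the function in the figure), the EKF-style mean fn(E[x]) disagrees visibly with the true mean E[fn(x)]:

```python
import random

random.seed(0)
xbar, sigma = 1.05, 0.4

def f(x):
    return x * x            # arbitrary nonlinearity for illustration

# "EKF-style" mean: pass the input mean through the function
mean_ekf = f(xbar)          # = 1.1025

# True mean via Monte Carlo; analytically E[x^2] = xbar^2 + sigma^2 = 1.2625
samples = [f(random.gauss(xbar, sigma)) for _ in range(200_000)]
mean_true = sum(samples) / len(samples)

assert abs(mean_true - (xbar**2 + sigma**2)) < 0.01
assert mean_true > mean_ekf     # the linearized mean is biased low here
```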
■ For a two-dimensional example, consider the following figure.
(Figure: "Input data scatter" and "Output data scatter" panels, connected by the nonlinear function; each panel shows a 3σ contour, and the output panel compares the true 3σ contour with the EKF 3σ contour.)

! Left frame shows a cloud of Gaussian-distributed random points


used as input to this function, and
! Right frame shows the transformed set of output points.

■ The actual 95 % confidence interval (indicative of a contour of the
Gaussian PDF describing the output covariance and mean) is
compared to the EKF-estimated confidence interval.

! Again, EKF is very far from the truth.

■ We can improve on mean and covariance propagation through the


state and output equations using a “sigma-point” approach.

Approximating statistics with sigma points

■ We now look at a different approach to characterizing the mean and


covariance of the output of a nonlinear function.
■ We avoid Taylor-series expansion; instead, a number of function
evaluations are performed whose results are used to compute
estimated mean and covariance matrices.
■ This has several advantages:

1. Derivatives do not need to be computed (which is one of the most


error-prone steps when implementing EKF), also implying
2. The original functions do not need to be differentiable, and
3. Better covariance approximations are usually achieved, relative to
EKF, allowing for better state estimation,
4. All with comparable computational complexity to EKF.
■ A set of sigma points X is chosen so that the (possibly weighted)
mean and covariance of the points exactly matches the mean xN and
covariance †xQ of the a priori random variable being modeled.
■ These points are then passed through the nonlinear function,
resulting in a transformed set of points Y.
■ The a posteriori mean yN and covariance †yQ are then approximated by
the mean and covariance of these transformed points Y.
■ Note that the sigma points comprise a fixed small number of vectors
that are calculated deterministically, unlike particle-filter methods.
■ Specifically, if the input RV x has dimension L, mean x̄, and covariance
Σ_{x̃}, then p + 1 = 2L + 1 sigma points are generated as the set
  X = { x̄, x̄ + γ√Σ_{x̃}, x̄ − γ√Σ_{x̃} },
with members of X indexed from 0 to p, and where the matrix square
root R = √Σ computes a result such that Σ = RRᵀ.

! Usually, the efficient Cholesky decomposition is used, resulting in


lower-triangular R. (Take care: MATLAB, by default, returns an
upper-triangular matrix that must be transposed.)
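For example (a NumPy sketch; unlike MATLAB's `chol`, `numpy.linalg.cholesky` already returns the lower-triangular factor):

```python
import numpy as np

sigma = np.array([[4.0, 1.0],
                  [1.0, 3.0]])        # an example covariance matrix

R = np.linalg.cholesky(sigma)         # lower-triangular matrix square root
assert np.allclose(R @ R.T, sigma)    # satisfies Sigma = R R'
assert np.allclose(R, np.tril(R))     # R is lower triangular
```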
■ The weighted mean and covariance of X are equal to the original
mean and covariance of x for some {γ, α^{(m)}, α^{(c)}} if we compute
  x̄ = Σ_{i=0}^{p} α^{(m)}_i X_i   and   Σ_{x̃} = Σ_{i=0}^{p} α^{(c)}_i (X_i − x̄)(X_i − x̄)ᵀ,
where X_i is the ith member of X, and both α^{(m)}_i and α^{(c)}_i are real
scalars; the α^{(m)}_i and the α^{(c)}_i must each sum to one.

! The various sigma-point methods differ only in the choices taken
for these weighting constants.
! Values used by the Unscented Kalman Filter (UKF) and the
Central Difference Kalman Filter (CDKF):

  Method   γ         α_0^{(m)}    α_k^{(m)}     α_0^{(c)}                α_k^{(c)}
  UKF      √(L+λ)    λ/(L+λ)      1/(2(L+λ))    λ/(L+λ) + (1 − α² + β)   1/(2(L+λ))
  CDKF     h         (h²−L)/h²    1/(2h²)       (h²−L)/h²                1/(2h²)

  λ = α²(L+κ) − L is a scaling parameter, with 10⁻² ≤ α ≤ 1. Note that this α is different from α^{(m)} and α^{(c)}. κ is either 0 or 3 − L.
  β incorporates prior information. For Gaussian RVs, β = 2. h may take any positive value. For Gaussian RVs, h = √3.

! UKF and CDKF are derived quite differently, but the final methods
are essentially identical.
! CDKF has only one “tuning parameter” h, so implementation is
simpler. It also has marginally higher theoretic accuracy than UKF.
■ Output sigma points are computed: Y_i = f(X_i). Then, the output
mean and covariance are computed as well:
  ȳ = Σ_{i=0}^{p} α^{(m)}_i Y_i   and   Σ_{ỹ} = Σ_{i=0}^{p} α^{(c)}_i (Y_i − ȳ)(Y_i − ȳ)ᵀ.
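A short sketch (Python/NumPy, using the CDKF weights from the table with h = √3) confirms that the weighted sigma points reproduce a Gaussian input's mean and covariance exactly:

```python
import numpy as np

def make_sigma_points(xbar, sigma, h=np.sqrt(3.0)):
    """Generate 2L+1 CDKF sigma points and their weights (same for mean/cov)."""
    L = len(xbar)
    R = np.linalg.cholesky(sigma)              # lower-triangular square root
    X = np.column_stack([xbar,
                         xbar[:, None] + h * R,
                         xbar[:, None] - h * R])
    alpha = np.full(2 * L + 1, 1.0 / (2 * h * h))
    alpha[0] = (h * h - L) / (h * h)
    return X, alpha

xbar = np.array([1.0, -2.0])
sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
X, a = make_sigma_points(xbar, sigma)

xrec = X @ a                                   # weighted mean
err = X - xrec[:, None]
sigrec = (err * a) @ err.T                     # weighted covariance
assert np.allclose(xrec, xbar)
assert np.allclose(sigrec, sigma)
```

The recovery is exact (to floating-point precision) because the points are placed symmetrically along the columns of the Cholesky factor.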

■ The diagram illustrates the overall process, with the sets X and Y
stored compactly with each set member a column in a matrix: a "make
sigma points" block maps x̄ and Σ_{x̃} to X = [x̄, x̄ + γ√Σ_{x̃}, x̄ − γ√Σ_{x̃}];
the nonlinear function Y_i = f(X_i) maps the ith column of X to the ith
column of Y; and a "compute statistics" block maps Y to ȳ and Σ_{ỹ}.
■ Before introducing the SPKF algorithm, we re-examine the prior
1D/2D examples using sigma-point methods. (The figure "EKF vs. SPKF:
mean, variance approximation" repeats the earlier 1D plot, adding the
SPKF output pdf to the nonlinear function, EKF function approximation,
input pdf, true output pdf, and EKF output pdf.)
■ In the 1D example, three input sigma points are needed and map to
the three output sigma points shown.
■ The mean and variance of the sigma-point method are shown as a
dashed-line PDF and closely match the true mean and variance.
■ For the 2D example, five sigma points represent the input
random-variable PDF (on left).

(Figure: "Input data scatter" and "Output data scatter" panels, connected by the nonlinear function; the output panel compares the true, SPKF, and EKF 3σ contours.)

■ These five points are transformed to five output points (frame (b)).
■ We see that the mean and covariance of the output sigma points
(dashed ellipse) closely match the true mean and covariance.
■ Will the sigma-point method always be so much better than EKF?

! The answer depends on the degree of nonlinearity of the state and
output equations: the more nonlinear they are, the better SPKF should
perform relative to EKF.

3.16: The SPKF Steps

■ We now apply the sigma-point approach of propagating statistics


through a nonlinear function to the state-estimation problem.
■ These sigma-points must jointly model all randomness: uncertainty of
the state, process noise, and sensor noise.
■ So we first define an augmented random vector xka that combines
these random factors at time index k.
■ This augmented vector is used in the estimation process as
described below.

SPKF step 1a: State estimate time update.

■ First, form the augmented a posteriori state-estimate vector for the
previous time interval: x̂ᵃ⁺_{k−1} = [(x̂⁺_{k−1})ᵀ, w̄, v̄]ᵀ, and the augmented a
posteriori covariance estimate: Σᵃ⁺_{x̃,k−1} = diag(Σ⁺_{x̃,k−1}, Σ_{w̃}, Σ_{ṽ}).

■ These factors are used to generate the p + 1 augmented sigma points
  Xᵃ⁺_{k−1} = { x̂ᵃ⁺_{k−1}, x̂ᵃ⁺_{k−1} + γ√Σᵃ⁺_{x̃,k−1}, x̂ᵃ⁺_{k−1} − γ√Σᵃ⁺_{x̃,k−1} }.

■ Can be organized in convenient matrix form: a "make augmented
sigma points" block maps x̂⁺_{k−1}, Σ⁺_{x̃,k−1}, w̄, Σ_{w̃}, v̄, and Σ_{ṽ} to the matrix
Xᵃ⁺_{k−1} = [x̂ᵃ⁺_{k−1}, x̂ᵃ⁺_{k−1} + γ√Σᵃ⁺_{x̃,k−1}, x̂ᵃ⁺_{k−1} − γ√Σᵃ⁺_{x̃,k−1}],
whose ith column is the ith sigma point.


■ Split the augmented sigma points Xᵃ⁺_{k−1} into a state portion X^{x,+}_{k−1},
a process-noise portion X^{w,+}_{k−1}, and a sensor-noise portion X^{v,+}_k.

■ Evaluate the state equation using all pairs of X^{x,+}_{k−1,i} and X^{w,+}_{k−1,i}
(where subscript i denotes that the ith vector is being extracted from
the original set), yielding the a priori sigma points X^{x,−}_{k,i}.
■ That is, compute
  X^{x,−}_{k,i} = f(X^{x,+}_{k−1,i}, u_{k−1}, X^{w,+}_{k−1,i}).

■ Finally, the a priori state estimate is computed as
  x̂⁻_k = E[f(x_{k−1}, u_{k−1}, w_{k−1}) | Y_{k−1}] ≈ Σ_{i=0}^{p} α^{(m)}_i f(X^{x,+}_{k−1,i}, u_{k−1}, X^{w,+}_{k−1,i}) = Σ_{i=0}^{p} α^{(m)}_i X^{x,−}_{k,i}.

■ Can be computed with a simple matrix-multiply operation:
  x̂⁻_k = X^{x,−}_k [α^{(m)}_0, α^{(m)}_1, …, α^{(m)}_p]ᵀ.


SPKF step 1b: Error covariance time update.

■ Using the a priori sigma points from step 1a, the a priori covariance
estimate is computed as
  Σ⁻_{x̃,k} = Σ_{i=0}^{p} α^{(c)}_i (X^{x,−}_{k,i} − x̂⁻_k)(X^{x,−}_{k,i} − x̂⁻_k)ᵀ.

SPKF step 1c: Estimate system output y_k.

■ The output y_k is estimated by evaluating the model output
equation using the sigma points describing the state and sensor
noise.
■ First, we compute the points
  Y_{k,i} = h(X^{x,−}_{k,i}, u_k, X^{v,+}_{k−1,i}).
■ The output estimate is then
  ŷ_k = E[h(x_k, u_k, v_k) | Y_{k−1}] ≈ Σ_{i=0}^{p} α^{(m)}_i h(X^{x,−}_{k,i}, u_k, X^{v,+}_{k−1,i}) = Σ_{i=0}^{p} α^{(m)}_i Y_{k,i}.

■ This can be computed with a simple matrix multiplication, as we did
when calculating x̂⁻_k at the end of step 1a.

SPKF step 2a: Estimator gain matrix L_k.

■ To compute the estimator gain matrix, we must first compute the
required covariance matrices:
  Σ_{ỹ,k} = Σ_{i=0}^{p} α^{(c)}_i (Y_{k,i} − ŷ_k)(Y_{k,i} − ŷ_k)ᵀ
  Σ⁻_{x̃ỹ,k} = Σ_{i=0}^{p} α^{(c)}_i (X^{x,−}_{k,i} − x̂⁻_k)(Y_{k,i} − ŷ_k)ᵀ.

■ These depend on the sigma-point matrices X^{x,−}_k and Y_k, already
computed in steps 1b and 1c, as well as x̂⁻_k and ŷ_k, already computed
in steps 1a and 1c.
■ The summations can be performed using matrix multiplies, as we did
in step 1b.
■ Then, we simply compute L_k = Σ⁻_{x̃ỹ,k} Σ⁻¹_{ỹ,k}.

SPKF step 2b: State estimate measurement update.

■ The state estimate is computed as
  x̂⁺_k = x̂⁻_k + L_k (y_k − ŷ_k).

SPKF step 2c: Error covariance measurement update.

■ The final step is calculated directly from the optimal formulation:
  Σ⁺_{x̃,k} = Σ⁻_{x̃,k} − L_k Σ_{ỹ,k} L_kᵀ.
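The six steps above can be collected into a few lines of code. The following is an illustrative Python/NumPy translation (not the course's MATLAB implementation, which appears in the next section) of one SPKF iteration for the scalar system x_{k+1} = √(5 + x_k) + w_k, y_k = x_k³ + v_k, using CDKF weights with h = √3:

```python
import numpy as np

SigmaW, SigmaV = 1.0, 2.0             # process- and sensor-noise covariances
h = np.sqrt(3.0)                      # CDKF tuning factor
Na = 3                                # augmented state: [x; w; v]
Wm = np.full(2 * Na + 1, 1.0 / (2 * h * h))
Wm[0] = (h * h - Na) / (h * h)        # center-point weight
Wc = Wm.copy()                        # CDKF: same weights for mean and cov

def spkf_iter(xhat, SigmaX, ymeas):
    # Step 1a: augmented sigma points, then state time update
    xa = np.array([xhat, 0.0, 0.0])
    Sa = np.linalg.cholesky(np.diag([SigmaX, SigmaW, SigmaV]))
    Xa = xa[:, None] + h * np.column_stack([np.zeros(Na), Sa, -Sa])
    Xx = np.sqrt(5.0 + Xa[0]) + Xa[1]        # propagate each sigma point
    xminus = Xx @ Wm
    # Step 1b: error covariance time update
    Xs = Xx - xminus
    Pminus = (Xs * Wc) @ Xs
    # Step 1c: output estimate
    Y = Xx**3 + Xa[2]
    yhat = Y @ Wm
    # Step 2a: estimator gain
    Ys = Y - yhat
    SigmaY = (Ys * Wc) @ Ys
    L = ((Xs * Wc) @ Ys) / SigmaY
    # Steps 2b-2c: measurement updates of state and covariance
    xplus = xminus + L * (ymeas - yhat)
    Pplus = Pminus - L * SigmaY * L
    return xplus, Pplus

xhat, P = spkf_iter(2.0, 1.0, 8.0)    # one update with measurement y = 8
assert P > 0 and np.isfinite(xhat)
```

Each call performs steps 1a through 2c for a single measurement sample.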

3.17: An SPKF example, with code

■ Consider the same example used to illustrate the EKF:
  x_{k+1} = √(5 + x_k) + w_k
  y_k = x_k³ + v_k,
with Σ_{w̃} = 1 and Σ_{ṽ} = 2.

■ The following is some sample code to implement SPKF
% Define size of variables in model
Nx = 1; % state = 1x1 scalar
Nxa = 3; % augmented state has also w(k) and v(k) contributions
Ny = 1; % output = 1x1 scalar

% Some constants for the SPKF algorithm. Use standard values for
% cases with Gaussian noises. (These are the weighting matrices
% comprising the values of alpha(c) and alpha(m) organized in a
% way to make later computation efficient).
h = sqrt(3);
Wmx(1) = (h*h-Nxa)/(h*h); Wmx(2) = 1/(2*h*h); Wcx=Wmx;
Wmxy = [Wmx(1) repmat(Wmx(2),[1 2*Nxa])]';

% Initialize simulation variables


SigmaW = 1; % Process noise covariance
SigmaV = 2; % Sensor noise covariance
maxIter = 40;
xtrue = 2 + randn(1); % Initialize true system initial state
xhat = 2; % Initialize Kalman filter initial estimate
SigmaX = 1; % Initialize Kalman filter covariance

% Reserve storage for variables we might want to plot/evaluate


xstore = zeros(maxIter+1,length(xtrue)); xstore(1,:) = xtrue;
xhatstore = zeros(maxIter,length(xhat));
SigmaXstore = zeros(maxIter,length(xhat)^2);

for k = 1:maxIter,
% SPKF Step 1a: State estimate time update
% 1a-i: Calculate augmented state estimate, including ...
xhata = [xhat; 0; 0]; % process and sensor noise mean

% 1a-ii: Get desired Cholesky factor
Sigmaxa = blkdiag(SigmaX,SigmaW,SigmaV);
sSigmaxa = chol(Sigmaxa,'lower');
% 1a-iii: Calculate sigma points (strange indexing of xhat to avoid
% "repmat" call, which is very inefficient in Matlab)
X = xhata(:,ones([1 2*Nxa+1])) + h*[zeros([Nxa 1]), ...
sSigmaxa, -sSigmaxa];
% 1a-iv: Calculate state equation for every element
% Hard-code equation here for efficiency
Xx = sqrt(5+X(1,:)) + X(2,:);
xhat = Xx*Wmxy;

% SPKF Step 1b: Covariance of prediction


Xs = (Xx(:,2:end) - xhat(:,ones([1 2*Nxa])))*sqrt(Wcx(2));
Xs1 = Xx(:,1) - xhat;
SigmaX = Xs*Xs' + Wcx(1)*Xs1*Xs1';

% [Implied operation of system in background, with


% input signal u, and output signal y]
w = chol(SigmaW)'*randn(1);
v = chol(SigmaV)'*randn(1);
ytrue = xtrue^3 + v; % y is based on present x and u
xtrue = sqrt(5+xtrue) + w; % future x is based on present u

% SPKF Step 1c: Create output estimate


% Hard-code equation here for efficiency
Y = Xx.^3 + X(3,:);
yhat = Y*Wmxy;

% SPKF Step 2a: Estimator gain matrix


Ys = (Y(:,2:end) - yhat*ones([1 2*Nxa])) * sqrt(Wcx(2));
Ys1 = Y(:,1) - yhat;
SigmaXY = Xs*Ys' + Wcx(1)*Xs1*Ys1';
SigmaY = Ys*Ys' + Wcx(1)*Ys1*Ys1';
Lx= SigmaXY/SigmaY;

% SPKF Step 2b: Measurement state update


xhat = xhat + Lx*(ytrue-yhat); % update prediction to estimate

% SPKF Step 2c: Measurement covariance update


SigmaX = SigmaX - Lx*SigmaY*Lx';


% [Store information for evaluation/plotting purposes]


xstore(k+1,:) = xtrue;
xhatstore(k,:) = xhat;
SigmaXstore(k,:) = (SigmaX(:))';
end

figure(1); clf;
plot(0:maxIter-1,xstore(1:maxIter),'k-',...
0:maxIter-1,xhatstore,'b--', ...
0:maxIter-1,xhatstore+3*sqrt(SigmaXstore),'m-.',...
0:maxIter-1,xhatstore-3*sqrt(SigmaXstore),'m-.'); grid;
legend('true','estimate','bounds');
xlabel('Iteration'); ylabel('State');
title('Sigma-point Kalman filter in action');

figure(2); clf;
plot(0:maxIter-1,xstore(1:maxIter)-xhatstore,'-',0:maxIter-1, ...
3*sqrt(SigmaXstore),'--',0:maxIter-1,-3*sqrt(SigmaXstore),'--');
grid; legend('Error','bounds',0);
title('SPKF Error with bounds');
xlabel('Iteration'); ylabel('Estimation Error');

(Figures: "Sigma-point Kalman filter in action" plots state against iteration, showing truth, SPKF estimate, and SPKF bounds; "SPKF error with bounds" plots estimation error against iteration, comparing SPKF error and bounds with EKF error and bounds.)

■ Note the improved estimation accuracy, and greatly improved error


bounds estimates.

3.18: Implementing SPKF on ESC model

■ We refactor the SPKF code much like we did for EKF. The “wrapper”
code is:
load CellModel % loads "model" of cell

% Load cell-test data. Contains variable "DYNData" of which the field


% "script1" is of interest. It has sub-fields time, current, voltage, soc.
load('Cell_DYN_P25'); % loads data
T = 25; % Test temperature

time = DYNData.script1.time(:); deltat = time(2)-time(1);


time = time-time(1); % start time at 0
current = DYNData.script1.current(:); % discharge > 0; charge < 0.
voltage = DYNData.script1.voltage(:);
soc = DYNData.script1.soc(:);

% Reserve storage for computed results, for plotting


sochat = zeros(size(soc));
socbound = zeros(size(soc));

% Covariance values
SigmaX0 = diag([1e-3 1e-3 1e-2]); % uncertainty of initial state
SigmaV = 2e-1; % Uncertainty of voltage sensor, output equation
SigmaW = 1e1; % Uncertainty of current sensor, state equation

% Create spkfData structure and initialize variables using first


% voltage measurement and first temperature measurement
spkfData = initSPKF(voltage(1),T,SigmaX0,SigmaV,SigmaW,model);

% Now, enter loop for remainder of time, where we update the SPKF
% once per sample interval
hwait = waitbar(0,'Computing...');
for k = 1:length(voltage),
vk = voltage(k); % "measure" voltage
ik = current(k); % "measure" current
Tk = T; % "measure" temperature

% Update SOC (and other model states)


[sochat(k),socbound(k),spkfData] = iterSPKF(vk,ik,Tk,deltat,spkfData);
% update waitbar periodically, but not too often (slow procedure)

if mod(k,1000)==0, waitbar(k/length(current),hwait); end;
end
close(hwait);

figure(1); clf; plot(time/60,100*sochat,time/60,100*soc); hold on


h = plot([time/60; NaN; time/60],...
[100*(sochat+socbound); NaN; 100*(sochat-socbound)]);
title('SOC estimation using SPKF');
xlabel('Time (min)'); ylabel('SOC (%)');
legend('Estimate','Truth','Bounds'); grid on

fprintf('RMS SOC estimation error = %g%%\n',...


sqrt(mean((100*(soc-sochat)).^2)));

figure(2); clf; plot(time/60,100*(soc-sochat)); hold on


h = plot([time/60; NaN; time/60],[100*socbound; NaN; -100*socbound]);
title('SOC estimation errors using SPKF');
xlabel('Time (min)'); ylabel('SOC error (%)'); ylim([-4 4]);
legend('Error','Bounds'); grid on

ind = find(abs(soc-sochat)>socbound);
fprintf('Percent of time error outside bounds = %g%%\n',...
length(ind)/length(soc)*100);

■ The SPKF initialization code is


function spkfData = initSPKF(v0,T0,SigmaX0,SigmaV,SigmaW,model)

% Initial state description


ir0 = 0; spkfData.irInd = 1;
hk0 = 0; spkfData.hkInd = 2;
SOC0 = SOCfromOCVtemp(v0,T0,model); spkfData.zkInd = 3;
spkfData.xhat = [ir0 hk0 SOC0]'; % initial state

% Covariance values
spkfData.SigmaX = SigmaX0;
spkfData.SigmaV = SigmaV;
spkfData.SigmaW = SigmaW;
spkfData.Snoise = real(chol(diag([SigmaW; SigmaV]),'lower'));
spkfData.Qbump = 5;

% SPKF specific parameters

Nx = length(spkfData.xhat); spkfData.Nx = Nx; % state-vector length
Ny = 1; spkfData.Ny = Ny; % measurement-vector length
Nu = 1; spkfData.Nu = Nu; % input-vector length
Nw = size(SigmaW,1); spkfData.Nw = Nw; % process-noise-vector length
Nv = size(SigmaV,1); spkfData.Nv = Nv; % sensor-noise-vector length
Na = Nx+Nw+Nv; spkfData.Na = Na; % augmented-state-vector length

h = sqrt(3); spkfData.h = h; % SPKF/CDKF tuning factor


Weight1 = (h*h-Na)/(h*h); % weighting factors when computing mean
Weight2 = 1/(2*h*h); % and covariance
spkfData.Wm = [Weight1; Weight2*ones(2*Na,1)]; % mean
spkfData.Wc = spkfData.Wm; % covar

% previous value of current


spkfData.priorI = 0;
spkfData.signIk = 0;

% store model data structure too


spkfData.model = model;
end

■ The SPKF iteration code is:


function [zk,zkbnd,spkfData] = iterSPKF(vk,ik,Tk,deltat,spkfData)
model = spkfData.model;

% Load the cell model parameters


Q = getParamESC('QParam',Tk,model);
G = getParamESC('GParam',Tk,model);
M = getParamESC('MParam',Tk,model);
M0 = getParamESC('M0Param',Tk,model);
RC = exp(-deltat./abs(getParamESC('RCParam',Tk,model)))';
R = getParamESC('RParam',Tk,model)';
R0 = getParamESC('R0Param',Tk,model);
eta = getParamESC('etaParam',Tk,model);
if ik<0, ik=ik*eta; end;

% Get data stored in spkfData structure


I = spkfData.priorI;
SigmaX = spkfData.SigmaX;
xhat = spkfData.xhat;
Nx = spkfData.Nx;

Nw = spkfData.Nw;
Nv = spkfData.Nv;
Na = spkfData.Na;
Snoise = spkfData.Snoise;
Wc = spkfData.Wc;
irInd = spkfData.irInd;
hkInd = spkfData.hkInd;
zkInd = spkfData.zkInd;
if abs(ik)>Q/100, spkfData.signIk = sign(ik); end;
signIk = spkfData.signIk;

% Step 1a: State estimate time update


% - Create xhatminus augmented SigmaX points
% - Extract xhatminus state SigmaX points
% - Compute weighted average xhatminus(k)

% Step 1a-1: Create augmented SigmaX and xhat


[sigmaXa,p] = chol(SigmaX,'lower');
if p>0,
fprintf('Cholesky error. Recovering...\n');
theAbsDiag = abs(diag(SigmaX));
sigmaXa = diag(max(SQRT(theAbsDiag),SQRT(spkfData.SigmaW)));
end
sigmaXa=[real(sigmaXa) zeros([Nx Nw+Nv]); zeros([Nw+Nv Nx]) Snoise];
xhata = [xhat; zeros([Nw+Nv 1])];
% NOTE: sigmaXa is lower-triangular

% Step 1a-2: Calculate SigmaX points (strange indexing of xhata to


% avoid "repmat" call, which is very inefficient in MATLAB)
Xa = xhata(:,ones([1 2*Na+1])) + ...
spkfData.h*[zeros([Na 1]), sigmaXa, -sigmaXa];

% Step 1a-3: Time update from last iteration until now


% stateEqn(xold,current,xnoise)
Xx = stateEqn(Xa(1:Nx,:),I,Xa(Nx+1:Nx+Nw,:));
xhat = Xx*spkfData.Wm;

% Step 1b: Error covariance time update


% - Compute weighted covariance sigmaminus(k)
% (strange indexing of xhat to avoid "repmat" call)
Xs = Xx - xhat(:,ones([1 2*Na+1]));
SigmaX = Xs*diag(Wc)*Xs';


% Step 1c: Output estimate


% - Compute weighted output estimate yhat(k)
I = ik; yk = vk;
Y = outputEqn(Xx,I,Xa(Nx+Nw+1:end,:),Tk,model);
yhat = Y*spkfData.Wm;

% Step 2a: Estimator gain matrix


Ys = Y - yhat(:,ones([1 2*Na+1]));
SigmaXY = Xs*diag(Wc)*Ys';
SigmaY = Ys*diag(Wc)*Ys';
L = SigmaXY/SigmaY;

% Step 2b: State estimate measurement update


r = yk - yhat; % residual. Use to check for sensor errors...
if r^2 > 100*SigmaY, L(:,1)=0.0; end
xhat = xhat + L*r;
xhat(zkInd)=min(1.05,max(-0.05,xhat(zkInd)));

% Step 2c: Error covariance measurement update


SigmaX = SigmaX - L*SigmaY*L';
[~,S,V] = svd(SigmaX);
HH = V*S*V';
SigmaX = (SigmaX + SigmaX' + HH + HH')/4; % Help maintain robustness

% Q-bump code
if r^2>4*SigmaY, % bad voltage estimate by 2-SigmaX, bump Q
fprintf('Bumping sigmax\n');
SigmaX(zkInd,zkInd) = SigmaX(zkInd,zkInd)*spkfData.Qbump;
end

% Save data in spkfData structure for next time...


spkfData.priorI = ik;
spkfData.SigmaX = SigmaX;
spkfData.xhat = xhat;
zk = xhat(zkInd);
zkbnd = 3*sqrt(SigmaX(zkInd,zkInd));

% Calculate new states for all of the old state vectors in xold.
function xnew = stateEqn(xold,current,xnoise)
current = current + xnoise; % noise adds to current
xnew = 0*xold;

xnew(irInd,:) = RC*xold(irInd,:) + (1-RC)*current;
Ah = exp(-abs(current*G*deltat/(3600*Q))); % hysteresis factor
xnew(hkInd,:) = Ah.*xold(hkInd,:) - (1-Ah).*sign(current);
xnew(zkInd,:) = xold(zkInd,:) - current/3600/Q;
xnew(hkInd,:) = min(1,max(-1,xnew(hkInd,:)));
xnew(zkInd,:) = min(1.05,max(-0.05,xnew(zkInd,:)));
end

% Calculate cell output voltage for all of state vectors in xhat


function yhat = outputEqn(xhat,current,ynoise,T,model)
yhat = OCVfromSOCtemp(xhat(zkInd,:),T,model);
yhat = yhat + M*xhat(hkInd,:) + M0*signIk;
yhat = yhat - R*xhat(irInd,:) - R0*current + ynoise(1,:);
end

% "Safe" square root


function X = SQRT(x)
X = sqrt(max(0,x));
end
end

■ For the following example, the SPKF was executed for the same test
profiles as before.

! RMS SOC estimation error = 0.65 %.

! Percent of time error outside bounds = 0 %.

(Figures: "SOC estimation using SPKF" plots SOC (%) against time (min), showing estimate, truth, and bounds; "SOC estimation errors using SPKF" plots SOC error (%) against time (min), comparing SPKF error and EKF error.)

3.19: Real-world issues pertaining to sensors, initialization

Current-sensor bias

■ KF theory assumes that all noises are zero mean.


■ An unknown current-sensor bias can introduce permanent SOC error.

! Accumulated ampere-hours of bias tend to move the SOC estimate
faster than measurement updates can correct.
■ Best solution would be to design sensing hardware to eliminate
current-sensor bias, but this can be done only approximately.
■ So, we can also attempt to correct for the (unknown, time-varying)
bias algorithmically by estimating the bias.
■ Using the ESC model as an example, we augment the pack state:

z_k = z_{k-1} - (i_{k-1} - i_{k-1}^b + w_{k-1}) Δt / Q

i_{Rj,k} = A_RC i_{Rj,k-1} + B_RC (i_{k-1} - i_{k-1}^b + w_{k-1})

A_{h,k} = exp( -| (i_{k-1} - i_{k-1}^b + w_{k-1}) γ Δt / Q | )

h_k = A_{h,k} h_{k-1} + (1 - A_{h,k}) sgn(i_{k-1} - i_{k-1}^b + w_{k-1})

i_k^b = i_{k-1}^b + n_{k-1}^b,

where n_k^b is a fictitious noise source, included in the model only, that
allows the SPKF to adapt the bias state.

■ The output equation is also modified:

y_k = OCV(z_k) + M h_k - Σ_j R_j i_{Rj,k} - R_0 (i_k - i_k^b) + v_k,

where v_k models sensor noise.
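■ As a concrete illustration, the following Python sketch (not the MATLAB of
these notes) propagates the bias-augmented state one step; all parameter
values are illustrative placeholders, and the sign conventions mirror the
equations above.

```python
import math

# Illustrative one-step time update of the bias-augmented ESC state.
# All parameter values are placeholders, not fitted cell data.
Q     = 5.0 * 3600.0        # cell capacity [A s]
dt    = 1.0                 # sample period [s]
gamma = 150.0               # hysteresis rate constant
A_RC, B_RC = 0.995, 0.005   # single diffusion (R-C) branch

def time_update(z, iR, h, i_bias, i_meas, w=0.0, n_b=0.0):
    """Propagate (SOC, diffusion current, hysteresis, bias) one step."""
    i = i_meas - i_bias + w                      # bias-corrected current
    z_new  = z - i * dt / Q                      # SOC (coulomb counting)
    iR_new = A_RC * iR + B_RC * i                # diffusion-current state
    Ah     = math.exp(-abs(i * gamma * dt / Q))  # hysteresis decay factor
    sgn    = 0.0 if i == 0 else math.copysign(1.0, i)
    h_new  = Ah * h + (1.0 - Ah) * sgn           # hysteresis state
    b_new  = i_bias + n_b                        # bias modeled as random walk
    return z_new, iR_new, h_new, b_new
```

In a full SPKF implementation, w and n_b would be handled through the
sigma-point machinery rather than passed explicitly.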


Real-world issue: Voltage-sensor faults

■ Consider first a one-output situation (common).


■ Part of the SPKF calculates Σ_ỹ,k, and we know σ_ỹ,k = sqrt(Σ_ỹ,k).

■ If the absolute value of ỹ_k is "significantly" greater than σ_ỹ,k, then
either our state estimate is way off, or we have a voltage-sensor fault.

1. We can skip the measurement update of the SPKF, and/or

2. We can "bump up" Σ_x̃,k by multiplying it by a value greater than 1,
especially if a number of "measurement errors" happen in a row.

■ Both are done in practice to aid robustness of a real implementation.
■ For a multi-output model, define z_k = M_k ỹ_k.

! The mean of z_k is E[z_k] = M_k E[ỹ_k] = 0.

! The covariance of z_k is Σ_z,k = E[M_k ỹ_k ỹ_k^T M_k^T] = M_k Σ_ỹ,k M_k^T.

! z_k is Gaussian (since it is a linear combination of Gaussians).

■ If we define M_k such that M_k^T M_k = Σ_ỹ,k^{-1}, then

! M_k is the lower-triangular Cholesky factor of Σ_ỹ,k^{-1}.

! We also have z_k ~ N(0, I), since

Σ_z,k = M_k (M_k^T M_k)^{-1} M_k^T = M_k M_k^{-1} M_k^{-T} M_k^T = I.

■ If we further compute the normalized estimation error squared (NEES)

e_k^2 = z_k^T z_k = ỹ_k^T Σ_ỹ,k^{-1} ỹ_k,

then e_k^2 is the sum of squares of independent N(0, 1) RVs.


■ Then, e_k^2 is a chi-square RV with m degrees of freedom, where m is
the dimension of ỹ_k.
■ Since it is a sum of squares, it is never negative; it is also asymmetric
about its mean value.
■ The PDF of a chi-square RV X having m degrees of freedom is

f_X(x) = ( 1 / (2^{m/2} Γ(m/2)) ) x^{m/2 - 1} e^{-x/2},

which is tricky, but we never need to evaluate it in real time.
■ Instead, we rely on values precomputed from the distribution.
■ For confidence-interval estimation, we need to find two critical values.

■ For 1 - α confidence of a valid measurement, we want α/2 area between
0 and χ²_L and α/2 area above χ²_U.

Figure: χ² distribution with 24 degrees of freedom (mean 24), drawn for
α = 0.05; the lower critical value is χ²_L = 12.4012 and the upper critical
value is χ²_U = 39.3641.
■ We find χ²_L from where the inverse CDF of the distribution is equal to
α/2. In MATLAB:

X2L = chi2inv(0.025,24) % Lower critical value X2L = 12.4012

■ We find χ²_U from where the inverse CDF is equal to 1 - α/2. In
MATLAB:

X2U = chi2inv(1-0.025,24) % Upper critical value X2U = 39.3641

■ Note that χ²_L and χ²_U need to be computed once only, offline.

! They are based on the number of measurements in the output
vector and the desired confidence level 1 - α only.

! They do not need to be recalculated as the EKF or SPKF runs.


■ For hand calculations, a χ²-table is available on page 3–96.

■ If the value of e_k^2 < χ²_L or if e_k^2 > χ²_U, then the measurement is
discarded. Otherwise, the measurement is kept.
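■ A minimal Python sketch of this gate (hypothetical helper names; the
critical values are those computed above for m = 24 outputs and α = 0.05,
and a diagonal Σ_ỹ,k is assumed for simplicity):

```python
# NEES gate for a 24-output model at 95 % confidence.  Critical values are
# chi2inv(0.025, 24) and chi2inv(1 - 0.025, 24), as computed above.
CHI2_L, CHI2_U = 12.4012, 39.3641

def passes_gate(y_tilde, Sigma_inv_diag):
    """e_k^2 = y~' inv(Sigma_y~) y~, sketched here for a diagonal covariance."""
    e2 = sum(r * r * s for r, s in zip(y_tilde, Sigma_inv_diag))
    return CHI2_L <= e2 <= CHI2_U
```

If the gate fails, the measurement update is skipped (and Σ_x̃,k may be
"bumped up," as discussed earlier). Note that an implausibly small e_k^2
also fails the two-sided test.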

Real-world issue: Other sensor faults

■ Not obvious how to catch temperature- and current-sensor faults.

! Current can change "instantly" from i_min to i_max.

! Might consider a thermal model of the cell/module to

1. Reduce the number of required temperature sensors, and

2. Catch sensor faults as done above for the voltage sensor.

Real-world issue: Initialization

■ If the vehicle is off for a "long" time, just assume that cell voltage is
equivalent to OCV:

! Reset the SOC estimate based on OCV.

! Set diffusion voltages to zero.

! Keep the prior value of the hysteresis state.

■ If the vehicle has been off for a "short" period of time:

! Set up and execute simple time/measurement-update (simple KF)
equations for SOC and diffusion voltages.

! Hysteresis voltages do not change.

! Run a single-step Kalman filter to update the state estimate based on
total time off.
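■ The logic above can be sketched in Python as follows; the rest-time
threshold and the inverse-OCV lookup are hypothetical placeholders, and
the single-step KF for short rests is omitted.

```python
# Sketch of the key-on initialization rule.  LONG_REST_S and the linear
# inverse-OCV lookup are assumed placeholder values, not real cell data.
LONG_REST_S = 2 * 3600.0   # assumed "long" rest threshold [s]

def soc_from_ocv(v):
    # placeholder linear inverse-OCV lookup over 3.0-4.2 V
    return min(1.0, max(0.0, (v - 3.0) / 1.2))

def initialize(z, iR, h, v_cell, t_off_s):
    if t_off_s >= LONG_REST_S:
        # long rest: cell voltage ~ OCV; reset SOC, zero diffusion,
        # keep the prior hysteresis state
        return soc_from_ocv(v_cell), 0.0, h
    # short rest: keep states; a single-step KF over the rest interval
    # would refine SOC and diffusion states here
    return z, iR, h
```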

3.20: Real-world issue: Speed, solved by “bar-delta” filtering

■ We consider again a philosophical question with very important
practical implications.

■ Consider a two-cell series pack in which one cell is fully charged and
the other is fully discharged. What is the pack SOC?

! SOC cannot be 0 % because we cannot charge.

! SOC cannot be 100 % because we cannot discharge.

! SOC cannot be the average of the two, 50 %, because we can neither
charge nor discharge.
■ So, battery “pack SOC” is not a helpful concept, by itself.

■ The example is an extreme case, but it is important to estimate the
SOC of all cells even in the typical case.
■ The problem is that the SPKF is computationally complex.

! Running SPKF for one cell is okay, but

! Running 100 SPKFs for 100 cells is probably not okay.

■ In this section we talk about efficient SOC estimation for all individual
cells in a large battery pack.

OBSERVATION: While "pack SOC" does not make sense, the concept of
"pack-average SOC" is a useful one.
■ Since all cells in a series string experience the same current, we
expect their SOC values to

1. Move in the same direction for any given applied current, by

2. A similar amount (but different because of unequal cell capacities).
■ We take advantage of this similarity by creating:


! One algorithm to determine the composite average behavior of all
cells in the battery pack, and

! Another algorithm to determine the individual differences between
specific cells and that composite average behavior.
■ We define the pack-average state "x-bar" as

x̄_k = (1/N_s) Σ_{i=1}^{N_s} x_k^{(i)}.

! Note that 0 ≤ min_i(z_k^{(i)}) ≤ z̄_k ≤ max_i(z_k^{(i)}) ≤ 1; therefore, its range is
within the standard SOC range.

■ We can then write an individual cell's state vector as x_k^{(i)} = x̄_k + Δx_k^{(i)},
where Δx_k^{(i)} (called "delta-x") is the difference between the state
vector of cell i and the pack-average state vector.

! The method is called "bar-delta filtering," as inspired by the "x-bar"
and "delta-x" naming convention.
■ We use one SPKF to estimate the pack-average state, and N_s SPKFs
(or similar) to estimate the delta states:

Figure: The "bar" filter uses battery-pack current, voltage, and temperature
measurements to estimate the pack-average state; each "delta" filter uses
cell current, voltage, and temperature measurements, plus the pack state
estimate, to estimate the difference between that cell's state and the
pack-average state.
■ It may seem that we have taken a problem of complexity N_s and
replaced it with a problem of complexity N_s + 1.

■ However, this is not the case: the three different types of estimator
involved are not of identical computational complexity.

! The bar filter is of the same computational complexity as the
individual state estimators that it uses as a basis (e.g., SPKF).

! However, the delta filters can be made very simple.

! Also, the delta states change much more slowly than the average
state, so the delta filters can be run less frequently, down to 1/N_s
times the rate of the bar filter.

! Overall complexity can be reduced from order N_s to a little more
than order 1.

Figure: N_s complex filters (one per cell-state estimate x̂_k^{(i)}) are replaced
by one complex filter for the pack-average state x̄_k plus N_s simple filters
for the delta states Δx_k^{(i)}.

3.21: Bar-delta filtering using the ESC cell model

The pack bar filter

■ In the implementation that we describe here, a pack-average SPKF
estimated the following quantities:

! The pack-average state-of-charge, pack-average diffusion
current(s), and the pack-average hysteresis voltage.

■ We model current-sensor bias as

i_k^b = i_{k-1}^b + n_{k-1}^b,

where n_k^b is a fictitious noise source that is included in the model to
allow the SPKF to adapt the bias estimate.
■ We now need to find the pack-average-quantity state equations.

■ For example, starting with a single-cell SOC equation,

z_k^{(i)} = z_{k-1}^{(i)} - i_{k-1} Δt / Q^{(i)}

(1/N_s) Σ_{i=1}^{N_s} z_k^{(i)} = (1/N_s) Σ_{i=1}^{N_s} z_{k-1}^{(i)} - (i_{k-1} Δt / N_s) Σ_{i=1}^{N_s} 1/Q^{(i)}

= (1/N_s) Σ_{i=1}^{N_s} z_{k-1}^{(i)} - (i_{k-1} Δt / N_s) Σ_{i=1}^{N_s} Q_inv^{(i)}

z̄_k = z̄_{k-1} - i_{k-1} Δt Q̄_inv.

■ Note the new concept of "inverse capacity" to make the equations
simpler. If we are estimating all cells' capacities, we then have a
time-varying quantity Q̄_inv,k-1.
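■ A quick numerical check of this identity, using illustrative capacities and
SOCs, confirms that averaging the per-cell SOC updates is the same as one
update with the mean inverse capacity:

```python
# Averaging per-cell coulomb-counting updates equals a single update that
# uses the pack-average inverse capacity (illustrative values only).
Q = [6.5 * 3600, 7.0 * 3600, 7.5 * 3600, 8.0 * 3600]  # capacities [A s]
z = [0.40, 0.45, 0.50, 0.55]                          # per-cell SOCs
i, dt = 10.0, 1.0                                     # current [A], period [s]
Ns = len(Q)

# left-hand side: average of the individual cell updates
zbar_direct = sum(zi - i * dt / Qi for zi, Qi in zip(z, Q)) / Ns

# right-hand side: one update using the mean inverse capacity
Qinv_bar = sum(1.0 / Qi for Qi in Q) / Ns
zbar_formula = sum(z) / Ns - i * dt * Qinv_bar
```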

■ And, if we also consider the current-bias state,

z̄_k = z̄_{k-1} - (i_{k-1} - i_{k-1}^b) Δt Q̄_inv,k-1.

■ Similarly, the dynamics of all pack-average states and parameters of
interest may be summarized as:

z̄_k = z̄_{k-1} - (i_{k-1} - i_{k-1}^b) Δt Q̄_inv,k-1

ī_{Rj,k} = A_RC ī_{Rj,k-1} + B_RC (i_{k-1} - i_{k-1}^b)

A_{h,k} = exp( -| (i_{k-1} - i_{k-1}^b) γ Δt Q̄_inv,k-1 | )

h̄_k = A_{h,k} h̄_{k-1} + (1 - A_{h,k}) sgn(i_{k-1} - i_{k-1}^b)

R̄_0,k = R̄_0,k-1 + n_{k-1}^{R̄0}

Q̄_inv,k = Q̄_inv,k-1 + n_{k-1}^{Q̄inv}

i_k^b = i_{k-1}^b + n_{k-1}^b,

where n_k^{R̄0} and n_k^{Q̄inv} are fictitious noise sources that allow the SPKF to
adapt the corresponding pack-average parameters.

■ The bar filter for the pack employs an SPKF that uses this model of
pack-average states and the measurement equation

ȳ_k = OCV(z̄_k) + M h̄_k - Σ_j R_j ī_{Rj,k} - R̄_0,k (i_k - i_k^b) + v_k,

where v_k models sensor noise.

The cell delta filters

■ The quantities that we are most interested in estimating at the
individual-cell level are: SOC, resistance, and capacity.

■ These all factor into determining pack available power and lifetime
(state-of-health) estimates.
■ We will first consider the delta-filter approach to determining cell SOC.

■ Note, from before, Δz_k^{(i)} = z_k^{(i)} - z̄_k. Then, using prior equations for
the dynamics of z_k^{(i)} and z̄_k, we find:

Δz_k^{(i)} = z_k^{(i)} - z̄_k

= ( z_{k-1}^{(i)} - (i_{k-1} - i_{k-1}^b) Δt Q_inv,k-1^{(i)} ) - ( z̄_{k-1} - (i_{k-1} - i_{k-1}^b) Δt Q̄_inv,k-1 )

= Δz_{k-1}^{(i)} - (i_{k-1} - i_{k-1}^b) Δt ΔQ_inv,k-1^{(i)},

where ΔQ_inv,k^{(i)} = Q_inv,k^{(i)} - Q̄_inv,k.

■ Because ΔQ_inv,k^{(i)} tends to be small, the state Δz_k^{(i)} does not change
quickly, and can be updated at a slower rate than the pack-average
SOC by accumulating (i_{k-1} - i_{k-1}^b) Δt in-between updates.

■ An output equation suitable for combining with this state equation is

y_k^{(i)} = OCV(z̄_k + Δz_k^{(i)}) + M h̄_k - Σ_j R_j ī_{Rj,k} - (R̄_0,k + ΔR_0,k^{(i)}) (i_k - i_k^b) + v_k.

■ To estimate Δz_k^{(i)}, an SPKF is used with these two equations. Since it
is a single-state SPKF, it is very fast.
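■ The slow-rate bookkeeping can be sketched in Python as follows
(hypothetical class and names; the SPKF measurement update that would
correct Δz is omitted):

```python
# Sketch: the delta-SOC state is propagated at a slow rate by accumulating
# bias-corrected charge between updates.  Names and values are illustrative.
class DeltaSocFilter:
    def __init__(self, dz0, dQinv):
        self.dz = dz0          # delta-SOC state for this cell
        self.dQinv = dQinv     # cell's delta inverse capacity [1/(A s)]
        self.charge = 0.0      # accumulated (i - i_b)*dt since last update

    def accumulate(self, i_meas, i_bias, dt):
        # run every fast (bar-filter) step
        self.charge += (i_meas - i_bias) * dt

    def slow_update(self):
        # delta-SOC time update, run once every N_s fast steps
        self.dz -= self.charge * self.dQinv
        self.charge = 0.0
        return self.dz
```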
■ As a preview of parameter estimation (talked about more in the next
chapter. . . ), we can similarly make state-space models of the
delta-resistance and delta-capacity states.

■ A simple state-space model of the delta-resistance state is:

ΔR_0,k^{(i)} = ΔR_0,k-1^{(i)} + n_{k-1}^{ΔR0}

y_k^{(i)} = OCV(z̄_k + Δz_k^{(i)}) - (R̄_0,k + ΔR_0,k^{(i)}) (i_k - i_k^b) + v_k^{ΔR0},

where ΔR_0,k^{(i)} = R_0,k^{(i)} - R̄_0,k and is modeled as a constant value with a
fictitious noise process n_k^{ΔR0} allowing adaptation, y_k^{(i)} is a crude
estimate of the cell's voltage, and v_k^{ΔR0} models estimation error.
■ The dynamics of the delta-resistance state are simple and linear
enough to use a single-state EKF rather than an SPKF.

■ To estimate cell capacity using an EKF, we model

ΔQ_inv,k^{(i)} = ΔQ_inv,k-1^{(i)} + n_{k-1}^{ΔQinv}

d_k = (z_k^{(i)} - z_{k-1}^{(i)}) + (i_{k-1} - i_{k-1}^b) Δt × ( Q̄_inv,k-1 + ΔQ_inv,k-1^{(i)} ) + e_k.

■ The second equation is a reformulation of the SOC state equation
such that the expected value of d_k is equal to zero by construction.

■ As the EKF runs, the computation for d_k in the second equation is
compared to the known value (zero, by construction), and the
difference is used to update the inverse-capacity estimate.

■ Note that good estimates of present and previous SOCs are required.

! Here, they come from the pack SPKF combined with the cell SPKF.
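■ A numerical sketch of the innovation d_k (illustrative values): with the
correct inverse capacity, d_k is zero by construction, while an erroneous
estimate produces a nonzero d_k that drives the EKF's capacity update.

```python
# d_k = (z_k - z_{k-1}) + (i_{k-1} - i_b)*dt*(Qinv_bar + dQinv); it is zero
# by construction when the inverse-capacity estimate is correct.
Q_true = 7.0 * 3600.0               # true capacity [A s] (illustrative)
i, i_b, dt = 10.0, 0.0, 1.0
z_prev = 0.50
z_now  = z_prev - i * dt / Q_true   # SOC change, as the SPKFs would report

def d_k(Qinv_est):
    return (z_now - z_prev) + (i - i_b) * dt * Qinv_est
```

A positive d_k here indicates the inverse capacity is over-estimated (i.e.,
capacity under-estimated), so the EKF would nudge the estimate downward.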

■ The output of the delta filters is computed by combining the average
battery-pack state with the battery-cell-module delta states produced
by the individual Kalman filters:

z_k^{(i)} = z̄_k + Δz_k^{(i)}

R_0,k^{(i)} = R̄_0,k + ΔR_0,k^{(i)}

Q_k^{(i)} = 1 / ( Q̄_inv,k + ΔQ_inv,k^{(i)} ).
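■ Combining the estimates is then straightforward, as this Python sketch
shows (illustrative values; capacities are carried in ampere-seconds
internally and converted back to ampere-hours at the end):

```python
# Recovering per-cell quantities from bar and delta estimates.
caps_Ah = (6.5, 7.0, 7.5, 8.0)                         # illustrative capacities
Qinv    = [1.0 / (c * 3600.0) for c in caps_Ah]        # per-cell inverse capacity
Qinvbar = sum(Qinv) / len(Qinv)                        # pack-average Qinv
dQinv   = [qi - Qinvbar for qi in Qinv]                # delta inverse capacities

zbar, dz   = 0.475, (-0.075, -0.025, 0.025, 0.075)     # bar + delta SOCs
R0bar, dR0 = 2.375e-3, (-0.375e-3, -0.125e-3, 0.125e-3, 0.375e-3)

z_cells  = [zbar + d for d in dz]                      # z(i) = zbar + dz(i)
R0_cells = [R0bar + d for d in dR0]                    # R0(i) = R0bar + dR0(i)
Q_cells  = [1.0 / (Qinvbar + d) / 3600.0 for d in dQinv]  # back to Ah
```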

3.22: Example of bar-delta, using desktop validation

■ Two basic approaches to algorithm validation

HARDWARE IN THE LOOP (HIL):

■ HIL reveals baseline truth, but it is sometimes hard/impossible to
determine "truth" of all cell states based on recorded data.

DESKTOP VALIDATION:

■ Use a model of the cell to create synthetic test data.

■ Allows access to "truth" of all cell and algorithm states.

■ Very useful for tuning algorithms.

■ Validity of results is limited by the accuracy of the cell model.
■ With desktop validation, we need a “data generation” component:

! Creates synthetic BMS data based on drive cycles and other
initialization parameters.

■ And, we need a "BMS algorithm simulation" component:

! Simulates the SPKF algorithms using the synthetic data as input,
based on various initialization parameters.

Figure: Desktop-validation setup: a data-generator component feeds
synthetic data to the PC-based BMS algorithm set, and an evaluation
component assesses the results.

■ Either way, validation scenarios include: Normal operation, improper
SOC initialization, sensor failures (fault + noise), temperature drift,
new and old cells mixed, different drive cycles, current-sensor bias.
■ Insight and validation via analysis and display of outputs.

Examples of bar-delta accuracy and speed

■ Simulated cycling of a four-cell pack with a UDDS cycle, a rest period,
the same UDDS cycle, and a rest period.

■ The pack cells had true capacities of 6.5, 7.0, 7.5, and 8.0 Ah,
resistances of 2.0, 2.25, 2.5, and 2.75 mΩ, and initial SOC values of
40, 45, 50, and 55 %. The current-sensor bias was 0.5 A.

■ The algorithms were initialized with all cells having estimated capacity
of 6.2 Ah, estimated resistances of 2.25 mΩ, estimated current-
sensor bias of 0 A, and initial SOC estimates based on initial voltages.
■ SPKF was used for the bar filter and the SOC delta filters, and EKF
was used for the resistance and capacity-inverse delta filters.
■ Pack current and individual cell voltage profiles for algorithm testing:

Figure: Battery-pack current (between about -200 A and +100 A) and
individual cell voltages (between about 3.2 V and 4.4 V), plotted over the
65-minute test.

■ Some results showing accuracy of the method:
Figure: Total bar-delta SOC estimates for all cells (true vs. estimated
SOC) and total bar-delta estimated resistances for all cells (true vs.
estimated resistance), over the 65-minute test.

■ A different point of view, with SOC estimation errors:


Figure: Bar-filter pack-average SOC estimation error and total bar-delta
SOC estimation error for all cells, each with estimation-error bounds,
plotted within ±5 % over the 65-minute test.

■ Some other estimation accuracies:


Figure: Bar-filter estimated pack-average resistance (true vs. estimated,
with error bounds) and bar-filter estimated current-sensor bias (true vs.
estimated, with error bounds), over the 65-minute test.

■ Capacity estimates evolve in a similar way to resistance estimates.

! However, the time scale of adaptation is much longer, since
capacity is very weakly linked to the output measurement.

! Abrupt changes in capacity will not be tracked very quickly; but
capacity fade due to normal aging will be tracked very well.
■ Speedup of the method (hand-coded C code, run on G4 processor)

Description of test (for pack comprising 100 cells) CPU time per iteration Speedup

One SPKF per cell 5.272 ms 1.0

One pack bar filter only, no delta filters 0.067 ms 78.7

One pack bar filter, 100 delta filters updated per iteration 0.190 ms 27.7

One pack bar filter, 50 delta filters updated per iteration 0.123 ms 42.9

Where from here?

■ We have seen good and bad ways to estimate SOC for all cells.

■ Model-based methods are preferred; KF-based methods are "optimal"
in some sense.

■ Additionally, the KF estimates the entire state (not only SOC), and
therefore can also be used for degradation predictions. . .

■ Lots of nuances are unexplored in this short section of notes. ECE5550
goes into much more depth and breadth of implementation of the KF.

■ Our next step is to look at state-of-health estimation, which is a form
of parameter estimation.

Appendix: General sequential probabilistic inference solution

General state-space model:

x_k = f(x_{k-1}, u_{k-1}, w_{k-1})
y_k = h(x_k, u_k, v_k),

where w_k and v_k are independent, Gaussian noise processes having
covariance matrices Σ_w̃ and Σ_ṽ, respectively.

Definitions: Let

x̃_k^- = x_k - x̂_k^-,   ỹ_k = y_k - ŷ_k.

Initialization: For k = 0, set

x̂_0^+ = E[x_0]
Σ_x̃,0^+ = E[(x_0 - x̂_0^+)(x_0 - x̂_0^+)^T].

Computation: For k = 1, 2, . . . , compute:

State estimate time update: x̂_k^- = E[ f(x_{k-1}, u_{k-1}, w_{k-1}) | Y_{k-1} ].

Error covariance time update: Σ_x̃,k^- = E[ (x̃_k^-)(x̃_k^-)^T ].

Output estimate: ŷ_k = E[ h(x_k, u_k, v_k) | Y_{k-1} ].

Estimator gain matrix: L_k = E[ (x̃_k^-)(ỹ_k)^T ] ( E[ (ỹ_k)(ỹ_k)^T ] )^{-1}.

State estimate measurement update: x̂_k^+ = x̂_k^- + L_k (y_k - ŷ_k).

Error covariance measurement update: Σ_x̃,k^+ = Σ_x̃,k^- - L_k Σ_ỹ,k L_k^T.
Appendix: Summary of the linear Kalman filter

Linear state-space model:

x_k = A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + w_{k-1}
y_k = C_k x_k + D_k u_k + v_k,

where w_k and v_k are independent, zero-mean, Gaussian noise processes of
covariance matrices Σ_w̃ and Σ_ṽ, respectively.

Initialization: For k = 0, set

x̂_0^+ = E[x_0]
Σ_x̃,0^+ = E[(x_0 - x̂_0^+)(x_0 - x̂_0^+)^T].

Computation: For k = 1, 2, . . . , compute:

State estimate time update: x̂_k^- = A_{k-1} x̂_{k-1}^+ + B_{k-1} u_{k-1}.

Error covariance time update: Σ_x̃,k^- = A_{k-1} Σ_x̃,k-1^+ A_{k-1}^T + Σ_w̃.

Output estimate: ŷ_k = C_k x̂_k^- + D_k u_k.

Estimator gain matrix:* L_k = Σ_x̃,k^- C_k^T [ C_k Σ_x̃,k^- C_k^T + Σ_ṽ ]^{-1}.

State estimate measurement update: x̂_k^+ = x̂_k^- + L_k (y_k - ŷ_k).

Error covariance measurement update: Σ_x̃,k^+ = (I - L_k C_k) Σ_x̃,k^-.

* If a measurement is missed for some reason, then simply skip the measurement
update for that iteration. That is, L_k = 0, x̂_k^+ = x̂_k^-, and Σ_x̃,k^+ = Σ_x̃,k^-.
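The steps above can be condensed into a few lines of Python for a scalar
system (the model matrices and noise variances below are illustrative, not
from a real cell):

```python
# Minimal scalar linear Kalman filter implementing the steps above.
A, B, C, D = 0.95, 0.1, 2.0, 0.0   # illustrative model constants
Sw, Sv = 1e-4, 1e-2                # process- and sensor-noise variances

def kf_step(xhat, P, u_prev, u_now, y):
    xhat_m = A * xhat + B * u_prev          # state-estimate time update
    P_m    = A * P * A + Sw                 # error-covariance time update
    yhat   = C * xhat_m + D * u_now         # output estimate
    L      = P_m * C / (C * P_m * C + Sv)   # estimator gain
    xhat_p = xhat_m + L * (y - yhat)        # state-estimate meas. update
    P_p    = (1 - L * C) * P_m              # error-covariance meas. update
    return xhat_p, P_p
```

With noiseless synthetic measurements, the estimate converges to the true
state and the error covariance shrinks, as the theory predicts.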

Appendix: Summary of the nonlinear extended Kalman filter

Nonlinear state-space model:

x_k = f(x_{k-1}, u_{k-1}, w_{k-1})
y_k = h(x_k, u_k, v_k),

where w_k and v_k are independent, Gaussian noise processes of covariance
matrices Σ_w̃ and Σ_ṽ, respectively.

Definitions:

Â_k = df(x_k, u_k, w_k)/dx_k, evaluated at x_k = x̂_k^+
B̂_k = df(x_k, u_k, w_k)/dw_k, evaluated at w_k = w̄_k
Ĉ_k = dh(x_k, u_k, v_k)/dx_k, evaluated at x_k = x̂_k^-
D̂_k = dh(x_k, u_k, v_k)/dv_k, evaluated at v_k = v̄_k.

Initialization: For k = 0, set

x̂_0^+ = E[x_0]
Σ_x̃,0^+ = E[(x_0 - x̂_0^+)(x_0 - x̂_0^+)^T].

Computation: For k = 1, 2, . . . , compute:

State estimate time update: x̂_k^- = f(x̂_{k-1}^+, u_{k-1}, w̄_{k-1}).

Error covariance time update: Σ_x̃,k^- = Â_{k-1} Σ_x̃,k-1^+ Â_{k-1}^T + B̂_{k-1} Σ_w̃ B̂_{k-1}^T.

Output estimate: ŷ_k = h(x̂_k^-, u_k, v̄_k).

Estimator gain matrix: L_k = Σ_x̃,k^- Ĉ_k^T [ Ĉ_k Σ_x̃,k^- Ĉ_k^T + D̂_k Σ_ṽ D̂_k^T ]^{-1}.

State estimate measurement update: x̂_k^+ = x̂_k^- + L_k (y_k - ŷ_k).

Error covariance measurement update: Σ_x̃,k^+ = (I - L_k Ĉ_k) Σ_x̃,k^-.
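A scalar Python sketch of these steps, using a toy monotone OCV-like
output nonlinearity (all values are illustrative, not real cell data):

```python
import math

# Minimal scalar EKF for an illustrative nonlinear model:
#   x_k = a*x_{k-1} + w,   y_k = ocv(x_k) + v,  ocv(x) = 3.0 + 1.2*x**2
# (a toy OCV curve used only to exercise the linearization step).
a, Sw, Sv = 0.999, 1e-6, 1e-4

def ocv(x):  return 3.0 + 1.2 * x * x
def docv(x): return 2.4 * x              # dh/dx, the EKF linearization

def ekf_step(xhat, P, y):
    xhat_m = a * xhat                        # time update (w_bar = 0)
    P_m    = a * P * a + Sw
    yhat   = ocv(xhat_m)                     # output estimate (v_bar = 0)
    Ck     = docv(xhat_m)                    # Jacobian at xhat_m
    L      = P_m * Ck / (Ck * P_m * Ck + Sv)
    xhat_p = xhat_m + L * (y - yhat)
    P_p    = (1 - L * Ck) * P_m
    return xhat_p, P_p
```

Because the output nonlinearity is monotone over the operating range, the
EKF converges despite the deliberately poor initial state estimate.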

Appendix: Summary of the nonlinear sigma-point Kalman filter

Nonlinear state-space model:

x_k = f(x_{k-1}, u_{k-1}, w_{k-1})
y_k = h(x_k, u_k, v_k),

where w_k and v_k are independent, Gaussian noise processes with
means w̄ and v̄ and covariance matrices Σ_w̃ and Σ_ṽ, respectively.

Definitions: Let

x_k^a = [x_k^T, w_k^T, v_k^T]^T,   X_k^a = [(X_k^x)^T, (X_k^w)^T, (X_k^v)^T]^T,   p = 2 × dim(x_k^a).

Initialization: For k = 0, set

x̂_0^+ = E[x_0]
Σ_x̃,0^+ = E[(x_0 - x̂_0^+)(x_0 - x̂_0^+)^T]
x̂_0^{a,+} = E[x_0^a] = [(x̂_0^+)^T, w̄, v̄]^T
Σ_x̃,0^{a,+} = E[(x_0^a - x̂_0^{a,+})(x_0^a - x̂_0^{a,+})^T] = diag(Σ_x̃,0^+, Σ_w̃, Σ_ṽ).

Computation: For k = 1, 2, . . . , compute:

State estimate time update:
X_{k-1}^{a,+} = { x̂_{k-1}^{a,+},  x̂_{k-1}^{a,+} + γ sqrt(Σ_x̃,k-1^{a,+}),  x̂_{k-1}^{a,+} - γ sqrt(Σ_x̃,k-1^{a,+}) }
X_{k,i}^{x,-} = f(X_{k-1,i}^{x,+}, u_{k-1}, X_{k-1,i}^{w,+})
x̂_k^- = Σ_{i=0}^{p} α_i^{(m)} X_{k,i}^{x,-}.

Error covariance time update:
Σ_x̃,k^- = Σ_{i=0}^{p} α_i^{(c)} (X_{k,i}^{x,-} - x̂_k^-)(X_{k,i}^{x,-} - x̂_k^-)^T.

Output estimate:
Y_{k,i} = h(X_{k,i}^{x,-}, u_k, X_{k-1,i}^{v,+})
ŷ_k = Σ_{i=0}^{p} α_i^{(m)} Y_{k,i}.

Estimator gain matrix:
Σ_ỹ,k = Σ_{i=0}^{p} α_i^{(c)} (Y_{k,i} - ŷ_k)(Y_{k,i} - ŷ_k)^T
Σ_x̃ỹ,k^- = Σ_{i=0}^{p} α_i^{(c)} (X_{k,i}^{x,-} - x̂_k^-)(Y_{k,i} - ŷ_k)^T
L_k = Σ_x̃ỹ,k^- Σ_ỹ,k^{-1}.

State estimate measurement update: x̂_k^+ = x̂_k^- + L_k (y_k - ŷ_k).

Error covariance measurement update: Σ_x̃,k^+ = Σ_x̃,k^- - L_k Σ_ỹ,k L_k^T.
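As a sanity check on the sigma-point construction, the following Python
sketch uses CDKF-style weights with γ = sqrt(3) for a one-dimensional
augmented state (an assumed tuning) and verifies that the weighted sigma
points reproduce the prior mean and variance exactly:

```python
import math

# One-dimensional sigma-point check: with gamma = sqrt(3) (CDKF-style
# tuning), the p + 1 = 3 weighted points recover mean and variance exactly.
gamma = math.sqrt(3.0)
L_dim = 1                               # augmented-state dimension
a0 = (gamma**2 - L_dim) / gamma**2      # center weight, alpha_0
ai = 1.0 / (2.0 * gamma**2)             # outer weights, alpha_i

mean, var = 0.6, 0.04
sd = math.sqrt(var)
X = [mean, mean + gamma * sd, mean - gamma * sd]   # sigma points
w = [a0, ai, ai]

mean_rec = sum(wi * xi for wi, xi in zip(w, X))
var_rec  = sum(wi * (xi - mean_rec) ** 2 for wi, xi in zip(w, X))
```

The same weights are used here for both the mean and covariance sums; for
other sigma-point variants, α^(m) and α^(c) may differ.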

Appendix: Critical values of χ²

For some degrees of freedom, each entry represents the critical value of χ²
for a specified upper tail area α. (Figure: the tabulated value χ²_U(α, df)
cuts off upper-tail area α; the area 1 - α lies between 0 and χ²_U(α, df).)

Degrees of                         Upper Tail Areas
Freedom  0.995  0.99  0.975  0.95  0.90  0.75  0.25  0.10  0.05  0.025  0.01  0.005
1 0.000 0.000 0.001 0.004 0.016 0.102 1.323 2.706 3.841 5.024 6.635 7.879
2 0.010 0.020 0.051 0.103 0.211 0.575 2.773 4.605 5.991 7.378 9.210 10.597
3 0.072 0.115 0.216 0.352 0.584 1.213 4.108 6.251 7.815 9.348 11.345 12.838
4 0.207 0.297 0.484 0.711 1.064 1.923 5.385 7.779 9.488 11.143 13.277 14.860
5 0.412 0.554 0.831 1.145 1.610 2.675 6.626 9.236 11.070 12.833 15.086 16.750
6 0.676 0.872 1.237 1.635 2.204 3.455 7.841 10.645 12.592 14.449 16.812 18.548
7 0.989 1.239 1.690 2.167 2.833 4.255 9.037 12.017 14.067 16.013 18.475 20.278
8 1.344 1.646 2.180 2.733 3.490 5.071 10.219 13.362 15.507 17.535 20.090 21.955
9 1.735 2.088 2.700 3.325 4.168 5.899 11.389 14.684 16.919 19.023 21.666 23.589
10 2.156 2.558 3.247 3.940 4.865 6.737 12.549 15.987 18.307 20.483 23.209 25.188
11 2.603 3.053 3.816 4.575 5.578 7.584 13.701 17.275 19.675 21.920 24.725 26.757
12 3.074 3.571 4.404 5.226 6.304 8.438 14.845 18.549 21.026 23.337 26.217 28.300
13 3.565 4.107 5.009 5.892 7.042 9.299 15.984 19.812 22.362 24.736 27.688 29.819
14 4.075 4.660 5.629 6.571 7.790 10.165 17.117 21.064 23.685 26.119 29.141 31.319
15 4.601 5.229 6.262 7.261 8.547 11.037 18.245 22.307 24.996 27.488 30.578 32.801
16 5.142 5.812 6.908 7.962 9.312 11.912 19.369 23.542 26.296 28.845 32.000 34.267
17 5.697 6.408 7.564 8.672 10.085 12.792 20.489 24.769 27.587 30.191 33.409 35.718
18 6.265 7.015 8.231 9.390 10.865 13.675 21.605 25.989 28.869 31.526 34.805 37.156
19 6.844 7.633 8.907 10.117 11.651 14.562 22.718 27.204 30.144 32.852 36.191 38.582
20 7.434 8.260 9.591 10.851 12.443 15.452 23.828 28.412 31.410 34.170 37.566 39.997
21 8.034 8.897 10.283 11.591 13.240 16.344 24.935 29.615 32.671 35.479 38.932 41.401
22 8.643 9.542 10.982 12.338 14.041 17.240 26.039 30.813 33.924 36.781 40.289 42.796
23 9.260 10.196 11.689 13.091 14.848 18.137 27.141 32.007 35.172 38.076 41.638 44.181
24 9.886 10.856 12.401 13.848 15.659 19.037 28.241 33.196 36.415 39.364 42.980 45.559
25 10.520 11.524 13.120 14.611 16.473 19.939 29.339 34.382 37.652 40.646 44.314 46.928
26 11.160 12.198 13.844 15.379 17.292 20.843 30.435 35.563 38.885 41.923 45.642 48.290
27 11.808 12.879 14.573 16.151 18.114 21.749 31.528 36.741 40.113 43.195 46.963 49.645
28 12.461 13.565 15.308 16.928 18.939 22.657 32.620 37.916 41.337 44.461 48.278 50.993
29 13.121 14.256 16.047 17.708 19.768 23.567 33.711 39.087 42.557 45.722 49.588 52.336
30 13.787 14.953 16.791 18.493 20.599 24.478 34.800 40.256 43.773 46.979 50.892 53.672
31 14.458 15.655 17.539 19.281 21.434 25.390 35.887 41.422 44.985 48.232 52.191 55.003
32 15.134 16.362 18.291 20.072 22.271 26.304 36.973 42.585 46.194 49.480 53.486 56.328
33 15.815 17.074 19.047 20.867 23.110 27.219 38.058 43.745 47.400 50.725 54.776 57.648
34 16.501 17.789 19.806 21.664 23.952 28.136 39.141 44.903 48.602 51.966 56.061 58.964
35 17.192 18.509 20.569 22.465 24.797 29.054 40.223 46.059 49.802 53.203 57.342 60.275
36 17.887 19.233 21.336 23.269 25.643 29.973 41.304 47.212 50.998 54.437 58.619 61.581
37 18.586 19.960 22.106 24.075 26.492 30.893 42.383 48.363 52.192 55.668 59.893 62.883
38 19.289 20.691 22.878 24.884 27.343 31.815 43.462 49.513 53.384 56.896 61.162 64.181
39 19.996 21.426 23.654 25.695 28.196 32.737 44.539 50.660 54.572 58.120 62.428 65.476
40 20.707 22.164 24.433 26.509 29.051 33.660 45.616 51.805 55.758 59.342 63.691 66.766
45 24.311 25.901 28.366 30.612 33.350 38.291 50.985 57.505 61.656 65.410 69.957 73.166
50 27.991 29.707 32.357 34.764 37.689 42.942 56.334 63.167 67.505 71.420 76.154 79.490
