Vehicle Applications of Controller Area Network
1 Introduction
The Controller Area Network (CAN) is a serial bus communications proto-
col developed by Bosch in the early 1980s. It defines a standard for efficient
and reliable communication between sensor, actuator, controller, and other
nodes in real-time applications. CAN is the de facto standard in a large vari-
ety of networked embedded control systems. The early CAN development was
mainly supported by the vehicle industry: CAN is found in a variety of passen-
ger cars, trucks, boats, spacecraft, and other types of vehicles. The protocol is
also widely used today in industrial automation and other areas of networked
embedded control, with applications in diverse products such as production
machinery, medical equipment, building automation, weaving machines, and
wheelchairs.
In the automotive industry, embedded control has grown from stand-alone
systems to highly integrated and networked control systems [7, 11]. By net-
working electro-mechanical subsystems, it becomes possible to modularize
functionalities and hardware, which facilitates reuse and adds capabilities.
Fig. 1 shows an example of an electronic control unit (ECU) mounted on a
diesel engine of a Scania truck. The ECU handles the control of engine, turbo,
fan, etc., but also the CAN communication. Combining networks and mechatronic modules makes it possible to reduce both the cabling and the number of connectors.
∗ The work of K. H. Johansson was partially supported by the European Commission through the ARTIST2 Network of Excellence on Embedded Systems Design, by the Swedish Research Council, and by the Swedish Foundation for Strategic Research through an Individual Grant for the Advancement of Research Leaders.
† The work of M. Törngren was partially supported by the European Commission through ARTIST2 and by the Swedish Foundation for Strategic Research through the project SAVE.
Fig. 1. An ECU mounted directly on a diesel engine of a Scania truck. The arrows
indicate the ECU connectors, which are interfaces to the CAN. (Courtesy of Scania
AB.)
Fig. 2. The CAN protocol defines the lowest two layers (data link and physical) of the seven-layer OSI model: application, presentation, session, transport, network, data link, and physical. There exist several CAN-based higher-layer protocols that are standardized. The choice among them depends on the application.
Fig. 3. The number of CAN nodes sold per year is currently about 400 million.
(Data from the association CAN in Automation [3].)
Bosch presented the protocol as CAN in 1986 at the SAE congress in Detroit [8]. The CAN protocol was internationally standardized in 1993 as ISO 11898-1. The
development of CAN was mainly motivated by the need for new functionality,
but it also reduced the need for wiring. The use of CAN in the automotive
industry has caused mass production of CAN controllers. Today, CAN con-
trollers are integrated on many microcontrollers and available at a low cost.
Fig. 3 shows the number of CAN nodes that were sold during 1999–2003.
2.1 Description
A CAN bus with three nodes is depicted in Fig. 4. The CAN specification [4]
defines the protocols for the physical and the data link layers, which enable
the communication between the network nodes. The application process of a
node, e.g., a temperature sensor, decides when it should request the trans-
mission of a message frame. The frame consists of a data field and overhead,
such as identifier and control fields. Since the application processes in gen-
eral are asynchronous, the bus has a mechanism for resolving conflicts. For
CAN, it is based on a non-destructive arbitration process. The CAN protocol
therefore belongs to the class of protocols denoted as carrier sense multiple
access/collision avoidance (CSMA/CA), which means that a node listens to the bus and starts transmitting only when the bus is idle; collisions are then avoided through the arbitration process described below.
Message formats
CAN distinguishes four message formats: data, remote, error, and overload
frames. Here we limit the discussion to the data frame, shown in Fig. 5. A data
frame begins with the start-of-frame (SOF) bit. It is followed by an eleven-bit
identifier and the remote transmission request (RTR) bit. The identifier and
the RTR bit form the arbitration field. The control field consists of six bits
and indicates how many bytes of data follow in the data field. The data field can contain zero to eight bytes and is followed by the cyclic redundancy
checksum (CRC) field, which enables the receiver to check if the received bit
sequence was corrupted. The two-bit acknowledgment (ACK) field is used
by the transmitter to receive an acknowledgment of a valid frame from any
receiver. The end of a message frame is signaled through a seven-bit end-of-
frame (EOF). There is also an extended data frame with a twenty-nine-bit
identifier (instead of eleven bits).
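To make the frame layout concrete, a received data frame could be represented at the application level by a C structure along the following lines. This is a sketch of a plausible in-memory view, not a standardized API; the bit-exact wire format additionally carries the SOF, CRC, ACK, and EOF fields described above, plus stuff bits.

    #include <stdint.h>

    /* Application-level view of a CAN data frame (hypothetical layout;
     * the on-wire frame also carries SOF, CRC, ACK, EOF, and stuff bits). */
    struct can_data_frame {
        uint32_t id;      /* 11-bit identifier (29 bits in extended frames) */
        uint8_t  rtr;     /* remote transmission request bit */
        uint8_t  dlc;     /* data length code from the control field: 0..8 */
        uint8_t  data[8]; /* zero to eight data bytes */
    };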
Arbitration
Arbitration is the mechanism that handles bus access conflicts. Whenever the
CAN bus is free, any unit can start to transmit a message. Possible conflicts,
due to more than one unit starting to transmit simultaneously, are resolved by
bit-wise arbitration using the identifier of each unit. During the arbitration
phase, each transmitting unit transmits its identifier and compares it with
the level monitored on the bus. If these levels are equal, the unit continues to
transmit. If the unit detects a dominant level on the bus, while it was trying to
transmit a recessive level, then it quits transmitting (and becomes a receiver).
The arbitration phase is performed over the whole arbitration field. When it
is over, there is only one transmitter left on the bus.
The arbitration is illustrated by the following example with three nodes
(see Fig. 6). Let the recessive level correspond to “1” and the dominant level
to “0”, and suppose the three nodes have identifiers Ii, i = 1, 2, 3, with the bit patterns shown in Fig. 6.
Fig. 6. Example illustrating CAN arbitration when three nodes start transmitting their SOF bits simultaneously. Nodes 1 and 2 stop transmitting as soon as they transmit a one (recessive level) while Node 3 transmits a zero (dominant level). At these instants, Nodes 1 and 2 enter receiver mode, indicated in grey. When the identifier has been transmitted, the bus belongs to Node 3, which thus continues transmitting its control field, data field, etc.
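Because the bus acts as a wired-AND of all transmitted levels, the arbitration outcome is easy to reproduce in software. The following C sketch simulates the mechanism for made-up eleven-bit identifiers; it is meant for intuition only, not as driver code.

    #include <stdio.h>

    /* Simulate CAN bit-wise arbitration over an 11-bit identifier.
     * Dominant level is 0 and recessive is 1, so the wired-AND bus
     * lets the lowest identifier (highest priority) win.
     * Returns the index of the winning node (supports up to 8 nodes). */
    int arbitrate(const unsigned id[], int n)
    {
        int active[8];
        for (int i = 0; i < n; i++) active[i] = 1;

        for (int bit = 10; bit >= 0; bit--) {   /* MSB is sent first */
            int bus = 1;                        /* recessive unless pulled down */
            for (int i = 0; i < n; i++)
                if (active[i] && !((id[i] >> bit) & 1)) bus = 0;
            for (int i = 0; i < n; i++)         /* recessive sender seeing dominant quits */
                if (active[i] && ((id[i] >> bit) & 1) && bus == 0) active[i] = 0;
        }
        for (int i = 0; i < n; i++)
            if (active[i]) return i;
        return -1;
    }

    int main(void)
    {
        const unsigned id[] = { 0x65D, 0x65B, 0x65A };  /* made-up identifiers */
        printf("node %d wins arbitration\n", arbitrate(id, 3));
        return 0;
    }

With these identifiers the program prints node 2: 0x65A is the lowest of the three values and therefore has the highest priority.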
Error handling
Error detection and error handling are important for the performance of CAN.
Because of complementary error detection mechanisms, the probability of hav-
ing an undetected error is very small. Error detection is done in five different
ways in CAN: bit monitoring and bit stuffing, as well as frame check, ACK
check, and CRC. Bit monitoring simply means that each transmitter monitors
the bus level, and signals a bit error if the level does not agree with the trans-
mitted signal. (Bit monitoring is not done during the arbitration phase.) After
having transmitted five identical bits, a node will always transmit the oppo-
site bit. This extra bit is neglected by the receiver. The procedure is called
bit stuffing, and it can be used to detect errors. The frame check consists of
checking that the fixed bits of the frame have the values they are supposed
to have, e.g., EOF consists of seven recessive bits. During the ACK in the
message frame, all receivers are supposed to send a dominant level. If the
transmitter, which transmits a recessive level, does not detect the dominant
level, then an error is signaled by the ACK check mechanism. Finally, in the CRC check, every receiver calculates a checksum based on the message and compares it with the CRC field of the message.
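Bit stuffing and the CRC are mechanical enough to express in code. The sketch below assumes a simple model in which the bit stream is an array with one 0/1 value per element; the checksum routine follows the bit-serial algorithm of the CAN specification with the generator polynomial x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1.

    #include <stdint.h>

    /* CRC-15 of CAN, computed bit by bit with polynomial 0x4599. */
    uint16_t can_crc15(const uint8_t *bits, int n)
    {
        uint16_t crc = 0;
        for (int i = 0; i < n; i++) {
            uint16_t crcnxt = ((crc >> 14) & 1u) ^ (bits[i] & 1u);
            crc = (uint16_t)((crc << 1) & 0x7FFF);
            if (crcnxt)
                crc ^= 0x4599;
        }
        return crc;
    }

    /* Bit stuffing: after five consecutive equal bits, transmit the
     * opposite bit; the stuff bit itself starts a new run of one.
     * Returns the length of the stuffed sequence in out[]. */
    int stuff_bits(const uint8_t *in, int n, uint8_t *out)
    {
        int run = 0, m = 0;
        uint8_t prev = 2;                     /* sentinel: matches no level */
        for (int i = 0; i < n; i++) {
            out[m++] = in[i];
            run = (in[i] == prev) ? run + 1 : 1;
            prev = in[i];
            if (run == 5) {
                out[m++] = (uint8_t)!in[i];   /* complementary stuff bit */
                prev = (uint8_t)!in[i];
                run = 1;
            }
        }
        return m;
    }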
Every receiver node thus tries to detect errors within each message. If an error is detected, the incorrect message is immediately and automatically retransmitted. In comparison to other network protocols, this mechanism gives high data integrity and a short error recovery time. CAN
thus provides elaborate procedures for error handling, including retransmis-
sion and reinitialization. The procedures have to be studied carefully for each
application to ensure that the automated error handling is in line with the
system requirements.
Higher-layer protocols
The CAN protocol defines the lowest two layers of the OSI model in Fig. 2.
In order to use CAN, protocols are needed to define the other layers. Field-
bus protocols usually do not define the session and presentation layers, since
they are not needed in these applications. The users may either decide to
define their own software for handling the higher layers, or they may use a
standardized protocol. Existing higher-layer protocols are often tuned to a
certain application domain. Examples of such protocols include SAE J1939, CANopen, and DeviceNet; of these, only SAE J1939 was developed specifically for vehicle applications. Recently, attempts have been made to interface CAN
and Ethernet, which is the dominant technology for local area networks and
widely applied for connecting to the Internet.
These higher-layer protocols have been developed with different applications and traditions in mind, which is reflected, for example, in their support for real-time control. Although SAE J1939 is used for implementing control algorithms, it does not provide explicit support for time-constrained messaging. In contrast, such functionality is provided by CANKingdom and CANopen, which offer explicit support for inter-node synchronization. CANKingdom and CANopen allow static and dynamic configuration of the network, whereas SAE J1939 provides little flexibility.
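As an illustration of how a higher-layer protocol builds on the extended twenty-nine-bit identifier, SAE J1939 packs a three-bit priority, a parameter group number (PGN), and a source address into it. The decoding sketch below assumes the standard J1939 field layout; the example identifier is a commonly cited engine-related one and is used here only for illustration.

    #include <stdint.h>
    #include <stdio.h>

    struct j1939_id {
        uint8_t  priority;  /* 0 (highest) .. 7 (lowest) */
        uint32_t pgn;       /* parameter group number */
        uint8_t  sa;        /* source address */
    };

    /* Unpack the J1939 fields from a 29-bit extended CAN identifier. */
    struct j1939_id j1939_decode(uint32_t id)
    {
        struct j1939_id d;
        uint8_t pf = (id >> 16) & 0xFF;   /* PDU format */
        uint8_t ps = (id >> 8)  & 0xFF;   /* PDU specific */
        uint8_t dp = (id >> 24) & 0x3;    /* extended data page + data page */
        d.priority = (id >> 26) & 0x7;
        /* For PF < 240 (PDU1) the PS byte is a destination address and is
         * not part of the PGN; for PF >= 240 (PDU2) it is. */
        d.pgn = ((uint32_t)dp << 16) | ((uint32_t)pf << 8)
              | (pf >= 240 ? ps : 0);
        d.sa = id & 0xFF;
        return d;
    }

    int main(void)
    {
        struct j1939_id d = j1939_decode(0x0CF00400u);
        printf("priority %u, PGN %u, source address %u\n",
               d.priority, d.pgn, d.sa);   /* priority 3, PGN 61444, SA 0 */
        return 0;
    }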
3 Architectures
In this section, four vehicular examples of distributed control architectures
based on CAN are presented. The architectures are implemented in a passen-
ger car, a truck, a boat, and a spacecraft.
In the automotive industry, there has been a remarkable evolution over the
last few years in which embedded control systems have grown from stand-
alone control systems to highly integrated and networked control systems.
The evolution was originally motivated by reduced cabling and by added functionality such as sensor sharing and diagnostics; currently, several new x-by-wire systems that involve distributed coordination of many subsystems are under development.
Fig. 7 shows the distributed control architecture of the Volvo XC90. The
blocks represent ECUs and the thick lines represent networks. The actual lo-
cation of an ECU in the car is approximately indicated by its location in the
block diagram. There are three classes of ECUs: powertrain and chassis, info-
tainment, and body electronics. Many of the ECU acronyms are defined in the
figure. Several networks are used to connect the ECUs and the subsystems.
There are two CAN buses. The leftmost network in the diagram is a CAN for
powertrain and chassis subsystems. It connects, for example, engine and brake
control (TCM, ECM, BCM, etc.) and has a communication rate of 500 kbps.
The other CAN connects body electronics such as door and climate control
(DDM, PDM, CCM, etc.) and has a communication rate of 125 kbps. The
central electronic module (CEM) is an ECU that acts as a gateway between
the two CAN buses. A media oriented system transport (MOST) network defines the networking for infotainment and telematics subsystems; it connects ECUs for multimedia, phone, and antenna. Finally, local interconnect networks (LINs) are used to connect slave nodes into a subsystem and are
denoted by dashed lines in the block diagram. The maximum configuration
for the vehicle contains about 40 ECUs [7].
Fig. 7. Distributed control architecture for the Volvo XC90. Two CAN buses and
some other networks connect up to about 40 ECUs. (Courtesy of Volvo Car Corpo-
ration.)
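The gateway role of the CEM can be illustrated with a small sketch. It uses the Linux SocketCAN API purely as a convenient stand-in for an automotive platform; the interface names and the forwarded identifier are invented, and a production gateway would also translate message rates and signal encodings between the 500 kbps and 125 kbps buses.

    #include <linux/can.h>
    #include <linux/can/raw.h>
    #include <net/if.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Open a raw CAN socket bound to the given interface name. */
    static int open_can(const char *ifname)
    {
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
        struct ifreq ifr;
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ioctl(s, SIOCGIFINDEX, &ifr);
        struct sockaddr_can addr = { .can_family  = AF_CAN,
                                     .can_ifindex = ifr.ifr_ifindex };
        bind(s, (struct sockaddr *)&addr, sizeof(addr));
        return s;
    }

    int main(void)
    {
        int fast = open_can("can0");   /* hypothetical 500 kbps powertrain bus */
        int slow = open_can("can1");   /* hypothetical 125 kbps body bus */

        struct can_frame f;
        for (;;) {
            if (read(fast, &f, sizeof(f)) != sizeof(f))
                continue;
            if (f.can_id == 0x1A0)     /* made-up ID, e.g., vehicle speed */
                write(slow, &f, sizeof(f));   /* forward to the body bus */
        }
    }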
Several differences compared to the passenger car architecture are due to the fact that trucks are configured in a large number of physical variants and have longer expected lifetimes. These characteristics impose requirements on flexibility with respect to connecting, adding, and removing equipment and trailers.
The control architecture for a Scania truck is shown in Fig. 8. It consists
of three CAN buses, denoted green, yellow, and red by Scania due to their
relative importance. The leftmost (vertical) CAN contains less critical ECUs
such as the audio system and the climate control. The middle (vertical) CAN
handles the communication for important subsystems that are not directly
involved in the engine and brake management. For example, connected to this
Fig. 8. Distributed control architecture for a Scania truck. Three CAN buses (denoted green, yellow, and red due to their relative criticality) connect up to more than twenty ECUs, including the GMS (gearbox management system), COO (coordinator system), ACS (articulation control system), EMS (engine management system), EEC (exhaust emission control), BMS (brake management system), SMS (suspension management system), and SMD (suspension management dolly). The coordinator system ECU (COO) is a gateway between the three CAN buses. (Courtesy of Scania AB.)
bus is the instrument cluster system. Finally, the rightmost (horizontal) bus
is the most critical CAN. It connects all ECUs for the driveline subsystems.
The coordinator system ECU (COO) is a gateway between the three CAN
buses. Connected to the leftmost CAN is a diagnostic bus, which is used to
collect information on the status of the ECUs. The diagnostic bus can thus
be used for error detection and debugging. Variants of the truck are equipped
with different numbers of ECUs (the figure illustrates a configuration close to
maximum). As for passenger cars, there are also subnetworks, but these are
not shown in the figure.
SAE J1939 is the dominant higher-layer protocol for trucks. It facilitates
plug-and-play functionality, but makes system changes and optimization diffi-
cult, partly because the priorities for scheduling the network traffic cannot be
reconfigured. Manufacturers are using loopholes in SAE J1939 to work around
these problems, but their existence indicates deficiencies in the protocol.
Fig. 9. Distributed control architecture for a boat. The block diagram shows a
SeaCAN system for a 7 m remotely controlled rigid-hull inflatable boat. (Courtesy
of the US Navy.)
The CAN protocol is also used in spacecraft and aircraft. SMART-1 is the first
European lunar mission, where the acronym stands for “small missions for ad-
vanced research in technology.” The spacecraft was successfully launched on
September 27, 2003 by the European Space Agency on an Ariane V launcher.
The Swedish Space Corporation was the prime contractor for SMART-1 and
has developed several of the on-board subsystems including the on-board com-
puter, avionics, and the attitude and orbit control system [2]. The main pur-
pose of SMART-1 is to demonstrate the use of solar-electric propulsion in a
low-thrust transfer from earth orbit into lunar orbit. The spacecraft carries
several scientific instruments, and scientific observations are to be performed
on the way to and in its lunar orbit. Currently (October 2004), SMART-1 is
preparing for the maneuvers that will bring it into orbit [5].
Part of the distributed computer architecture of SMART-1 is presented
in Fig. 10. The block diagram illustrates the decomposition of the system
into two parts: one subsystem dedicated to the SMART-1 control system and
Fig. 10. Part of the distributed control architecture for the SMART-1 spacecraft.
The system has two CAN buses: one for the control of the spacecraft and one for the
payload. The spacecraft controllers are redundant and denoted CON-A and CON-B
in the middle of the block diagram. (Courtesy of the Swedish Space Corporation.)
4 Control Applications
Two vehicular control systems with loops closed over CAN buses are discussed
in this section. The first example is a vehicle dynamics control system for
passenger cars that is manufactured by Bosch. The second example is an
attitude and orbit control system for the SMART-1 spacecraft discussed in
the previous section.
Vehicle dynamics control (VDC) systems, also known as electronic stability program, dynamic stability control, or active yaw control, are designed to assist the driver in over-steering, under-steering, and roll-over situations [9, 15]. The principle of a VDC system is illustrated in Fig. 11. The left figure
shows a situation where over-steering takes place, illustrating the case where
the friction limits are reached for the rear wheels, causing the tire forces to
saturate (saturation on the front wheels will instead cause an under-steer sit-
uation). Unless the driver is very skilled, the car will start to skid, meaning
that the vehicle yaw rate and vehicle side slip angle will deviate from what the
driver intended. This is the situation shown for the left vehicle. For the vehicle
on the right, the on-board VDC will detect the emerging skidding situation
and will compute a compensating torque, which for the situation illustrated is
translated into applying a braking force to the outer front wheel. This braking
force will provide a compensating torque and the braking will also reduce the
lateral force for this wheel.
The VDC system estimates the driver's intended course from the steering wheel angle and other relevant sensor data, and compares it with the actual motion of the vehicle. When these deviate too much, the VDC intervenes by automatically applying the brakes of individual wheels and by controlling the engine torque, in order to make the vehicle follow the intended course.
Fig. 11. Illustration of behavior during over-steering for a vehicle with and without a VDC system (left). Central components of a VDC system (right). (Based on figures provided by the Electronic Stability Control Coalition.)
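The feedback principle can be summarized in a simplified sketch: estimate the intended yaw rate from the steering wheel angle and vehicle speed with a steady-state single-track model, and command a corrective yaw torque when the measured yaw rate deviates beyond a tolerance. All numerical values below are illustrative assumptions, not Bosch's production algorithm, which also estimates the side slip angle and coordinates brake and engine interventions.

    #include <math.h>
    #include <stdio.h>

    /* Illustrative parameters (assumptions, not production values). */
    #define WHEELBASE 2.8    /* m */
    #define V_CHAR    20.0   /* m/s, characteristic speed of the vehicle */
    #define DEADBAND  0.05   /* rad/s of tolerated yaw-rate error */
    #define K_YAW     800.0  /* Nm per rad/s of yaw-rate error */

    /* Steady-state reference yaw rate from the single-track model. */
    double ref_yaw_rate(double v, double steer)
    {
        return v * steer / (WHEELBASE * (1.0 + (v / V_CHAR) * (v / V_CHAR)));
    }

    /* Corrective yaw torque; zero inside the deadband so the system only
     * intervenes when the car deviates too much from the intended course.
     * The torque is realized by braking individual wheels. */
    double vdc_torque(double v, double steer, double yaw_rate)
    {
        double e = ref_yaw_rate(v, steer) - yaw_rate;
        return (fabs(e) < DEADBAND) ? 0.0 : K_YAW * e;
    }

    int main(void)
    {
        /* Over-steer example: the car rotates faster than commanded. */
        printf("torque = %.0f Nm\n", vdc_torque(25.0, 0.05, 0.40));
        return 0;
    }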
This section describes parts of the SMART-1 attitude and orbit control system
and how it is implemented in the on-board distributed computer system [2].
The control architecture and the CAN buses of SMART-1 were described in
Section 3. The control objectives of the attitude and orbit control system
are to
• follow desired trajectories according to the goals of the mission,
• point the solar panels toward the sun, and
• minimize energy consumption.
The control objectives should be fulfilled despite the harsh environment and
torque disturbances acting on the spacecraft, such as aero drag (initially when
close to earth), gravitational gradient, magnetic torque, and solar pressure
(mechanical pressure from photons). There are several phases that the control
system should be able to handle, including the phase just after separation from
the launcher, the thrusting phases on the orbit to the moon, and the moon
observation phase.
The sensors and actuators used for controlling the spacecraft’s attitude
are illustrated in Fig. 13. The sensors are a star tracker and solid-state angular
rate sensors. The star tracker provides estimates of the sun vector. It has one
nominal and one redundant processing unit and two hot redundant camera
heads, which can be operated from either of the two processing units. Five an-
gular rate sensors are included to allow for detection and isolation of a failure
in a sensor unit. The rate sensors can provide estimates of spacecraft attitude
during shorter outages of attitude measurements from the star tracker. Ac-
tuators for the attitude control are reaction wheels and hydrazine thrusters.
There are four reaction wheels aligned in a pyramid configuration based on
considerations of environmental disturbances and momentum management.
The angular momentum storage capability is 4 Nms per wheel with a reac-
tion torque above 20 mNm. The hydrazine system consists of four nominal
and four redundant 1 N thrusters.
Fig. 13. Structure of the SMART-1 spacecraft with sensors and actuators for the attitude and orbit control system, including three sun sensors and two star trackers. (Courtesy of the Swedish Space Corporation.)
The attitude and orbit control system consists of a set of control functions
for rate damping, sun pointing, solar array rotation, momentum reduction,
three-axis attitude control, and electric propulsion (EP) thruster orientation.
The system has a number of operation modes, which consist of a subset of
these control functions. The operation modes include the following:
• Detumble mode: In this mode, rotation is stabilized using one P-controller
per axis with the aid of the hydrazine thrusters and the rate sensors.
• Safe mode: Here the EP thruster is pointed toward the sun and set to rotate one revolution per hour around the sun vector. The attitude is controlled using a bang-bang strategy for large sun angles and a PID controller for smaller angles (see the controller-switching sketch after this list). Both controllers use the reaction wheels as actuators and the sun tracker as sensor. The spacecraft rotation is controlled using a PI controller. When the angular velocity of the reaction wheels exceeds a certain limit, their momentum is reduced by use of the hydrazine thrusters.
Fig. 14. Attitude control system under operation in Science mode. Disturbances
include gravity, particles, and aero drag affecting the spacecraft.
• Science mode: In this mode, ground provides the attitude set-points for the
spacecraft and the star tracker provides the actual attitude. The reaction
wheels and the hydrazine thrusters are used.
• Electric propulsion control mode: This mode is similar to the science mode
apart from the additional control of the EP orientation mechanism. This
mechanism can be used to tilt the thrust vector in order to off-load the
reaction wheel momentum about the two spacecraft axes that form the
nominal EP thrust plane. This reduces the amount of hydrazine needed.
The EP mechanism is controlled in an outer and slower control loop (PI)
based on the speed of the reaction wheels and the rotation of the spacecraft
body.
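The controller switching described for the Safe mode can be sketched for a single axis as follows. The threshold, gains, torque level, and sample time are invented for illustration, since the chapter gives no numerical values.

    #include <math.h>

    #define TS         1.0    /* s, sampling period of the control system */
    #define SWITCH_ANG 0.3    /* rad, hypothetical bang-bang/PID boundary */
    #define T_BANG     0.02   /* Nm, hypothetical bang-bang wheel torque */
    #define KP 0.05
    #define KI 0.002
    #define KD 0.5

    /* One axis of Safe-mode attitude control: bang-bang for large sun
     * angles, PID on the reaction wheel for small ones (a sketch only). */
    double safe_mode_torque(double err, double *integ, double *prev_err)
    {
        if (fabs(err) > SWITCH_ANG) {
            *integ = 0.0;                 /* reset PID state while bang-bang */
            *prev_err = err;
            return (err > 0.0) ? T_BANG : -T_BANG;
        }
        *integ += KI * err * TS;          /* integral action */
        double deriv = KD * (err - *prev_err) / TS;
        *prev_err = err;
        return KP * err + *integ + deriv;
    }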
Let us describe the Science mode in some detail. Fig. 14 shows a block di-
agram of the attitude control system in Science mode. As was described for
the Safe mode above, different controllers are activated in the Science mode
depending on the size of the control error. This switching between controllers
is indicated by the block “Mode logic” in the figure. Anti-windup compensa-
tion is used in the control law to prevent integrator windup when the reaction
wheel commands saturate. Also, as in the Safe mode, the hydrazine thrusters
are used to introduce an external momentum when the angular momentum
of the reaction wheels grows too high. The attitude control works indepen-
dently of the thruster commanding. The entire control system is sampled at
1 Hz. The time constant for closed-loop control is about 30 sec for the Science
mode (and 300 sec for the Safe mode). The estimation block in Fig. 14 pro-
vides filtering of signals such as the sun vector and computes the spacecraft
body rates. It includes a Kalman filter with inputs from the star tracker and
the rate sensors. The main purpose is to provide estimates of the attitude
for short periods when the star tracker is not able to deliver the attitude, for
example, due to blinding of the sensor camera heads. The control algorithms
of the attitude and orbit control system reside in the spacecraft controllers of
the control architecture depicted in Fig. 10. With a period of 1 sec, the space-
craft controller issues polling commands over the CAN to the corresponding
sensors, including the gyros and the sun sensors. When all sensor data are
received, the control commands are computed and then sent to the actuators,
including the reaction wheels and the hydrazine thrusters. The maximum nor-
mal utilization of the CAN is about 30%, but under heavy disturbance, due to
retransmission of corrupted messages, it rises to about 40%. The total com-
munication time for communication over the CAN network for attitude and
orbit control sensing and actuating data is approximately 12 msec. It is thus
small compared to the sampling period.
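The polling cycle can be summarized in a C sketch. The driver functions and node identifiers below are hypothetical stand-ins for the platform software, which the chapter does not detail.

    #include <stdint.h>

    /* Hypothetical interface to the platform software and CAN driver. */
    void can_poll(uint16_t node);                    /* issue a poll command   */
    int  can_receive(uint16_t node, uint8_t *buf);   /* blocking, with timeout */
    void can_send(uint16_t node, const uint8_t *cmd, int len);
    void compute_attitude_control(const uint8_t *star, const uint8_t *gyro,
                                  const uint8_t *sun, uint8_t *wheel_cmd,
                                  uint8_t *thruster_cmd);

    /* Made-up CAN node identifiers for the I/O units. */
    enum { STAR_TRACKER = 0x10, RATE_GYROS = 0x11, SUN_SENSORS = 0x12,
           REACTION_WHEELS = 0x20, THRUSTERS = 0x21 };

    /* One cycle of the 1 Hz attitude control loop: poll the sensors over
     * the CAN, run estimation and control, and command the actuators.
     * The whole CAN exchange takes roughly 12 ms, which is small compared
     * to the 1 s sampling period. */
    void aocs_cycle(void)
    {
        uint8_t star[32], gyro[16], sun[8], wheel_cmd[8], thruster_cmd[8];

        can_poll(STAR_TRACKER);
        can_poll(RATE_GYROS);
        can_poll(SUN_SENSORS);
        can_receive(STAR_TRACKER, star);
        can_receive(RATE_GYROS, gyro);
        can_receive(SUN_SENSORS, sun);

        compute_attitude_control(star, gyro, sun, wheel_cmd, thruster_cmd);

        can_send(REACTION_WHEELS, wheel_cmd, sizeof(wheel_cmd));
        can_send(THRUSTERS, thruster_cmd, sizeof(thruster_cmd));
    }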
The on-board software can be patched by uploading new software from
ground during operation in space. So far this has been carried out once for the
star tracker node. The need arose during a very intensive solar storm, which
necessitated modification of software filters to handle larger disturbance and
noise levels than had been anticipated.
During operation, the system platform software is responsible for detec-
tion of failing nodes and redundancy management. The control application
is notified of detected errors such as temporary unavailability of data from
one or more nodes. If the I/O nodes do not reply to poll messages for an
extended period of time, the redundancy management will initiate recovery
actions, including switching to the redundant slave nodes and attempting to
use the redundant network.
Acknowledgments
We would like to express our gratitude to the individuals and the companies
that provided information on the examples described in this chapter. In par-
ticular, Per Bodin and Gunnar Andersson, Swedish Space Corporation, are
acknowledged for providing information on the SMART-1 spacecraft. Dave
Purdue, US Navy, is acknowledged for the description of SeaCAN. Jakob Ax-
elsson, Volvo Car Corporation, is acknowledged for information on the Volvo
XC90. Ola Larses, Scania AB, is acknowledged for information on the Scania
truck. Lars-Berno Fredriksson is acknowledged for general advice on CAN.
References
1. http://www.autosar.org, 2004. Homepage of the development partnership Au-
tomotive Open System Architecture (AUTOSAR).
2. P. Bodin, S. Berge, M. Björk, A. Edfors, J. Kugelberg, and P. Rathsman. The
SMART-1 attitude and orbit control system: Flight results from the first mission
phase. In AIAA Guidance, Navigation, and Control Conference, number AIAA-
2004-5244, Providence, RI, 2004.
3. http://www.can-cia.de, 2004. Homepage of the organization CAN in Automa-
tion (CiA).
4. CAN specification version 2.0. Robert Bosch GmbH, Stuttgart, Germany, 1991.
5. http://www.esa.int/SPECIALS/SMART-1, 2004. Homepage of the SMART-1 spacecraft of the European Space Agency.
6. K. Etschberger. Controller Area Network: Basics, Protocols, Chips and Appli-
cations. IXXAT Automation GmbH, Weingarten, Germany, 2001.
7. J. Fröberg, K. Sandström, C. Norström, H. Hansson, J. Axelsson, and B. Villing.
A comparative case study of distributed network architectures for different au-
tomotive applications. In Handbook on Information Technology in Industrial
Automation. IEEE Press and CRC Press, 2004.
8. U. Kiencke, S. Dais, and M. Litschel. Automotive serial controller area network. In SAE International Congress, paper no. 860391, Detroit, MI, 1986.
9. U. Kiencke and L. Nielsen. Automotive Control Systems. Springer-Verlag,
Berlin, 2000.
10. H. Kopetz. Real-Time Systems: Design Principles for Distributed Embedded
Applications. Kluwer Academic Publishers, Dordrecht, 1997.
11. G. Leen and D. Heffernan. Expanding automotive electronic systems. Computer,
35(1):88–93, Jan 2002.
12. http://www.osek-vdx.org, 2004. Homepage of a joint project of the automotive
industry on a standard for an open-ended architecture for distributed control
units in vehicles.
13. M. Törngren. A perspective to the design of distributed real-time control ap-
plications based on CAN. In 2nd International CiA CAN Conference, London,
U.K., 1995.
14. M. Törngren, K. H. Johansson, G. Andersson, P. Bodin, and D. Purdue. A
survey of contemporary embedded distributed control systems in vehicles. Tech-
nical Report ISSN 1400-1179, ISRN KTH/MMK-04/xx-SE, Dept. of Machine
Design, KTH, 2004.
15. A. T. van Zanten, R. Erhardt, K. Landesfeind, and G. Pfaff. VDC systems
development and perspective. In SAE World Congress, 1998.