AUTOSAR_EXP_ARAComAPI
AUTOSAR AP R21-11
Disclaimer
This work (specification and/or software implementation) and the material contained in
it, as released by AUTOSAR, is for the purpose of information only. AUTOSAR and the
companies that have contributed to it shall not be liable for any use of the work.
The material contained in this work is protected by copyright and other types of intellectual property rights. The commercial exploitation of the material contained in this work requires a license to such intellectual property rights.
This work may be utilized or reproduced without any modification, in any form or by
any means, for informational purposes only. For any other purpose, no part of the work
may be utilized or reproduced, in any form or by any means, without permission in
writing from the publisher.
The work has been developed for automotive applications only. It has neither been
developed, nor tested for non-automotive applications.
The word AUTOSAR and the AUTOSAR logo are registered trademarks.
Table of Contents
1 Preface
2 Introduction
References
[1] Specification of RTE Software
AUTOSAR_SWS_RTE
[2] Middleware for Real-time and Embedded Systems
http://doi.acm.org/10.1145/508448.508472
[3] Patterns, Frameworks, and Middleware: Their Synergistic Relationships
http://dl.acm.org/citation.cfm?id=776816.776917
[4] Specification of Adaptive Platform Core
AUTOSAR_SWS_AdaptivePlatformCore
[5] Specification of Manifest
AUTOSAR_TPS_ManifestSpecification
[6] SOME/IP Protocol Specification
AUTOSAR_PRS_SOMEIPProtocol
[7] Serialization and Unserialization
https://isocpp.org/wiki/faq/serialization
[8] Copying and Comparing: Problems and Solutions
http://dx.doi.org/10.1007/3-540-45102-1_11
[9] SOME/IP Service Discovery Protocol Specification
AUTOSAR_PRS_SOMEIPServiceDiscoveryProtocol
1 Preface
Typically, reading formal specifications isn’t the easiest way to learn and understand a
certain technology. This especially holds true for the Communication Management API
(ara::com) in the AUTOSAR Adaptive Platform.
Therefore this document shall serve as an entry point not only for the developer of software components for the Adaptive Platform, who will use the ara::com API to interact with other application or service components, but also for Adaptive Platform product vendors, who are going to implement an optimized IPC binding for the ara::com API on their platform.
We strongly encourage both groups of readers to read this document before going into the formal details of the related SWS.
Since we address two different groups, it is obvious that parts of the content are more intended for the user of the API (application software developer), while other parts are rather intended for the IPC binding implementer (Adaptive Platform product vendor).
We address this by explicitly marking explanations, which are intended for the IPC binding implementer. So our basic assumption is, that everything which is of interest to the user of the API is also informative/relevant for the IPC binding implementer, while parts explicitly marked as "detailed information for the IPC binding implementer" like this:
AUTOSAR Binding Implementer Hint
Some very detailed technical information
Table 1.1: AUTOSAR Binding Implementer Hint - introduction
are not mandatory knowledge for the user of the ara::com API. Nevertheless, the interested API user might also benefit from these more detailed explanations, as they will help him to get a good understanding of architectural implications.
2 Introduction
Why did AUTOSAR invent yet another communication middleware API/technology, while there are dozens on the market — the more so as one of the guidelines of the Adaptive Platform was to reuse existing and field-proven technology?
Before coming up with a new middleware design, we did evaluate existing technologies,
which — at first glance — seemed to be valid candidates. Among those were:
• ROS API
• DDS API
• CommonAPI (GENIVI)
• DADDY API (Bosch)
The final decision to come up with a new and AUTOSAR-specific Communication Man-
agement API was made due to the fact, that not all of our key requirements were met
by existing solutions:
• We need a Communication Management, which is NOT bound to a concrete
network communication protocol. It has to support the SOME/IP protocol but
there has to be flexibility to exchange that.
• The AUTOSAR service model, which defines services as a collection of provided methods, events and fields, shall be supported naturally/straightforwardly.
• The API shall support an event-driven and a polling model to get access to communicated data equally well. The latter one is typically needed by real-time applications to avoid unnecessary context switches, while the former one is much more convenient for applications without real-time requirements.
• Possibility for seamless integration of end-to-end protection to fulfill ASIL requirements.
• Support for static (preconfigured) and dynamic (runtime) selection of service instances to communicate with.
So in the final ara::com API specification, the reader will find concepts (which we will describe in-depth in the upcoming chapters) that might be familiar to him from technologies we have evaluated, or even from the existing Classic Platform:
• Proxy (or Stub)/Skeleton approach (CORBA, Ice, CommonAPI, Java RMI, ...)
• Protocol independent API (CommonAPI, Java RMI)
• Queued communication with configurable receiver-side caches (DDS, DADDY,
Classic Platform)
• Zero-copy capable API with possibility to shift memory management to the middleware (DADDY)
Terms and Descriptions:

Binding: This typically describes the realization of some abstract concept with a specific implementation or technology. In AUTOSAR, for instance, we have an abstract data type and interface model described in the methodology. Mapping it to a concrete programming language is called language binding. In the AUTOSAR Adaptive Platform, for instance, we do have a C++ language binding. In this explanatory document we typically use the term binding to refer to the implementation of the abstract (technology independent) ara::com API on a concrete communication transport technology, like for instance sockets, pipes, shared memory, ...

Callable: In the context of C++ a Callable is defined as: A Callable type is a type for which the INVOKE operation (used by, e.g., std::function, std::bind, and std::thread::thread) is applicable. This operation may be performed explicitly using the library function std::invoke. (since C++17)
[Figure: Proxy/Skeleton pattern: from a Service Interface Definition, a Service Proxy is generated for the Client Application and a Service Skeleton is generated for the Service Application.]
The basic idea of this pattern is, that from a formal service definition two code artifacts are generated:
• Service Proxy: This code is - from the perspective of the service consumer, which wants to use a possibly remote service - the facade that represents this service on code level.
In an object-oriented language binding, this typically is an instance of a generated class, which provides methods for all functionalities the service provides. So the service consumer side application code interacts with this local facade, which then knows how to propagate these calls to the remote service implementation and back.
• Service Skeleton: This code is - from the perspective of the service implementation, which provides functionalities according to the service definition - the code, which allows to connect the service implementation to the Communication Management, so that the service implementation can be contacted by service consumers.
Being able to provide their own implementations allows AP product vendors to optimize for their chosen memory model.
For most of the types ara::com provides a default mapping to existing C++ types
in ara/com/types.h. This default mapping decision could be reused by an AP product
vendor.
The default mapping provided by ara::com even has a real benefit for a product
vendor, who wants to implement its own variant: He can validate the functional behavior
of his own implementation against the implementation of the default mapping.
Checked Errors within the ara::com API can only occur in the context of a call of a service interface method and are therefore fully covered in subsection 6.2.4 and subsection 6.3.4.
Unchecked Errors within the ara::com API can occur in the context of any ara::com API call.
The ara::com API does not throw any exception. The only way to get exceptions is calling the get method of ara::core::Future, if the user decides to use this approach.
6 API Elements
The following subchapters will guide you through the different API elements, which ara::com defines. Since we will give code examples for various artifacts and provide sample code showing how to use those APIs from a developer perspective, it is a good idea to have some uniformity in our examples.
So we will use a virtual service (interface) called "RadarService". The following is a
kind of a semi-formal description, which should give you an impression of what this
"RadarService" provides/does and might be easier to read than a formal AUTOSAR
ARXML service description:
RadarService {
  // types used within service
  type RadarObjects {
    active : bool
    objects : array {
      elementtype: uint8
      size: variable
    }
  }

  type Position {
    x: uint32
    y: uint32
    z: uint32
  }

  // events provided by service
  event BrakeEvent {
    type: RadarObjects
  }

  // fields provided by service
  field UpdateRate {
    type: uint32
    get: true
    set: true
    notify: true
  }

  // methods provided by service
  method Calibrate {
    param configuration {
      type: string
      direction: in
    }
    param result {
      type: bool
      direction: out
    }
    raises {
      CalibrationFailed
      InvalidConfigString
    }
  }

  method Adjust {
    param target_position {
      type: Position
      direction: in
    }
    param success {
      type: bool
      direction: out
    }
    param effective_position {
      type: Position
      direction: out
    }
  }

  oneway method LogCurrentState {}
}
Listing 6.1: RadarService Definition
The method “LogCurrentState” is a one-way method, which means, that no feedback is returned to the caller as to whether the method was executed at all and with which outcome. It instructs the service RadarService to output its current state into its local log files.
Since it is a core feature, that the technical binding used by an ara::com based application is defined/specified by the integrator during deployment, any expectations of an ara::com software developer regarding the content/structure of an ara::com::InstanceIdentifier are typically invalid. Logging/tracing it out to a log channel might be helpful for debug analysis however.
Then, where does the software developer get such a highly binding specific ara::com::InstanceIdentifier to be used in ara::com API calls?
The answer is: by an ara::com provided functionality, which translates a logical local name typically used by the software developer in his realm into the technology/binding specific ara::com::InstanceIdentifier. This indirection solves both challenges:
• the developer using ara::com does not need to know anything about bindings and their specifics
• integrators can adapt bindings in deployments
The local name from which the ara::com::InstanceIdentifier is constructed comes basically from the AUTOSAR meta-model, which describes your software component model.
The requirement for this local name — we will call it "instance specifier" from now on
— is, that it is unambiguous within an executable. It has basically the form:
<context 0>/<context 1>/.../<context N>/<port name>
The C++ representation of such an "instance specifier" is the class ara::core::InstanceSpecifier. Structurally it looks similar to the ara::com::InstanceIdentifier:
class InstanceSpecifier {
public:
  // ctor to build specifier from AUTOSAR short name identifier
  // with '/' as separator between package names
  explicit InstanceSpecifier(ara::core::StringView value);

  ara::core::StringView toString() const;

  bool operator==(const InstanceSpecifier& other) const;
  bool operator<(const InstanceSpecifier& other) const;
  InstanceSpecifier& operator=(const InstanceSpecifier& other);
};
Listing 6.3: InstanceSpecifier class
The API ara::com provides the following function to do the translation from the ara::core::InstanceSpecifier (local name in the software developer's realm) to the technical ara::com::InstanceIdentifier:
namespace ara {
namespace com {
namespace runtime {
  ara::com::InstanceIdentifierContainer ResolveInstanceIDs(
      ara::core::InstanceSpecifier modelName);
}
}
}
Listing 6.4: InstanceSpecifier Resolution
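A minimal usage sketch of this resolution, assuming a port path "radar_pkg/RadarComponent/RadarPort" that is purely illustrative (as are the header names), could look like this:

#include "ara/core/instance_specifier.h"  // illustrative header names
#include "ara/com/types.h"

void resolveRadarPort() {
  // Logical local name taken from the software component model (illustrative).
  ara::core::InstanceSpecifier specifier("radar_pkg/RadarComponent/RadarPort");

  // Translate the abstract specifier into binding specific instance identifiers;
  // the container may hold several entries if several bindings are configured.
  ara::com::InstanceIdentifierContainer identifiers =
      ara::com::runtime::ResolveInstanceIDs(specifier);

  for (const ara::com::InstanceIdentifier& id : identifiers) {
    // Each identifier carries the technology specific addressing information
    // and could, e.g., be handed to a FindService() overload.
    (void)id;  // nothing further done in this sketch
  }
}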
According to the previous explanations, the impression may have arisen that a software developer always has to resolve an ara::core::InstanceSpecifier to an ara::com::InstanceIdentifier manually (by a call to ResolveInstanceIDs()) first, before using ara::com APIs, which need instance identifier information.
This would indeed be a bit awkward, as we already mentioned, that the "typical" approach for a software developer who implements an Adaptive AUTOSAR SWC is to use abstract "instance specifiers" from the realm of the software component model.
As you will see in the upcoming chapters, which detail the APIs on the proxy and skeleton side, ara::com typically provides function overloads, which take either an ara::com::InstanceIdentifier OR an ara::core::InstanceSpecifier, freeing the developer in the most common use cases (where he simply uses an ara::core::InstanceSpecifier) from explicitly calling ResolveInstanceIDs().
This means, that the direct use of ara::com::InstanceIdentifier and the manual resolution of ara::core::InstanceSpecifier are intended more for power users with rather specific/exotic use cases. Some examples will be given in the chapters, where the corresponding ara::com API overloads at proxy/skeleton side are discussed.
The fundamental difference between the two variants is this: an ara::com::InstanceIdentifier can be exchanged more easily between Adaptive Applications/processes!
As it already contains exactly all the technology specific information and does not need any further resolution via the content of a service instance manifest, such a serialized ara::com::InstanceIdentifier can be reconstructed within a different process and be used there, as long as this process has access to the same binding technology the ara::com::InstanceIdentifier is based upon.
/**
 * This is an overload of the StartFindService method using neither
 * instance specifier nor instance identifier.
 * Semantics is, that ALL instances of the service shall be found, by
 * using all available/configured technical bindings.
 */
static ara::com::FindServiceHandle StartFindService(
    ara::com::FindServiceHandler<RadarServiceProxy::HandleType> handler);

/**
 * Method to stop finding service request (see above)
 */
static void StopFindService(ara::com::FindServiceHandle handle);

/**
 * Opposed to StartFindService(handler, instance) this version
 * is a "one-shot" find request, which is:
 * - synchronous, i.e. it returns after the find has been done
 *   and a result list of matching service instances is
 *   available. (list may be empty, if no matching service
 *   instances currently exist)
 * - does reflect the availability at the time of the method
 *   call. No further (background) checks of availability are
 *   done.
 *
 * \param instanceId which instance of the service type defined
 *        by T shall be searched/found.
 */
static ara::com::ServiceHandleContainer<RadarServiceProxy::HandleType> FindService(
    ara::com::InstanceIdentifier instanceId);

/**
 * This is an overload of the FindService method using an
 * instance specifier, which gets resolved via service instance
 * manifest.
 */
static ara::com::ServiceHandleContainer<RadarServiceProxy::HandleType> FindService(
    ara::core::InstanceSpecifier instanceSpec);

/**
 * This is an overload of the FindService method using neither
 * instance specifier nor instance identifier.
 * Semantics is, that ALL instances of the service shall be found, by
 * using all available/configured technical bindings.
 */
static ara::com::ServiceHandleContainer<RadarServiceProxy::HandleType> FindService();

/**
 * \brief The proxy can only be created using a specific
 * handle which identifies a service.
 *
 * This handle can be a known value which is defined at
 * deployment or it can be obtained using the
 * ProxyClass::FindService method.
 *
 * \param handle The identification of the service the
 * proxy should represent.
 */
explicit RadarServiceProxy(HandleType &handle);

/**
 * proxy instances are not copy constructible.
 */
RadarServiceProxy(RadarServiceProxy &other) = delete;

/**
 * proxy instances are not copy assignable.
 */
RadarServiceProxy& operator=(const RadarServiceProxy &other) = delete;

/**
 * \brief Public member for the BrakeEvent
 */
events::BrakeEvent BrakeEvent;

/**
 * \brief Public Field for UpdateRate
 */
fields::UpdateRate UpdateRate;

/**
 * \brief Public member for the Calibrate method
 */
methods::Calibrate Calibrate;
As you can see in Listing 6.5, ara::com prescribes the Proxy class to provide a constructor. This means, that the developer is responsible for creating a proxy instance to communicate with a possibly remote service.
The ctor takes a parameter of type RadarServiceProxy::HandleType — an inner class of the generated proxy class. Probably the immediate question then is: "What is this handle and how to create it/where to get it from?"
What it is should be straightforward: after the call to the ctor you have a proxy instance, which allows you to communicate with the service; therefore the handle has to contain the needed addressing information, so that the Communication Management binding implementation is able to contact the service.
What exactly this address information contains is totally dependent on the binding implementation/technical transport layer!
That already partly answers the question "how to create/where to get it": really creating it is not possible for an application developer, as he is — according to AUTOSAR core concepts — implementing his application AP product independently and therefore Communication Management independently.
The solution is, that ara::com provides the application developer with an API to find
service instances, which returns such handles.
This part of the API is described in detail here: subsection 6.2.2. The co-benefit from
this approach — that proxy instances can only be created from handles, which are the
result of a "FindService" API — is, that you are only able to create proxies, which are
really backed by an existing service instance.
• the found service is located within a different application on the same node (within
the same AP infrastructure)
The possible combinations could increase complexity: for an existing service type any of those cases may apply at the same time — one instance of the service which the application talks to is located in the same process (this is not that strange if you think of a large application with much code re-use), one on the same ECU in a different process and one on a remote ECU.
We (ara::com design team) require that such a setup works seamlessly for the
ara::com user. By the way: this functionality is called Multi-Binding as you have a
service abstraction in the form of a proxy class, which is bound to multiple different
transport bindings.
In all cases the application developer using ara::com interacts with instances of the same Proxy class, for which you (the binding implementer) provided the implementation.
The somewhat obvious expectation from an AP product is now, that it provides ways to
communicate in those different cases efficiently.
Meaning that if the developer uses a proxy instance constructed from an instance of
HandleType, which denotes the instance of the service local to the proxy user, then the
Proxy implementation should use a different technical solution (in this case for instance a
simple local function call / local in address space copies) than in the case of a proxy
constructed from an instance of HandleType denoting a remote service instance.
In a nutshell: What the AP product vendor has to provide, is a Proxy class implementation,
which is able to delegate to completely different transport layer implementations depending
on the information contained in the instance of HandleType given in the ctor.
Table 6.1: AUTOSAR Binding Implementer Hint - handle class
So the question which probably might come up here: why this indirection, where an application developer first has to call some ara::com provided functionality to get a handle, which he then has to use in a ctor call? ara::com could have directly given back a proxy instance instead of a handle from the "FindService" functionality.
The reason for that can be better understood after reading how ara::com handles the access to events (subsection 6.2.3). What is sufficient to say at this point is, that a proxy instance contains certain state.
And because of this there are use cases, where the application developer wants to use different instances of a proxy, all "connected" to the same service instance.
So if you just accept, that there are such cases, the decision for this indirection via handles becomes clear: ara::com cannot know, whether an application developer always wants the same proxy instance (explicitly sharing state) or a new instance each time he triggers some "FindService" functionality, which returns a proxy for exactly the same service instance.
So by providing this indirection/decoupling the decision is in the hands of the ara::com user.
Instances of the Proxy class on the other hand are neither copy constructible nor copy
assignable! This is an explicit design decision, which complements the idea of forcing
the construction via HandleType.
The instances of a proxy class might be very resource intensive because they own event/field caches, registered handlers, complex state, and so on. Thus, when allowing copy construction/copy assignment, there is a risk that such copies are done unintentionally.
So — in a nutshell — forcing the user to go the route via HandleType for Proxy creation shall make him aware, that this decision should be well thought out.
The Proxy class provides class (static) methods to find service instances, which are
compatible with the Proxy class.
Since the availability of service instances is dynamic by nature, as they have a life cycle, ara::com provides two different ways to do a “FindService” for convenience:
• StartFindService is a class method, which starts a continuous “FindService”
activity in the background, which notifies the caller via a given callback anytime
the availability of instances of the service changes.
• FindService is a one-off call, which returns available instances at the point in
time of the call.
Both of those methods come in three different overloads, depending on the instance identifier approach taken (see section 6.1):
• one taking an ara::com::InstanceIdentifier
• one taking an ara::core::InstanceSpecifier
• one taking NO argument.
The semantics of the no-argument variant is simple: find all services of the given type, irrespective of their binding and binding specific instance identifier. Note, that for finding/searching only those technical bindings will be used, which are configured for the corresponding service interface within the service instance manifest in the form of a ServiceInterfaceDeployment.
The synchronous one-off variant FindService returns a container of handles (see
subsection 6.2.1) for the matching service instances, which might also be empty, if no
matching service instance is currently available.
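A minimal sketch of such a one-shot lookup followed by proxy construction (error handling omitted, the instance specifier path is purely illustrative):

using namespace com::mycompany::division::radarservice;

void connectToRadarService() {
  // Illustrative port path from the software component model.
  ara::core::InstanceSpecifier radarPort("radar_pkg/RadarComponent/RadarPort");

  // Synchronous one-shot lookup of the currently available service instances.
  auto handles = proxy::RadarServiceProxy::FindService(radarPort);

  if (!handles.empty()) {
    // Construct a proxy from the first handle found; all further communication
    // with that service instance goes through this proxy instance.
    proxy::RadarServiceProxy radarProxy(handles[0]);
    // ... use radarProxy
  }
}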
Opposed to that, the StartFindService returns a FindServiceHandle, which
can be used to stop the ongoing background activity of monitoring service instance
availability via call to StopFindService.
The first (and specific for this variant) parameter to StartFindService is a user
provided handler function with the following signature:
template <typename T>
using FindServiceHandler = std::function<void(ServiceHandleContainer<T>, FindServiceHandle)>;
Any time the binding detects, that the availability of service instances matching the
given instance criteria in the call to StartFindService has changed, it will call the
user provided handler with an updated list of handles of the now available service
instances.
Right after being called, StartFindService behaves similarly to FindService in the sense, that it will fire the user provided handler function with the currently available service instances, which might also be an empty handle list.
After that initial callback, it will call the provided handler again whenever this initial service availability changes.
Note, that it is explicitly allowed, that the ara::com user/developer calls StopFindService within the user provided handler.
For this purpose, the handler explicitly gets the FindServiceHandle argument. The handler need not be re-entrant. This means, that the binding implementer has to take care of serializing calls to the user provided handler function.
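A sketch of a continuous find request, which stops itself from within the handler once a matching instance shows up (handler logic purely illustrative):

using namespace com::mycompany::division::radarservice;

void startRadarMonitoring(const ara::core::InstanceSpecifier& radarPort) {
  proxy::RadarServiceProxy::StartFindService(
      [](ara::com::ServiceHandleContainer<proxy::RadarServiceProxy::HandleType> handles,
         ara::com::FindServiceHandle handle) {
        if (!handles.empty()) {
          // A matching instance is available; stop the background activity.
          // Calling StopFindService from within the handler is explicitly allowed.
          proxy::RadarServiceProxy::StopFindService(handle);
        }
      },
      radarPort);
}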
[Figure: Auto update of an existing proxy instance. At T0 the service instance is up and a method call via the proxy succeeds. At T1/T2 the service instance is down and a method call fails. At T3 the service instance comes up again and the proxy instance gets auto updated. At T4 a method call succeeds again.]
Note, in case you have registered a FindServiceHandler, the binding implementation must ensure, that it does the “auto updating” of existing proxy instances before it calls the registered FindServiceHandler!
The reason for this is: It shall be supported, that the application developer can interact
successfully with an existing proxy instance within the FindServiceHandler, when
the handle of the proxy instance is given in the call, signaling, that the service instance
is up again.
This expectation is shown in the following code snippet:
/**
 * Reference to radar instance, we work with,
 * initialized during startup
 */
RadarServiceProxy *myRadarProxy;

void radarServiceAvailabilityHandler(
    ServiceHandleContainer<RadarServiceProxy::HandleType> curHandles,
    FindServiceHandle handle) {
  for (RadarServiceProxy::HandleType handle : curHandles) {
    if (handle.GetInstanceId() == myRadarProxy->GetHandle().GetInstanceId()) {
      /**
       * This call on the proxy instance shall NOT lead to an exception,
       * regarding service instance not reachable, since proxy instance
       * should be already auto updated at this point in time.
       */
      ara::core::Future<Calibrate::Output> out = myRadarProxy->Calibrate("test");

      // ... do something with out.
    }
  }
}
Listing 6.6: Access to proxy instance within FindService handler
• if the service instance comes up again and this gets detected by the service discovery of the AP product, it shall apply the “auto update” to the proxy instance (updating it with the new transport layer address).
• at the time of construction of the proxy instance with this outdated handle, the binding implementation is already aware of the new transport layer address and uses this one instead.
The “auto update” mechanism even has to work, if the service instance is changing the transport layer mechanism completely.
Table 6.2: AUTOSAR Binding Implementer Hint - auto update
6.2.3 Events
For each event the remote service provides, the proxy class contains a member of an event specific wrapper class. In our example the member has the name BrakeEvent and is of type events::BrakeEvent.
As you can see in Listing 6.5, all the event classes needed for the proxy class are generated inside a specific namespace events, which is contained inside the proxy namespace.
The member in the proxy is used to access events/event data, which are sent by the
service instance our proxy is connected to. Let’s have a look at the generated event
class for our example:
class BrakeEvent {
public:
  /**
   * \brief Shortcut for the events data type.
   */
  using SampleType = RadarObjects;

  /**
   * \brief The application expects the CM to subscribe the event.
   *
   * The Communication Management shall try to subscribe and resubscribe
   * until \see Unsubscribe() is called explicitly.
   * The error handling shall be kept within the Communication Management.
   *
   * The function returns immediately. If the user wants to get notified,
   * when subscription has succeeded, he needs to register a handler
   * via \see SetSubscriptionStateChangeHandler(). This handler gets
   * then called after subscription was successful.
   *
   * \param maxSampleCount maximum number of samples, which can be held.
   */
  void Subscribe(size_t maxSampleCount);

  /**
   * \brief Query current subscription state.
   *
   * \return Current state of the subscription.
   */
  ara::com::SubscriptionState GetSubscriptionState() const;

  /**
   * \brief Unsubscribe from the service.
   */
  void Unsubscribe();

  /**
   * \brief Get the number of currently free/available sample slots.
   *
   * \return number from 0 - N (N = count given in call to Subscribe())
   *         or an ErrorCode in case of number of currently held samples
   *         already exceeds the max number given in Subscribe().
   */
  ara::core::Result<size_t> GetFreeSampleCount() const noexcept;

  /**
   * Setting a receive handler signals the Communication Management
   * implementation to use event style mode.
   * I.e. the registered handler gets called asynchronously by the
   * Communication Management as soon as new event data arrives for
   * that event. If the user wants to have strict polling behavior,
   * where no handler is called, NO handler should be registered.
   *
   * Handler may be overwritten anytime during runtime.
   *
   * Provided Handler needs not to be re-entrant since the
   * Communication Management implementation has to serialize calls
   * to the handler: Handler gets called once by the MW, when new
   * events arrived since the last call to GetNewSamples().
   *
   * When application calls GetNewSamples() again in the context of the
   * receive handler, MW must - in case new events arrived in the
   * meantime - defer next call to receive handler until after
   * the previous call to receive handler has been completed.
   */
  void SetReceiveHandler(ara::com::EventReceiveHandler handler);

  /**
   * Remove handler set by SetReceiveHandler()
   */
  void UnsetReceiveHandler();

  /**
   * Setting a subscription state change handler, which shall get
   * called by the Communication Management implementation as soon
   * as the subscription state of this event has changed.
   *
   * Communication Management implementation will serialize calls
   * to the registered handler. If multiple changes of the
   * subscription state take place during the runtime of a
   * previous call to a handler, the Communication Management
   * aggregates all changes to one call with the last/effective
   * state.
   *
   * Handler may be overwritten during runtime.
   */
  void SetSubscriptionStateChangeHandler(
      ara::com::SubscriptionStateChangeHandler handler);

  /**
   * Remove handler set by SetSubscriptionStateChangeHandler()
   */
  void UnsetSubscriptionStateChangeHandler();

  /**
   * \brief Get new data from the Communication Management
   * buffers and provide it in callbacks to the given callable f.
   *
   * \pre BrakeEvent::Subscribe has been called before
   *      (and not be withdrawn by BrakeEvent::Unsubscribe)
   *
   * \param f
   * \parblock
   * callback, which shall be called with new sample.
   *
   * This callable has to fulfill signature
   * void(ara::com::SamplePtr<SampleType const>)
   * \parblockend
   *
   * \param maxNumberOfSamples
   * \parblock
   * upper bound of samples to be fetched from middleware buffers.
   * Default value means "no restriction", i.e. all newly arrived samples
   * are fetched as long as there are free sample slots.
   * \parblockend
   *
   * \return Result, which contains the number of samples,
   * which have been fetched and presented to user via calls to f or an
   * ErrorCode in case of error (e.g. precondition not fulfilled)
   */
  template <typename F>
  ara::core::Result<size_t> GetNewSamples(
      F&& f,
      size_t maxNumberOfSamples = std::numeric_limits<size_t>::max());
};
Listing 6.7: Proxy side BrakeEvent Class
The data type of the event data in our example event is RadarObjects (see Listing 6.1). The first thing you encounter is the type alias, which assigns the generic name SampleType to the concrete type; this name is then used throughout the interface.
The mere fact, that there exists a member of the event wrapper class inside the proxy instance, does not mean, that the user gets instant access to events raised/sent out by the service instance.
First you have to “subscribe” for the event, in order to tell the Communication Management, that you are now interested in receiving events.
For that purpose the event wrapper class of ara::com provides the method
/**
 * \brief The application expects the CM to subscribe the event.
 *
 * ....
 *
 * \param maxSampleCount maximum number of samples, which can be held.
 */
void Subscribe(size_t maxSampleCount);
The call to the Subscribe method is asynchronous by nature. This means that at the point in time Subscribe returns, it is just the indication, that the Communication Management has accepted the order to take care of the subscription.
The subscription process itself may (most likely, but this depends on the underlying IPC implementation) involve the event provider side. Contacting the possibly remote service to set up the subscription might take some time.
So the binding implementation of the subscribe is allowed to return immediately after accepting the subscribe, even if for instance the remote service instance has not yet acknowledged the subscription (in case the underlying IPC supports a mechanism like acknowledgement at all). If the user — after having called Subscribe — wants to get feedback about the success of the subscription, he might call:
/**
 * \brief query current subscription state.
 *
 * \return current state of the subscription.
 */
ara::com::SubscriptionState GetSubscriptionState() const;
In the case the underlying IPC implementation uses some mechanism like a subscription acknowledgement from the service side, an immediate call to GetSubscriptionState after Subscribe may return kSubscriptionPending, if the acknowledgement has not yet arrived.
Otherwise — in case the underlying IPC implementation gets instant feedback, which is very likely for local communication — the call might also already return kSubscribed.
If the user needs to monitor the subscription state, he has two possibilities:
• Polling via GetSubscriptionState
• Registering a handler, which gets called, when the subscription state changes
The first possibility, polling via GetSubscriptionState, we have already described above. The second possibility relies on using the following method on the event wrapper instance:
/**
 * Setting a subscription state change handler, which shall get called by
 * the Communication Management implementation as soon as the subscription
 * state of this event has changed.
 *
 * Handler may be overwritten during runtime.
 */
void SetSubscriptionStateChangeHandler(
    ara::com::SubscriptionStateChangeHandler handler);
Here the user may register a handler function, which has to fulfill the following signature:
enum class SubscriptionState { kSubscribed, kNotSubscribed, kSubscriptionPending };

using SubscriptionStateChangeHandler = std::function<void(SubscriptionState)>;
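A small sketch combining subscription with such a handler, assuming a proxy instance myRadarProxy as in the earlier snippets (the sample count of 10 is arbitrary):

// Subscribe for up to 10 samples held in parallel; the call returns immediately.
myRadarProxy->BrakeEvent.Subscribe(10);

// Get notified as soon as the subscription state changes.
myRadarProxy->BrakeEvent.SetSubscriptionStateChangeHandler(
    [](ara::com::SubscriptionState state) {
      if (state == ara::com::SubscriptionState::kSubscribed) {
        // Event data may arrive from now on.
      }
    });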
So, after you have successfully subscribed to an event according to the previous chapters, how is the access to received event data samples achieved? The event data, which is sent from the event emitter (service provider) to subscribing proxy instances, is — in typical IPC implementations — accumulated/queued in some buffers (e.g. kernel buffers, special IPC implementation controlled shared memory regions, ...). So an explicit action has to be taken to get/fetch those event samples from those buffers, eventually deserialize them and then put them into the event wrapper class instance specific cache in the form of a correct SampleType. The API to trigger this action is GetNewSamples.
/**
 * \brief Get new data from the Communication Management
 * buffers and provide it in callbacks to the given callable f.
 *
 * ....
 */
template <typename F>
ara::core::Result<size_t> GetNewSamples(
    F&& f,
    size_t maxNumberOfSamples = std::numeric_limits<size_t>::max());
As you can see, the API is a function template, due to the fact, that the first parameter f is a very flexible user provided Callable, which has to fulfill the following signature requirement: void(ara::com::SamplePtr<SampleType const>).
The second argument of type size_t controls the maximum number of event samples, that shall be fetched/deserialized from the middleware buffers and then presented to the application in the form of a call to f.
On a call to GetNewSamples(), the ara::com implementation first checks, whether the number of event samples held by the application already exceeds the maximum number, which it had committed to in the previous call to Subscribe(). If so, an ara::core::ErrorCode is returned. Otherwise the ara::com implementation checks, whether the underlying buffers contain a new event sample and — if that is the case — deserializes it into a sample slot and then calls the application provided f with a SamplePtr pointing to this new event sample. This processing (checking for further samples in the buffer and calling back the application provided callback f) is repeated until either:
• there aren’t any new samples in the buffers
• there are further samples in the buffers, but the application provided maxNumberOfSamples argument in the call to GetNewSamples() has been reached.
• there are further samples in the buffers, but the application already exceeds its maxSampleCount, which it had committed to in Subscribe().
Within the implementation of the callback f, which the application/user provides, it can be decided, what to do with the passed SamplePtr argument (e.g. after a deep inspection of the event data): shall the new sample be "thrown away", because it is not of interest, or shall it be kept for later? To get an idea, what keeping/throwing away of event samples means, the semantics of the SamplePtr, which is the access/entry point to the event sample data, has to be fully understood.
The following chapter shall clarify this.
The returned ara::core::Result contains either an ErrorCode or — in the success case — the number of calls to f, which have been done in the context of the GetNewSamples call.
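A sketch of such a fetch, keeping only the samples of interest in a local cache (myRadarProxy as in the earlier snippets; the filtering criterion is purely illustrative):

#include <deque>
#include <utility>

// Local cache of samples we decided to keep, fifo style.
static std::deque<ara::com::SamplePtr<const proxy::events::BrakeEvent::SampleType>>
    activeSamples;

void pollBrakeEvents() {
  ara::core::Result<size_t> result = myRadarProxy->BrakeEvent.GetNewSamples(
      [](ara::com::SamplePtr<const proxy::events::BrakeEvent::SampleType> sample) {
        if (sample->active) {
          // Keep the sample: moving the SamplePtr into our cache keeps the
          // underlying sample slot occupied.
          activeSamples.push_back(std::move(sample));
        }
        // Otherwise the SamplePtr goes out of scope here and its slot is freed.
      });

  if (result.HasValue()) {
    // result.Value() is the number of samples handed to the callback.
  }
}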
The SetReceiveHandler() API shown in the BrakeEvent class above allows you to register a user defined callback, which the Communication Management has to call in case new event data is available since the last call to GetNewSamples(). The registered function need NOT be re-entrant, as the Communication Management has to serialize calls to the registered callback.
It is explicitly allowed to call GetNewSamples() from within the registered callback!
Note, that the user can alter the behavior between event-driven and polling style anytime, as he also has the possibility to withdraw the user specific “receive handler” with the UnsetReceiveHandler() method provided by the event wrapper.
The following short code snippet is a simple example of how to work with events on the proxy/client side. In this sample a proxy instance for the RadarService is created within main and a reception handler is registered, which gets called by the ara::com implementation any time new BrakeEvent events get received. This means, that in this example we are using the "Event-Driven" approach.
In our sample receive handler, we update our local cache with newly received events, thereby filtering out all BrakeEvent events, which do not fulfill a certain property. Afterwards we call a processing function, which processes the samples we have decided to keep.
#include "RadarServiceProxy.hpp"
#include <memory>
#include <deque>

using namespace com::mycompany::division::radarservice;
using namespace ara::com;

/**
 * our radar proxy - initially the unique ptr is invalid.
 */
std::unique_ptr<proxy::RadarServiceProxy> myRadarProxy;

/**
 * a storage for BrakeEvent samples in fifo style
 */
std::deque<SamplePtr<const proxy::events::BrakeEvent::SampleType>> lastNActiveSamples;
The following figure sketches a simple deployment, where we have a service providing
an event, for which two different local adaptive SWCs have subscribed through their
respective ara::com proxies/event wrappers.
As you can see in the picture, both proxies have a local event cache. This is the cache, which gets filled via GetNewSamples(). What this picture also depicts is, that the service implementation sends its event data to a Communication Management buffer, which is apparently outside the process space of the service implementation — the picture here assumes, that this buffer is owned by the kernel, or that it is realized as a shared memory between communicating proxies and skeleton, or that it is owned by a separate binding implementation specific “daemon” process.
[Figure: The skeleton sends event data via Event.Send() into middleware controlled event buffers; each proxy fetches the data via Event.GetNewSamples() into its Local Event Cache.]
The background of those assumptions made in the figure is the following: Adaptive Applications are realized as processes with separated/protected memory/address spaces.
Event data sent out by the service implementation (via the skeleton) cannot be buffered inside the service/skeleton process private address space: if that were the case, event data access by the proxies would typically lead to context switches to the service application process. This is something, which we want to have total control over on the service side via the MethodCallProcessingMode (see subsection 6.3.3), and it should therefore not be triggered by the communication behavior of arbitrary service consumers. Now let's have a rough look at the three different places, where the buffer, which is the target of the “send event”, might be located:
• Kernel Space: Data is sent to a memory region not mapped directly to an application process. This is typically the case, when the binding implementation uses IPC primitives like pipes or sockets, where data written to such a primitive ends up in kernel buffer space.
• Shared Memory: Data is sent to a memory region, which is also directly readable
from receivers/proxies. Writing/reading between different parties is synchronized
specifically (lightweight with mem barriers or with explicit mutexes).
• IPC-Daemon Space: Data is sent to an explicit non-application process, which acts as a kind of daemon for the IPC/binding implementation. Note, that technically this approach might be built on an IPC primitive like communication via kernel space or shared memory to get the data from the service process to the daemon process.
Each of those approaches might have different pros and cons regarding flexibility/size
of buffer space, efficiency in terms of access speed/overhead and protection against
malicious access/writing of buffers. Therefore consideration of different constraints in
an AP product and its use might lead to different solutions.
What shall be emphasized in this example is, that the AP product vendor is explicitly encouraged to use a reference based approach to access event data: the ara::com API of the event wrapper intentionally models the access via SamplePtr instances, which are passed to the callbacks, and not via values!
In those rather typical scenarios of 1:N event communication, this would allow to have inside the “Local Event Cache” not the event data values themselves but pointers/references to the data contained in a central Communication Management buffer. Updating the local cache via GetNewSamples() could then be implemented not as a value copy but as reference updates.
To be honest: this is obviously a coarse grained picture of optimization possibilities regarding buffer usage! As hinted at in section 9.1, data transferred to application processes must typically be de-serialized at the latest before the first application access.
Since de-serialization has to be specific to the alignment of the consuming application, the central sharing of an already de-serialized representation might be tricky. But at least you get the point, that the API design for event data access on the proxy/service consumer side gives room to apply event data sharing among consumers.
6.2.4 Methods
For each method the remote service provides, the proxy class contains a member of a
method specific wrapper class.
In our example, we have three methods and the corresponding members have the names Calibrate (of type methods::Calibrate), Adjust (of type methods::Adjust) and LogCurrentState (of type methods::LogCurrentState). Just like the event classes, the needed method classes of the proxy class are generated inside a specific namespace methods, which is contained inside the proxy namespace.
The method member in the proxy is used to call a method provided by the possibly
remote service instance our proxy is connected to.
Let’s have a look at the generated method class for our example — we pick out the
Adjust method here:
class Adjust {
public:
  /**
   * For all output and non-void return parameters
   * an enclosing struct is generated, which contains
   * non-void return value and/or out parameters.
   */
  struct Output {
    bool success;
    Position effective_position;
  };

  /**
   * \brief Operation will call the method.
   *
   * Using the operator the call will be made by the Communication
   * Management and a future returned, which allows the caller to
   * get access to the method result.
   *
   * \param[in] target_position See service description.
   *
   * \return A future containing Output struct
   */
  ara::core::Future<Output> operator()(const Position &target_position);
};
Listing 6.9: Proxy side Adjust Method Class
So the method wrapper class is not that complex. It just consists of two parts: an inner structure definition, which aggregates all OUT-/INOUT-parameters of the method, and a function call operator, which is used to call the service method.
The operator takes all of the service method's IN-/INOUT-parameters as IN-parameters. That means INOUT-parameters in the abstract service method description are split into a pair of IN and OUT parameters in the ara::com API.
The return value of a call to a service method, which is not a “one-way method”, is an ara::core::Future, where the template parameter is the type of the inner struct, which aggregates all OUT-parameters of the method. More about this ara::core::Future in the following subsection.
Before proceeding with the functionalities provided for “normal” methods, we briefly introduce “one-way methods” here, as we already referred to this term in the previous section. ara::com supports a special flavor of a method, which we call “one-way” or “fire-and-forget”. Technically this is a method with only IN-params — no OUT-params and no raising of errors allowed. There is also no hand-shaking/synchronisation possible with the server! The client/caller therefore gets no feedback at all, whether the server/callee has processed a “one-way” call or not.
There are communication patterns, where such a best-effort approach is fully sufficient. In this case such “one-way/fire-and-forget” semantics is very light-weight from a resource perspective. If we look at the signature of such a method, we see, that it is simpler than that of a regular method:
class LogCurrentState {
public:
  /**
   * \brief Operation will call the method.
   *
   * Using the operator the call will be made by the Communication
   * Management.
   *
   * It is a one-way method, so no feedback (return value/out-parameter)
   * is given.
   */
  void operator()();
};
Listing 6.10: Proxy side LogCurrentState Method Class
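Calling such a fire-and-forget method on a proxy instance (here named service, constructed from a handle as described above) is then just:

// No future is returned and no feedback is given; the call has
// best-effort, fire-and-forget semantics.
service.LogCurrentState();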
Similar to the access to event data described in the previous section (subsection 6.2.3), we provide API support for an event-driven and a polling-based approach also for accessing the results of a service method call.
The magic of differentiation between both approaches lies in the returned ara::core::Future: ara::core::Future is basically an extended version of the C++11/C++14 std::future class; see [4] for details.
Like in the event data access, event-driven here means, that the caller of the method (the application with the proxy instance) gets notified by the Communication Management implementation as soon as the method call result has arrived.
For a Communication Management implementation of ara::com this means, it has to set up some kind of waiting mechanism (WaitEvent) behind the scenes, which gets woken up as soon as the method result becomes available, in order to notify the ara::com user. So how do the different usage patterns of the ara::core::Future work then?
Let’s have a deeper look at our ara::core::Future and the interfaces it provides:
enum class future_status : uint8_t
{
    ready,    ///< the shared state is ready
    timeout,  ///< the shared state did not become ready before the specified timeout has passed
};

template <typename T, typename E = ErrorCode>
class Future {
public:

    Future() noexcept = default;
    ~Future();

    Future(Future const&) = delete;
    Future& operator=(Future const&) = delete;

    Future(Future&& other) noexcept;
    Future& operator=(Future&& other) noexcept;

    /**
     * @brief Get the value.
     *
     * This function shall behave the same as the corresponding std::future function.
     *
     * @returns value of type T
     * @error Domain:error the error that has been put into the corresponding Promise via Promise::SetError
     */
    T get();

    /**
     * ....
     */
    template <typename Clock, typename Duration>
    future_status wait_until(std::chrono::time_point<Clock, Duration> const& deadline) const;

    /**
     * @brief Register a callable that gets called when the Future becomes ready.
     *
     * When @a func is called, it is guaranteed that get() and GetResult() will not block.
     *
     * @a func may be called in the context of this call or in the context of Promise::set_value()
     * or Promise::SetError() or somewhere else.
     *
     * The return type of @a then depends on the return type of @a func (aka continuation).
     *
     * Let U be the return type of the continuation (i.e. std::result_of_t<std::decay_t<F>(ara::core::Future<T,E>)>).
     * If U is ara::core::Future<T2,E2> for some types T2, E2, then the return type of @a then is ara::core::Future<T2,E2>,
     * otherwise it is ara::core::Future<U>. This is known as implicit unwrapping.
     *
     * @param func a callable to register
     * @returns a new Future instance for the result of the continuation
     */
    template <typename F>
    auto then(F&& func) -> SEE_COMMENT_ABOVE;

    /**
     * @brief Return whether the asynchronous operation has finished.
     *
     * If this function returns true, get(), GetResult() and the wait calls are guaranteed not to block.
     *
     * @returns true if the Future contains a value or an error, false otherwise
     */
    bool is_ready() const;
};
Listing 6.11: ara::core::Future Class
using namespace ara::com;

int main() {
  // some code to acquire a handle
  // ...
  RadarServiceProxy service(handle);
  Future<Calibrate::Output> callFuture = service.Calibrate(myConfigString);

  /**
   * Now we do a blocking get(), which will return in case the result
   * (valid or exception) is received.
   *
   * If Calibrate could throw an exception and the service has set one,
   * it would be thrown by get()
   */
  Calibrate::Output callOutput = callFuture.get();

  // process callOutput ...
  return 0;
}
Listing 6.12: Synchronous method call sample
The last possibility to get notified about the result of the future (a valid value or an error) is by registering a callback method via then(). This is one of the extensions of ara::core::Future over std::future.
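Continuing the previous example, a fully event-driven variant using then() might look like this (sketch):

ara::core::Future<Calibrate::Output> callFuture = service.Calibrate(myConfigString);

// Register a continuation; it gets called once the result (or an error)
// is available. Inside the continuation, get() is guaranteed not to block.
callFuture.then([](ara::core::Future<Calibrate::Output> f) {
  Calibrate::Output output = f.get();
  // process output ...
});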
As you can see, all the possibilities to get access to the future’s method result we have discussed (and partly showed in examples) up to now — the blocking “get”, all “wait” variants and “then” — are event-driven. I.e. the event of the arrival of the method result (or an error) leads either to the resumption of a blocked user thread or to a call of a user provided function!
There are of course cases, where the ara::com user does not want his application (process) to be activated by some method-call return event at all! Think of a typical RT (real time) application, which must be in total control of its execution. We already discussed this RT/polling use case in the context of event data access (subsubsection 6.2.3.3). For method calls the same approach applies!
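A polling sketch, which never blocks the caller, might then look like this (the processing is illustrative):

// Poll cyclically from the application's own task; returns true once the
// result has been consumed.
bool pollCalibrateResult(ara::core::Future<Calibrate::Output>& calibrateFuture) {
  if (!calibrateFuture.is_ready()) {
    return false;  // not ready yet, simply check again in the next cycle
  }
  // is_ready() returned true, so get() is guaranteed not to block.
  Calibrate::Output output = calibrateFuture.get();
  // process output ...
  return true;
}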
There may be cases, where you already have called a service method via the ()-operator, which returned you an ara::core::Future, but you are not interested in the result anymore.
It could even be the case, that you have already registered a callback via ara::core::Future::then() for it. Instead of just letting things go and “ignoring” the callback, you should tell the Communication Management explicitly.
This might free resources and avoid unnecessary processing load on the binding implementation level. Telling that you are not interested in the method call result anymore is simply done by letting the ara::core::Future go out of scope, so that its destructor gets called.
Call of the dtor of the ara::core::Future is a signal to the binding implementation, that any registered callback for this future shall not be called anymore, reserved/allocated memory for the method call result might be freed and event waiting mechanisms for the method result shall be stopped.
To trigger the call to the dtor you could obviously let the future go out of scope.
Depending on the application architecture this might not be feasible, as you already
might have assigned the returned ara::core::Future to some variable with greater
scope.
To solve this, the ara::core::Future is default-constructible. Therefore you can simply overwrite the returned ara::core::Future in the variable with a default constructed instance, as shown in the example below:
using namespace ara::com;

Future<Calibrate::Output> calibrateFuture;

int main() {
  // some code to acquire handle
  // ...
  RadarServiceProxy service(handle);
  calibrateFuture = service.Calibrate(myConfigString);

  /** ....
   * Some state changes happened, which render the calibrate method
   * result superfluous ...
   */
  // Overwriting the future with a default constructed instance signals the
  // binding, that we are not interested in the result anymore.
  calibrateFuture = Future<Calibrate::Output>();
  // ...
  return 0;
}
6.2.5 Fields
Conceptually a field has — unlike an event — a certain value at any time. That results in the following additions compared to an event:
• if a subscription to a field has been done, “immediately” initial values are sent back to the subscriber in an event-like notification pattern.
• the current field value can be queried via a call to a Get() method or can be updated via a Set() method.
Note, that all the features a field provides are optional: in the configuration (IDL) of your field, you decide, whether it has “on-change-notification”, Get() or Set(). In our example field (see below), we have all three mechanisms configured.
For each field the remote service provides, the proxy class contains a member of a
field specific wrapper class. In our example the member has the name UpdateRate
(of type fields::UpdateRate).
Just like the event and method classes the needed field classes of the proxy class are
generated inside a specific namespace fields, which is contained inside the proxy
namespace.
The explanation of fields has been intentionally put after the explanation of events and methods, since the field concept is roughly an aggregation of an event with correlated get()/set() methods. Therefore technically we also implement the ara::com field representation as a combination of ara::com event and method.
Consequently the field member in the proxy is used to
• call the Get() or Set() methods of the field with exactly the same mechanism as regular methods
• access field update notifications in the form of events/event data, which are sent by the service instance our proxy is connected to, with exactly the same mechanism as regular events
Let’s have a look at the generated field class for our example UpdateRate field here:
class UpdateRate {
public:
  /**
   * \brief Shortcut for the field's data type.
   */
  using FieldType = uint32_t;

  /**
   * \brief See Events for details, as a field contains the possibility for
   * notifications the details of the interfaces described there apply.
   */
  void Subscribe(size_t maxSampleCount);
  ara::core::Result<size_t> GetFreeSampleCount() const noexcept;
  ara::com::SubscriptionState GetSubscriptionState() const;
  void Unsubscribe();
  void SetReceiveHandler(ara::com::EventReceiveHandler handler);
  void UnsetReceiveHandler();
  void SetSubscriptionStateChangeHandler(
      ara::com::SubscriptionStateChangeHandler handler);
  void UnsetSubscriptionStateChangeHandler();
  template <typename F>
  ara::core::Result<size_t> GetNewSamples(
      F&& f,
      size_t maxNumberOfSamples = std::numeric_limits<size_t>::max());

  /**
   * The getter allows to request the actual value of the service provider.
   *
   * For a description of the future, see the method.
   * It should behave like a Method.
   */
  ara::core::Future<FieldType> Get();

  /**
   * The setter allows to request the setting of a new value.
   * It is up to the Service Provider to accept the request or modify it.
   * The new value shall be sent back to the requester as response.
   *
   * For a description of the future, see the method.
   * It should behave like a Method.
   */
  ara::core::Future<FieldType> Set(const FieldType& value);
};
Listing 6.14: Proxy side UpdateRate Field Class
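A hedged usage sketch of this field wrapper from the service consumer side — the proxy instance is assumed to have been created from a handle as shown for events and methods:

void UseUpdateRateField(RadarServiceProxy& proxy)
{
  // read the current field value from the service provider
  auto getFuture = proxy.UpdateRate.Get();
  auto currentRate = getFuture.get();          // blocks until the value is available

  // request a new value; the service provider may modify the requested value
  // and sends the effective value back in the response
  auto setFuture = proxy.UpdateRate.Set(2u * currentRate);
  auto effectiveRate = setFuture.get();
  static_cast<void>(effectiveRate);
}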
20 /**
21 * Ctor taking instance specifier as parameter and having default
22 * request processing mode kEvent.
23 */
24 RadarServiceSkeleton(ara::core::InstanceSpecifier instanceSpec,
25 ara::com::MethodCallProcessingMode mode =
26 ara::com::MethodCallProcessingMode::kEvent);
27
28 /**
29 * skeleton instances are not copy constructible.
30 */
31 RadarServiceSkeleton(const RadarServiceSkeleton& other) = delete;
32
33 /**
34 * skeleton instances are not copy assignable.
35 */
36 RadarServiceSkeleton& operator=(const RadarServiceSkeleton& other) = delete;
37
38 /**
39 * The Communication Management implementer should ensure in the dtor
40 * implementation that the functionality of StopOfferService()
41 * is internally triggered in case this service instance has
42 * been offered before. This is a convenient cleanup functionality.
43 */
44 ~RadarServiceSkeleton();
45
46 /**
47 * Offer the service instance.
48 * method is idempotent - could be called repeatedly.
49 */
50 void OfferService();
51
52 /**
53 * Stop Offering the service instance.
54 * method is idempotent - could be called repeatedly.
55 *
56 * If service instance gets destroyed - it is expected that the
57 * Communication Management implementation calls StopOfferService()
58 * internally.
59 */
60 void StopOfferService();
61
62 /**
63 * For all output and non-void return parameters
64 * an enclosing struct is generated, which contains
65 * non-void return value and/or out parameters.
66 */
67 struct CalibrateOutput {
68 bool result;
69 };
70
71 /**
72 * For all output and non-void return parameters
73 * an enclosing struct is generated, which contains
74 * non-void return value and/or out parameters.
75 */
76 struct AdjustOutput {
77 bool success;
78 Position effective_position;
79 };
80
81 /**
82 * This fetches the next call from the Communication Management
83 * and executes it. The return value is a ara::core::Future.
84 * In case of an Application Error, an ara::core::ErrorCode is stored
85 * in the ara::core::Promise from which the ara::core::Future
86 * is returned to the caller.
87 * Only available in polling mode.
88 */
89 ara::core::Future<bool> ProcessNextMethodCall();
90
91 /**
92 * \brief Public member for the BrakeEvent
93 */
94 events::BrakeEvent BrakeEvent;
95
96 /**
97 * \brief Public member for the UpdateRate
98 */
99 fields::UpdateRate UpdateRate;
100
101 /**
102 * The following methods are pure virtual and have to be implemented
103 */
104 virtual ara::core::Future<CalibrateOutput> Calibrate(
105 std::string configuration) = 0;
106 virtual ara::core::Future<AdjustOutput> Adjust(
107 const Position& position) = 0;
108 virtual void LogCurrentState() = 0;
109 };
Listing 6.15: RadarService Skeleton
6.3.1 Instantiation
As you see in the example code of the RadarServiceSkeleton above, the skeleton
class, from which the service implementer has to subclass his service implementa-
tion, provides three different ctor variants, which basically differ in the way, how the
instance identifier to be used is determined.
Since you could deploy many different instances of the same type (and therefore same
skeleton class) it is straightforward, that you have to give an instance identifier upon
creation. This identifier has to be unique. In the exception-less creation of a service
skeleton a static member function Preconstruct checks the provided identifier. The
construction token is embedded in the returned ara::core::Result if the identifier
was unique. Otherwise it returns ara::core:.ErrorCode.
If a new instance shall be created with the same identifier, the existing instance needs
to be destroyed before.
Exactly for this reason the skeleton class (just like the proxy class) does neither support
copy construction nor copy assignment! Otherwise two "identical" instances would exist
for some time with the same instance identifier and routing of method calls would be
non-deterministic.
The different variants of ctors regarding instance identifier definition reflect their dif-
ferent natures, which are described in section 6.1.
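As a hedged sketch — the generated header name, the namespace and the InstanceSpecifier path are assumptions — creating and offering a service instance could look like this:

#include "radarservice_skeleton.h"   // assumed name of the generated skeleton header

using namespace com::mycompany::division::radarservice;

class RadarServiceImpl : public RadarServiceSkeleton {
 public:
  using RadarServiceSkeleton::RadarServiceSkeleton;   // reuse the skeleton ctors
  // ... implementations of Calibrate(), Adjust() and LogCurrentState() ...
};

int main()
{
  // instance specifier referring to the provided port in the manifest (hypothetical path)
  ara::core::InstanceSpecifier instanceSpec{"MyExecutable/MySwComponent/RadarServicePPort"};

  RadarServiceImpl service{std::move(instanceSpec)};   // default processing mode: kEvent
  service.OfferService();                              // now visible to service consumers

  // ... serve until shutdown is requested ...
  service.StopOfferService();
  return 0;
}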
Now let’s come to the point, where we deliver on the promise to support event-driven
and polling behavior also on the service providing side. From the viewpoint of the
service providing instance — here our skeleton/skeleton subclass instance — requests
(service method or field getter/setter calls) from service consumers may come in at
arbitrary points in time.
In a purely event-driven setup, this would mean, that the Communication Management
generates corresponding call events and transforms those events to concrete method
calls to the service methods provided by the service implementation.
The consequences of this setup are clear:
• general reaction to a service method call might be fast, since the latency is only
restricted by general machine load and intrinsic IPC mechanism latency.
• rate of context switches to the OS process containing the service instance might
be high and non-deterministic, decreasing overall throughput.
As you see — there are pros and cons for an event-driven processing mode at the
service provider side. However, we do support such a processing mode with ara:-
:com. The other bookend we do support, is a pure polling style approach. Here
the application developer on the service provider side explicitly calls an ara::com
provided API to process explicitly one call event.
With this approach we again support the typical RT-application developer: his application typically gets activated by a low-jitter cyclical alarm. When his application is active, it checks event queues in a non-blocking manner and decides explicitly how many of the events accumulated since the last activation it is willing to process. Again: Context switches/activations of the application process shall only be triggered by specific (RT) timers; asynchronous communication events shall not lead to an application process activation.
So how does ara::com allow the application developer to differentiate between those
processing modes? The behavior of a skeleton instance is controlled by the second
parameter of its ctor, which is of type ara::com::MethodCallProcessingMode.
1 /**
2 * Request processing modes for the service implementation side
3 * (skeleton).
4 *
5 * \note Should be provided by platform vendor exactly like this.
6 */
7 enum class MethodCallProcessingMode { kPoll, kEvent, kEventSingleThread };
That means the processing mode is set for the entire service instance (i.e. all its provided methods are affected) and is fixed for the whole lifetime of the skeleton instance. The default value in the ctor is set to kEvent, which is explained below.
If you set it to kPoll, the Communication Management implementation will not call
any of the provided service methods asynchronously!
If you want to process the next (assume that there is a queue behind the scenes,
where incoming service method calls are stored) pending service-call, you have to call
the following method on your service instance:
1 /**
2 * This fetches the next call from the Communication Management
3 * and executes it. The return value is a ara::core::Future.
4 * In case of an Application Error, an ara::core::ErrorCode is stored
5 * in the ara::core::Promise from which the ara::core::Future
6 * is returned to the caller.
7 * Only available in polling mode.
8 */
9 ara::core::Future<bool> ProcessNextMethodCall();
A typical usage pattern for an RT application in polling mode then looks like this: After its cyclic activation it calls ProcessNextMethodCall and registers a callback via ara::core::Future::then(). Then:
• the callback is invoked after the service method, called by the middleware for the corresponding outstanding request, has finished.
• in the callback the RT application decides whether there is enough time left for serving a subsequent service method. If so, it calls another ProcessNextMethodCall.
Sure - this simple example assumes, that the RT application knows worst case runtime
of its service methods (and its overall time slice), but this is not that unlikely!
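A hedged sketch of such a cyclic activation; the budget of three calls per cycle is an assumed worst-case figure, and the synchronous wait via get() is a simplification of the callback pattern described above:

void CyclicActivation(RadarServiceImpl& service)
{
  constexpr std::size_t kMaxCallsPerCycle = 3;   // assumed worst-case budget

  for (std::size_t i = 0; i < kMaxCallsPerCycle; ++i) {
    ara::core::Future<bool> done = service.ProcessNextMethodCall();
    // wait until the dispatched service method has finished; a callback
    // registered via ara::core::Future::then() could be used instead
    if (!done.get()) {
      break;   // request queue was empty - nothing left to do in this cycle
    }
  }
}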
The bool value of the returned ara::core::Future is set to true by the Commu-
nication Management in case there really was an outstanding request in the queue,
which has been dispatched, otherwise it is set to false.
This is a convenient indicator for the application developer not to call ProcessNextMethodCall again although the request queue is empty. So calling ProcessNextMethodCall directly after a previous call returned an ara::core::Future with the result set to false will most likely do nothing (unless, incidentally, a new request came in within this minimal time frame).
Note that the binding implementation is free to decide, whether it dispatches the
method call event to your service method implementation within the thread context
in which you called ProcessNextMethodCall, or whether it does spawn a separate
thread for this method call.
In comparison to a shared memory solution, the access from the polling service provider to this queue location might come with higher costs/latency.
Table 6.10: AUTOSAR Binding Implementer Hint - service method call queue
If you set the mode to kEvent, the Communication Management implementation dispatches incoming service method calls asynchronously as they arrive, possibly even concurrently. The mode kEventSingleThread, on the contrary, assures that on the service instance only one service method at a time will be called by the Communication Management implementation.
That means, Communication Management implementation has to queue incoming ser-
vice method call events for the same service instance and dispatch them one after the
other.
Why did we provide those two variants? From a functional viewpoint only kEvent
would have been enough! A service implementation, where certain service methods
could not run concurrently, because of shared data/consistency needs, could simply
do its synchronization (e.g. via std::mutex) on its own!
The reason is “efficiency”. If you have a service instance implementation with extensive synchronization needs, i.e. one that would synchronize almost all service method calls anyway, it would be a total waste of resources if the Communication Management “spent” N threads from its thread-pool resources, which right after being dispatched run into a hard synchronization point, sending N-1 of them to sleep.
For service implementations which lie in between — i.e. some methods can be called
concurrently without any sync needs, some methods need at least partially synchro-
nization — the service implementer has to decide, whether he uses kEvent and does
synchronization on top on his own (possibly optimizing latency, responsiveness of his
service instance) or whether he uses kEventSingleThread, which frees him from
synchronizing on his own (possibly optimizing ECU overall throughput).
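As a hedged illustration of the kEvent case with application-level synchronization (the generated skeleton and the helper applyConfig() are assumptions), a service implementation could guard its shared state like this:

#include <mutex>
#include <string>

using namespace com::mycompany::division::radarservice;

class RadarServiceImpl : public RadarServiceSkeleton {
 public:
  using RadarServiceSkeleton::RadarServiceSkeleton;   // keep default mode kEvent

  ara::core::Future<CalibrateOutput> Calibrate(std::string configuration) override
  {
    // own synchronization: serialize access to the shared calibration state
    std::lock_guard<std::mutex> guard{stateMutex_};
    ara::core::Promise<CalibrateOutput> promise;
    promise.set_value(CalibrateOutput{applyConfig(configuration)});
    return promise.get_future();
  }

  // Adjust() and LogCurrentState() omitted for brevity

 private:
  bool applyConfig(const std::string& configuration);   // touches shared state
  std::mutex stateMutex_;
};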
6.3.4 Methods
Service methods on the skeleton side are abstract methods, which have to be overridden by the service implementation sub-classing the skeleton. Let’s have a look at the Adjust method of our service example:
1 /**
2 * For all output and non-void return parameters
3 * an enclosing struct is generated, which contains
4 * non-void return value and/or out parameters.
5 */
6 struct AdjustOutput {
7 bool success;
8 Position effective_position;
9 };
10
11 virtual ara::core::Future<AdjustOutput> Adjust(
12 const Position& position) = 0;
Listing 6.17: Skeleton side Adjust method
The IN-parameters from the abstract definition of the service method are directly mapped to method parameters of the skeleton's abstract method signature.
In this case it is the position argument of type Position, which is — as it is a non-primitive type — modeled as a “const ref”. (The referenced object is provided by the Communication Management implementation until the service method call has set its promise — valid result or error. If the service implementer needs the referenced object beyond that point, he has to make a copy.)
The interesting part of the method signature is the return type. The implementation of
the service method has to return our extensively discussed ara::core::Future.
The idea is simple: We do not want to force the service method implementer to signal the completion of the service method with the simple return of this “entry point” method! Maybe the service implementer decides to dispatch the real processing of the service call to a central worker-thread pool! It would then be really awkward if the return of the “entry point” method signaled the completion of the service call to the Communication Management: in our worker thread pool scenario we would then have to block at some wait point inside the service method, wait for a notification from the worker thread that it has finished, and only then return from the service method.
In this scenario we would have a blocked thread inside the service-method! From the
viewpoint of efficient usage of modern multi-core CPUs this is not acceptable.
The returned ara::core::Future contains a structure as template parameter,
which aggregates all the OUT-parameters of the service call.
The following two code examples show two variants of an implementation of Adjust. In the first variant the service method is processed synchronously, directly in the method body.
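A minimal sketch of the first, synchronous variant — assuming a private helper doAdjustInternal() that fills the output struct — could look like this:

using namespace ara::com;
using namespace com::mycompany::division::radarservice;

/**
 * Our implementation of the RadarService (first variant: synchronous processing)
 */
class RadarServiceImpl : public RadarServiceSkeleton {
 public:
  ara::core::Future<AdjustOutput> Adjust(const Position& position) override
  {
    AdjustOutput out;
    doAdjustInternal(position, out);   // does the work synchronously, fills out

    ara::core::Promise<AdjustOutput> promise;
    promise.set_value(out);            // set the out value at the promise ...
    return promise.get_future();       // ... and return the future created from it
  }

 private:
  void doAdjustInternal(const Position& position, AdjustOutput& out);
};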
As you see in the example above: Inside the body of the service method an internal method is called, which does the work synchronously, i.e. after the return of doAdjustInternal the attributes of out, which resemble the service method's out-params, are set. Then this out value is set at the ara::core::Promise and the Future created from the Promise is returned.
This has the effect that the caller, who gets this Future as return value, can immediately call Future::get(), which would not block, but immediately return the AdjustOutput.
In the second variant the service method only creates the promise/future pair and dispatches the real processing asynchronously — sketched here with a plain detached std::thread (requiring <thread>) standing in for the central worker-thread pool mentioned above:

class RadarServiceImpl : public RadarServiceSkeleton {
 public:
  ara::core::Future<AdjustOutput> Adjust(const Position& position) override
  {
    ara::core::Promise<AdjustOutput> promise;
    auto future = promise.get_future();
    // hand the promise over to a worker context and return the future
    // immediately, without blocking inside the service method
    std::thread([this, &position, promise = std::move(promise)]() mutable {
      AdjustOutput out;
      doAdjustInternal(position, out);  // does the real work
      promise.set_value(out);           // completes the future of the caller
    }).detach();
    return future;
  }
 private:
  void doAdjustInternal(const Position& position, AdjustOutput& out);
};
If the service implementation detects an application level error (an ApplicationError modeled in the service interface description), the ara::core::ErrorCode representing this ApplicationError simply has to be stored into the Promise, from which the Future is returned to the caller:
using namespace ara::com;
using namespace com::mycompany::division::radarservice;

/**
 * Our implementation of the RadarService
 */
class RadarServiceImpl : public RadarServiceSkeleton {

 public:
  Future<CalibrateOutput> Calibrate(std::string configuration)
  {
    ara::core::Promise<CalibrateOutput> promise;
    auto future = promise.get_future();

    // we check the given configuration arg
    if (!checkConfigString(configuration))
    {
      // given arg is invalid:
      // assume that in the ARXMLs we have an ErrorDomain with name SpecificErrors,
      // which contains the InvalidConfigString error.
      // Note that the numeric error code will be converted to an
      // ara::core::ErrorCode implicitly.
      promise.SetError(SpecificErrorsErrc::InvalidConfigString);
    }
    else
    {
      // ...
    }

    // we return a future with a potentially set error
    return future;
  }

 private:
  bool checkConfigString(const std::string& config);

  std::string curValidConfig_;
};
Listing 6.20: Returning a Future with a possibly set error
In this example, the implementation of “Calibrate” detects that the given configuration string argument is invalid and stores the corresponding error in the Promise.
6.3.5 Events
On the skeleton side the service implementation is in charge of notifying about the occurrence of an event. As shown in Listing 6.15 the skeleton provides a member of an event wrapper class for each provided event. The event wrapper class on the skeleton/event provider side obviously looks different than on the proxy/event consumer side.
On the service provider/skeleton side the service specific event wrapper classes are defined within the namespace events directly beneath the skeleton namespace.
Let’s have a deeper look at the event wrapper in case of our example event Bra-
keEvent:
1 class BrakeEvent {
2 public:
3 /**
4 * Shortcut for the events data type.
5 */
6 using SampleType = RadarObjects;
7
8 void Send(const SampleType &data);
9
10 ara::com::SampleAllocateePtr<SampleType> Allocate();
11
12 /**
13 * After sending data you lose ownership and can’t access
14 * the data through the SampleAllocateePtr anymore.
15 * Implementation of SampleAllocateePtr will be with the
16 * semantics of std::unique_ptr (see types.h)
17 */
18 void Send(ara::com::SampleAllocateePtr<SampleType> data);
19 };
Listing 6.21: Skeleton side of BrakeEvent class
The type alias — analogous to the proxy side — just introduces the common name SampleType for the concrete data type of the event. We provide two different variants of a “Send” method, which is used to send out new event data. The first one takes a reference to a SampleType.
This variant is straight forward: The event data has been allocated somewhere by the
service application developer and is given via reference to the binding implementation
of Send().
After the call to send returns, the data might be removed/altered on the caller side. The
binding implementation will make a copy in the call.
The second variant of “Send” also has a parameter named “data”, but this is now of a different type: ara::com::SampleAllocateePtr<SampleType>. According to our general approach to only provide abstract interfaces and eventually provide a proposed mapping to existing C++ types (see section 5.3), this pointer type we introduce here shall behave like a std::unique_ptr<T>.
That roughly means: Only one party can hold the pointer - if the owner wants to give it
away, he has to explicitly do it via std::move. So what does this mean here? Why do
we want to have std::unique_ptr<T> semantics here?
To understand the concept, we have to look at the third method within the event wrapper
class first:
1 ara::com::SampleAllocateePtr<SampleType> Allocate();
The event wrapper class provides us here with a method to allocate memory for one
sample of event data. It returns a smart pointer ara::com::SampleAllocateePtr
<SampleType>, which points to the allocated memory, where we then can write an
event data sample to. And this returned smart pointer we can then give into an upcom-
ing call to the second version of “Send”.
So — the obvious question would be — why should I let the binding implementation
do the memory allocation for event data, which I want to notify/send to potential con-
sumers? The answer simply is: Possibility for optimization of data copies.
The following “over-simplified” example makes things clearer: Let’s say the event,
which we talk about here (of type RadarObjects), could be quite big, i.e. it contains
a vector, which can grow very large (say hundreds of kilobytes). In the first variant of
“Send”, you would allocate the memory for this event on your own on the heap of your
own application process.
Then — during the call to the first variant of “Send” — the binding implementation has
to copy this event data from the (private) process heap to a memory location, where
it would be accessible for the consumer. If the event data to copy is very large and
the frequency of such event occurrences is high, the sheer runtime of the data copying
might hurt.
The idea of the combination of Allocate() and the second variant to send event
data (Send(SampleAllocateePtr<SampleType> data)) is to eventually avoid
this copy!
A smart binding implementation might implement the Allocate() in a way, that it
allocates memory at a location, where writer (service/event provider) and reader (ser-
vice/event consumer) can both directly access it! So an ara::com::SampleAllo-
cateePtr<SampleType> is a pointer, which points to memory nearby the receiver.
Such locations, where two parties can both have direct access to, are typically called
“shared memory”. The access to such regions should — for the sake of data consis-
tency — be synchronized between readers and writers.
This is the reason why the Allocate() method returns such a smart pointer with single-user semantics for the data it points to: After the potential writer (service/event provider side) has called Allocate(), he can access/write the data pointed to until he hands it over to the second Send variant, where he explicitly gives away ownership.
This is needed, because after the call the readers will access the data and need a consistent view of it.
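A hedged sketch showing both Send() variants from within the service implementation; the skeleton subclass RadarServiceImpl and the way the RadarObjects sample is filled are assumptions:

void SendBrakeEvents(RadarServiceImpl& service)
{
  // variant 1: data owned by the application, copied by the binding during Send()
  RadarObjects objects{};
  // ... fill objects ...
  service.BrakeEvent.Send(objects);

  // variant 2: let the binding allocate the sample (possibly close to the
  // receivers), fill it, then hand over ownership explicitly
  ara::com::SampleAllocateePtr<RadarObjects> sample = service.BrakeEvent.Allocate();
  // ... fill *sample ...
  service.BrakeEvent.Send(std::move(sample));
}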
6.3.6 Fields
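On the skeleton side, the field wrapper class for our UpdateRate example roughly provides the interface sketched below; the method names follow the discussion in this section (Update(), RegisterGetHandler(), and an assumed RegisterSetHandler() counterpart), while the exact signatures and return types are only a sketch:

class UpdateRate {
 public:
  /**
   * Shortcut for the field's data type.
   */
  using FieldType = uint32_t;

  // Update the current field value; if "on-change-notification" is configured,
  // notifications to subscribers are triggered in the course of this call.
  ara::core::Result<void> Update(const FieldType& value);

  // Optionally register a handler that serves incoming Get() calls.
  void RegisterGetHandler(std::function<ara::core::Future<FieldType>()> getHandler);

  // Optionally register a handler that serves incoming Set() calls; the handler
  // decides on the effective value, which is sent back to the requester.
  void RegisterSetHandler(
      std::function<ara::core::Future<FieldType>(const FieldType& value)> setHandler);
};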
The type alias — again as in the event class and on the proxy side — just introduces the common name FieldType for the concrete data type of the field.
We provide an Update method by which the service implementer can update the cur-
rent value of the field.
It is very similar to the simple/first variant of the Send method of the event class: The
field data has been allocated somewhere by the service application developer and is
given via reference to the binding implementation of Update. After the call to Update
returns, the data might be removed/altered on the caller side.
The binding implementation will make a (typically serialized) copy in the call.
In case “on-change-notification” is configured for the field, notifications to subscribers
of this field will be triggered by the binding implementation in the course of the Update
call.
If the service implementer registers a “GetHandler” via RegisterGetHandler(), incoming Get() calls are forwarded to this handler. In this case, new subscribers will potentially get outdated field values on subscription, since the updating of the field value is deferred to the explicit call of the “GetHandler”.
You also have to keep in mind: In such a setup, with enabled “on-change-notification” together with a registered “GetHandler”, the Communication Management implementation will not automatically ensure that the value the developer returns from the “GetHandler” is synchronized with the value which subscribers get via the “on-change-notification” event!
If the implementation of the “GetHandler” does not internally call Update() with the same value which it delivers back via the ara::com promise, then the field value delivered via the “on-change-notification” event will differ from the value returned to the Get() call. I.e. the Communication Management implementation will not automatically/internally call Update() with the value the “GetHandler” returned.
Bottom line: Using RegisterGetHandler is rather an exotic use case and develop-
ers should be aware of the intrinsic effect.
Additionally, a user provided “GetHandler” which only returns the current value that has already been updated by the service implementation via Update() is typically very inefficient: the Communication Management then has to call into user space and additionally apply field serialization of the returned value on every incoming Get() call. Both could be entirely “optimized away” if the developer does not register a “GetHandler” and leaves the handling of Get() calls completely to the Communication Management implementation.
Since the most basic guarantee of a field is that it has a valid value at any time, ara::com has to somehow ensure that a service implementation providing a field provides a value before the service (and therefore its field) becomes visible to potential consumers, which — after subscription to the field — expect to get an initial value notification event (if the field is configured with notification) or a valid value on a Get() call (if the getter is enabled for the field).
An ara::com Communication Management implementation therefore needs to behave in the following way: If a developer calls OfferService() on a skeleton implementation and has not yet called Update() on any field which
• has notification enabled
• or has the getter enabled but no “GetHandler” registered yet,
the Communication Management implementation shall return an unchecked error indicating this programming error.
Note: The AUTOSAR meta-model supports the definition of such initial values for a
field in terms of a so called FieldSenderComSpec of a PPortPrototype. So this
model element should be considered by the application code calling Update().
Since the underlying field value is only known to the middleware, the current field value is not accessible from the “Get/SetHandler” implementations, which reside on application level. If a “Get/SetHandler” needs to read the current field value, the skeleton implementation must provide a field value replica accessible at application level.
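A hedged sketch of this approach — the handler signature and its registration via RegisterGetHandler() are assumptions — where the skeleton implementation keeps an application level replica of the field value:

class RadarServiceImpl : public RadarServiceSkeleton {
 public:
  // service internal state change: keep the replica and the middleware value in sync
  void ChangeUpdateRate(uint32_t newRate)
  {
    currentRate_ = newRate;
    UpdateRate.Update(newRate);
  }

  // handler serving incoming Get() calls from the application level replica
  ara::core::Future<uint32_t> HandleGetUpdateRate()
  {
    ara::core::Promise<uint32_t> promise;
    promise.set_value(currentRate_);
    return promise.get_future();
  }

 private:
  uint32_t currentRate_{0u};
};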
6.4 Runtime
Note: A singleton called Runtime may be needed to collect cross-cutting functional-
ities. Currently there are no requirements for such functionalities, so this chapter is
empty. This might change until the 1st release.
8.1 Introduction
The Adaptive AUTOSAR Communication Management is based on Service Oriented
communication. This is good for implementing platform independent and dynamic ap-
plications with a service-oriented design.
For ADAS applications, it is important to be able to transfer raw binary data streams
over Ethernet efficiently between applications and sensors, where service oriented
communication (e.g. SOME/IP, DDS) either creates unnecessary overhead for efficient
communication, or the sensors do not even have the possibility to send anything but
raw binary data.
The Raw Data Binary Stream API provides a way to send and receive Raw Binary Data
Streams, which are sequences of bytes, without any data type. They enable efficient
communication with external sensors in a vehicle (e.g. sensor delivers video and map
data in "Raw data" format). The communication is performed over a network using
sockets.
From the ara::com architecture point of view, the Raw Data Streaming API is static, i.e. it is not generated. It is part of the ara::com namespace, but is independent of the ara::com middleware services.
The Raw Data Binary Stream API can be used on both the client and the server side. Both client and server can send and receive data. The only difference is that the server can wait for connections but cannot actively connect to a client, while the client can connect to a server (that is already waiting for connections) but cannot wait for connections.
The usage of the Raw Data Binary Streams API from Adaptive Autosar must follow this
sequence:
• As client
1. Connect: Establishes connection to sensor
2. ReadData/WriteData: Receives or sends data
3. Shutdown: Connection is closed.
• As server
1. WaitForConnection: Waits for incoming connections from clients
2. ReadData/WriteData: Receives or sends data
3. Shutdown: Connection is closed and stops waiting for connections.
The namespace ara::com::raw defines a RawDataStream class for reading and writing binary data streams over a network connection using sockets. The client side is an object of the class ara::com::raw::RawDataStreamClient and the server side is an object of the class ara::com::raw::RawDataStreamServer.
8.2.1.1 Constructor
The constructor takes as input the instance specifier qualifying the network binding and
parameters for the instance.
RawDataStreamClient(const ara::com::InstanceSpecifier& instance);
RawDataStreamServer(const ara::com::InstanceSpecifier& instance);
8.2.1.2 Destructor
Destructor of RawDataStream. If the connection is still open, it will be shut down before destroying the RawDataStream object.
~RawDataStreamClient();
~RawDataStreamServer();
The manifest defines the parameters of the Raw Data Stream deployment.
The RawDataStreamMapping defines the actual transport that raw data uses in the
sub-classes of EthernetRawDataStreamMapping.
The IP address is defined in the attribute communicationConnector (type Ethernet-
CommunicationConnector).
The socketOption attribute allows to specify non-formal socket options that might only
be valid for specific platforms.
In principle, Raw Data Streaming can use any transport layer, but currently only TCP and UDP are supported. The following attributes of the sub-class EthernetRawDataStreamMapping, of type PositiveInteger, allow choosing the transport:
• multicastUdpPort
• tcpPort
• udpPort
At least one of the three previous attributes has to be defined.
The EthernetRawDataStreamMapping also has an attribute regarding security:
• tlsSecureComProps
All the methods of RawDataStream have an optional input parameter for the timeout. This argument defines the timeout of the method in milliseconds; its type is std::chrono::milliseconds.
If the timeout is 0 or not specified, the operation blocks until it completes. If a timeout > 0 is specified, the method call returns a timeout error if the time needed to perform the operation exceeds the timeout limit.
8.3.2 Methods
The API methods are synchronous, so they will block until the method returns or until
timeout is reached.
8.3.2.1 WaitForConnection
This method is available only on the server side of the Raw Data Stream.
It makes the server side of the Raw Data Stream ready to be connected by a client; no connection from clients can be established until this method has been called on the server.
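Following the pattern of the other methods (ara::core::Result return value, optional std::chrono::milliseconds timeout), the signatures of WaitForConnection presumably look like this:

ara::core::Result<void> WaitForConnection();
ara::core::Result<void> WaitForConnection(std::chrono::milliseconds timeout);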
8.3.2.2 Connect
This method is available only on the client side of the Raw Data Stream.
It initializes the socket and establishes a connection to the TCP server. In the case of UDP, no connection is established; incoming and outgoing packets are restricted to the specified address.
The sockets are specified in the manifest, which is accessed through the InstanceSpecifier provided in the constructor.
ara::core::Result<void> Connect();
ara::core::Result<void> Connect(std::chrono::milliseconds timeout);
8.3.2.3 Shutdown
This method shuts down communication. It is available from both client and server
sides of the Raw Data Stream.
ara::core::Result<void> Shutdown();
ara::core::Result<void> Shutdown(std::chrono::milliseconds timeout);
8.3.2.4 ReadData
This method reads bytes from the socket connection. The maximum number of bytes
to read is provided with the parameter length. The timeout parameter is optional.
ara::core::Result<ReadDataResult> ReadData(size_t length);
ara::core::Result<ReadDataResult> ReadData(
size_t length,
std::chrono::milliseconds timeout);
If the operation succeeds, it returns a struct with a pointer to the memory containing the read data and the actual number of bytes read.
struct ReadDataResult {
  ara::com::SamplePtr<uint8_t> data;
  size_t numberOfBytes;
};
8.3.2.5 WriteData
This method writes bytes to the socket connection. The data is provided as a buffer with
the data parameter. The number of bytes to write is provided in the length parameter.
An optional timeout parameter can also be used.
ara::core::Result<size_t> WriteData(
    ara::com::SamplePtr<uint8_t> data,
    size_t length);
ara::core::Result<size_t> WriteData(
    ara::com::SamplePtr<uint8_t> data,
    size_t length,
    std::chrono::milliseconds timeout);
If the operation succeeds, it returns the actual number of bytes written. In case of an error, it returns an ara::core::ErrorCode:
• Stream Not Connected: the connection is not yet established.
• Communication Timeout: no data was written before the timeout expired.
8.4 Overview
The diagram 8.1 shows the sequence when using the Raw Data Streaming API on the
client side.
The diagram 8.2 shows the sequence when using the Raw Data Streaming API on the
server side.
Note that the sequences with a client that sends data and a server that reads data are
also valid.
8.4.2 Usage
Since Raw Data Streaming is provided as an API, it is required to create instances of RawDataStreamServer or RawDataStreamClient and to call their methods according to the sequences described in 8.4.1.
The code 8.1 shows how to use the RawDataStreamServer for sending and receiving
data.
1 // NOTE! For simplicity the example does not use ara::core::Result.
2
3 #include "ara/core/instance_specifier.h"
4 #include "raw_data_stream.h"
5 int main() {
6 size_t rval;
7 ara::com::raw::RawDataStream::ReadDataResult result;
8
38 server.Shutdown(); return 0;
39 }
Listing 8.1: Example of usage as server
The code 8.2 shows how to use the RawDataStreamClient for sending and receiving
data.
1 // NOTE! For simplicity the example does not use ara::core::Result.
2
3 #include "ara/core/instance_specifier.h"
4 #include "raw_data_stream.h"
5 int main() {
6 size_t rval;
7 ara::com::raw::RawDataStream::ReadDataResult result;
8
9 // Instance Specifier from model
10 ara::core::InstanceSpecifier instspec
11 {...}
12
13 // Create a RawDataStreamClient instance
14 ara::com::raw::RawDataStreamClient client {instspec};
15
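A self-contained sketch of the complete client sequence — with a hypothetical instance specifier path and, unlike the simplified listings above, with the ara::core::Result return values checked — could look as follows:

#include <chrono>
#include <utility>
#include "ara/core/instance_specifier.h"
#include "raw_data_stream.h"

int main() {
  // Instance Specifier from the model (hypothetical path)
  ara::core::InstanceSpecifier instspec{"MyExecutable/MySwComponent/RawStreamPort"};

  // Create a RawDataStreamClient instance
  ara::com::raw::RawDataStreamClient client{instspec};

  // 1. Connect to the (already waiting) server, with a 100 ms timeout
  auto connected = client.Connect(std::chrono::milliseconds{100});
  if (!connected.HasValue()) { return 1; }

  // 2. Read up to 128 bytes ...
  auto readResult = client.ReadData(128);
  if (readResult.HasValue()) {
    auto& result = readResult.Value();
    // ... and echo the received bytes back to the server
    client.WriteData(std::move(result.data), result.numberOfBytes);
  }

  // 3. Shut the connection down
  client.Shutdown();
  return 0;
}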
8.4.3 Security
Raw Data Stream communication can be transported using TCP and UDP. Therefore
different security mechanisms have to be available to secure the stream communica-
tion. Currently the security protocols TLS, DTLS and IPSec are available.
Access control to Raw Data Streams can also be defined by the IAM.
All security functions are configurable in the deployment and mapping model of Raw
Data Streaming Interface.
If sensor data must fulfil security requirements, security extensions have to be used.
8.4.4 Safety
The RawDataStream interface only transmits raw data without any data type information. Therefore the Raw Data Stream interface cannot provide any data protection such as E2E protection; if such protection is required, it must be implemented in the application that uses the RawDataStream interface.
The implementation of the Raw Data Streaming interface should be independent of the underlying socket API (e.g. POSIX sockets).
9 Appendix
9.1 Serialization
Serialization (see [7]) is the process of transforming certain data structures into a
standardized format for exchange between a sender and a (possibly different) receiver.
You typically have this notion if you transfer data from one network node to another.
When putting data on the wire and reading it back, you have to follow exact, agreed-on
rules to be able to correctly interpret the data on the receiver side. For the network
communication use case the need for a defined approach to convert an in-process
data representation into a wire-format and back is very obvious: The boxes doing the
communication might be based on different micro-controllers with different endianness
and different data-word sizes (16-bit, 32-bit, 64-bit) and therefore employing totally
different alignments. In the AUTOSAR CP serialization typically plays no role
for platform internal/node internal communication! Here the internal in-memory data
representation can be directly copied from a sender to a receiver. This is possible,
because three assumptions are made in the typical CP product:
• Endianness is identical among all local SWCs.
• Alignment of certain data structures is homogeneous among all local SWCs.
• Data structures exchanged are contiguous in memory.
The first point is maybe a bit pathological as it is most common, that “internal” com-
munication generally means communication on a single- or multi-core MCU or even
a multi-processor system, where endianness is identical everywhere. Only if we look
at a system/node composed of CPUs made of different micro-controller families this
assumption may be invalid, but then you are already in the discussion, whether this
communication is still “internal” in the typical sense. The second assumption is valid/acceptable for CP, as here a static image for the entire single-address-space system is built from sources and/or object files, which demands that compiler settings among the different parts of the image are somewhat aligned anyway. The third one is also assured in CP: it is not allowed/possible to model non-contiguous data types which get used in inter-SWC communication.
For the AP things look indeed different. Here the loading of executables during runtime, which have been built independently at different times and have been uploaded to an AP ECU at different times, is definitely a supported use case. The chance that compiler settings for different ara::com applications differ regarding alignment decisions is consequently high. Therefore an AP product (more concretely, its IPC binding implementation) has to use/support serialization of exchanged event/field/method data. How serialization for AP-internal IPC is done (i.e. to what generalized format) is fully up to the AP vendor. Also regarding the third point, the AP is less restrictive. So for
example the AP supports exchange of std::map data types or record like datatypes,
which contain variable-length members. These datatypes are generally NOT contigu-
ous in-memory (depending on the allocation strategy). So even if the data contained
in the map or records is compatible with the receiver layout wise, a deep copy (mean-
ing collecting contained elements and their references from various memory regions
— see [8]) must be done during transfer. Of course the product vendor could apply
optimization strategies to get rid of the serialization and de-serialization stages within
a communication path:
• Regarding alignment issues, the most simple one could be to allow the integrator
of the system to configure, that alignment for certain communication relations
can be considered compatible (because he has the needed knowledge about the
involved components).
• Another approach, common in middleware technology, is to verify whether alignment settings on both sides are equal by exchanging a check-pattern as a kind of init-sequence before the first ara::com communication call.
• The problem of needing deep copies because of non-contiguous memory allocation could be circumvented by providing vector implementations which ensure memory contiguity.
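To make the non-contiguity point discussed above concrete, a small stand-alone sketch (the Record type is purely illustrative):

#include <cstdint>
#include <cstdio>
#include <vector>

// A record-like datatype with a variable-length member, roughly what a
// generated type containing a dynamic-length array could look like.
struct Record {
  std::uint32_t id;
  std::vector<std::uint8_t> payload;   // element storage lives on the heap,
                                       // outside of the Record object itself
};

int main() {
  Record r{42u, std::vector<std::uint8_t>(100000u, 0xAAu)};
  // sizeof(Record) stays small and constant no matter how large payload grows:
  // a flat copy of the object would only transfer the vector's control data,
  // not the referenced elements. A serializer (or a "zero-copy" binding) has
  // to collect the referenced storage explicitly - the deep copy discussed above.
  std::printf("sizeof(Record) = %zu, payload bytes = %zu\n",
              sizeof(Record), r.payload.size());
  return 0;
}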
One thing which typically is at the top of the list of performance optimizations in
IPC/middleware implementations is the avoidance of unnecessary copies between
sender and the receiver of data. So the buzzword “zero-copy” is widely used to de-
scribe this pattern. When we talk about AP, where we have architectural expectations
like applications running in separate processes providing memory protection, the typi-
cal communication approach needs at least ONE copy of the data from source address
space to target address space. Highly optimizing middleware/IPC implementations
could even get rid of this single copy step by setting up shared memory regions be-
tween communicating ara::com components. If you look at 6.22, you see, that we
directly encourage such implementation approaches in the API design. But the not
so good news is, that if the product vendor does NOT solve the serialization problem,
he barely gets benefit from the shared memory approach: If conversions (aka de/se-
rialization) have to be done between communication partners, copying must be done
anyhow — so tricky shared memory approaches to aim for “zero-copy” do not pay.
As laid out in the preceding chapters, ara::com expects the functionality of a ser-
vice discovery being implemented by the product vendor. As the service discovery
functionality is basically defined at the API level (see section 6.4) with the methods
for FindService, OfferService and StopOfferService, the protocol and im-
plementation details are partially open.
When an AP node (more concretely an AP SWC) offers a service over the network or
requires a service from another network node, then service discovery/service registry
obviously takes place over the wire. The protocol for service discovery over the wire
needs to be completely specified by the used communication protocol. For SOME/IP,
this is done in the SOME/IP Service Discovery Protocol Specification [9]. But if an
ara::com application wants to communicate with another ara::com application on
the same node within the AP of the same vendor there has to be a local variant of a ser-
vice discovery available. Here the only difference is, that the protocol implementation
for service discovery taking place locally is totally up to the AP product vendor.
From an abstract perspective, an AP product vendor could choose between two approaches. The first one is a centralist approach, where the vendor decides to have one central entity (e.g. a daemon process), which:
• maintains a registry of all service instances together with their location informa-
tion
• serves all FindService, OfferService and StopOfferService requests from local ara::com applications, thereby either updating the registry (OfferService, StopOfferService) or querying the registry (FindService)
• serves all SOME/IP SD messages from the network, either updating its registry (SOME/IP Offer Service received) or querying the registry (SOME/IP Find Service received)
• propagates local updates to its registry to the network by sending out SOME/IP
SD messages.
The following figure roughly sketches this approach.
(Figure: centralist approach — a central service registry/discovery within the middleware implementation of the AP ECU mediates between the local ara::com applications (service providers/consumers) and SOME/IP service discovery on the network.)
A slightly different — more distributed — approach would be to distribute the service registry information (availability and location information) among the ara::com applications within the node. So for the node-local communication use case no prominent discovery daemon would be needed. That could technically be achieved by having a broadcast-like communication: any service offering and finding is propagated to all local ara::com applications, so that each application has a local (in-process) view of the service registry. There might be a benefit with this approach, as local communication might be more flexible/stable since it does not depend on a single registry daemon. However, for the service discovery communication to/from the network a single responsible instance is needed anyhow. Here the distributed approach is not feasible, as SOME/IP SD requires a fixed/defined set of ports, which can (in typical operating systems / with typical network stacks) only be provided by a single application process.
In the end we also do have a singleton/central instance here, with the slight difference that it takes the role of a service discovery protocol bridge between the node-local discovery protocol and the network SOME/IP SD protocol. On top of that — since the registry is duplicated/distributed among all ara::com applications within the node — this bridge also holds a local registry.
(Figure: distributed approach — the service registry view is distributed among the local ara::com applications; a service discovery bridge within the middleware implementation connects the node-local discovery to SOME/IP SD on the network.)
The following figure depicts an obvious and/or rather simple case. In this example,
which only deals with node local (inside one AP product/ECU) communication between
service consumers (proxy) and service providers (skeleton), there are two instances of
the same proxy class on the service consumer side. You see in the picture, that the
service consumer application has triggered a “FindService” first, which returned two
handles for two different service instances of the searched service type. The service
consumer application has instantiated a proxy instance for each of those handles. Now
in this example the instance 1 of the service is located inside the same adaptive ap-
plication (same process/address space) as the service consumer (proxy instance 1),
while the service instance 2 is located in a different adaptive application (different pro-
cess/address space).
(Figure: node-local example — the client implementation calls FindService(ServiceType, AnyInstance) at the service registry/discovery and gets back Handle1 and Handle2 for service instance 1, located in the same process, and service instance 2, located in a different process.)
The lines symbolizing the transport layer between proxies and skeletons are colored differently in this picture: The instance of the proxy class for instance 1 has a red colored transport layer (binding implementation), while the transport layer for instance 2 is colored blue. They are colored differently because the used technology will differ already on the level of the proxy implementation — at least if you expect that the AP product vendor (in the role of the IPC binding implementer) strives for a well performing product!
The communication between proxy instance 1 and the service instance 1 (red) should
in this case be optimized to a plain method call, since proxy instance and skeleton
instance 1 are contained in ONE process.
The communication between proxy instance 2 and the service instance 2 (blue) is a
real IPC. So the actions taken here are of much higher costs involving most likely a
variety of syscalls/kernel context switches to transfer calls/data from process of ser-
vice consumer application to service application (typically using basic technologies like
pipes, sockets or shared mem with some signaling on top for control).
For the application developer on the service consumer side all of this is totally transparent: From the vendor's ProxyClass::FindService implementation he gets two opaque handles for the two service instances, from which he creates two instances of the same proxy class. But “by magic” both proxies behave totally differently in the way they contact their respective service instances. So — somehow there must be some information contained inside the handle, from which the proxy class instance knows which technical transport to choose. Although this use case looks simple at first sight, it isn't on second sight. The question is: Who writes which information into the handle — and when — so that the proxy instance created from it uses a direct method/function call instead of a more complex IPC mechanism, or vice versa?
At the point in time when instance 1 of the service registers itself via SkeletonClass::OfferService at the registry/service discovery, this cannot be decided, since it depends on the service consumer which uses it later on. So most likely the SkeletonClass::OfferService implementation of the AP vendor takes the needed information from the argument (the skeleton generated by the AP vendor) and notifies, via AP vendor specific IPC, the registry/service discovery implementation of the AP vendor. The many “AP vendor” in the preceding sentences were intentional, just showing that all those mechanisms going on here are not standardized and can therefore be deliberately designed and optimized by the AP vendors. However, the basic steps will remain. So what typically will be communicated from the service instance side to the registry/discovery in the course of SkeletonClass::OfferService is the technical addressing information by which the instance can be reached via the AP product's local IPC implementation.
Normally there will be only ONE IPC mechanism used inside one AP product/AP node: the product vendor has typically implemented one highly optimized/efficient local IPC mechanism between adaptive applications, which will then be used throughout. So — in our example let's say the underlying IPC mechanism is Unix domain sockets — skeleton instance 1 would get/create some file descriptor to which its socket endpoint is connected and would communicate this descriptor to the registry/service discovery during SkeletonClass::OfferService. The same goes for skeleton instance 2, just with a different descriptor. When later on the service consumer application does a ProxyClass::FindService, the registry will send the addressing information for both service instances to the service consumer, where they are visible as two opaque handles.
So in this example the handles obviously look much the same — with the small difference that the contained file descriptor values differ, as they reference distinct Unix domain sockets. So in this case it somehow has to be detected inside the proxy for instance 1 that there is the possibility to optimize towards direct method/function calls. One possible, trivial trick could be that the addressing information, which skeleton instance 1 gives to the registry/discovery, also contains the ID of the process (pid), either explicitly or by including it in the socket descriptor filename. The service consumer side proxy instance 1 could then simply check whether the PID inside the handle denotes the same process as itself and could then use the optimized path. By the way: Detection of process-local optimization potential is a triviality which almost every existing middleware implementation does today — so no further need to stress this topic.
Now, if we step back, we have to realize that our simple example here does NOT fully reflect what Multi-Binding means. It does indeed describe the case where two instances of the same proxy class use different transport layers to contact the service instance, but as the example shows, this is NOT reflected in the handles denoting the different instances — it is simply an optimization. In our concrete example, the service consumer using proxy instance 1 to communicate with service instance 1 could also have used the Unix domain socket transport like proxy instance 2 without any functional loss — only from a non-functional performance viewpoint would it obviously be bad. Nonetheless this simple scenario was worth being mentioned here, as it is a real-world scenario which is very likely to happen in many deployments and therefore must be well supported!
(Figure: distributed example — the client implementation calls FindService(ServiceType, AnyInstance) and gets Handle1 for the node-local service instance 1 and Handle2 for service instance 2, which resides on a remote ECU reachable via SOME/IP over the network switch.)
So in this scenario the registry/service discovery daemon on our AP ECU has seen a service offer of instance 2, and this offer contained addressing information on IP network endpoint basis. Regarding the service offer of instance 1 nothing changed: This offer is still connected with some Unix domain socket name, which is essentially a filename. In this example the two handles for instances 1 and 2 returned from ProxyClass::FindService internally look very different: The handle of instance 1 contains the information that it is a Unix domain socket plus a name, while handle 2 contains the information that it is a network socket plus an IP address and port number. So — in contrast to our first example (subsection 9.3.1) — here we really do
have a full blown Multi-Binding, where our proxy class ctor instantiates/creates
two completely different transport mechanisms from handle 1 and handle 2! How this dynamic decision, which transport mechanism to use, is technically made during the call of the ctor is — again — up to the middleware implementer: The generated proxy class implementation could already contain all supported mechanisms, with the information contained in the handle just used to switch between the different behaviors; or the needed transport functionality, aka binding, could be loaded at runtime via shared library mechanisms, after the concrete need has been detected from the given handle.
A further requirement common in automotive SOME/IP deployments — to handle the SOME/IP traffic of an ECU via a single or very few ports — typically comes from low power/low resources embedded ECUs: Managing a huge amount of IP sockets in parallel means huge costs in terms of memory (and runtime) resources. So our AUTOSAR CP siblings, which will be the main communication partners in an in-vehicle network, demand this approach, which is uncommon compared to non-automotive IT usage patterns for ports.
Typically this requirement leads to an architecture, where the entire SOME/IP traffic
of an ECU / network endpoint is routed through one IP port! That means SOME/IP
messages originating from/dispatched to many different local applications (service
providers or service consumers) are (de)multiplexed to/from one socket connection.
In Classic AUTOSAR (CP) this is a straightforward concept, since there is already a shared communication stack through which the entire communication flows; the multiplexing of different upper layer PDUs through one socket is core functionality integrated in CP's SoAd basic software module. For a typical POSIX compatible OS with a POSIX socket API, multiplexing the SOME/IP communication of many applications to/from one port means the introduction of a separate/central (daemon) process, which manages the corresponding port. The task of this process is to bridge between SOME/IP network communication and local communication and vice versa.
(Figure: SOME/IP bridge — the proxy inside the local ara::com application communicates via vendor specific IPC (green) with a combined service registry/discovery and SOME/IP bridge process, which in turn talks SOME/IP (blue) over the network to the remote service instance 2.)
In the above figure you see, that the service proxy within our ara::com enabled appli-
cation communicates through (green line) a SOME/IP Bridge with the remote service
instance 2. Two points which may pop out in this figure:
• we intentionally colored the part of the communication route from app to bridge
(green) differently than the part from the bridge to the service instance 2 (blue).
• we intentionally drew a box around the function block service discovery and
SOME/IP bridge.
The reason for coloring first part of the route differently from the second one is simple:
Both parts use a different transport mechanism. While the first one (green) between the
proxy and the bridge uses a fully vendor specific implementation, the second one (blue)
has to comply with the SOME/IP specification. “Fully vendor specific” here means,
that the vendor not only decides which technology he uses (pipes, sockets, shared
mem, ...), but also which serialization format (see section 9.1) he employs on that
path. Here we obviously dive into the realm of optimizations: In an optimized AP product the vendor would not apply a different (proprietary) serialization format for the path denoted with the green line, since that would lead to inefficient runtime behavior: first the proxy within the service consumer app would employ a proprietary serialization of the data before transferring it to the bridge node, and then the bridge would have to de-serialize it and re-serialize it into the SOME/IP serialization format. So even if the AP product vendor has a much more efficient/refined serialization approach for local communication, using it here does not pay, since the bridge would then not be able to simply copy the data through between the internal and the external side. The result is that for our example scenario we eventually do have a Multi-Binding setup: even if the technical transport (pipes, Unix domain sockets, shared memory, ...) for the communication to other local ara::com applications and to the bridge node is the same, the serialization part of the binding differs.
Regarding the second noticeable point in the figure: We drew a box around the service discovery and SOME/IP bridge functionality, since in product implementations it is very likely that both are integrated into one component/running within one (daemon) process. Both functionalities are highly related: The discovery/registry part consists of functions local to the ECU (receiving local registrations/offers and serving local FindService requests) and of network related functions (SOME/IP service discovery based offers/finds), between which the registry has to arbitrate. This arbitration is, at its core, also a bridging functionality.
The most important meta-model element from the ara::com perspective is the ServiceInterface. Most important, because it defines everything signature-wise of an ara::com proxy or skeleton. The ServiceInterface describes the methods, events and fields a service interface consists of and how the signatures of those elements (arguments and data types) look. So the example interface in 6.1 is basically a simplification of the meta-model ServiceInterface and the real meta-model data type system.
The relationship between the meta-model element ServiceInterface and ara:-
:com is therefore clear: ara::com proxy and skeleton classes get generated from
ServiceInterface.
With software components, the AUTOSAR methodology defines a higher-order ele-
ment than just interfaces. The idea of a software component is to describe a reusable
part of software with well defined interfaces. For this, the AUTOSAR manifest spec-
ification defines a model element SoftwareComponentType, which is an abstract
element with several concrete subtypes, of which the subtype AdaptiveApplica-
tionSwComponentType is the most important one for Adaptive Application software
developers. A SoftwareComponentType model element is realized by C++ code.
Which service interfaces such a component "provides to" or "requires from" the out-
side is expressed by ports. Ports are typed by ServiceInterfaces: a P-port ex-
presses that the ServiceInterface typing the port is provided by the Software-
ComponentType, while an R-port expresses that the ServiceInterface typing the
port is required by it.
Figure 9.6 gives a coarse idea of how the model view relates to the code
implementation.
[Figure 9.6: On the meta-model level, the ServiceInterface RadarService types the R-port of SoftwareComponentType A and the P-port of SoftwareComponentType B. On the implementation level, the implementation of A uses a RadarService proxy instance and the implementation of B uses a RadarService skeleton instance; these implementations are instantiated (partly within a Composite Component) in Executable 1 and Executable 2.]
The figure above shows an arbitrary example, where the implementations of A and B
are instantiated in different contexts. On the lower left side there is an Executable 1,
which directly uses two instances of A's impl and one instance of B's impl. Opposed to
that, the right side shows an Executable 2, which "directly" (i.e. on its topmost level)
uses one instance of B's impl and an instance of a composite software component,
which itself "in its body" again instantiates one instance each of A's and B's impl. Note: This
natural implementation concept of composing software components from other compo-
nents into a bigger/composite artefact is fully reflected in the AUTOSAR meta-model in
the form of the CompositionSwComponentType, which itself is a SoftwareCompo-
nentType and allows arbitrary recursive nesting/compositing of software components.
The second case, on the other hand, belongs to the realm of the "deployment level" and
shall be clarified in the following sub-chapter.
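Before moving on to the deployment level, the following purely illustrative C++ sketch shows how such a composition could be mirrored on the implementation level. The mapping of components to classes is not prescribed by AUTOSAR AP, so all class names here are assumptions (ctor parameters such as a context identifier are omitted and picked up again further below).

// Illustrative sketch: composing software component implementations in C++.

// Assumed implementation class of AdaptiveApplicationSwComponentType A
// (would internally use a RadarService proxy).
class ComponentAImpl { /* ports, logic, ... */ };

// Assumed implementation class of AdaptiveApplicationSwComponentType B
// (would internally provide RadarService via a skeleton).
class ComponentBImpl { /* ports, logic, ... */ };

// Mirrors a CompositionSwComponentType: it simply instantiates the
// implementations of its contained components as members.
class CompositeComponentImpl {
 private:
  ComponentAImpl a_;  // nested instance of A's impl
  ComponentBImpl b_;  // nested instance of B's impl
};

// Corresponds to "Executable 2" of the example: one B instance directly on
// top level plus one composite component instance, which nests A and B again.
class Executable2TopLevel {
 private:
  ComponentBImpl topLevelB_;
  CompositeComponentImpl composite_;
};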
Deployable software units within AP are so-called Adaptive Applications (the corre-
sponding meta-model element is AdaptiveAutosarApplication). Such an Adap-
tive Application consists of 1..n executables, which are in turn built up by instantiating
CompositionSwComponentType (with arbitrary nesting) as described in the previ-
ous chapter. Typically, integrators then decide which Adaptive Applications, in the form
of their 1..n executables, they start at all and how many times they start a certain Adaptive
Application/its associated executables. That means for this kind of implicit instanti-
ation no specific code has to be written! Integrators rather have to deal with machine
configuration to configure how many times applications get started. A started Adaptive
Application then turns into 1..n processes (depending on the number of executables it
is made of). We call this the "deployment level".
[Figure: implementation level (Executable 1 and Executable 2, built from instances of A and B including a Composite Component) and the resulting deployment level.]
The figure above shows a simple example with two Adaptive Applications, each
consisting of exactly one executable. Adaptive Application 1 with
Executable 1 is deployed twice, leading to Process 1 and Process 2 after executable
start, while Adaptive Application 2, which consists of Executable 2, is deployed once, leading to
Process 3 after start.
[Figure: multiple instantiations of B within Executable 2, each providing an instance of the SI RadarService (InstanceId: Radar) that has to be mapped individually.]
The figure above outlines the "problem" with a simple example. Within Executable
2 there are three instantiations of the SoftwareComponentType B implementation in
different contexts (nesting levels). All instances provide a specific instance of SI
RadarService. The integrator, who applies the Service Instance Manifest for
Process 2, has to do the technical mapping on ara::com level. I.e. he has to de-
cide which technical transport binding is to be used for each of the B instantiations and,
subsequently, which technical-transport-binding-specific instance ID. In our exam-
ple, the integrator wants to provide the SI RadarService via SOME/IP binding and a
SOME/IP-specific instance ID "1" in the context of the B instantiation which is nested
inside the composite component on the right side, while he decides to provide the SI
RadarService via local IPC (Unix domain socket) binding and the Unix-domain-socket-
specific instance IDs "/tmp/Radar/3" and "/tmp/Radar/4" in the context of the B instanti-
ations on the left side, which are not nested (they are instantiated at the "top level" of the
executable). Here it becomes obvious that within the Service Instance Manifest, which al-
lows specifying the mapping of port instantiations within a process to technical bindings
and their concrete instance IDs, the sole use of the port name from the model is not
sufficient to differentiate. To get unique identifiers within an executable (and therefore a
process), the nature of nested instantiation and re-use of SoftwareComponentTypes
has to be considered. Every time a SoftwareComponentType gets instantiated, its
instantiation gets a unique name within its instantiation context. This concept applies
to both the C++ implementation level and the AUTOSAR meta-model level! In our concrete
example this means:
• B instantiations on top level get unique names on their level: "B_Inst_1" and
"B_Inst_2"
• B instantiation within the Composite Component Type gets a unique name on this
level: "B_Inst_1"
• Composite Component instantiation on top level gets a unique name on its level:
"Comp_Inst_1"
• From the perspective of the executable/process, we therefore have unique iden-
tifiers for all instances of B:
– "B_Inst_1"
– "B_Inst_2"
– "Comp_Inst_1::B_Inst_1"
For an Adaptive Software Component developer this then means in a nutshell:
If you construct an instance specifier to be transformed via ResolveInstan-
ceIDs() into an ara::com::InstanceIdentifier or used directly with Find-
Service() (R-port side from the model perspective) or as a ctor parameter for a skeleton
(P-port side from the model perspective), it shall look like:
<context identifier>/<port name>
The port name is to be taken from the model, which describes the AdaptiveApplica-
tionSwComponentType to be developed. Since you are not necessarily the person
who decides where and how often your component gets deployed, you should fore-
see that your AdaptiveApplicationSwComponentType implementation can be
handed a stringified <context identifier>, which you
• either use directly when constructing an ara::core::InstanceSpecifier to
instantiate the proxies/skeletons which reflect your own component's ports,
• or "hand over" to other AdaptiveApplicationSwComponentType implementa-
tions which you instantiate from your own AdaptiveApplicationSwCompo-
nentType implementation (thereby creating a new nesting level).
Note: Since AUTOSAR AP does not prescribe how the component model on meta-
model level shall be translated to the (C++) implementation level, component instantiation
(nesting of components) and "handing over" of the <context identifier> is up to
the implementer! It might be a "natural" solution to solve this with a <context iden-
tifier> ctor parameter for multi-instantiable AdaptiveApplicationSwCompo-
nentTypes.
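As a purely illustrative sketch of this idea (the port name "RadarPort", the class names and the exact hand-over mechanism are assumptions, not normative ara::com code), a multi-instantiable component implementation could build its instance specifiers as follows:

// Illustrative sketch: deriving "<context identifier>/<port name>" instance
// specifiers inside a multi-instantiable component implementation.
#include <string>
#include <ara/core/instance_specifier.h>
#include <ara/core/string_view.h>

class ComponentBImpl {
 public:
  // The integrator/parent component hands over the context identifier,
  // e.g. "Comp_Inst_1::B_Inst_1" for the nested instantiation.
  explicit ComponentBImpl(const std::string& contextIdentifier)
      : radarPortSpecifier_{MakeSpecifier(contextIdentifier, "RadarPort")} {
    // The specifier would now be handed to the ctor of the generated
    // RadarService skeleton (P-port side), which then offers the service.
    // On the R-port side the same kind of specifier would be passed to
    // FindService() or resolved via ResolveInstanceIDs().
  }

 private:
  static ara::core::InstanceSpecifier MakeSpecifier(
      const std::string& contextIdentifier, const std::string& portName) {
    // Error handling (e.g. via ara::core::InstanceSpecifier::Create returning
    // an ara::core::Result) is omitted in this sketch.
    const std::string path = contextIdentifier + "/" + portName;
    return ara::core::InstanceSpecifier{ara::core::StringView{path.c_str()}};
  }

  ara::core::InstanceSpecifier radarPortSpecifier_;
};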
Figure 9.13: Multiple usage of the same service instance manifest for an abstract binding