Implementing DDD, CQRS and Event Sourcing
Alex Lawrence
* * * * *
This is a Leanpub book. Leanpub empowers authors and publishers with the
Lean Publishing process. Lean Publishing is the act of publishing an in-progress
ebook using lightweight tools and many iterations to get reader feedback, pivot
until you have the right book and build traction once you do.
* * * * *
Programming paradigms
The content and the code examples in this book comprise a combination of
imperative, declarative, object-oriented and functional programming. However,
the majority of the implementations apply the object-oriented paradigm. Classes
are used extensively, together with private fields and private methods for
strong encapsulation. Still, over the course of the book, the domain-related
implementations transition towards a more functional style. Also, certain
infrastructural functionalities apply selected functional principles wherever useful. As
the changes come naturally together with introducing specific concepts, there is
no need for upfront knowledge in Functional Programming.
Arrow functions
Async functions
Await operator
Classes, specifically private class fields
const, let
Destructuring assignment
JavaScript Modules
Shorthand property names
Spread Syntax
Template literals
Throwing and handling Errors
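As a brief, hedged sketch, several of the listed language features can be seen together in a few lines; the function name and example values are purely illustrative, not code from the book:

```javascript
// destructuring assignment with a default value, spread syntax,
// a template literal and an arrow function in combination
const describeTask = ({title, tags = []}) => {
  const allTags = [...tags, 'reviewed']; // spread syntax
  return `Task "${title}" with tags: ${allTags.join(', ')}`; // template literal
};
```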
crypto.createHash()
fs.watch
many of the standard operations of fs.promises
http functions such as createServer(), get() and request()
various path functions
Readable Streams and Transform Streams
url.parse()
child_process.spawn()
.data directory
Starting from Chapter 9, many code examples and Sample Application implementations save data
to the filesystem. Every created file and subdirectory is put within a directory called “.data”,
which itself is placed at the bundle root. This data container can be deleted at any time without
influencing the functionality of associated code.
The complete source code that is bundled with the book is licensed under the
MIT License. This means you are free to reuse any contained functionality
in your own projects. However, please refrain from publishing the complete
bundle, as it represents an essential part of the work for this book. Also, note that
the code is primarily for illustration purposes. While the implementations fulfill
the respective functional requirements, they are not necessarily ready to be used
in production environments. Amongst other things, the code may lack essential
validations and can be vulnerable to security-related attacks.
Ever since the age of 10, I have been interested in computers. Experimenting
with BASIC on a C64, programming with Turbo Pascal and failing to self-teach
C++ accompanied my early adolescence. Only much later, while studying
Informatics, did I dive deep into the theory and learn many other programming
languages. My first job introduced me to advanced concepts and patterns,
including CQRS and Event Sourcing. At some point, I quit my job and became a
freelancer. Simultaneously, I launched a mouse tracking product that naturally
applied Event Sourcing. Also, I joined a startup developing a collaborative
software with strong focus on DDD, CQRS and Event Sourcing in Node.js. In
this project, I had the chance to deepen my theoretical knowledge and apply
many concepts practically.
Complementary content
This is an example paragraph of complementary information. It is not required to be read in order
to understand the main content.
Chapter summary
The chapter order in this book is determined by what makes most sense with
regard to building the Sample Application. As a consequence, the topics are laid
out in a way that they build upon each other. Amongst other things, this also
reduces the likelihood of referring to terms before they are explained. Every
chapter follows the same structure. The main part explains the respective
concepts and illustrates them either with drawings or code examples. At the end
of each chapter, the discussed concepts are applied to the Sample Application.
This is done by describing the working steps to take and by showing the
corresponding drawings and code. An exception to this format is Chapter 5, which
does not contain a Sample Application section.
Chapter 1
Chapter 1 introduces the concept Domains and defines it in the context of DDD
as problem-specific knowledge fields. The explanation incorporates the
importance of always focusing on the original problems to solve. Different
categories of knowledge areas and their significance are compared to each other.
This is followed by the definition of Domain Experts, who represent the
primary source of relevant information in each project. For an adequate
subdivision of knowledge areas, the concept Subdomains is introduced. This is
accompanied by a description of the different types Core Domain, Supporting
Subdomain and Generic Subdomain. A brief illustration on how to identify
Subdomains provides practical guidance. At the end of the chapter, the Domains
and the Subdomains for the Sample Application are identified.
Chapter 2
Chapter 2 covers the concept Domain Models, which are sets of knowledge
abstractions focused on solving specific problems. The chapter starts with
describing their typical structure and components, and explains why such
abstractions should incorporate verbs, adjectives and adverbs. Also, the relation
to Domain Expert knowledge is clarified. As next step, the concept Ubiquitous
Language is introduced, which promotes a linguistic system to unify
communication and eliminate translation. Afterwards, different representation
possibilities for Domain Models are described and compared to each other, such
as drawings and code experiments. This is followed by a section on Domain
Modeling, which is the process of creating structured knowledge abstractions.
As last step, the Sample Application Domain Model is created and expressed in
multiple different ways.
Chapter 3
Chapter 3 focuses on the concept Bounded Contexts, which represent
conceptual boundaries for the applicability of Domain Models. First, the
concept is differentiated from Domains and Domain Models. Then, multiple
context sizes and their implications are compared to each other. This includes
large unified interpretations and smaller boundaries that align with Subdomains.
Afterwards, the relation between a Bounded Context and a Ubiquitous Language
is clarified. Also, it is explained why contexts are first and foremost of
conceptual nature, but still commonly align with technological structures. The
relationship and integration patterns Open Host Service, Anti-Corruption
Layer and Customer-Supplier are discussed briefly. This is followed by the
concept Context Map for visualizing conceptual boundaries. Finally, the
chapter illustrates the definition of Bounded Contexts for the Sample
Application.
Chapter 4
Chapter 4 deals with Software Architecture, which is a high-level structural
definition of a software and its parts. Since this topic is very broad, the focus is
narrowed down to common architectural patterns that fit well with DDD. The
chapter starts with describing the typical parts of a software, consisting of
Domain, Infrastructure, Application and User Interface. This is followed by
describing and comparing the patterns Layered Architecture and Onion
Architecture, which share many fundamental principles. Afterwards, it is
explained and justified what approach the book and its implementations follow.
Also, the section includes an explanation on how to invert dependencies between
software layers by using abstractions. The chapter ends with defining the
Software Architecture and the resulting directory layout for the Sample
Application.
Chapter 5
Chapter 5 describes and illustrates selected concepts for a high code quality with
focus on Domain Model implementations. First, it explains the importance of
establishing a binding between the Domain Model and its implementation,
together with other artifacts. Afterwards, refactoring is described as functional
refinement in the context of DDD and is differentiated from pure technical
improving. Next, code readability is discussed and its characteristics are broken
down into quantifiable aspects. The importance of combining related state and
behavior is emphasized and exemplified by creating meaningful units with high
cohesion. This is followed by an explanation on how to deal with dependencies
and how to apply Dependency Inversion. The final concept of the chapter is
Command-Query-Separation, which promotes a separation of state
modifications and computations.
Chapter 6
Chapter 6 provides a number of tactical patterns as essential building blocks for
a Domain Model implementation. First, Value Objects are introduced as a way
to design a descriptive part of a Domain without a conceptual identity. This is
accompanied by an illustration of the key characteristics Conceptual Whole,
Equality by Values and Immutability. Afterwards, Entities are described as a
possibility to design unique, distinguishable and changeable Domain Model
components. This includes an explanation of identities and how to generate
reliable identifiers using UUID. Next, Domain Services are discussed for
expressing stateless computations and encapsulating external dependencies
through abstractions. Afterwards, Invariants are presented as a way to
transactionally ensure consistent state. Finally, all patterns are applied to create a
useful first Sample Application Domain Model implementation.
Chapter 7
Chapter 7 presents the concept of Domain Events, which represent structured
knowledge of specific meaningful occurrences in a Domain. The chapter starts
with explaining their relation to Event-Driven Architecture. This is followed by
describing common naming conventions for creating expressive event types
based on a Ubiquitous Language. Afterwards, the standard structure and content
of events are discussed, including a differentiation between specialized values
and generic information. Next, it is described how the distribution and
processing of Domain Events work. This includes the implementation of an
in-memory Message Bus and an Event Bus. Also, important challenges of a
distribution process are discussed briefly. At the end of the chapter, Domain
Event notifications are used to integrate different context implementations of the
Sample Application.
Chapter 8
Chapter 8 introduces Aggregates to establish transactional consistency
boundaries, which are essential for concurrency and persistence. The first section
contains an explanation of transactions and a breakdown of their characteristics
Atomicity, Consistency, Isolation and Durability. This is followed by discussing
the structural possibilities of Aggregates, the Aggregate Root and the
management of individual component access. Afterwards, the principle of
Concurrency is explained and illustrated in detail, together with Concurrency
Control. Then, the most important Aggregate design considerations are
described, which focus on component associations, invariants and optimal size.
Eventual Consistency is introduced as a mechanism to synchronize distributed
information across consistency boundaries in a non-transactional fashion.
Finally, the Sample Application is analyzed and refactored in order to achieve a
useful Aggregate design.
Chapter 9
Chapter 9 describes the concept of Repositories for enabling persistence in a
way that is meaningful to a Domain Model. First, the chapter underlines the emphasis
on the Domain Model as opposed to technological aspects. Then, a basic
Repository functionality is illustrated that consists of saving, loading and custom
querying. Also, the influences of persistence support on a Domain Model
expression are exemplified. The next part explains Optimistic Concurrency and
promotes version numbers as implementation. Furthermore, it describes
automatic retries and conflict resolution for concurrency conflicts. Afterwards,
the chapter focuses on the interaction with Domain Events and provides different
publishing approaches. One promotes extending Repositories and the other one
recommends separating concerns. Lastly, the Sample Application is refactored
for persistence support together with reliable Domain Event publishing.
Chapter 10
Chapter 10 illustrates Application Services, which are the part responsible for
executing use cases, managing transactions and handling security. The first
section describes and compares two approaches for their overall design and
implementation. This is followed by an explanation of different possible service
scenarios, ranging from simple Entity modifications to specialized stateless
computations. Afterwards, the topics transactions and consistency are explained
with regard to Application Services and complemented with a detailed example.
The subsequent section discusses cross-cutting concerns and utilizes the design
patterns Decorator and Middleware for generic solution approaches. Next, the
security-related concepts Authentication and Authorization are explained
briefly and illustrated with individual examples. The chapter ends with
implementing the Application Services for the Sample Application together with
exemplary authentication and authorization.
Chapter 11
Chapter 11 explains Command Query Responsibility Segregation, which
separates a software into a write side and a read side. The chapter starts with an
architectural overview to explain the high-level picture as well as the
interactions between individual parts. The following section introduces and
illustrates the two Domain Layer concepts Write Model and Read Model. This
is followed by a comparison of different approaches for the synchronization of
Read Model data. As next step, the message types Commands and Queries are
explained together with recommendations on their naming and structure.
Afterwards, Command Handlers and Query Handlers are presented as a
specialized form of Application Services. Finally, all covered concepts are
combined and applied in order to create a Sample Application implementation
with CQRS.
Chapter 12
Chapter 12 covers the pattern Event Sourcing, where state is represented as a
sequence of immutable change events. As first part, the chapter provides an
architectural overview and describes the overall flow. This is complemented with
a clarification on the relation to Domain Events and recommendations on event
anatomy. Afterwards, event-sourced Write Models are explained and two
different implementation approaches are compared. Then, the concept Event
Store is introduced for the persistence of event-sourced state. This is followed
by a section on Read Model projections, which are responsible for computing
derived read data. Next, the concept of Event Sourcing is differentiated from
Event-Driven Architecture and their combination is explained. The chapter ends
with transforming the existing Sample Application code into an implementation
with Event Sourcing.
Chapter 13
Chapter 13 describes how to split a software into separate executable programs
that operate as autonomous and self-contained runtime units. The chapter starts
with an explanation of program layout possibilities. This includes a division of
architectural layers as well as the alignment with context implementations. Next,
the two main types of context-level communication are described and compared
to each other. This is followed by a section on remote use case execution,
which includes an HTTP interface factory as example implementation.
Afterwards, the idea of remote event distribution is discussed and
complemented with a filesystem-based implementation. As last step, the chapter
separates the Sample Application implementation into multiple programs. This
incorporates the introduction of a proxy server that provides a unified HTTP
interface to multiple endpoints.
Chapter 14
Chapter 14 focuses on the User Interface part in the context of web-based
software. As first step, the chapter explains how to serve files to browsers and
introduces an HTTP file server. Then, the concept of a Task-based UI is
introduced and compared to a CRUD-based approach. This is followed by
explaining and illustrating the Optimistic UI pattern, which can improve user
experience through optimistic success indication. Afterwards, the concept of
Reactive Read Models is described, which helps to build reactive User
Interfaces. This includes the introduction of the Server-Sent Events web
standard. Also, the notion of components and their composition is explained
together with the Custom Elements technology. The chapter ends with
illustrating the implementation of the User Interface for the Sample Application.
Appendix A
Appendix A explains the use of static types and their potential benefits with
TypeScript as example. The first part focuses on Value Objects, Entities and
Services. This is followed by describing how static types affect the definition
and the usage of events. Next, the implications on event-sourced Write Models
are discussed. Afterwards, the concept Dependency Injection is revisited. Then,
the effects of static types on Commands and Queries are explained. The
appendix ends with summarizing the most important aspects of a TypeScript
implementation for the Sample Application.
Appendix B
Appendix B deals with using existing technologies for production scenarios. The
first part discusses identifier generation. This is followed by a section on
Containerization and Orchestration using Docker and Docker Compose. Next,
the Event Store is discussed with the example of EventStoreDB. Then, Read
Model stores are covered with Redis as exemplary technology. Afterwards, the
Message Bus functionality is exemplified with RabbitMQ. Then, a static file
server and a proxy functionality are illustrated with NGINX. Finally, the
introduced technologies are used for the Sample Application implementation.
Chapter 1: Domains
Generally speaking, a Domain is an area of expertise. Put into more abstract
terms, it is a conceptual compound of cohesive knowledge. In the context of
Domain-Driven Design (DDD), the term stands for the topical area in which a
software operates. [Vernon, p. 43] describes it as “what an organization does
and the world it does it in”. As an alternative, [Evans, p. 2] defines it as “the subject
area to which the user applies a program”. Another way to think of it is as the
knowledge space around the problems a software is designed to solve. The exact
definition of the term can vary slightly, depending on its context. Regardless of
such detailed differences, it is always important to identify the distinct
knowledge areas a software operates in.
Domain Experts
Domain Experts are the primary source of specialized knowledge that enables
the creation of an adequate software-based solution to a problem. For every project,
there should be at least one person with this role. If this is not the case, the
relevant Domain knowledge can be acquired by individuals, who then become
the experts. In general, Domain Experts should be able to provide helpful
answers to most of the arising questions. However, they cannot be expected to
know “everything” about a respective Domain. Also, the knowledge may be
shared across multiple persons, each with their own specialty. Furthermore, some
facts may simply be unknown. In such situations, it makes sense to put effort
into discovering the unknown parts and gaining new insights.
Subdomains
Subdomains are distinguishable knowledge areas that are part of a larger
compound. In theory, almost every Domain can be understood as a collection of
individual self-contained parts. Likewise, multiple Domains can be grouped
together with others into an overarching one. This is mainly a matter of
perspective and scope. Amongst other things, it also illustrates the ambiguity of
the term “Domain”. On the one hand, it can stand for the whole of something.
On the other hand, it can describe a subordinate part. For software projects, the
overall Domain may not be a universally meaningful knowledge area. Rather, it
can be a collection of parts that may even be unrelated. There are three types of
Subdomains, each of which is described in a following subsection.
Core Domain
The Core Domain is the knowledge area that is most relevant to the problems a
software aims to solve. It should be identified as early as possible. Most of the
expertise and effort must be put into this area. With regard to a business, the
Core Domain is the key to success. Ideally, the functionality related to its
knowledge area provides competitive advantage over comparable projects. If a
software does not excel at its core, the associated business is likely to fail. There
are scenarios where the overall Domain of a software exclusively consists of a
Core Domain without other Subdomains. This is often the case for projects with
a specialized functionality that is meant to be integrated into other software.
Supporting Subdomain
Supporting Subdomains are concerned with knowledge of which some parts
are specific to the custom software Domain. Another way to describe them is as
a combination of generic knowledge and problem-specific aspects. While these
areas are not as important as the Core Domain, they still play a decisive role.
There can be an arbitrary number of Supporting Subdomains, each concerned
with individual knowledge. This also means that a particular Domain may contain
no such part at all. Due to the hybrid character of these topical areas, their
existence should always be judged critically. In the worst case, Core Domain
components are mistakenly combined with generic knowledge into an artificial
Supporting Subdomain.
Generic Subdomain
Generic Subdomains deal with universal knowledge that is not specific to the
main problem space of a software. As with Supporting Subdomains, their
existence is not mandatory and their count is not restricted. Generally, these parts
are good candidates for outsourcing the associated work. The desired
functionalities can be developed by external entities. Another option is to use
existing third-party software solutions. Note that even when Subdomains are of
generic nature, they are still an integral part of the overall Domain. Without
appropriately accounting for these secondary parts, a project also risks failure.
The most prevalent Generic Subdomain in software projects is the knowledge
area around users, identities, authentication and authorization.
Identification of Subdomains
The identification of Subdomains for a software is a fairly subjective process.
There are no exact rules on how to perform this step. Consequently, there is no
absolute right or wrong in the way of doing it. In fact, it is heavily dependent on
the project and the people working on it. Whatever helps to capture the
individual knowledge areas at play should be considered useful. One
visualization approach is to draw circles for Subdomains, add in their names and
types, and create lines for relationships. This type of artifact neither has to
be created by a single person nor exclusively by developers. Since the
Subdomain definitions are relevant for every project member, the activity is
ideally a team effort.
Domain names
While it is important to determine a name for each Subdomain, it is not required
for the overall knowledge area. What is more, a software Domain can be a
specialized collection of individual parts that are unrelated to each other.
[Vernon] uses the abstract term “Domain” for the overarching knowledge area in
most diagrams. Also, [Vernon, p. 44] states: “It should be pretty obvious to you
what your domain is.” Whether an explicit name is necessary depends on the
project and its members. Judging from personal experience, it is typically not
required for the Domain Model, the source code or other artifacts. Therefore, it
can be considered optional. However, if an overall Domain is a common area of
expertise that has a universal name, this name should be reused.
Problem definition
The first step is to clarify what the actual problem is and whether it can and
should be solved with software. It is not uncommon for projects to start
development without having these basic things defined. There are various
possible reasons for this circumstance. Domain Experts and customers may not
be able to express their knowledge and requirements adequately. Also,
developers may not understand a Domain well enough, or they might favor
technical challenges over building a useful solution. Furthermore, development
efforts may start prematurely due to underestimating the complexity of a
Domain. Starting with a clear and concise description of the main problem helps
to avoid such issues.
Software project checklist
Solution approach
Before being able to solve a problem, a solution approach needs to be discovered
and selected. This can be done by analyzing the problem, gathering ideas,
interviewing customers, developing prototypes, testing them and so forth. While
all these activities are helpful for finding a good solution, their illustration is not
of relevance for this book. For brevity reasons, it is simply assumed that the
ideal approach is the implementation of a Task Board. Such a software often
makes an appearance as a part of a process model like Scrum or Kanban.
Therefore, the concept should be familiar to most of the readers. To avoid
unnecessary complexity, the Sample Application remains free of any such
surrounding methodologies.
Figure 1.5: Task Board example
Domain identification
The final step is to define the overall Domain together with its contained
Subdomains and their types. The Core Domain for the Sample Application is
called Project Management. This knowledge area incorporates everything that
revolves around projects, teams, coordination of work and, more specifically, also
task boards. Excelling in this part offers the potential to make the software
distinctive and successful. The fact that project members can work from
anywhere requires dealing with topics such as identity, authentication and
authorization. These aspects are contained in the Generic Subdomain Identity.
Unlike the core part, it is free of any problem-specific knowledge. While
there are no obvious connections between the two parts at this point, it is safe to
assume they somehow interact.
Figure 1.6: Sample Application Domains
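Although this identification is a conceptual activity rather than a coding task, its result can also be captured as a simple data structure; the following is an illustrative sketch based on the definitions above, not code from the book:

```javascript
// illustrative capture of the identified Sample Application Subdomains,
// including their names, types and covered topics
const subdomains = [
  {name: 'Project Management', type: 'Core Domain',
    topics: ['projects', 'teams', 'coordination of work', 'task boards']},
  {name: 'Identity', type: 'Generic Subdomain',
    topics: ['identity', 'authentication', 'authorization']},
];

// the Core Domain should be identified as early as possible
const coreDomain = subdomains.find(
  subdomain => subdomain.type === 'Core Domain');
```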
The problem statement and the solution approach lay the foundation for the
Sample Application software. Furthermore, the definitions of the overall Domain
and its Subdomains serve as the first important artifacts. The next step is to create
adequate Domain Models for the individual software parts.
Chapter 2: Domain Models
A Domain Model is a set of knowledge abstractions that is focused on aspects
relevant to solving specific problems. [Evans, p. 3] describes it as “selectively
simplified and consciously structured form of knowledge”. The term “model”
must not be confused with definitions from other software-related concepts. For
example, the pattern Model-View-Controller (MVC) uses it for a technological
layer that contains the business logic. Although a Domain Model is typically
expressed as code, it is not the code itself or a part of it. Rather, it is structured
knowledge that serves as foundation for software and other artifacts. An actual
implementation may only reflect a subset of the underlying abstractions, and
additionally deals with extraneous technical aspects. Using the explicit term
“Domain Model implementation” helps to avoid the ambiguity.
Domain
Knowledge area around a problem
Domain Model
Structured abstractions of Domain knowledge
Domain Model implementation
Software solution based on a Domain Model
Tangibility of abstractions
The idea that a Domain Model is a set of knowledge abstractions may be hard to
grasp at first. Trying to think of something without having a concrete tangible
representation in mind can be difficult. This is one of the reasons why a model is
commonly confused with tailored representations of it. While diagrams
and documentation are essential, they often serve a specific purpose and do not
represent the complete Domain Model. Despite any level of sophistication and
detail, almost any expression of information is also a simplification. Even if
every relevant aspect could be included unambiguously in one artifact, the result
is still an immutable snapshot. Since abstractions of knowledge can change over
time, a static representation risks becoming obsolete.
Example: Client project
Figure 2.1: Client project Domain
Consider a client who hires a company to build a software. The client explains
the initial idea and provides the required Domain knowledge. After defining the
problem to solve, the company starts to create fitting knowledge abstractions.
While creating documentation during this process can be helpful, the individual
artifacts are secondary. The actual Domain Model consists of the abstractions that
emerge between the company and the client. Assume that their collaboration runs into
disagreements. Another company is hired to build the same software and
picks up the created documentation. While the artifacts help to convey the idea
of the original model, they do not include every single detail. The new company
works out a second set of abstractions, which may even differ slightly from the
first one.
Ubiquitous Language
One important aspect for a Domain Model is the language that is used to
describe its contents. A Ubiquitous Language is a model-based linguistic
system for communicating specialized knowledge with high expressiveness and
correctness. In this context, the term “ubiquitous” does not mean that such a
language is universal. Rather, it implies its validity everywhere the associated
Domain Model is applicable. This especially includes every implementation
artifact. The concept promotes not just a specialized form of representation, but
rather a linguistic foundation all artifacts should be based on. Furthermore, its
consistent application helps to deepen the understanding of the Domain Model
and to reveal new insights. When employing a Ubiquitous Language, it is crucial
that every involved member and entity applies it thoroughly.
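The effect on implementation artifacts can be sketched with a hypothetical example from the Sample Application's problem space; the class and its operations speak the Domain's language instead of technical storage terms (all names here are assumptions for illustration):

```javascript
// expressive: the code uses the Ubiquitous Language of a task board
class TaskBoard {
  #tasks = []; // private class field for strong encapsulation

  addTask(task) { this.#tasks.push(task); }

  get tasks() { return [...this.#tasks]; } // defensive copy
}

// in contrast, a purely technical equivalent such as
// "recordContainer.insertRow(entry)" would require translation
const board = new TaskBoard();
board.addTask({title: 'write meeting notes'});
```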
Translation costs
The purpose of a Ubiquitous Language is to eliminate the need for translation
and to make communication precise and effective. That is between team
members, between different model artifacts and between expert knowledge and
source code. Translation is an activity that imposes costs for the involved parties
and risks the distortion of facts. Even worse, the inability to translate leads to a
total loss of information. When developers do not understand Domain Experts,
important aspects may not be reflected in the code. On the other hand, a software
expressing model knowledge with development-specific terminology cannot be
used correctly by non-developers. Communication problems often lead to the
creation of multiple Domain Models that drift apart. An adequate Ubiquitous
Language helps to prevent these kinds of issues.
Domain Modeling
Domain Modeling is the act of creating and refining a Domain Model out of
unstructured Domain knowledge. Ideally, this process is executed in an iterative
and continuous way. In most cases, it is virtually impossible to transform all
relevant aspects into useful abstractions in one step. Any deep understanding of
a Domain and the problem space typically only evolves over time. The
resulting recurring insights demand a continuous adjustment of the abstractions.
Therefore, a Domain Model is best understood as ever-evolving. In fact, insights
may happen during the implementation and lead to code adjustments. Naturally,
iterative Domain Modeling is a good fit for agile software development. At the
same time, it can also be embedded into more traditional processes, though it
may be less straightforward.
Knowledge transformation
The transformation of knowledge into suitable abstractions can be done in
different ways. There are countless approaches, each with its own
characteristics. One example is Event Storming, which focuses on Domain
Events and works well with DDD. However, to some extent, it is secondary how
exactly the knowledge is processed, as long as it is processed somehow.
Consider the previous client example. The client explains their idea, while a
company employee creates an informal drawing with shapes and arrows. This
artifact is likely to serve two purposes. For one, it is a specific way of capturing
information. Secondly, it establishes a Domain Modeling process, as the
captured information is abstracted and structured. The client may even actively
collaborate in the role of a Domain Expert.
Process integration
Domain Modeling should be integrated into the existing software development
process. Typically, there is one initial iteration and recurring follow-up sessions.
The initial step can be set up as an informal meeting with selected project members.
For this part, the participation of Domain Experts and developers is strongly
advised. While the experts possess crucial knowledge, the developers are the
primary consumers of the resulting Domain Model. The experts can explain their
understanding of the Domain. Questions should be brought up so that relevant
information is not only conveyed, but also discussed. Eventually, insights occur
and relevant facts are revealed. The most important parts must be documented,
such as by creating visualizations. On top of the initial modeling, refinement
sessions can be made a continuous team effort.
Figure 2.2: Software process with Domain Modeling
Model representations
There are numerous approaches for representing a Domain Model, each with its
individual advantages and disadvantages. Although language is an important
aspect, it is not mandatory that Domain Models manifest as pure textual
representations. Writing notes or simple sentences using the respective
Ubiquitous Language makes it possible to express information precisely. One
disadvantage is that it bears the risk of getting too detailed. Drawings and
diagrams suffer less from too many details, as they are naturally more abstract.
However, any associated formalism can introduce overhead. Another option is to
create source code. The resulting artifacts are highly tangible representations.
One issue is that the code can be mistaken for the actual Domain Model
implementation. As with most model-related artifacts, it is important to
recognize the temporal validity.
Comparison of Domain Model representations

Written text
Advantages: high expressiveness; no special skills required
Disadvantages: some facts may be complex to describe; risk of getting too detailed

Diagrams
Advantages: naturally more abstract; cheap to create (if informal)
Disadvantages: associated formalism can add overhead; more expensive to create

Code experiments
Advantages: appealing to developers; tangible representation
Disadvantages: most expensive to create; may be mistaken for implementation
The described methods can be customized and mixed with each other. There are
numerous further approaches, which are not explicitly mentioned at this point.
Formalism of diagrams
Formalism in artifacts risks causing an overhead that is contrary to an agile Domain Modeling
process. Specifically, unnecessary formalism in diagrams should be avoided when possible. This
book uses informal drawings, similar to how it is done in related literature, such as [Vernon] or
[Evans]. This way, the reader is not required to be experienced with specific diagram types.
Complementary representations
The various approaches of expressing Domain Model knowledge inherently
differ in what information they convey and how this is done. Many times, the
explanation of behavior and characteristics works better with written text. On the
other hand, drawings can make it easier to comprehend a larger system and the
relations of its parts. The previously mentioned options for model
representations are not meant to be interchangeable, as each of them serves a
specific purpose. Rather, they complement each other, and their combination
makes it possible to gain a more comprehensive understanding of a Domain
Model. This is one of the reasons why it can be helpful to use different ways to
convey knowledge. Multiple expressions can produce distinctive but
complementary viewpoints.
Sample Application: Domain Models
This section describes the creation of the Domain Models for the Sample
Application. The activity depends on the results from the sample section of the
previous chapter. The problem definition, the solution approach and the
identified knowledge areas establish the required foundation. They make it possible to
understand which Domain Models must be created and what purpose they must
serve. There are multiple steps involved in the modeling activity. First, all
relevant Domain knowledge is gathered and filtered. Then, the resulting
information is transformed into cohesive sets of useful abstractions. As a next
step, the Domain Models are expressed through different exemplary
representations. Besides their main purpose, these artifacts also illustrate the
underlying Ubiquitous Language. Finally, the linguistic system is presented
through a simplified glossary.
2. Identity knowledge:
The same person can work on multiple projects
A person uses an e-mail address for identification
E-mail addresses of persons can be changed
Every person uses a custom nickname
Administrative access to the software must be possible
Domain Modeling
With the relevant Domain knowledge in place, the initial Domain Modeling
activity can be performed. The first step is to analyze each statement and isolate
its relevant aspects. Then, the information is distilled and transformed into
structured knowledge. The resulting abstractions become part of the respective
Domain Model. For example, while none of the statements express it directly,
some imply that tasks have three possible states. This can be used to derive the
fact that tasks on a board are divided into three sets. In contrast, other statements
may also contain irrelevant information. While the average team size is a
domain-related knowledge item, it is not of further relevance. The complete set
of Domain Model statements is presented as items grouped together by their
Subdomain:
These lists are the resulting artifacts of an initial Domain Modeling activity.
Consequently, they are also a communication tool to convey the idea of the
emerging Domain Models. The diction of the statements may suggest that they
are software requirements. This is not the case. Rather, they are an informal
collection of rules and characteristics. As previously explained, a model itself is
structured knowledge and not one specific representation. The same applies here.
However, without working together as an actual team, it is hard to share the
exact same knowledge. For this reason, the lists serve as surrogates for the
Domain Models of the Sample Application. Since written statements are not the
only useful representation type, the next section provides two exemplary types of
visualizations.
Visual representations
Figure 2.3: Sample Application: Domain Model components
The statement collections, the visualization and the exemplary glossary represent
the means to convey the idea of the Domain Models. Their combination provides
the necessary setup for being able to define useful conceptual boundaries and
potential technological units.
Chapter 3: Bounded Contexts
A Bounded Context is a conceptual area that determines the applicability of a
Domain Model. [Vernon, p. 62] describes it as “explicit boundary within which a
Domain Model exists”. [Evans, p. 384] defines it as “intention to unify a model
within certain boundaries”. This implies that a particular set of abstractions must
be interpreted in only one way within a defined scope. Doing so enables the use
of the associated Ubiquitous Language without the risk of misunderstandings or
the need for translation. There are many situations where Bounded Contexts are
utilized for technical decisions and sometimes even mistaken for Software
Architecture. While conceptual boundaries and software structures commonly
align with each other, the primary goal is to define Domain Model applicability.
Relation to Domains
The concepts Domains, Domain Models and Bounded Contexts stand in close
relation and are commonly confused with each other. Overall, they are among the
more controversial and frequently discussed parts of DDD. One possible reason is that, unlike
with technical topics, it is more difficult to differentiate the theoretical parts. The
Domain around a problem and its Subdomains are areas of related knowledge.
This means that they categorize existing information. In contrast, Domain
Models are individual abstractions using a custom language that are created out
of existing knowledge. Their contents and their structure are not predetermined
and can be specific to a software. The same applies to Bounded Contexts. They
are individual conceptual boundaries with custom names that define the
applicability of their enclosed Domain Models.
Differentiation between terms
Domain
Collection of existing knowledge
Domain Model
Individually created abstractions
Bounded Context
Individually defined boundary
Consider working for a company that develops a software for meetings. The
associated overall Domain is broad and complex as it encloses many
Subdomains. There are generic topical areas such as Identity, Team Management
and Scheduling. Then, there are supporting areas like Real-Time Communication
(RTC). And most importantly, there are the three core knowledge areas
Facilitation, Presentation and Visualization. Each of them itself again contains
multiple Subdomains. Assume that the company decides to abstract all the
knowledge into one unified Domain Model inside a single Bounded Context.
The main motivation is to use the same interpretation across the whole
organization. Despite possible advantages, this is likely to result in a model that
gets abandoned quickly. The complexity, inflexibility and maintenance costs
would make it practically unusable.
Technological boundaries
Bounded Contexts are of conceptual nature and do not necessarily relate to
software structures. Their main purpose is to define the partitioning of a Domain
into a set of delimited models. Even more, there is no rule that models must be
expressed as code at all. The field of Lean Product Development promotes the
concept Concierge Minimum Viable Product. This approach suggests evaluating
the potential of an idea by executing it manually first. As an example, consider a
recommendation feature for an online job platform. The functionality can be
prototyped by hand-picking job recommendations without developing software.
Still, the manual work represents an application of a particular Domain Model
inside a Bounded Context. This exemplifies the importance of differentiating
between conceptual boundaries and software-related aspects.
Exemplary delimiting software mechanisms
Mechanism Description
Modules Logical division inside a monolithic software
Processes Runtime separation of individual executable programs
Infrastructure Infrastructural distribution of distinct programs
Despite their conceptual nature, it is common and often useful that contexts align
with some form of technical division. [Vernon, p. 71] explains that it “doesn’t
hurt to think about a Bounded Context in terms of the technical components that
house it”. In fact, Domain Models and their boundaries form units that show
similar characteristics to delimiting software mechanisms. Therefore, it can
make sense to represent their shapes as individual modules, as standalone
processes or even as separated infrastructure. The right scale and division
mechanism depends on the individual project and the corresponding Domain.
However, it is crucial to understand that these exemplary constructs are
technological concerns and not the conceptual boundaries themselves.
Context Maps
A Context Map is an informal drawing of Bounded Contexts to visualize their
relationships and their integrations. Similar to the other drawings in this book,
the elements of such a map are typically represented by circular shapes. Each
included context should provide its name. Connections between distinct parts are
visualized with lines. Their relation and integration are explained through a short
textual description. Context Maps can serve multiple purposes at once. On the
one hand, they illustrate the structure of conceptual boundaries and their
interdependencies. On the other hand, they can declare the technical patterns that
are used to integrate the implementations of different contexts. However, when
using them to describe concepts that are independent of technical concerns, the
integration details may be excluded.
Figure 3.4: Generic Context Map
Standard approaches
The simplest possibility is to define one large Domain Model inside a single
Bounded Context. For the Sample Application, this can be a valid strategy. The
overall model has a manageable complexity, it remains largely unchanged over
time, and the only collaborator is the reader. Yet, this approach may lead to an
overly complex or ambiguous abstraction of the Domain. The reason is that
multiple distinct knowledge areas would be combined into one Domain Model.
An alternative is to aim for a one-to-one alignment with Subdomains. This
requires one context for Project Management and one for Identity, each with a
smaller Domain Model. However, simply following the standard
recommendation does not take into account what the most useful set of
conceptual boundaries is.
Language considerations
Another influencing factor for the definition of Bounded Contexts is the
potential separation of languages within different Domain Model subsets. The
Project Management part defines the term “role” as a Software Development
specialization. In contrast, the Identity part uses the same word to differentiate
user types. Due to this collision in terminology, the two parts cannot be simply
joined together into a common model. One pragmatic solution is to employ more
specific terms, such as “project role” and “user role”. Although this would solve
the conflict, the issue hints to a more important aspect. Project Management and
Identity are distinct knowledge areas, which even use the same term for different
concepts. Consequently, little advantage can be expected from modeling them
together inside the same context.
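The terminology collision can be sketched in code. The following example is illustrative only: the class names and role values are assumptions and not part of the Sample Application implementation. Within each conceptual boundary, the term "role" carries a single unambiguous meaning:

```javascript
// Illustrative sketch; the class names and values are assumptions.
// Each Bounded Context defines its own meaning for the term "role".

// Project Management context: a role is a Software Development specialization
class ProjectRole {
  constructor(name) { this.name = name; } // e.g. 'developer', 'tester'
}

// Identity context: a role differentiates user types
class UserRole {
  constructor(name) { this.name = name; } // e.g. 'standard', 'admin'
}

const teamMemberRole = new ProjectRole('developer');
const administratorRole = new UserRole('admin');
console.log(teamMemberRole.name, administratorRole.name);
```

Keeping the two constructs in separate contexts avoids the need for artificially disambiguated terms such as "project role" and "user role" within a single model.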
Component relationships
Relationships between Subdomains or between subsets of a Domain Model are
important factors for conceptual boundaries. For the Sample Application, there is
only one connection between the identified Subdomains. Every team member of
a project references the identity of a user. In this case, the cost of a context-
overarching relationship must be compared to the implication of a large Domain
Model. Furthermore, inside the Domain Model part for Project Management,
multiple subordinate areas can be defined. One option is to introduce two
contexts, one with task boards and tasks, and the other one with projects and
teams. Again, the approach of maintaining relationships between these potential
contexts must be compared to combining them in a single model.
Decision and definition
The definition of Bounded Contexts should be reinforced with project-specific
arguments. Simply following guidelines can be counterproductive. Even more, a
conscious violation can make sense. For the Sample Application, there are
mainly two approaches. One is to create a Bounded Context for Project
Management and another one for Identity. This results in a one-to-one alignment
with Subdomains. Another approach is to further subdivide Project Management
into Domain Models and Bounded Contexts for Task Board and Project. While
both abstraction parts belong to the same Subdomain, they only have a few
points of contact. Consequently, it seems more useful to maintain explicit
relationships instead of modeling them together. In the end, the Sample
Application defines the Bounded Contexts Task Board, Project and User.
The Bounded Context definitions determine the conceptual boundaries and the
set of individual Domain Models for the Sample Application. On top of that,
they lay the foundation for starting to work on the actual technological aspects.
Chapter 4: Software Architecture
The Software Architecture defines the structure of a software as well as the
interaction possibilities between its contained parts. There are multiple
commonly applied architectural patterns that fit especially well with DDD.
Generally, all of them aim to build software in a modular and decoupled way
with a clean separation of concerns. In fact, many of the approaches share the
same fundamental ideas, concepts and terminology. Dividing software into
multiple architectural parts can happen on different levels. The activity can target
the design of a complete system or it can be about selected areas. For software
that applies DDD, it typically affects either the overall structure or individual
implementations of specific Bounded Contexts. Amongst other things, this
means that different subsystems can have different architectures.
Domain
The Domain part hosts the Domain Model implementation. This typically
includes Value Objects, Entities, Domain Services, Invariants, Domain Events,
Aggregates, Factories and possibly persistence-related abstractions. There
should be as few outbound dependencies as possible. Unlike the abstractions, the
concrete persistence implementations belong in the Infrastructure part, as they
contain technological details. The Domain Model implementation is the heart of
every software and therefore its most important part. This is the case for both the
overall knowledge area and for individual Subdomains. Within each conceptual
boundary, the corresponding Domain part is always of the highest importance.
For example, consider an authentication service as part of a larger system.
Despite its generic nature, its Domain Model implementation is still more
important than domain-agnostic functionalities of other parts.
Infrastructure
The Infrastructure part must contain all technical functionalities that neither
belong to the Domain nor to the Application. [Vernon, p. 122] explains that this
includes “all the technical components and frameworks that provide low-level
services for the application”. Although it is not the primary intent, the contained
elements may have external dependencies and make use of specific technologies.
Typically, this part houses components for persistence and information-
exchange-related mechanisms. Furthermore, it commonly contains low-level
functionalities such as identifier generation, password encryption or validation
utilities. One typical example for a persistence mechanism is the implementation
of a Repository. While the corresponding interface conceptually belongs to the
Domain part, the concrete implementation is an infrastructural concern.
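As a hedged sketch of this separation, the following example assumes a Project Entity and a Repository for it. Since JavaScript has no native interface construct, the Domain part only documents the expected operations, while the concrete in-memory class would belong to the Infrastructure part:

```javascript
// Hedged sketch, not the book's actual implementation. The Domain part only
// describes the expected Repository operations as an implicit interface:
//   save(project): Promise<void>
//   load(id): Promise<project>

// Infrastructure part: concrete implementation with technological details
class InMemoryProjectRepository {

  #entries = new Map();

  async save(project) {
    this.#entries.set(project.id, project);
  }

  async load(id) {
    if (!this.#entries.has(id)) throw new Error(`project not found: ${id}`);
    return this.#entries.get(id);
  }

}

(async () => {
  const projectRepository = new InMemoryProjectRepository();
  await projectRepository.save({id: 'project-1', name: 'Sample'});
  const project = await projectRepository.load('project-1');
  console.log(project.name); // Sample
})();
```

An implementation that persists to the filesystem would replace the Map with operations from fs.promises, without affecting any consumer that relies only on the implicit interface.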
Application
The Application part is responsible for the execution of domain-specific use
cases. For this purpose, it utilizes infrastructure functionalities and works with
Domain components. This means, it is a direct consumer of the previously
described parts. Typically, a use case execution incorporates loading persisted
information, executing domain-specific behavior and eventually saving changes.
Also, it can include working with multiple components and orchestrating their
interaction. Conceptually, this software part acts as a facade around a Domain
Model implementation. Therefore, it must also deal with input validation.
[Vernon, p. 120] explains that it is further responsible for “persistence
transactions and security”, for which the Infrastructure provides required
functionalities. According to [Vernon, p. 68], “user interface components and
service-oriented endpoints” are its direct consumers.
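The described use case flow can be sketched as follows. All names are illustrative assumptions: a Domain function for renaming a project, a minimal Repository stub and an Application Service that validates the input, loads persisted state, executes the domain-specific behavior and saves the changes:

```javascript
// Domain part (illustrative): domain-specific behavior
const renameProject = (project, newName) => {
  if (!newName) throw new Error('project name must not be empty');
  return {...project, name: newName};
};

// Application part (illustrative): use case execution as a facade
class ProjectApplicationService {

  #projectRepository;

  constructor({projectRepository}) {
    this.#projectRepository = projectRepository;
  }

  async renameProject({projectId, newName}) {
    if (typeof newName !== 'string') throw new Error('invalid input'); // input validation
    const project = await this.#projectRepository.load(projectId); // load persisted state
    const renamedProject = renameProject(project, newName); // execute domain behavior
    await this.#projectRepository.save(renamedProject); // save changes
  }

}

(async () => {
  const projectRepository = { // minimal in-memory stub
    entries: new Map([['project-1', {id: 'project-1', name: 'old name'}]]),
    async load(id) { return this.entries.get(id); },
    async save(project) { this.entries.set(project.id, project); },
  };
  const service = new ProjectApplicationService({projectRepository});
  await service.renameProject({projectId: 'project-1', newName: 'new name'});
  console.log(projectRepository.entries.get('project-1').name); // new name
})();
```

Replacing the stub repository with a real persistence implementation does not affect the service, as it only depends on the load() and save() operations.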
User Interface
The User Interface part contains functionalities that are required for a human or
a machine to interact with a software. This involves displaying information,
accepting and translating inputs, communicating with Application Layer
components and presenting returned results. Furthermore, it can also incorporate
client-side validation. While this architectural part is allowed to consume
domain-related functionalities, it must not contain them directly. For web-based
software, the most common approach is to create a browser interface using
HTML, CSS and JavaScript. However, the User Interface is not restricted to
artifacts that are interpreted on a client. For example, when applying an MVC-
based architecture with server-side controllers, their implementations also belong
to this architectural part. Conceptually speaking, it consists of everything in
between the clients and the Application components.
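As a minimal sketch of such a server-side controller, the following hypothetical example translates an HTTP request, delegates to an Application component (here reduced to a stub) and presents the returned result. The wiring with http.createServer() is only indicated as a comment:

```javascript
// Hypothetical sketch; the application service is a stub and the route
// handling is reduced to a single query parameter.
const projectApplicationService = {
  async findProjectName(projectId) { return `project ${projectId}`; }, // stub
};

const handleRequest = async (request, response) => {
  const url = new URL(request.url, 'http://localhost');
  const projectId = url.searchParams.get('projectId');
  if (!projectId) { // input translation and validation
    response.writeHead(400);
    response.end('missing projectId');
    return;
  }
  const projectName = await projectApplicationService.findProjectName(projectId);
  response.writeHead(200, {'Content-Type': 'text/plain'});
  response.end(projectName); // present the returned result
};

// wiring (not executed here): http.createServer(handleRequest).listen(8080);
```

The controller itself contains no domain-related functionality; it only mediates between the client and the Application component.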
Layered Architecture
Figure 4.1: Layered Architecture
Onion Architecture
Figure 4.2: Onion Architecture
The Onion Architecture also divides software into layers with the rule that
dependencies can only go inwards. The pattern is named after its visual
representation, which resembles the shape of an onion. There are two
noteworthy differences to the Layered Architecture. For one, the Onion
Architecture is more specific to DDD and typically defines three or four layers.
Secondly, a layer can contain multiple parts. The User Interface and the
Infrastructure are placed on the same area. Both of them depend on the
Application part in the next layer. Again, the dependency direction to
Infrastructure is inverted through the use of abstractions. The Application
depends on the Domain, which sits at the core. Overall, the Onion Architecture
is very similar to the Layered Architecture.
High-level structure
There are two useful approaches for the architecture of the Sample Application.
One is to define a single unit for the implementation of all three Bounded
Contexts. Effectively, this produces a monolithic software. The alternative is to
define a separate area for each context implementation. This approach favors the
separation of the conceptual boundaries and their enclosed Domain Models.
Furthermore, it is useful when aiming to separate the implementations, be it
logically or as individual programs. Therefore, the Sample Application
architecture defines the three areas Project, Task Board and User. Each of them
is concerned with the implementation of their associated Bounded Context.
Internally, they apply the Onion Architecture pattern. This is complemented with
one shared area for infrastructural functionalities and domain-related utilities.
Logical division
The introduction of four architectural areas, of which three align with Bounded
Contexts, makes it possible to derive a useful technological division. At a
implementations of the different parts should be separated in some way. This can
be achieved by placing them in different physical locations, such as individual
subdirectories. For this purpose, the directories project, task-board and user
are defined to host the implementations for the respective Bounded Contexts.
Within each of them, the parts of the Onion Architecture are reflected as
subdirectories. For shared functionalities, the directory shared is defined, which
is further subdivided into domain and infrastructure. As a first approach, the
software is assumed to operate as a single process. Consequently, any integration
between different areas can be done via direct code execution.
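The integration via direct code execution can be sketched as follows. The example is a simplified assumption: the two context implementations are represented as plain objects placed in one file, where they would normally live in their respective subdirectories:

```javascript
// Simplified assumption: both context implementations are shown as plain
// objects in one file; normally they would reside in their own subdirectories
// (user/application, project/application) and be imported directly.

// shared/infrastructure: identifier generation utility (illustrative)
const generateId = () => Math.random().toString(36).slice(2);

// user context: application part
const userService = {
  users: new Map(),
  createUser({emailAddress, username}) {
    const id = generateId();
    this.users.set(id, {id, emailAddress, username});
    return id;
  },
  doesUserExist(id) { return this.users.has(id); },
};

// project context: application part, integrating via direct code execution
const projectService = {
  addTeamMember(project, userId) {
    if (!userService.doesUserExist(userId)) throw new Error('unknown user');
    project.teamMembers.push(userId);
  },
};

const userId = userService.createUser({emailAddress: 'jane@example.com', username: 'jane'});
const project = {name: 'Sample project', teamMembers: []};
projectService.addTeamMember(project, userId);
console.log(project.teamMembers.length); // 1
```

If the contexts are later separated into individual processes, only this integration code needs to change, for example into an HTTP call.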
The Software Architecture and the directory layout provide the required
knowledge to start with the actual implementation. Before focusing
on the patterns on how to express a Domain Model as code, selected code
quality aspects are discussed.
Chapter 5: Code quality
For the majority of software projects, the source code is the most important
artifact. Besides containing the instructions for the actual executable programs,
the code also represents an expression of the Domain Model. This expression
can ideally be understood not only by developers, but also by other technology-
affine project members. In general, it makes sense to aim for a high code quality.
For that, there are essential supporting aspects such as readability and coherence.
Furthermore, there are design principles that also contribute to this goal. This
chapter explains selected aspects and principles that help to achieve a high
code quality. While the topics are partially explained with regard to Domain
Model implementations, they are equally relevant for other software parts.
Optional Chapter
The code quality aspects and principles covered in this chapter influence most of the
implementations in this book. If you are familiar with them, feel free to skip this chapter. Also, if
you disagree with some of the ideas, the remaining book content remains valuable.
Model binding
A Domain Model and its implementation must be bound strongly to each other.
Artifacts, such as drawings or text, are useful for Domain Modeling and
documentation. The implementation can serve the same purposes, but also
defines the actual executable programs. While it may be acceptable that other
artifacts become temporarily outdated, the software must always reflect the
current Domain Model. Whenever this is not the case, all affected code parts
must be adjusted. As explained in the second chapter, code experiments can blur
the lines between abstractions and their implementation. The therein contained
statement also applies in this context: The source code is not the same as the
Domain Model. Nevertheless, it is a crucial, contemporary and ideally
synchronized type of artifact.
Figure 5.1: Model binding
Refactoring
Refactoring stands for the technical improvement of software without changing
its functionality. The activity can be focused on the amount of code, on its
design, the performance and various other non-functional aspects. However,
concrete functionalities and their characteristics must remain unmodified. In the
context of DDD, refactoring can stand for another concept that is not exclusively
related to code. Rather, it is about the functional refinement of a Domain Model
and its implementation. While the original practice aims to improve software in
a non-functional way, here the goal is to evolve the Domain Model. This is
achieved by refining the existing abstractions and adjusting the associated
implementations. Both practices are useful, but they should always be separated
strictly from each other.
Types of refactoring
Technical refactoring
Improvement of code quality and non-functional aspects
Domain Model refactoring
Refinement of Domain Model and its implementation
Continuous refinement is vital for a Domain Model and its associated source
code. The second chapter explained that modeling should be done in an iterative
way. This also applies to the implementation. In general, the concepts of this
book do not necessitate any type of refactoring activity. However, not following
this practice can have notable effects on a development process. One is that the
absence of refactoring requires to create a completely correct Domain Model
before starting the development. Also, every abstraction needs to be translated
into source code without making an error. Whether such a sequential approach
seems feasible and useful, is an individual decision to make. Judging from
personal experience, iterative modeling and frequent refactoring are essential
key concepts.
Readability
High readability is an important quality for good code. In general, developers
spend more of their time reading and understanding existing source code than
writing new code. Consequently, this activity should be as straightforward and
efficient as possible. The source code of a software is, among other things, a
textual expression of knowledge. Therefore, it must be easy to
comprehend, especially when it is the only Domain-Model-related artifact.
Ideally, Domain Experts can capture and discuss functionalities based on the
source code. Having a readable implementation makes it also easier to transfer
knowledge, for example when introducing new team members. Although the
actual complexity of a problem is unalterable, high readability can prevent its
software-based solution from introducing additional overhead.
Aspects of readability
The degree of readability depends on multiple aspects. At the most basic level,
the names of identifiers, such as variables and functions, are important. They
should be explicit and intention-revealing. Short, cryptic and abbreviated terms
must be avoided. Overall, the used language must stay consistent. Multiple styles
of writing and different vocabularies cause confusion. An excellent usage of the
underlying natural language is mandatory. Another important aspect is the
arrangement of source code. Statements should follow a logical order to ensure a
continuous reading flow. The lengths of delimiting software constructs, such as
functions or blocks, also affect readability. Their word count should not be too
high, and they should only contain one level of abstraction. Long and
information-packed sections are generally harder to comprehend.
The operation calc() accepts two collections of interests and returns their
matching score. Its source code is short and has a compact style. The continuous
reading flow is naturally given. Despite these advantages, the approach has
various shortcomings. The naming of identifiers is cryptic, which makes it
difficult to anticipate their meaning. This includes the name of the function
itself. “calc” is an unnecessary abbreviation and not intention-revealing. The
variable names inside the function are even worse. Also, there are error-prone
optimizations that execute calculations with booleans. Exploiting the implicit
type conversion is a bad practice. Overall, the function contains multiple levels
of abstraction and has a high complexity. However, the biggest disadvantage is
the inability to anticipate the algorithm by reading the code.
class WordGuessingGame {

  #wordToGuess;

  #guessedLetters = [];

  #wasWordGuessed = false;

  constructor(wordToGuess) {
    this.#wordToGuess = wordToGuess;
  }

  guessLetter(letter) {
    this.#guessedLetters.push(letter.toLowerCase());
    this.#wasWordGuessed = this.#determineIfWordWasGuessed();
  }

  #determineIfWordWasGuessed() {
    return this.#wordToGuess.split('').every(
      letter => this.#guessedLetters.includes(letter.toLowerCase()));
  }

}
While the two implementations differ in their programming paradigms, they both
treat behavior and state as a connected unit. In case of OOP, this is achieved with
an enclosing class. In case of FP, the implementation expresses their relation by
simply placing the two aspects next to each other.
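As a hedged illustration of such a functional style, the following sketch re-expresses the word guessing example with a plain state object and pure functions placed directly next to it. The function names are illustrative assumptions:

```javascript
// Hedged functional sketch; the function names are illustrative assumptions.
// The state is a plain object, and the behavior consists of pure functions
// that never mutate their input.
const createGame = wordToGuess => ({wordToGuess, guessedLetters: []});

const guessLetter = (game, letter) =>
  ({...game, guessedLetters: [...game.guessedLetters, letter.toLowerCase()]});

const wasWordGuessed = game => game.wordToGuess.split('').every(
  letter => game.guessedLetters.includes(letter.toLowerCase()));

let game = createGame('hi');
game = guessLetter(game, 'H');
game = guessLetter(game, 'i');
console.log(wasWordGuessed(game)); // true
```

Instead of a class encapsulating mutable private fields, every operation returns a new state object, which keeps the functions referentially transparent.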
Dependency Inversion
The Dependency Inversion principle provides an approach to alter the
dependency direction of software parts through the use of abstractions. Instead
of one part being dependent on another, both parts depend on a common
abstraction. Only at runtime, the concrete dependency is passed into the
consuming functionality from the outside. Even more, the abstractions can be
defined in such a way that the dependency direction is fully inverted. Typically,
Dependency Inversion is realized with abstract types that are defined through
interfaces. However, JavaScript does not provide a native language construct for
this concept. Still, the principle can be applied by keeping the signatures of
concrete implementations free of internal details. This ensures that the
consumers depend on an implicit abstract interface.
The first example shows the offensive language detection component, which
makes it possible to search content for a list of specified terms:
Commenting System: Offensive language detector
class OffensiveLanguageDetector {

  #offensiveTerms;

  constructor(offensiveTerms) {
    this.#offensiveTerms = offensiveTerms;
  }

  doesTextContainOffensiveLanguage(text) {
    const lowercaseText = text.toLowerCase();
    return this.#offensiveTerms.some(term => lowercaseText.includes(term));
  }

}
class CommentCollection {

  #comments = [];

  #offensiveLanguageDetector = new OffensiveLanguageDetector(['stupid', 'idiot']);

  submitComment(author, message) {
    if (this.#offensiveLanguageDetector.doesTextContainOffensiveLanguage(message))
      throw new Error('offensive language detected, message declined');
    this.#comments.push({author, message});
  }

}
const commentCollection = new CommentCollection();
commentCollection.submitComment('John', 'This is a great article!');
commentCollection.submitComment('Jane', 'You are an idiot!');
commentCollection.submitComment('Joe', 'What about KISS (Keep it simple stupid)?');
class CommentCollection {
#comments = [];
#offensiveLanguageDetector;
constructor({offensiveLanguageDetector}) {
this.#offensiveLanguageDetector = offensiveLanguageDetector;
}
submitComment(author, message) {
if (this.#offensiveLanguageDetector.doesTextContainOffensiveLanguage(message))
throw new Error('offensive language detected, message declined');
this.#comments.push({author, message});
}
}
Dependency Injection
The act of passing in a concrete dependency from the outside is called Dependency Injection.
This can be done in different ways. The example uses the so-called Constructor Injection.
Another option is Property Injection, where a dependency is directly assigned to a property from
the outside. The third possibility is to pass in dependencies as additional arguments to specific
operations.
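The three styles can be sketched as follows. All class and dependency names here are invented for illustration and are not part of the Sample Application:

```javascript
// Illustrative sketch of the three injection styles; all names are invented
// for this example and not taken from the book's code.
class Logger {
  messages = [];
  log(message) { this.messages.push(message); }
}

class ReportWithConstructorInjection {
  #logger;
  // Constructor Injection: the dependency is passed in at construction time
  constructor({logger}) { this.#logger = logger; }
  run() { this.#logger.log('report generated'); }
}

class ReportWithPropertyInjection {
  // Property Injection: the dependency is assigned directly from the outside
  logger;
  run() { this.logger.log('report generated'); }
}

class ReportWithArgumentInjection {
  // the dependency is passed as an additional argument to the operation
  run(logger) { logger.log('report generated'); }
}

const logger = new Logger();
new ReportWithConstructorInjection({logger}).run();
const report = new ReportWithPropertyInjection();
report.logger = logger;
report.run();
new ReportWithArgumentInjection().run(logger);
```

Constructor Injection is usually preferable because it guarantees that the dependency is available for the whole lifetime of the consuming object.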
Command-Query-Separation
Command-Query-Separation (CQS) is a programming principle according to
which each function must either be a command or a query. Commands execute
actions and modify state, but never return a value. Queries perform computations
and return data, but must not change state. They are referentially transparent and
execute without side effects. In contrast, every command invocation must be
expected to modify state. Although this differentiation happens on a low
abstraction level, it fosters cleanly separated and predictable code designs. The
respective type of each function should be expressed through an intention-
revealing name. Commands must be written as such, for example
writeMessage(author, text) or guessLetter(letter). Queries may use an
imperative style like calculateScore(interestsA, interestsB), but should
be phrased as questions for boolean results, such as wasWordGuessed().
The first example shows an implementation that does not make use of CQS (run
code):
Newsletter subscription: Without CQS
class Newsletter {
#subscribersByTopic = {};
constructor(topics = []) {
topics.forEach(topic => this.#subscribersByTopic[topic] = []);
}
subscribe(emailAddress, topic) {
const errors = [];
const subscribers = this.#subscribersByTopic[topic];
if (!emailAddress.includes('@')) errors.push('invalid e-mail');
if (!subscribers) errors.push('invalid topic');
else if (subscribers.includes(emailAddress)) errors.push('duplicate');
if (errors.length === 0) subscribers.push(emailAddress);
return {validationErrors: errors};
}
}
class Newsletter {
#subscribersByTopic = {};
constructor(topics = []) {
topics.forEach(topic => this.#subscribersByTopic[topic] = []);
}
subscribe(emailAddress, topic) {
const subscribers = this.#subscribersByTopic[topic];
if (!emailAddress.includes('@')) throw new Error('invalid e-mail');
if (!subscribers) throw new Error('invalid topic');
else if (subscribers.includes(emailAddress)) throw new Error('duplicate');
subscribers.push(emailAddress);
}
}
The reworked class Newsletter is similar to the first implementation, but differs
in the way validation issues are handled. Its function subscribe() has no return
value and instead throws exceptions for invalid inputs. At first glance, this
approach looks like a solution that correctly applies CQS. However, it is in fact
worse. The code still has the same responsibilities; only the mechanism for
yielding a validation result differs. Also, the function name does not express
that exceptions may be thrown, which can cause unexpected crashes. Exceptions
must be used for exceptional cases, not for expected situations. Another
disadvantage is that only the first encountered error is reported. This is a
deterioration of the code design. Using exceptions for validation is generally a
bad programming practice.
The final implementation shows an approach that correctly applies CQS (run
code):
Newsletter subscription: With CQS
class Newsletter {
#subscribersByTopic = {};
constructor(topics = []) {
topics.forEach(topic => this.#subscribersByTopic[topic] = []);
}
validateSubscription(emailAddress, topic) {
const errors = [];
const subscribers = this.#subscribersByTopic[topic];
if (!emailAddress.includes('@')) errors.push('invalid e-mail');
if (!subscribers) errors.push('invalid topic');
else if (subscribers.includes(emailAddress)) errors.push('duplicate');
return {containsErrors: errors.length > 0, errors};
}
subscribe(emailAddress, topic) {
const validationResult = this.validateSubscription(emailAddress, topic);
if (validationResult.containsErrors) throw new Error('invalid arguments');
this.#subscribersByTopic[topic].push(emailAddress);
}
}
This approach adheres to the CQS principle without decreasing code quality or
misusing technical concepts. The intention-revealing query
validateSubscription() returns a validation result for a given input without
attempting an actual subscription. In contrast, the command subscribe()
mutates state by adding an e-mail to a selected newsletter topic. However, the
function also contains code that again seemingly mixes concerns and misuses
exceptions. Prior to the mutation, it executes validateSubscription() and
throws an exception in case of validation errors. For this approach, it is
important to understand the intended component usage. Consumers are
responsible for validating their input before subscribing. Therefore, passing
invalid input to the command is an unexpected case. Calling the validation and
conditionally throwing an exception ensures the integrity of the newsletter state.
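The intended consumer workflow can be illustrated with a short usage sketch. The Newsletter class from above is repeated here to keep the example self-contained:

```javascript
// Newsletter class as defined above, repeated for self-containment
class Newsletter {
  #subscribersByTopic = {};
  constructor(topics = []) {
    topics.forEach(topic => this.#subscribersByTopic[topic] = []);
  }
  validateSubscription(emailAddress, topic) {
    const errors = [];
    const subscribers = this.#subscribersByTopic[topic];
    if (!emailAddress.includes('@')) errors.push('invalid e-mail');
    if (!subscribers) errors.push('invalid topic');
    else if (subscribers.includes(emailAddress)) errors.push('duplicate');
    return {containsErrors: errors.length > 0, errors};
  }
  subscribe(emailAddress, topic) {
    const validationResult = this.validateSubscription(emailAddress, topic);
    if (validationResult.containsErrors) throw new Error('invalid arguments');
    this.#subscribersByTopic[topic].push(emailAddress);
  }
}

// consumer workflow: execute the query first, invoke the command only for valid input
const newsletter = new Newsletter(['tech']);
const result = newsletter.validateSubscription('jane@example.com', 'tech');
if (!result.containsErrors) newsletter.subscribe('jane@example.com', 'tech');
const badResult = newsletter.validateSubscription('not-an-address', 'tech');
// badResult contains 'invalid e-mail'; no subscribe attempt is made
```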
The illustrated aspects and principles of this chapter are low-level building
blocks that help to achieve a high code quality. The next step is to look at the
patterns that make it possible to express Domain Model concepts as concrete
source code.
Chapter 6: Value Objects, Entities and Services
The implementation of a Domain Model is an expression of abstractions as
source code, targeted to solve specific problems. If this part does not manage to
fulfill its duties appropriately, the whole surrounding project is likely to fail its
purpose. Therefore, it should be considered the heart of every software. DDD
provides a number of tactical patterns that guide the code design of the Domain
layer. These patterns can be applied independently of DDD as a
whole. At a minimum, a Domain Model implementation should consist of a
combination of Value Objects and Entities. On top of that, stateless
functionalities can be designed as Domain Services. Applying these tactical
patterns helps to express individual Domain Model concepts adequately.
Value Objects
The implementation of a Domain Model component is typically either done as
Value Object or as Entity. Value Objects quantify or describe an aspect of a
Domain without conceptual identity. They are defined by their attributes. [Evans,
p. 98] describes them as elements “that we care about only for what they are, not
who or which they are”. For example, two date objects are considered the same
as long as their day, month and year match. According to [Vernon, p. 219], such
objects are “easier to create, test, use, optimize and maintain”. Therefore, they
should be used extensively. Value Objects must provide a few key
characteristics. The most important ones are explained in the following
subsections. This is preceded by the introduction of an overarching example
topic.
Value Objects versus data
Value Objects may have functions and carry out domain-specific behavior. They are first class
citizens of a Domain Model implementation and must not be confused with plain data structures.
The subsection on Immutability provides an example to illustrate such meaningful behavior.
Conceptual Whole
A Value Object is a collection of coherent attributes and related functionality that
form a Conceptual Whole. This means, it is a self-contained structure that
captures some descriptive aspect of a Domain. The consequential boundary
makes it possible to give an intention-revealing name based on the respective
Ubiquitous Language. In most scenarios, attributes that belong to a common
conceptual unit should not be placed separately on their own. Doing so fails to
capture the idea of the Domain Model. At the same time, the attributes must not
be embedded into larger structures that are focused on other concepts.
Otherwise, the code establishes incorrect boundaries that do not align with the
actual abstractions. Related attributes and their associated actions should be
combined into meaningful Value Objects.
The first example shows standalone attributes that together represent a single
furniture instance (run code):
Floor Planning: Standalone attributes
const furnitureType = 'desk';
const furnitureWidth = 100, widthUnit = 'cm';
const furnitureLength = 60, lengthUnit = 'cm';
The second example provides a class that expresses the furniture concept as
Value Object type (run code):
Floor Planning: Furniture Value Object
class Furniture {
type; width; widthUnit; length; lengthUnit;
constructor({type, width, widthUnit, length, lengthUnit}) {
Object.assign(this, {type, width, widthUnit, length, lengthUnit});
}
toString() {
return `${this.type}, ${this.width}${this.widthUnit} wide`
+ `, ${this.length}${this.lengthUnit} long`;
}
}
The first approach using standalone attributes does not express the domain-
specific concepts furniture and measurement in a meaningful way. There is no
structural relationship or enclosing boundary across the individual values. Not
even their variable naming unambiguously states that there is a conceptual
connection between them. In contrast, the second implementation correctly
captures the furniture concept as the class Furniture. The code defines a
structure that encloses related aspects. However, it mixes multiple concepts into
a single unit and therefore establishes an incorrect boundary. This is because the
furniture component must not be concerned with measurement details. Rather,
they must be encapsulated into a separate Value Object type. Embedding them
directly makes it less clear what the intrinsic attributes of furniture instances are.
The next example implements both the furniture and the measurement concept
as Value Object type (run code):
Floor Planning: Measurement Value Object
class Measurement {
magnitude; unit;
constructor({magnitude, unit}) {
Object.assign(this, {magnitude, unit});
}
toString() {
return `${this.magnitude}${this.unit}`;
}
}
class Furniture {
type; width; length;
constructor({type, width, length}) {
Object.assign(this, {type, width, length});
}
toString() {
return `${this.type}, ${this.width} wide, ${this.length} long`;
}
}
Equality by Values
Two Value Objects of the same type are considered equal when all their
corresponding attributes have equal values. Consequently, it is also irrelevant
whether they are represented by the same reference object. This characteristic is
closely tied to the absence of a conceptual identity. Compare this to primitive
types such as strings or numbers. The character sequence “test” is always equal
to “test”, independent of the technical handling of strings. The number 42 is
always equal to 42, even if both values originate from different memory
addresses. It is a matter of equality and not identity. The same applies to Value
Objects. However, other than primitives, they consist of multiple parts.
Therefore, all their attributes need to match for Value Objects to count as equal.
The following code example implements Equality by Value for the previously
introduced measurement component (run code):
Measurement: Equality by Values
class Measurement {
magnitude; unit;
constructor({magnitude, unit}) {
Object.assign(this, {magnitude, unit});
}
equals(measurement) {
return measurement instanceof Measurement &&
this.magnitude === measurement.magnitude && this.unit === measurement.unit;
}
}
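A brief usage sketch illustrates the difference between reference identity and Equality by Values. The class is repeated to keep the example self-contained:

```javascript
// Measurement class with Equality by Values, repeated for self-containment
class Measurement {
  magnitude; unit;
  constructor({magnitude, unit}) {
    Object.assign(this, {magnitude, unit});
  }
  equals(measurement) {
    return measurement instanceof Measurement &&
      this.magnitude === measurement.magnitude && this.unit === measurement.unit;
  }
}

const measurement1 = new Measurement({magnitude: 100, unit: 'cm'});
const measurement2 = new Measurement({magnitude: 100, unit: 'cm'});
console.log(measurement1 === measurement2); // false: different references
console.log(measurement1.equals(measurement2)); // true: equal values
```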
Immutability
Value Objects must be immutable. This means that their attributes are not
allowed to change after an initial assignment. Consequently, all necessary values
must be provided at construction time. The main motivation for this is best
understood when comparing it to mathematics. A number never changes and
always represents one specific numerical value. An addition of two numbers
refers to a third one, but does not cause any modification. Algebraic variables
can change the values they refer to, but the values themselves remain constant.
The same logic applies to Value Objects. This has some useful advantages. For
one, an object that is initially valid always remains valid. There is no later
verification or validation required. Secondly, working with an immutable object
cannot cause side effects.
class Measurement {
magnitude; unit;
constructor({magnitude, unit}) {
Object.assign(this, {magnitude, unit});
}
plus(measurement) {
if (this.unit !== measurement.unit) throw new Error('unit mismatch');
const combinedMagnitude = this.magnitude + measurement.magnitude;
return new Measurement({magnitude: combinedMagnitude, unit: this.unit});
}
}
The class Measurement defines the instance operation plus() to add two
measurements together in a side-effect-free manner. This function first verifies
that the passed in object has the same unit as the current instance. In case of a
mismatch, it throws an according exception. Instead of requiring identical units,
the implementation could be extended with a conversion mechanism that aligns
measurements with different units. However, for the example, this extended use
case is ignored. If both measurements have equal units, the magnitudes are
added together. The Measurement constructor is invoked with the sum and the
shared unit as arguments. Finally, the new instance is returned. The exemplary
usage demonstrates that the addition of two measurements creates a new object
without mutating existing ones.
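The exemplary usage referred to here is not reproduced in this excerpt; a minimal sketch, with the Measurement class repeated for self-containment, looks as follows:

```javascript
// Measurement with a side-effect-free plus() operation, as shown above
class Measurement {
  magnitude; unit;
  constructor({magnitude, unit}) {
    Object.assign(this, {magnitude, unit});
  }
  plus(measurement) {
    if (this.unit !== measurement.unit) throw new Error('unit mismatch');
    const combinedMagnitude = this.magnitude + measurement.magnitude;
    return new Measurement({magnitude: combinedMagnitude, unit: this.unit});
  }
}

const width = new Measurement({magnitude: 100, unit: 'cm'});
const gap = new Measurement({magnitude: 60, unit: 'cm'});
const total = width.plus(gap);
console.log(total.magnitude, total.unit); // 160 'cm'
console.log(width.magnitude); // 100: the original instance is unchanged
```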
Entities
Entities are unique elements of a Domain that have a conceptual identity.
[Vernon, p. 171] suggests designing a concept “as an entity when we care about
its individuality”. In this case, the elements primarily matter for who or which
they are rather than what they are. For example, two persons with equal names
and equal addresses may not be the same. Regardless of similarities, they are
distinguished by a unique identity. Such an identity must be permanent and
constant. Entities are also collections of related attributes and domain-specific
behavior. The difference is that they have a lifecycle throughout which they can
change. Consequently, two Entities with different values can be the same, only in
different versions. Likewise, Entities that are equal may not be the same.
Identities
The identity of an Entity distinguishes it from other elements independent of its
state or shape. There are three requirements for an identity. For one, it has to
exist during the whole Entity’s lifecycle. Secondly, it must be immutable. Third,
it has to be unique. However, the uniqueness may only be valid within a specific
boundary. [Evans, p. 92] explains that identifiers might be valid “only in the
context of the system” or “outside a particular software”. At a minimum, there
may not be duplicated identifiers within a single conceptual boundary. The most
commonly chosen context is the enclosing software. This makes it possible to
identify Entities unambiguously across subsystems. When multiple software products
have to integrate with each other, it is useful to consider globally unique
identifiers.
Natural vs. artificial identifiers
There are mainly two possibilities for the composition of identities. The first one
is to utilize an existing immutable attribute or a combination of many. For a user
account Entity type, this could be the e-mail address. For an online multiplayer
game, it may be the concatenation of username and server name. This approach
is useful when identities must be human-readable. However, it can lead to
problems when the value of an attribute can change. An e-mail address is a good
example of something unique that is not necessarily constant. The second
approach is to use artificial identifiers. They introduce additional complexity, but
serve a dedicated purpose. Good examples are tax IDs and social security
numbers. Generally, this book recommends the use of artificial identifiers.
Identifier generation
Generating identifiers for Entities can be done in different ways. One approach is
to let the employed persistence technology create them. Many storage systems
are capable of generating unique values. However, there are disadvantages with
this approach. Generally, it increases the risk of making the Domain Model
implementation dependent on a specific technology. Furthermore, it often
implies that the identity of a new Entity is only accessible after saving it.
Another possibility is to generate identifiers at runtime without relying on
external processes. Ideally, this is done in a synchronous way, as it is less
complex. This approach even makes it possible to provide identities from outside
the Application Layer. Doing so helps to support idempotent Entity creation,
which is covered in Chapter 10.
Universally Unique Identifier
The likelihood of a collision for UUIDs primarily depends on the quality of the
random number generation (RNG). Math.random() is one commonly used
option for creating random values. However, the mechanism only produces
pseudo-random numbers that have a relatively high chance of duplicates. The
Web Cryptography API standard defines the operation
crypto.getRandomValues() to produce cryptographically secure random
numbers. This standard is implemented in recent versions of most browsers as
well as in Node.js. While an identifier collision is theoretically still possible with
this mechanism, the chances are very low. Even more, since the identity
uniqueness is often bound to a certain context, the risk of duplicates is
negligible.
The following example provides an operation that creates UUID-compliant
identifiers (run code):
ID generation: Generate id function
const uuidV4Template = '????????-????-4???-1???-????????????';
The first example shows a basic implementation and usage of a contact Entity
component (run code):
Address book: Contact with flat attributes
class Contact {
// … id and individual attributes such as street and houseNumber elided …
}
contact2.houseNumber = '4';
contact2.street = 'First Street';
console.log(contact1, contact2);
Value Containers
Entities should have as few direct attributes as possible and act as so-called
value containers. Instead of enclosing many individual and possibly unrelated
attributes directly, they should incorporate Value Objects whenever possible. The
main responsibility of Entities is to manage their identity and their lifecycle. As
a consequence, they should not be concerned with additional details of
subordinate concepts. This design goal aligns well with the idea of encapsulating
self-contained descriptive concepts as Value Objects whenever possible. As an
example, consider a cooking recipe Entity. The element should not have direct
attributes for components and amounts. Rather, it should act as a container that
encloses ingredients, which themselves are composite Value Objects.
Example: Address book improvements
Consider improving the previously illustrated implementation for the contact
Entity type of the address book software. While the first approach expresses the
Domain Model fully and provides the expected behavior, the code design is not
ideal. The class Contact has numerous individual and partially unrelated
attributes, and therefore too many responsibilities. This design issue can be
solved by extracting meaningful concepts into separate components. Both a
composite for a full name and a postal address are good candidates for individual
Value Object types. This is because they are primarily defined by their attributes,
naturally equal by values and can be treated as immutable. Also, they represent
generic concepts that are widely understood, even without the context of the
example software Domain.
class Contact {
id; name; address;
// … constructor and remaining implementation elided …
}
console.log(contact1, contact2);
The reworked class Contact is reduced to three individual attributes. As most
important part, it encloses its identity. On top of that, there is one attribute for a
name and another one for an address. The constructor functions Name and
Address represent the respective Value Object types. Both of them accept
individual attributes for all the information they enclose. Through the execution
of the function Object.freeze(), the created instances are made immutable.
The Entity type is reduced to the bare required minimum of direct attributes. As
a side effect, the exemplary usage code illustrates another interesting aspect of
the new implementation. Instead of changing individual address attributes, a
complete replacement makes the action more similar to a real world use case.
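The constructor functions Name and Address described above are not reproduced in this excerpt. A plausible sketch, where the attribute names are assumptions based on the surrounding text, looks as follows:

```javascript
// Hypothetical reconstruction of the Value Object constructor functions;
// attribute names are assumptions, not the book's exact code.
function Name({firstName, lastName}) {
  Object.freeze(Object.assign(this, {firstName, lastName}));
}

function Address({street, houseNumber, zipCode, city}) {
  Object.freeze(Object.assign(this, {street, houseNumber, zipCode, city}));
}

const name = new Name({firstName: 'John', lastName: 'Doe'});
const address = new Address(
  {street: 'First Street', houseNumber: '4', zipCode: '12345', city: 'Exampletown'});
console.log(Object.isFrozen(name), Object.isFrozen(address)); // true true
```

Executing Object.freeze() in the constructor makes the created instances immutable, so an address change requires a complete replacement with a new instance.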
Relationships
Entity-to-Entity relationships can be designed in three ways. The first one is to
enclose associated elements completely, similar to when an Entity contains a
Value Object. This makes mostly sense when the contained parts conceptually do
not exist on their own. The second possibility is to only reference identifiers of
foreign components. This approach is useful for relationships to other standalone
Entities. Containing such elements risks challenges in combination with
persistence. In contrast, referencing immutable identifiers typically avoids those
issues. The third approach is to define specialized Value Object types that
enclose an identity and a subset of additional Entity attributes. This makes it
possible to have required information locally available without enclosing a complete
element. However, the redundant data can introduce the need for
synchronization.
Bidirectional references
There are situations where two Entities both need to be aware of each other. In
many cases, this happens as part of a parent-child relationship. Generally
speaking, bidirectional Entity references should be treated with caution. The
reason is that they are more complex to deal with than unidirectional
associations. Still, this does not mean that they should be avoided at all costs.
When specific mutual connections are a part of a Domain Model, they should
also appear in the implementation. However, a bidirectional relationship should
always be expressed with identifier references. Otherwise, the implementation
can face issues such as circular dependencies, especially with regard to the
persistence of Entities.
Example: Book favorites
As an example, consider implementing the Domain Model for a favorite book list
software. Overall, the use cases consist of creating favorite lists, adding
individual book entries and retrieving the currently favored ones. The book
component incorporates an “International Standard Book Number” (ISBN), a
title and a description. ISBNs are a form of artificial identifier and can
therefore be used as conceptual identities. Their presence makes the book
concept automatically an Entity type. The list component is defined as a mutable
collection of books. Since lists may change over time, and they are not
considered equal by values, they are also Entities. The remaining code design
question is how to model the relationship between books and lists.
The following code provides the implementation of the book Entity type:
Book favorites: Book Entity
class Book {
title; description;
constructor({isbn, title, description}) {
Object.defineProperty(this, 'isbn', {value: isbn, enumerable: true});
Object.assign(this, {title, description});
}
}
The first approach for favorite lists places the book Entities directly inside the
lists (run code):
Book favorites: Entities completely contained
class FavoriteBookList {
#books = [];
constructor({id}) {
Object.defineProperty(this, 'id', {value: id, enumerable: true});
}
addBook(book) {
this.#books.push(book);
}
getBooks() { return this.#books.slice(); }
}
The class Book expresses the Domain Model concept of a book as an Entity type.
While its ISBN attribute is defined as immutable, both the title and the
description are allowed to change. The class FavoriteBookList represents the
list component. Adding a book entry is done via the operation addBook().
Accessing the currently contained ones is achieved with the function
getBooks(). One benefit of enclosing complete book Entities is the ability to
render them in a human-readable format. However, this approach faces
challenges regarding persistence. Conceptually, books exist on their own and one
book can be referenced from multiple lists. Containing the Entities completely
would result in data redundancy. Furthermore, each change to a book would
need to be distributed across all affected lists.
class FavoriteBookList {
#isbns = [];
constructor({id}) {
Object.defineProperty(this, 'id', {value: id, enumerable: true});
}
addBook(isbn) {
this.#isbns.push(isbn);
}
}
The updated component FavoriteBookList works only with identifiers and does
not enclose complete Entities. This avoids persistence-related challenges with
regard to consistency and redundancy. Lists can easily be saved and loaded
without special care or custom mechanisms. The referenced identifiers are
immutable and therefore always consistent. Also, the interface of the function
addBook() is simpler because it expects an identifier instead of a full Entity. One
downside of this implementation is the lack of required additional information
from referenced books. The list component is unable to produce a human-
readable output of the contained favorites. This makes the consumer code
responsible for orchestrating both components and assembling the required
information. Depending on the respective architecture, this might not be a
feasible approach.
class FavoriteBookList {
#favoredBooks = [];
constructor({id}) {
Object.defineProperty(this, 'id', {value: id, enumerable: true});
}
addBook(favoredBook) {
this.#favoredBooks.push(favoredBook);
}
}
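The third approach relies on a specialized Value Object type that combines an identifier with selected attributes. A hedged sketch of what such a type might look like, where the name and attribute set are assumptions:

```javascript
// Hypothetical specialized Value Object: an identity plus a subset of
// additional Entity attributes; name and attributes are assumptions.
class FavoredBook {
  constructor({isbn, title}) {
    Object.freeze(Object.assign(this, {isbn, title}));
  }
  toString() { return `${this.title} (ISBN ${this.isbn})`; }
}

const favoredBook = new FavoredBook(
  {isbn: '978-0-00-000000-0', title: 'Example Title'});
console.log(favoredBook.toString()); // 'Example Title (ISBN 978-0-00-000000-0)'
```

With such a type, the list component can produce a human-readable output on its own, at the cost of redundant data that may require synchronization.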
Overall, the example of favorite book lists illustrates and compares the different
approaches for modeling Entity-to-Entity relationships. As a general rule of
thumb, it is beneficial to use identifier references, unless additional information
from foreign Entities is required locally. Chapters 8 and 9 follow up on the
considerations from this section and put them into the context of transactions and
persistence.
Domain Services
Domain Services expose functionalities that are not tied to a specific thing of a
Domain. Most typically, they are used for calculations or other computations that
do not carry domain-specific state. Sometimes, such aspects are mistakenly
implemented as Value Objects or as Entities. While this does not automatically
create an issue, it typically introduces unnecessary complexity to the code.
Another use case for Domain Services are functionalities that have dependencies
to components outside the Domain Layer. In such cases, all details must be
encapsulated inside the implementation and an abstract interface is exposed to
the consuming components. The implementation of a Domain Service does not
require specific technical constructs. In JavaScript, it can be represented as class,
as object or as simple standalone function.
Stateful Services
The absence of state as a requirement for a Domain Service is only focused on domain-related
information. Service structures may possess state for technical reasons, such as keeping
references to other functionalities they depend on.
Invariants
In Computer Science, an Invariant is a condition that must be true for a certain
time span. With regard to Domain Models, it is a constraint that must be satisfied
during the complete lifecycle of affected components. [Vernon, p. 205] describes
it as “state that must stay transactionally consistent”. The term “transactionally”
implies that such a constraint must always be satisfied without delay. As an
example, consider a minimum password length for user accounts. Values that
violate this rule are rejected upfront. Invariants are often explained in the
context of Entities, presumably because of their mutable state. Nevertheless, they are
equally important for Value Objects, only that their constraints must be verified
upon construction. Generally speaking, the concept is about protecting
components from entering an invalid state.
Complexity of invariants
Invariants can span across component boundaries and are not restricted to single units. Rather,
they can incorporate complex rules and involve a graph of elements. The only requirement for
something to count as an invariant is that its rules must be satisfied at all times.
class WorkdayPlan {
#activities = [];
constructor({id, availableHours}) {
Object.defineProperties(this, {
id: {value: id, writable: false},
availableHours: {value: availableHours, writable: false},
});
}
planActivity(activity) {
if (activity.requiredHours > this.getRemainingHours())
throw new Error(`not enough time remaining for ${activity.description}`);
this.#activities.push(activity);
}
getRemainingHours() {
const hoursRequiredForTasks = this.#activities
.map(activity => activity.requiredHours)
.reduce((hours, taskHours) => hours + taskHours, 0);
return this.availableHours - hoursRequiredForTasks;
}
}
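A usage sketch shows the invariant protection in action. The class is repeated with its activity list field for self-containment, and a fixed identifier stands in for a generated one:

```javascript
// WorkdayPlan as shown above, repeated for self-containment
class WorkdayPlan {
  #activities = [];
  constructor({id, availableHours}) {
    Object.defineProperties(this, {
      id: {value: id, writable: false},
      availableHours: {value: availableHours, writable: false},
    });
  }
  planActivity(activity) {
    if (activity.requiredHours > this.getRemainingHours())
      throw new Error(`not enough time remaining for ${activity.description}`);
    this.#activities.push(activity);
  }
  getRemainingHours() {
    const hoursRequiredForTasks = this.#activities
      .map(activity => activity.requiredHours)
      .reduce((hours, taskHours) => hours + taskHours, 0);
    return this.availableHours - hoursRequiredForTasks;
  }
}

const workdayPlan = new WorkdayPlan({id: 'plan-1', availableHours: 8});
workdayPlan.planActivity({description: 'meeting', requiredHours: 3});
console.log(workdayPlan.getRemainingHours()); // 5
let violationDetected = false;
try {
  workdayPlan.planActivity({description: 'workshop', requiredHours: 6});
} catch {
  violationDetected = true; // invariant protected: only 5 hours remain
}
```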
The first code example provides the function for a constraint verification (run
code usage):
Verification: Verify function
const verify = (constraintName, condition) => {
if (!condition) throw new Error(`constraint violated: ${constraintName}`);
};
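The referenced usage of the verify() function is not reproduced in this excerpt; a minimal sketch:

```javascript
// verify() as defined above, repeated for self-containment
const verify = (constraintName, condition) => {
  if (!condition) throw new Error(`constraint violated: ${constraintName}`);
};

verify('positive amount', 42 > 0); // satisfied: no exception is thrown
let errorMessage = '';
try {
  verify('positive amount', -1 > 0);
} catch (error) {
  errorMessage = error.message; // 'constraint violated: positive amount'
}
```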
The second code example defines an operation to create an MD5 hash for a
given value (run code usage):
Crypto: Create MD5 hash
const createMd5Hash = input => crypto.createHash('md5').update(input).digest('hex');
User context
The user context is a good starting point for the Domain Model implementation.
There are multiple reasons for this. For one, the associated Generic Subdomain
is more commonly known than the other knowledge areas. Therefore, it is easier
to directly focus on transforming the abstractions into source code. Another
reason to begin with this context is that it does not have any outgoing
dependencies to other areas. This frees the development activity from
considerations in terms of relationship and integration. The disadvantage of the
approach is that the Core Domain does not receive the highest priority. For the
Sample Application, the task board context should ideally be implemented first.
Nevertheless, the reversed order makes most sense for applying and illustrating
the previously explained patterns.
The Domain Model for the user context defines the two concepts role and user.
The role exclusively encloses a name attribute, which is constrained to a set of
valid strings. This structure is primarily defined by its attributes and therefore
qualifies as Value Object. In contrast, the user concept represents a mutable
component that must be identifiable independent of its state. Therefore, it must
be implemented as Entity. Besides these two explicit concepts, there are also
additional implicit aspects. The e-mail address also qualifies as dedicated Value
Object that encloses a string attribute together with an invariant for a valid
format. Furthermore, the requirement for unique e-mail addresses necessitates an
overarching component. This part must maintain all used e-mail addresses and
provide an availability check.
The following code examples show the implementations of the Value Object
types for role (run code usage) and e-mail address (run code usage):
User context: Role Value Object
const validRoles = ['user', 'admin'];
class Role {
name;
constructor(name) {
verify('role name', validRoles.includes(name));
Object.freeze(Object.assign(this, {name}));
}
equals(role) { return this.name === role.name; }
}
const emailRegex = /^\S+@\S+\.\S+$/; // simplified pattern; reconstructed, not the book's exact expression
class EmailAddress {
value;
constructor(value) {
verify('valid e-mail', emailRegex.test(value));
Object.freeze(Object.assign(this, {value}));
}
}
const emailAddressesByUser = new Map();
const emailRegistry = {
setUserEmailAddress(userId, emailAddress) {
if (!this.isEmailAvailable(emailAddress)) throw new Error('e-mail in use');
emailAddressesByUser.set(userId, emailAddress);
},
isEmailAvailable(emailAddress) {
const usedEmailAddresses = Array.from(emailAddressesByUser.values());
return !usedEmailAddresses.some(
usedEmailAddress => usedEmailAddress.equals(emailAddress));
},
};
The last building block of the user context is the user Entity implementation:
User context: User Entity
class User {
set username(username) {
verify('valid username', typeof username == 'string' && !!username);
this.#username = username;
}
set emailAddress(emailAddress) {
verify('valid e-mail', emailAddress.constructor === EmailAddress);
verify('unused e-mail', this.#emailRegistry.isEmailAvailable(emailAddress));
this.#emailAddress = emailAddress;
this.#emailRegistry.setUserEmailAddress(this.id, this.#emailAddress);
}
set password(password) {
verify('valid password', typeof password == 'string' && !!password);
this.#password = password;
}
set role(role) {
verify('valid role', role.constructor === Role);
this.#role = role;
}
The class Role represents the user role concept as Value Object type. It exposes
an immutable name attribute, which is constrained by the list of valid role
names. The class EmailAddress captures the concept of an e-mail address.
Verifying a correct value is done with a simplified regular expression.
Maintaining used e-mail addresses is the responsibility of the service
emailRegistry. The availability of a value can be checked via the operation
isEmailAvailable(). The class User represents the user Entity and mainly acts
as a value container. Its internal attributes are guarded by setter functions to
ensure correct types and protect invariants. The artificial identifier makes
instances distinguishable independent of state. For ensuring unique e-mail
addresses, the e-mail registry is injected upon construction and used
subsequently.
The last code example of this subsection illustrates a usage of the previously
implemented parts (run code):
User context: Usage example
const userId = generateId(), role = new Role('user');
const emailAddress1 = new EmailAddress('john.doe@example.com');
const emailAddress2 = new EmailAddress('john.doe.81@example.com');
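To see how the pieces interact end to end, the following is a condensed, self-contained sketch of the user context. The exact User constructor shape (accepting {id, emailRegistry}) and the equals() method on EmailAddress are assumptions, as the book's listings omit these details:

```javascript
// Condensed sketch of the user context; constructor shapes are assumptions.
const verify = (description, condition) => {
  if (!condition) throw new Error(`verification failed: ${description}`);
};

const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; // simplified format check

class EmailAddress {
  value;
  constructor(value) {
    verify('valid e-mail', emailRegex.test(value));
    Object.freeze(Object.assign(this, {value}));
  }
  equals(emailAddress) { return this.value === emailAddress.value; }
}

const emailAddressesByUser = new Map();
const emailRegistry = {
  setUserEmailAddress(userId, emailAddress) {
    if (!this.isEmailAvailable(emailAddress)) throw new Error('e-mail in use');
    emailAddressesByUser.set(userId, emailAddress);
  },
  isEmailAvailable(emailAddress) {
    return !Array.from(emailAddressesByUser.values())
      .some(usedEmailAddress => usedEmailAddress.equals(emailAddress));
  },
};

class User {
  #emailRegistry; #emailAddress;
  constructor({id, emailRegistry}) {
    Object.defineProperty(this, 'id', {value: id, writable: false});
    this.#emailRegistry = emailRegistry;
  }
  set emailAddress(emailAddress) {
    verify('valid e-mail', emailAddress.constructor === EmailAddress);
    verify('unused e-mail', this.#emailRegistry.isEmailAvailable(emailAddress));
    this.#emailAddress = emailAddress;
    this.#emailRegistry.setUserEmailAddress(this.id, this.#emailAddress);
  }
  get emailAddress() { return this.#emailAddress; }
}

const user1 = new User({id: '1', emailRegistry});
user1.emailAddress = new EmailAddress('john.doe@example.com');
const user2 = new User({id: '2', emailRegistry});
try {
  user2.emailAddress = new EmailAddress('john.doe.81@example.com');
  user2.emailAddress = new EmailAddress('john.doe@example.com'); // duplicate
} catch (error) {
  console.log(error.message); // the duplicate address is rejected
}
```

The registry guards uniqueness across all users, while each Entity guards its own invariants.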
Project context
For illustration purposes, it makes sense to continue with the project context.
This conceptual boundary encloses knowledge abstractions that belong to the
Project Management Core Domain. Therefore, it is concerned with aspects that
are more specialized than the user context. Furthermore, this area is involved in
all four context-overarching relationships. Three of them are outbound
connections. One is the reference from projects to task boards. The second
relationship is the assignment of users as project owners. Third, each team
member references a user identity. In contrast, the fourth relationship is an
inbound connection, which defines that task assignees are team members.
Amongst other things, these overarching relationships express the need for the
involved parts to have conceptual identities.
The Domain Model of the project context contains four parts. The role concept
exclusively encloses a non-empty string. Although there is no logical invariant,
this part is best implemented as Value Object. The team member concept
represents a component that must be referenced from the outside. Consequently,
it must be an Entity type. Regardless of the associated user identity, it requires a
dedicated identifier. This enables the task board context to stay independent of
user details. This separation is also conceptually fitting, as a single user can
represent different members in different teams. The team concept encloses a mutable collection of members
and also qualifies as Entity. Although a project contains relatively constant
attributes, it must be identifiable independent of state. Therefore, it is also
implemented as Entity.
Why the separate Team Entity?
While team members could be direct parts of a project, a separate team Entity ensures a clean
separation of concerns. Projects are responsible for their name, team and task board. Teams are
concerned with managing their members. These two aspects should be separate from each other.
The first implementation provides the project-specific role Value Object (run
code usage):
Project context: Role Value Object
class Role {
name;
constructor(name) {
verify('valid role', typeof name == 'string' && !!name);
Object.freeze(Object.assign(this, {name}));
}
The next example implements the Entity type for a team member (run code
usage):
Project context: Team member Entity
class TeamMember {
set role(role) {
verify('valid role', role.constructor === Role);
this.#role = role;
}
This is followed by the implementation of the team Entity (run code usage):
Project context: Team Entity
class Team {
constructor({id}) {
verify('valid id', id != null);
Object.defineProperty(this, 'id', {value: id, writable: false});
}
addMember(teamMemberId) {
verify('team member is new', !this.#teamMemberIds.includes(teamMemberId));
this.#teamMemberIds.push(teamMemberId);
}
removeMember(teamMemberIdToRemove) {
const indexToRemove = this.#teamMemberIds.indexOf(teamMemberIdToRemove);
if (indexToRemove === -1) return;
this.#teamMemberIds.splice(indexToRemove, 1);
}
The last code provides the implementation of the project Entity type:
Project context: Project Entity
class Project {
set name(name) {
verify('valid name', typeof name === 'string' && name.length > 0);
this.#name = name;
}
The last code shows a basic usage of the project context implementation (run
code):
Project context: Usage example
const projectId = generateId(), teamId = generateId(),
teamMemberId = generateId(), userId = generateId(), taskBoardId = generateId();
The Domain Model of the task board part defines two individual concepts. Apart
from these, there are no other aspects to express explicitly. The task part
represents a component that encloses multiple mutable attributes and requires a
unique identity. These characteristics qualify it as Entity. The task board concept
encloses a varying collection of tasks. Due to its changeable state and the
necessity to be referenced from outside, it must also be implemented as Entity.
One open question is whether a board is responsible for the status of a task or the
task itself. The Domain Model defines that a task must be assigned when it is in
progress. This invariant can be best protected when the status is a direct attribute
of each task.
The first code examples show the implementations for the Entities task (run code
usage) and task board:
Task board context: Task Entity
const validStatus = ['todo', 'in progress', 'done'];
class Task {
set description(description) {
verify('valid description', typeof description == 'string');
this.#description = description;
}
set title(title) {
verify('valid title', typeof title == 'string' && !!title);
this.#title = title;
}
set status(status) {
verify('valid status', validStatus.includes(status));
verify('active task assignee', status !== 'in progress' || !!this.assigneeId);
this.#status = status;
}
set assigneeId(assigneeId) {
verify('active task assignee', this.status !== 'in progress' || assigneeId);
this.#assigneeId = assigneeId;
}
Task board context: Task board Entity
class TaskBoard {
constructor({id}) {
verify('valid id', id != null);
Object.defineProperty(this, 'id', {value: id, writable: false});
}
addTask(task) {
verify('valid task', task instanceof Task);
verify('task is new', !this.#tasks.includes(task));
this.#tasks.push(task);
}
removeTask(taskToRemove) {
const taskIndex = this.#tasks.indexOf(taskToRemove);
verify('task is on board', taskIndex > -1);
this.#tasks.splice(taskIndex, 1);
}
getTasks(status = '') {
if (status === '') return this.#tasks.slice();
return this.#tasks.filter(task => task.status === status);
}
The class Task captures the concept of an individual working item. Besides
technical constraints, the component protects the logical invariants of the
Domain Model. At all times, the status of a task may only contain a valid value.
Also, an assignee must be set before transitioning to “in progress” and it cannot
be removed during that state. The class TaskBoard implements the Entity for a
task board, which is mainly a mutable collection of items. However, instead of
identifiers, it encloses complete Entities. The operation getTasks() provides
Entity access by returning either all tasks or the ones matching a given status.
This relationship design only makes sense when tasks are not considered to be
standalone elements. Otherwise, identifier references should be preferred.
The last example of this subsection shows the usage of the Entities for the task
board context (run code):
Task board context: Usage example
const assigneeId = generateId();
const taskBoard = new TaskBoard({id: generateId()});
const task = new Task({id: generateId(), title: 'write tests'});
taskBoard.addTask(task);
task.description = 'write unit tests for new feature';
task.assigneeId = assigneeId;
task.status = 'in progress';
console.log('tasks in progress', taskBoard.getTasks('in progress'));
console.log('attempting to unassign task');
task.assigneeId = undefined;
The final implementation provides a function to remove a team member and
unassign all its tasks (run code):
Task board context: Unassign abandoned tasks
const removeMemberAndUnassignTasks = (teamMemberId, taskBoard) => {
const tasks = taskBoard.getTasks();
const assignedTasks = tasks.filter(task => task.assigneeId === teamMemberId);
assignedTasks.forEach(task => {
if (task.status === 'in progress') task.status = 'todo';
task.assigneeId = null;
});
team.removeMember(teamMemberId);
};
console.log(taskBoard);
removeMemberAndUnassignTasks(teamMember.id, taskBoard);
console.log(taskBoard);
The illustrated source code for the Sample Application represents a full
implementation of all defined Domain Model parts. The enforcement of the
overarching rule that affects multiple contexts is achieved with a separate
surrounding functionality. While this approach works, its code design is not
ideal. The integration of multiple independent parts can be improved through an
event-based communication, which is covered in the next chapter.
Chapter 7: Domain Events
Domain Events represent something that happened in a Domain, for example
when a user registers for a new account. [Evans, p. 20] describes them as “full-
fledged part of the domain model”. This means, they should not only appear in
the implementation but also in abstractions, concepts and visualizations. Their
purpose is to convey meaningful knowledge about important specific
occurrences. According to [Evans, p. 20], this should be something “that domain
experts care about”. Therefore, Domain Events are not meant to contain purely
technical information. The messages are used to facilitate communication,
integration and consistency between separate software parts. In that sense, they
support the correct functioning of a system. Nevertheless, the pattern is not to be
interpreted as a technical mechanism. The primarily conveyed information must
always represent domain-specific knowledge.
Naming conventions
The type name for a Domain Event must be chosen carefully, as with every other
part of a Domain Model. The selected term should unambiguously describe what
happened. For that, it must be expressive and apply the respective Ubiquitous
Language. One recommendation is to combine the affected subject and the
executed action in past tense. Normally, the subject is the addressed component
and the action is the invoked behavior. For example, the function reload() on a
browser tab would produce the event type “BrowserTabReloaded”. Whenever
possible, the subject should be put first. However, sometimes this leads to
awkward names. For a newsletter function subscribe(), the event type would
be “NewsletterSubscribed”. In such cases, the word order can be changed and
additional terms may be included.
Exemplary Domain Event names
Subject Action Potential event type name
browser tab reload() BrowserTabReloaded
meeting addParticipant() MeetingParticipantAdded
guestbook writeMessage() GuestbookMessageWritten
user login() UserLoggedIn
newsletter subscribe() NewsletterSubscriptionAdded
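As an illustration only, the basic convention can be expressed as a small helper. The function buildEventTypeName is hypothetical and not part of the book's code:

```javascript
// Hypothetical helper: combine subject and past-tense action into a name.
const toPascalCase = words => words.split(' ')
  .map(word => word[0].toUpperCase() + word.slice(1)).join('');

const buildEventTypeName = (subject, pastTenseAction) =>
  toPascalCase(subject) + toPascalCase(pastTenseAction);

console.log(buildEventTypeName('browser tab', 'reloaded')); // BrowserTabReloaded
console.log(buildEventTypeName('meeting participant', 'added')); // MeetingParticipantAdded
```

Cases such as the newsletter example still require manual judgment, as the table shows.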
Event creation
There are multiple aspects to consider for the implementation of Domain Events.
For one, their data structure should be homogeneous across instances of the same
type. In JavaScript, this requires a controlled creation mechanism, such as a
constructor function. Secondly, a Domain Model implementation should only
provide the specialized data, while the metadata is generated by another
architectural layer. There are different solution approaches for this. Domain
Model components can exclusively yield the specialized knowledge, which is
later enriched with additional generic information. This keeps the Domain Layer
free of unrelated concerns. Alternatively, the model components can make use of
an event factory, which is additionally responsible for generating the metadata.
The advantage of this approach is that the Domain Layer always produces
complete events.
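As an illustration of the second approach, the following is a minimal, hypothetical sketch of such an event type factory; the book's actual listing may differ in detail. Attribute values are validated via the typeof operator, and both identifier generation and metadata creation are configurable:

```javascript
// Hypothetical event type factory sketch; details may differ from the book.
const eventTypeFactory = {
  idGenerator: () => { throw new Error('no id generator configured'); },
  metadataProvider: () => ({}),
  setIdGenerator(idGenerator) { this.idGenerator = idGenerator; },
  setMetadataProvider(metadataProvider) {
    this.metadataProvider = metadataProvider;
  },
  createEventType(type, attributeTypes) {
    const factory = this;
    const EventType = function(data) {
      // every attribute must match one of its allowed typeof results
      Object.entries(attributeTypes).forEach(([attribute, allowedTypes]) => {
        if (![].concat(allowedTypes).includes(typeof data[attribute]))
          throw new Error(`invalid type for attribute: ${attribute}`);
      });
      Object.assign(this, {
        type, id: factory.idGenerator(), data,
        metadata: factory.metadataProvider(),
      });
    };
    EventType.type = type;
    return EventType;
  },
};

// exemplary configuration and usage
eventTypeFactory.setIdGenerator(() => Math.random().toString(36).slice(2));
eventTypeFactory.setMetadataProvider(() => ({creationTime: new Date()}));
const GuestbookMessageWrittenEvent = eventTypeFactory.createEventType(
  'GuestbookMessageWritten', {messageId: 'string', text: 'string'});
const event = new GuestbookMessageWrittenEvent(
  {messageId: '42', text: 'Nice page!'});
console.log(event.type, event.id, event.metadata.creationTime);
```

Exposing the type name as a static property (EventType.type) later allows subscriptions without string duplication.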
The illustrated factory component serves two important purposes. For one, it
ensures structural consistency and type-safety of data across event instances of
the same type. The implementation uses the typeof operator to primarily support
primitive types, such as boolean, number and string. As Domain Events should
only consist of simple attributes, the mechanism is sufficient. The second
purpose is to free consumers from the infrastructural responsibility of creating
metadata when instantiating events. This is achieved with an automatic and
configurable generation of identifiers and additional metadata. Domain Model
components can create full events without referring to specific infrastructural
functionalities. Even more, the component eventTypeFactory itself is not tied to
a specific implementation, as it only relies on configurable abstractions.
The next code shows an exemplary usage of the Event type factory (run code):
Events: Event Type Factory usage
const CommentWrittenEvent = eventTypeFactory.createEventType(
'CommentWritten',
{commentId: 'string', message: 'string', author: ['string', 'undefined']},
);
eventTypeFactory.setIdGenerator(generateId);
eventTypeFactory.setMetadataProvider(() => ({creationTime: new Date()}));
console.log(new CommentWrittenEvent(
{commentId: generateId(), message: 'Very nice content!'}));
console.log(new CommentWrittenEvent(
{commentId: generateId(), message: 'Contact me!', author: 'john@example.com'}));
The code starts with creating the specialized Domain Event type
CommentWrittenEvent by invoking the factory createEventType(). The passed
in data structure defines the attributes commentId, message and author. While
the comment identifier and the message are defined as strings, the author attribute
can be either a string or undefined. Next, the factory behavior is configured by
executing the functions setIdGenerator() and setMetadataProvider(). For
the identifier generation, the operation generateId() is used. For the additional
metadata creation, an anonymous function is defined that creates an object
containing a current timestamp. Finally, the event type CommentWritten is
instantiated three times. The first two executions succeed and return an event. In
contrast, the third invocation throws an exception, as the passed in data contains
an incorrect attribute type.
Default metadata creation
The code examples in this book follow a common approach for creating event metadata. The
helper function generateId() is used as the standard mechanism for the identifier generation.
Wherever needed, the additional metadata creation is handled by an operation that yields an
object with the time of occurrence.
Message Bus: Basic implementation
class MessageBus {
subscribe(topic, subscriber) {
const newSubscribers = this.#getSubscribers(topic).concat([subscriber]);
this.#subscribersByTopic.set(topic, newSubscribers);
}
unsubscribe(topic, subscriber) {
const subscribers = this.#getSubscribers(topic).slice();
subscribers.splice(subscribers.indexOf(subscriber), 1);
this.#subscribersByTopic.set(topic, subscribers);
}
publish(topic, message) {
this.#getSubscribers(topic).forEach(
subscriber => setTimeout(() => subscriber(message), 0));
}
messageBus.subscribe('topic-a', console.log);
messageBus.publish('topic-a', {data: 'something about a'});
messageBus.publish('topic-b', {data: 'something about b'});
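The listing references the private helper #getSubscribers and its backing Map without showing them. A self-contained sketch, with those two assumptions filled in, could look like this:

```javascript
// Self-contained Message Bus sketch; the backing Map and the private helper
// #getSubscribers are assumptions filled in around the book's excerpt.
class MessageBus {
  #subscribersByTopic = new Map();

  #getSubscribers(topic) {
    return this.#subscribersByTopic.get(topic) || [];
  }
  subscribe(topic, subscriber) {
    const newSubscribers = this.#getSubscribers(topic).concat([subscriber]);
    this.#subscribersByTopic.set(topic, newSubscribers);
  }
  unsubscribe(topic, subscriber) {
    const subscribers = this.#getSubscribers(topic).slice();
    subscribers.splice(subscribers.indexOf(subscriber), 1);
    this.#subscribersByTopic.set(topic, subscribers);
  }
  publish(topic, message) {
    // defer each notification, so publishing never blocks the caller
    this.#getSubscribers(topic).forEach(
      subscriber => setTimeout(() => subscriber(message), 0));
  }
}

const messageBus = new MessageBus();
messageBus.subscribe('topic-a', message => console.log('received', message));
messageBus.publish('topic-a', {data: 'something about a'});
messageBus.publish('topic-b', {data: 'something about b'}); // no subscriber
```

Copying the subscriber list on every change keeps an in-flight publish unaffected by concurrent subscriptions.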
The next example provides the implementation of a specialized Event Bus class:
Event Bus: Basic implementation
class EventBus {
#messageBus;
subscribe(eventType, subscriber) {
return this.#messageBus.subscribe(eventType, subscriber);
}
unsubscribe(eventType, subscriber) {
return this.#messageBus.unsubscribe(eventType, subscriber);
}
publish(event) {
if (typeof event.type != 'string') throw new Error('invalid event');
return this.#messageBus.publish(event.type, event);
}
The last example of this subsection shows a usage of the Event Bus class (run
code):
Event Bus: Exemplary usage
const eventBus = new EventBus();
const firstEvent = {
type: 'UserSubscribedToMailingList', id: generateId(),
data: {userId: generateId(), username: 'first user'},
metadata: {creationTime: Date.now()},
};
const secondEvent = {
type: 'UserSubscribedToMailingList', id: generateId(),
data: {userId: generateId(), username: 'second user'},
metadata: {creationTime: Date.now()},
};
eventBus.subscribe('UserSubscribedToMailingList', subscriber);
eventBus.publish(firstEvent).then(() => console.log('published first event'));
eventBus.publish(secondEvent).then(() => console.log('published second event'));
Guaranteed delivery
The theory of Domain Events and their distribution is fairly trivial. When
subscribers only execute secondary functionalities within the same process, the
implementation is also simple. However, when a message notification triggers
crucial actions or involves communication across processes, it gets more
complex. The main challenge is the guaranteed distribution of events, which
affects all parts involved in the communication. For one, the publisher must
ensure that every event is correctly handed over to the bus. The Event Bus itself
must guarantee the delivery to all registered subscribers. Finally, the subscribers
must acknowledge each processed item. In most cases, this requires the use of
persistent queues, potentially within all three parts. When done properly, the
delivery can be guaranteed, even in case of software crashes.
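As an in-memory illustration of these ideas, the following hypothetical sketch enqueues every event before delivery and only dequeues it after all subscribers have completed. A production implementation would persist the queue and add error handling; the class and member names are assumptions:

```javascript
// Illustrative at-least-once delivery via an in-memory outbox queue.
class GuaranteedDeliveryBus {
  #queue = []; // stand-in for a persistent outbox queue
  #subscribersByTopic = new Map();
  #isDelivering = false;

  subscribe(topic, subscriber) {
    const subscribers = this.#subscribersByTopic.get(topic) || [];
    this.#subscribersByTopic.set(topic, subscribers.concat(subscriber));
  }

  publish(topic, message) {
    // enqueue first; a real implementation would persist this entry
    return new Promise(resolve => {
      this.#queue.push({topic, message, acknowledge: resolve});
      this.#deliverPending();
    });
  }

  async #deliverPending() {
    if (this.#isDelivering) return; // a delivery loop is already running
    this.#isDelivering = true;
    while (this.#queue.length > 0) {
      const {topic, message, acknowledge} = this.#queue[0];
      const subscribers = this.#subscribersByTopic.get(topic) || [];
      // wait for every subscriber before acknowledging and dequeuing
      await Promise.all(subscribers.map(
        subscriber => Promise.resolve(subscriber(message))));
      this.#queue.shift();
      acknowledge();
    }
    this.#isDelivering = false;
  }
}

const deliveryBus = new GuaranteedDeliveryBus();
deliveryBus.subscribe('user-events',
  async event => console.log('processed', event.type));
deliveryBus.publish('user-events', {type: 'UserRegistered'})
  .then(() => console.log('all subscribers acknowledged'));
```

If the process crashes mid-delivery, a persisted queue allows redelivery on restart, which is exactly why subscribers must tolerate duplicates.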
The next code shows an extended implementation for the publishing operation of
the Message Bus component that awaits asynchronous subscribers:
Message Bus: Publishing with async subscriber support
async publish(topic, message) {
await Promise.all(this.#getSubscribers(topic).map(subscriber =>
new Promise(resolve => setTimeout(() => {
Promise.resolve(subscriber(message)).then(resolve);
})),
));
}
The following example illustrates its usage with asynchronous event processing
(run code):
Message Bus: Usage with asynchronous subscribers
const messageBus = new MessageBus();
The following example provides a factory function for creating a subscriber with
built-in event de-duplication (run code usage):
Domain Event: Factory for subscriber with de-duplication
const createSubscriberWithDeDuplication = originalSubscriber => {
const processedEventIds = [];
return event => {
if (processedEventIds.includes(event.id)) {
console.log(`dropping event duplicate with id: ${event.id}`);
return;
}
originalSubscriber(event);
processedEventIds.push(event.id);
};
};
eventBus.subscribe(OrderCreatedEvent.type, createSubscriberWithDeDuplication(
event => console.log(`send e-mail to ${event.data.email}`)));
eventBus.publish(event);
eventBus.publish(event);
Before developing the new functionality, one preliminary step is to review the
existing Domain Model and its implementation. Overall, the source code of the
Domain Layer consists of two individual parts. One is the Entity type for the
classified ad concept. This component encloses a unique identifier, a title, a
description and a price. Both the title and the description are represented as
simple strings. Also, the title is constrained by the invariant that its length must
not exceed 100 characters. In contrast, the description is not constrained and can
therefore be of arbitrary length. The second model part is the price Value Object
type, consisting of a value and a currency identifier.
The first examples show the code of the existing classified ad Entity and the
price Value Object:
Classified ad platform: Classified ad Entity
class ClassifiedAd {
set title(title) {
if (title.length > 100) throw new Error('max title length exceeded');
this.#title = title;
}
Classified ad platform: Classified ad list Entity
class ClassifiedAdList {
constructor({id}) {
Object.defineProperty(this, 'id', {value: id, writable: false});
}
addClassifiedAd(classifiedAdId) {
this.#classifiedAdIds.push(classifiedAdId);
}
The first step towards an automatic list population is to introduce an event type
for the creation of classified ads:
Classified ad platform: Classified ad creation event
const ClassifiedAdCreatedEvent = eventTypeFactory.createEventType(
'ClassifiedAdCreated',
{classifiedAdId: 'string', title: 'string', description: 'string',
priceValue: 'number', priceCurrency: 'string'},
);
The next code provides an extended version of the classified ad Entity that
publishes the new event:
Classified ad platform: Classified Ad Entity with event
class ClassifiedAd {
set title(title) {
if (title.length > 100) throw new Error('max title length exceeded');
this.#title = title;
}
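The extended listing above is abbreviated. One plausible way the creation event can be published from the constructor is sketched below; the event type factory is replaced by a plain event object for brevity, and the constructor shape matches the later usage example:

```javascript
// Sketch of publishing the creation event from the constructor; the plain
// event object stands in for the factory-created event type.
class Price {
  constructor({value, currency}) {
    Object.freeze(Object.assign(this, {value, currency}));
  }
}

class ClassifiedAd {
  #title; #description; #price; #eventBus;

  constructor({id, title, description, price, eventBus}) {
    Object.defineProperty(this, 'id', {value: id, writable: false});
    this.#eventBus = eventBus;
    this.title = title; // runs the setter, so the invariant is enforced
    this.#description = description;
    this.#price = price;
    // publish after all attributes are set, so the event reflects valid state
    this.#eventBus.publish({
      type: 'ClassifiedAdCreated',
      data: {classifiedAdId: this.id, title, description,
        priceValue: price.value, priceCurrency: price.currency},
    });
  }

  set title(title) {
    if (title.length > 100) throw new Error('max title length exceeded');
    this.#title = title;
  }
}

// minimal stand-in bus that records published events
const publishedEvents = [];
const eventBus = {publish: event => publishedEvents.push(event)};
new ClassifiedAd({
  id: 'ad-1', title: 'Unused chair', description: 'Office chair',
  price: new Price({value: 40, currency: 'EUR'}), eventBus,
});
console.log(publishedEvents[0].type); // prints "ClassifiedAdCreated"
```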
The Domain Service keywordService is used for a function that enables an automatic classified ad list
population:
Classified ad watcher: Classified ad list population
const activateClassifiedAdListPopulation = ({eventBus, list, keyword}) =>
eventBus.subscribe(ClassifiedAdCreatedEvent.type, ({data}) => {
const {classifiedAdId, title, description} = data;
const containsKeyword = [title, description].some(
text => keywordService.containsKeyword(text, keyword));
if (containsKeyword) {
list.addClassifiedAd(classifiedAdId);
console.log('new classified ad with matching keyword:', classifiedAdId);
}
});
The usage of the functionality can be seen in the next example (run code):
Classified ad watcher: Classified ad list population usage
const eventBus = new EventBus();
new ClassifiedAd({
id: generateId(), title: 'Unused chair', description: 'Office chair',
price: new Price({value: 40, currency: 'EUR'}), eventBus,
});
new ClassifiedAd({
id: generateId(), title: 'Used laptop', description: 'Lenovo T400',
price: new Price({value: 200, currency: 'EUR'}), eventBus,
});
The object keywordService represents a Domain Service to determine whether a
keyword is contained in a text. Its operation containsKeyword() expects a text
and a keyword and returns true if the term is found. The operation
activateClassifiedAdListPopulation() is responsible for registering a
Domain Event subscriber that adds classified ads containing a given term to a
list. As arguments, it expects an Event Bus, a classified ad list and a keyword.
The function subscribes to the event type “ClassifiedAdCreated”. For each
occurrence, the callback uses the operation containsKeyword() to check
whether the given keyword is contained in the new ad. If the term is found, the
classified ad identifier is added to the list.
The following code defines another Domain Event type, which represents the
addition of a classified ad to a list:
Classified ad platform: Classified ad list addition event
const ClassifiedAdAddedToListEvent = eventTypeFactory.createEventType(
'ClassifiedAdAddedToList',
{classifiedAdListId: 'string', classifiedAdId: 'string'},
);
Classified ad platform: Classified ad list Entity with event
class ClassifiedAdList {
constructor({id, eventBus}) {
Object.defineProperty(this, 'id', {value: id, writable: false});
this.#eventBus = eventBus;
}
addClassifiedAd(classifiedAdId) {
this.#classifiedAdIds.push(classifiedAdId);
this.#eventBus.publish(new ClassifiedAdAddedToListEvent(
{classifiedAdListId: this.id, classifiedAdId}));
}
The final code shows an exemplary usage of the classified ad list notification
mechanism (run code usage):
Classified ad platform: Classified ad list notification usage
const eventBus = new EventBus();
new ClassifiedAd({
id: generateId(), title: 'Unused chair', description: 'Office chair',
price: new Price({value: 40, currency: 'EUR'}), eventBus,
});
new ClassifiedAd({
id: generateId(), title: 'Used laptop', description: 'Lenovo T400',
price: new Price({value: 200, currency: 'EUR'}), eventBus,
});
Overall, this example illustrates how to utilize Domain Events for a loosely
coupled information exchange and for enabling reactive behavior.
Shared functionalities
The use of Domain Events requires adding three generic functionalities to the
source code of the Sample Application. One part is the component
eventTypeFactory for creating type-safe specialized event types with
automatically generated metadata. For the distribution of Domain Events, the
Event Bus component is re-used, which includes the underlying Message Bus
component. The event type factory is placed in the shared Domain part, as it is
agnostic of specific infrastructural aspects. While the Message Bus and the
Event Bus are also free of technological details, they are placed in the shared
Infrastructure part. Consider the scenario when replacing their in-memory
mechanics with a persistent implementation. In this case, it would be obvious
that the components architecturally belong to the Infrastructure Layer.
Implementation changes
The source code from the last chapter represents a complete expression of the
Domain Model. However, the code design of the implementation to unassign
tasks upon a team member removal is questionable. This is because it introduces
an additional component around the task board and the team Entity types. These
two parts belong to the Domain Layers of different context implementations. In
fact, the approach introduces another conceptual boundary that depends on the
two areas. The introduction of Domain Events makes it possible to mitigate this design
issue. Through the facilitation of a loosely coupled message exchange, it is
possible to remove the additional layer. Instead, the two context implementations
depend on a central messaging component and a common message format.
The first code provides the definition of a Domain Event type to represent a team
member removal:
Project context: Team member removal event
const TeamMemberRemovedFromTeamEvent = createEventType(
'TeamMemberRemovedFromTeam', {teamId: 'string', teamMemberId: 'string'});
The next implementation shows a reworked team component that publishes an
event whenever a member is removed:
Project context: Team Entity with event
class Team {
constructor({id, eventBus}) {
verify('valid id', id != null);
Object.defineProperty(this, 'id', {value: id, writable: false});
this.#eventBus = eventBus;
}
addMember(teamMemberId) {
verify('team member is new', !this.#teamMemberIds.includes(teamMemberId));
this.#teamMemberIds.push(teamMemberId);
}
removeMember(teamMemberIdToRemove) {
const indexToRemove = this.#teamMemberIds.indexOf(teamMemberIdToRemove);
if (indexToRemove === -1) return;
this.#teamMemberIds.splice(indexToRemove, 1);
this.#eventBus.publish(new TeamMemberRemovedFromTeamEvent(
{teamId: this.id, teamMemberId: teamMemberIdToRemove}));
}
Task board context: Task assignee synchronization
activateTaskAssigneeSynchronization({eventBus, taskBoard}) {
eventBus.subscribe('TeamMemberRemovedFromTeam', ({data: {teamMemberId}}) => {
const tasks = taskBoard.getTasks();
const assignedTasks = tasks.filter(task => task.assigneeId === teamMemberId);
assignedTasks.forEach(task => {
if (task.status === 'in progress') task.status = 'todo';
task.assigneeId = undefined;
});
});
},
};
For the code design of the Domain Model implementation, the event-based
integration of separate contexts is an important improvement. The next step is to
identify and implement appropriate consistency boundaries. This is an essential
stepping stone towards a concurrent and persistent Domain Model
implementation.
Chapter 8: Aggregates
An Aggregate defines a transactional consistency boundary that encloses a
collection of related Domain Model components. [Evans, p. 126] describes it as
“cluster of associated objects that we treat as unit for the purpose of data
changes”. This means that within such a boundary each individual change must
transactionally result in a consistent state. Aggregates are essential for
concurrent component access, which typically occurs in combination with
persistence. The concept itself does not require the use of specific technical
constructs, but influences the overall Domain Model implementation. [Vernon,
p. 355] explains that Aggregates are “consistency boundaries and not driven by a
desire to design object graphs”. Still, their use does not necessarily weaken the
expression of Domain knowledge. Rather, it embraces transactions and
consistency as important aspects.
Transactions
A transaction encloses a series of operations to be treated as a single unit of
work. This makes it possible to combine multiple steps into a logically
inseparable procedure. Typically, this mechanism is used for changes to
persistent data. With regard to databases, transactions must have four
characteristics: atomicity, consistency, isolation and durability (ACID).
Atomicity means that either all changes are applied or none of them.
Consistency enforces defined invariants to be protected. Isolation ensures that
concurrent operations produce the same outcome as if they were executed
sequentially. Durability demands that the result of a transaction is non-volatile.
Except for durability, these aspects can be interpreted as general requirements
for transactional boundaries, independent of databases. Consequently,
Aggregates should also adhere to these principles.
Breakdown of ACID principles
Atomicity: apply all changes or none
Consistency: protect invariants
Isolation: eliminate concurrency conflicts
Durability: persist changes reliably
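The first two properties can be sketched in plain JavaScript: changes are applied to a working copy and only swapped in when every operation succeeded. This is a simplified illustration of atomicity and consistency, not a database transaction; the helper name is hypothetical:

```javascript
// Apply all operations to a copy; swap it in only on full success.
// A shallow copy suffices for this flat example state.
const executeTransactionally = (state, operations) => {
  const workingCopy = {...state};
  operations.forEach(operation => operation(workingCopy)); // may throw
  return workingCopy; // replaces the old state only when nothing threw
};

let account = {balance: 100};
try {
  account = executeTransactionally(account, [
    state => { state.balance -= 150; },
    state => { if (state.balance < 0) throw new Error('insufficient funds'); },
  ]);
} catch (error) {
  console.log(error.message, '- state unchanged:', account.balance);
}
```

The failed withdrawal leaves the original state untouched: either all changes apply or none.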
Implicit Aggregates
Every Entity that is not considered a part of another Aggregate is implicitly an Aggregate on its
own.
The first code shows the implementation of the message Entity component:
Message inbox: Message Entity
class Message {
constructor({id, content}) {
Object.defineProperty(this, 'id', {value: id, writable: false});
Object.defineProperty(this, 'content', {value: content, writable: false});
}
The next implementation shows the first approach for an inbox component (run
code usage):
Message inbox: Inbox with identifiers
class Inbox {
constructor({id}) {
Object.defineProperty(this, 'id', {value: id, writable: false});
}
addNewMessage(messageId) {
this.#messageIds.push(messageId);
}
getMessageIds() { return this.#messageIds.slice(); }
inbox.getMessageIds().forEach(
messageId => messageDatabase.get(messageId).markAsRead());
console.log(Array.from(
messageDatabase.values()).map(({content, isRead}) => ({content, isRead})));
The class Inbox is mainly a registry for message Entities. Invoking its command
addNewMessage() adds an identifier. The query getMessageIds() retrieves the
currently contained ones. Marking all messages as read at once is achieved by
determining all registered Entities and marking them individually. This process
is demonstrated by the usage code. The relation between identifiers and Entities
is resolved through an in-memory database. Although the implementation fulfills
the requirements, the mechanism for marking all messages is problematic. This
is because the specialized behavior resides outside a model component.
Consequently, it is impossible for the inbox to ensure the atomicity and
consistency of this process. Rather, every consumer is responsible for correctly
carrying out the operations. This issue expresses the need for a larger enclosing
Aggregate.
Message inbox: Inbox with Entities
class Inbox {
constructor({id}) {
Object.defineProperty(this, 'id', {value: id, writable: false});
}
addNewMessage(id, content) {
this.#messages.push(new Message({id, content}));
}
markMessageAsRead(id) {
this.#messages.find(message => message.id === id).markAsRead();
}
markMessageAsUnread(id) {
this.#messages.find(message => message.id === id).markAsUnread();
}
markAllMessagesAsRead() {
this.#messages.forEach(message => message.markAsRead());
}
inbox.markAllMessagesAsRead();
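Putting the pieces together, a self-contained sketch of the enlarged inbox Aggregate could look as follows; the read-state members of the message Entity are filled in as assumptions:

```javascript
// Inbox Aggregate enclosing its message Entities directly.
class Message {
  #isRead = false;
  constructor({id, content}) {
    Object.defineProperty(this, 'id', {value: id, writable: false});
    Object.defineProperty(this, 'content', {value: content, writable: false});
  }
  markAsRead() { this.#isRead = true; }
  markAsUnread() { this.#isRead = false; }
  get isRead() { return this.#isRead; }
}

class Inbox {
  #messages = [];
  constructor({id}) {
    Object.defineProperty(this, 'id', {value: id, writable: false});
  }
  addNewMessage(id, content) {
    this.#messages.push(new Message({id, content}));
  }
  // the Aggregate itself guarantees that all messages are marked together
  markAllMessagesAsRead() {
    this.#messages.forEach(message => message.markAsRead());
  }
  getMessages() { return this.#messages.slice(); }
}

const inbox = new Inbox({id: 'inbox-1'});
inbox.addNewMessage('m1', 'hello');
inbox.addNewMessage('m2', 'world');
inbox.markAllMessagesAsRead();
console.log(inbox.getMessages().every(message => message.isRead)); // true
```

Since the behavior now lives inside the boundary, no consumer can mark only half of the messages and leave the inbox inconsistent.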
Concurrency
Relation to parallelism
Parallelism allows multiple computations to be executed at the same time. This is possible
through the use of multiple processing units. While the two concepts partially overlap, they must
also be clearly differentiated. Within the context of this book, the considerations on concurrency
apply almost equally to parallelism.
testWithTiming(524287);
testWithTiming(5);
The next code provides an asynchronous and non-blocking approach (run code):
Prime number test: Asynchronous
const isPrimeNumber = (number, config = {loopChunkSize: 500}) => {
  if (number <= 1) return Promise.resolve(false);
  let divisor = 2;
  const checkNextChunkRecursively = resolve => {
    const nextDivisorLimit = Math.min(divisor + config.loopChunkSize, number);
    for (; divisor < nextDivisorLimit; divisor++)
      if (number % divisor === 0) return resolve(false);
    if (divisor === number) resolve(true);
    else setTimeout(() => checkNextChunkRecursively(resolve), 0);
  };
  return new Promise(checkNextChunkRecursively);
};
testWithTiming(524287);
testWithTiming(5);
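The helper testWithTiming() is not shown in this excerpt. A minimal sketch, assuming it simply measures and logs how long a (possibly asynchronous) primality check takes; the check function is passed in explicitly here to keep the sketch self-contained, which differs from the book's usage:

```javascript
// Hypothetical sketch of a testWithTiming() helper: it measures and logs
// how long a (possibly asynchronous) primality check takes for a number.
const testWithTiming = async (isPrimeNumber, number) => {
  const start = Date.now();
  const result = await isPrimeNumber(number);
  console.log(
    `${number} is ${result ? '' : 'not '}prime, took ${Date.now() - start} ms`);
  return result;
};

// usage with a naive asynchronous check as a stand-in:
const naiveIsPrime = async number => {
  if (number <= 1) return false;
  for (let divisor = 2; divisor < number; divisor++)
    if (number % divisor === 0) return false;
  return true;
};
testWithTiming(naiveIsPrime, 524287);
```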
Concurrency control
Concurrency control ensures that concurrently executed tasks produce correct
results. This is especially important for procedures that work on a shared
resource. One fundamental requirement is that every involved operation receives
an isolated representation of a resource. Sharing common data structures risks
inconsistencies and consequential errors. As an example, assume that two people
collaborate on a single shopping list. One of them deletes an entry, while the
other adds an item. Both then check the updated list length to verify their action.
However, the value effectively remains unchanged and may cause confusion.
When they instead work on separate representations, their local changes do not
affect each other. Persistent storage mechanisms often implicitly provide this
isolation, as consumers access replicated data instead of actual resources.
Another important aspect is that persistent changes must not overwrite each
other. Allowing modification requests regardless of their actuality can cause
corruption and data loss. This problem arises when multiple transactions
concurrently alter an isolated resource representation and save it. Reconsider the
previous shopping list example. When the two persons both simultaneously load
the list, make changes and save them, one of the transactions overwrites the
other. Generally, modifications should only be accepted when they are based on
the latest version. One solution approach is to make resource access exclusive.
This prevents overwrites altogether, but it decreases performance. Alternatively,
the actuality of changes can be verified upon saving. Outdated requests are
rejected and must be retried. This produces a better performance but increases
the complexity.
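The second approach, verifying the actuality of changes upon saving, can be sketched with version numbers. The store component and its API are illustrative and not part of the book's code:

```javascript
// Sketch of optimistic concurrency control: each save must be based on the
// latest stored version; otherwise it is rejected and must be retried.
// (Illustrative component, not part of the book's code.)
class VersionedStore {
  #entries = new Map(); // id -> {version, data}

  load(id) {
    const entry = this.#entries.get(id) ?? {version: 0, data: null};
    // hand out a copy so every transaction works on an isolated representation
    return {version: entry.version,
      data: JSON.parse(JSON.stringify(entry.data))};
  }

  save(id, {version, data}) {
    const currentVersion = this.#entries.get(id)?.version ?? 0;
    if (version !== currentVersion) throw new Error('outdated version');
    this.#entries.set(id, {version: version + 1, data});
  }
}

// two transactions load the same version; only the first save is accepted
const store = new VersionedStore();
store.save('list', {version: 0, data: []});
const transaction1 = store.load('list'), transaction2 = store.load('list');
store.save('list', {version: transaction1.version, data: ['apples']});
try {
  store.save('list', {version: transaction2.version, data: ['oranges']});
} catch (error) {
  console.log(error.message); // 'outdated version'
}
```

The rejected transaction would reload the latest state and retry its change, so no update is silently lost.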
The first example shows the implementation of the shopping list Entity
component (run code usage):
Shopping list: Shopping list Entity
class ShoppingList {
  id; #items;
  constructor({id, items = []}) {
    Object.defineProperty(this, 'id', {value: id, enumerable: true});
    this.#items = items;
    Object.defineProperty(this, 'items',
      {get: () => this.#items.slice(), enumerable: true});
  }
  getLength() { return this.#items.length; }
  addItem(item) {
    this.#items.push(item);
  }
  removeItem(item) {
    const index = this.#items.indexOf(item);
    if (index > -1) this.#items.splice(index, 1);
  }
}
The class ShoppingList expresses the Domain Model concept of a shopping list
as Entity type. Its attributes consist of a conceptual identity and a list of
shopping items. Adding an item to a list is done with the command addItem().
Read access is provided via a getter function. This property accessor also serves
a persistence-specific purpose. Shopping list Entities should be serialized in a
way they can easily be reconstructed again. For this reason, the constructor
defines the getter with the enumerable flag set to true. The motivation is that
the operation JSON.stringify() only includes enumerable attributes. This
implementation approach makes it possible to serialize an Entity as a JSON
string without explicit conversion code. Note that the approach is only used
for this particular example.
The next implementation demonstrates two transactions that share a single data
structure (run code):
Shopping list: Shared data structure
const shoppingList = new ShoppingList({id: generateId()});
const transaction1 = [
() => console.log(`T1: accessing list with ${shoppingList.getLength()} items`),
() => { console.log('T1: add "apples"'); shoppingList.addItem('apples'); },
() => console.log(`T1: verify list with ${shoppingList.getLength()} items`),
];
const transaction2 = [
() => console.log(`T2: accessing list with ${shoppingList.getLength()} items`),
() => { console.log('T2: add "oranges"'); shoppingList.addItem('oranges'); },
() => console.log(`T2: verify list with ${shoppingList.getLength()} items`),
];
The code starts with creating a shopping list Entity. Next, it defines two arrays of
operations, of which each represents a separate transaction. Both arrays contain
three operations. First, the shared list is accessed and its initial state is output.
Second, a modification is announced and the corresponding command is executed.
As last step, the altered state is logged to verify the transactional success. For a
concurrent execution, the helper function mergeTransactions() is implemented.
When executing the code, the output indicates consistency failures due to
incorrect verification messages. Each transaction only adds a single item, but
ends up with a list length incremented by two. The commented code can be used
for a sequential execution. This consistency failure demonstrates that concurrent
access requires data isolation.
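The helper mergeTransactions() is not shown in this excerpt. A plausible sketch, assuming it simply interleaves the operation arrays to simulate concurrent execution:

```javascript
// Hypothetical sketch of the mergeTransactions() helper: it interleaves the
// operations of multiple transactions so that their steps alternate, which
// simulates a concurrent execution on shared state.
const mergeTransactions = (...transactions) => {
  const mergedOperations = [];
  const maxLength =
    Math.max(...transactions.map(transaction => transaction.length));
  for (let i = 0; i < maxLength; i++)
    for (const transaction of transactions)
      if (transaction[i]) mergedOperations.push(transaction[i]);
  return mergedOperations;
};

// interleaves two transactions of two steps each
const executionOrder = [];
mergeTransactions(
  [() => executionOrder.push('T1.1'), () => executionOrder.push('T1.2')],
  [() => executionOrder.push('T2.1'), () => executionOrder.push('T2.2')],
).forEach(operation => operation());
console.log(executionOrder); // ['T1.1', 'T2.1', 'T1.2', 'T2.2']
```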
As preparation for the next use cases, the following example provides a file
storage component as persistence mechanism (run code usage):
Filesystem: JSON File storage
import fs from 'fs';
const {writeFile, rename, readFile} = fs.promises;

class JSONFileStorage {
  #storageDirectory;
  constructor(storageDirectory) {
    fs.mkdirSync(storageDirectory, {recursive: true});
    this.#storageDirectory = storageDirectory;
  }
  async save(id, data) {
    const filePath = `${this.#storageDirectory}/${id}.json`;
    const tempFilePath = `${filePath}-${generateId()}.data`;
    await writeFile(tempFilePath, JSON.stringify(data));
    await rename(tempFilePath, filePath);
  }
  load(identifier) {
    const filePath = `${this.#storageDirectory}/${identifier}.json`;
    return readFile(filePath, 'utf-8').then(JSON.parse);
  }
}
The next code illustrates a change overwrite due to isolated data manipulation of
a concurrently accessed component (run code):
Shopping list: Change overwrite
const shoppingListStorage = new JSONFileStorage(storageDirectory);
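The remainder of this example is omitted above. The lost update it demonstrates can be sketched with a simplified in-memory store in place of the JSON file storage; the stand-in is illustrative, not the book's code:

```javascript
// Sketch of a change overwrite: two transactions load the same state into
// isolated copies, modify them independently and save them; the later save
// silently discards the earlier one. (In-memory stand-in for the file storage.)
const storage = new Map();
const save = (id, data) => storage.set(id, JSON.parse(JSON.stringify(data)));
const load = id => JSON.parse(JSON.stringify(storage.get(id)));

save('shopping-list', {items: []});
const copy1 = load('shopping-list'), copy2 = load('shopping-list');
copy1.items.push('apples');
save('shopping-list', copy1); // persists ['apples']
copy2.items.push('oranges');
save('shopping-list', copy2); // overwrites; 'apples' is lost
console.log(load('shopping-list').items); // ['oranges']
```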
As preparation for the final use case, the following example provides a helper
component for exclusive resource access (run code usage):
Shopping list: Exclusive access
class ExclusiveAccess {
  #isAccessLocked = false;
  #pendingPromiseResolves = [];
  requestAccess() {
    return new Promise(resolve => {
      if (!this.#isAccessLocked) {
        this.#isAccessLocked = true;
        resolve();
      } else this.#pendingPromiseResolves.push(resolve);
    });
  }
  releaseAccess() {
    if (this.#pendingPromiseResolves.length > 0)
      setTimeout(this.#pendingPromiseResolves.shift(), 0);
    else this.#isAccessLocked = false;
  }
}
The code extends the previous use case of overwritten changes. In addition to the
file storage component, an instance of the class ExclusiveAccess is created.
With this utility, the function addShoppingListItem() forces every
transaction to wait for earlier ones to finish. As a consequence, concurrently
requested item additions can be processed without data loss. However, the
resulting execution is almost equivalent to sequential processing, as every
transaction waits for other pending ones to finish. While this circumstance is true
for the particular implementation approach, controlled concurrency is generally
superior. For example, transactions affecting different resources can be executed
completely concurrently. Even for a single resource, the ability to enqueue
modifications is beneficial. Furthermore, exclusive access is only one of
several possible control mechanisms.
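The function addShoppingListItem() is not shown in this excerpt. A sketch of how it might combine the exclusive access utility with a load-modify-save cycle; the in-memory storage is an illustrative stand-in, and the ExclusiveAccess class is repeated so the sketch is self-contained:

```javascript
// Every call to addShoppingListItem() first acquires exclusive access,
// performs its load-modify-save cycle and then releases the lock again.
class ExclusiveAccess {
  #isAccessLocked = false;
  #pendingPromiseResolves = [];
  requestAccess() {
    return new Promise(resolve => {
      if (!this.#isAccessLocked) {
        this.#isAccessLocked = true;
        resolve();
      } else this.#pendingPromiseResolves.push(resolve);
    });
  }
  releaseAccess() {
    if (this.#pendingPromiseResolves.length > 0)
      setTimeout(this.#pendingPromiseResolves.shift(), 0);
    else this.#isAccessLocked = false;
  }
}

// illustrative in-memory stand-in for the file storage
const storage = new Map([['list', []]]);
const exclusiveAccess = new ExclusiveAccess();

const addShoppingListItem = async item => {
  await exclusiveAccess.requestAccess();
  try {
    const items = storage.get('list').slice(); // isolated copy
    items.push(item);
    storage.set('list', items);
  } finally {
    exclusiveAccess.releaseAccess();
  }
};

Promise.all([addShoppingListItem('apples'), addShoppingListItem('oranges')])
  .then(() => console.log(storage.get('list'))); // both items are preserved
```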
Design considerations
There are multiple aspects to consider for the design of an Aggregate. Most
importantly, it should align with a specific concept of the respective Domain
Model. Typically, an Aggregate encloses a graph of existing Entities and Value
Objects, with one of them acting as its root. Introducing a new component that
serves as Aggregate Root only makes sense when this part in fact expresses an
actual Domain Model concept. Other than these basic recommendations, there
are some more specific and partially technological influencing factors. Most of
them are gathered around the concepts of transactions and consistency. The most
relevant Aggregate design considerations are explained and illustrated in the
following subsections.
Associations
Relationships between separate Aggregates must only be expressed via identifier
references. This means that an element of one consistency boundary must never
be placed inside another. In terms of persistence, each modification of an
Aggregate is made durable within a dedicated transaction. Therefore, a structural
consolidation of multiple such boundaries only creates the illusion of
transactional behavior. In reality, any alteration affecting more than one
transaction is always eventually consistent. For synchronous code without
persistence, an object composition of two separate Aggregate boundaries
effectively merges them into one. This underlines why the only correct way to
express an overarching association is by referencing an identifier. The demand
for further knowledge apart from an associated identity either expresses a design
issue or the need for synchronization.
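As a minimal illustration, an association across Aggregate boundaries stores only the identity of the referenced component. The classes below are hypothetical, not from the book:

```javascript
// Illustrative sketch: the order Aggregate references the customer Aggregate
// exclusively by its identifier, never by containing the object itself.
class Customer {
  constructor({id, name}) { Object.assign(this, {id, name}); }
}

class Order {
  constructor({id, customerId}) {
    // identifier reference across the consistency boundary
    Object.assign(this, {id, customerId});
  }
}

const customer = new Customer({id: 'customer-1', name: 'Alex'});
const order = new Order({id: 'order-1', customerId: customer.id});
console.log(order.customerId); // 'customer-1'
```

Any further customer details the order needs must be resolved through the identifier, typically via a repository lookup or synchronization.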
True invariants
The existence of invariants significantly influences the design of Aggregates.
Generally, invariants fall into one of two categories. There are conceptual rules
that are explicitly defined in a Domain Model, and there are technical
constraints. For example, requiring non-empty usernames is an important rule,
but not necessarily a domain-specific aspect. Regardless of category, when an
invariant affects multiple components, they must exist within the same
Aggregate. In contrast, elements without shared constraints do not have to. The
decisive question is whether the change of one component affects another one
somehow. If the answer is yes, both may need to share a common transactional
boundary. However, every potential invariant should be questioned for its
existence. Computational and informational results are sometimes mistaken for
strict transactional rules.
Example: Budget planner
Consider implementing the Domain Model for a budget planner software. Its
purpose is to plan expenses within a defined budget. The Domain Model consists
of two parts. One is the expense, which encloses a title and units. The second
part is the budget plan component, which incorporates a budget and a collection
of expenses. For simplicity reasons, the monetary values are expressed as
abstract units. The required behavior is to create a plan with a budget and to add
individual expenses. On top of that, the software must be able to tell when the
sum of expenses exceeds a budget. There are different interpretation possibilities
for this aspect and therefore multiple solution approaches. The key question is
whether a budget represents an actual invariant.
The first code provides the Entity type for the expense concept (run code usage):
Budget planner: Expense Entity
class Expense {
  #units;
  constructor({id, title, units}) {
    Object.defineProperty(this, 'id', {value: id, writable: false});
    Object.assign(this, {title, units});
  }
  get units() { return this.#units; }
  set units(units) {
    if (typeof units !== 'number') throw new Error('invalid units');
    this.#units = units;
  }
}
The class Expense captures the Domain Model concept of an individual expense.
It defines attributes for an immutable identity, a title and units. The constructor
accepts corresponding arguments for the attributes. The implementation as an Entity
type makes both the title and the units mutable. This approach aims to
support use cases such as correcting spelling errors or adjusting units. While the
title attribute can be modified directly, the units are adjusted through a setter
function. This way, the code can enforce that a given value is an actual number.
Another approach would be to implement the concept as Value Object type and
perform replacements instead of modifications. In general, there is no strong
argument for one or the other implementation approach.
class BudgetPlan {
  #expenses = [];
  constructor({id, budget}) {
    if (typeof budget !== 'number') throw new Error('invalid budget');
    Object.defineProperty(this, 'id', {value: id, writable: false});
    Object.defineProperty(this, 'budget', {value: budget, writable: false});
  }
  addNewExpense(id, title, units) {
    if (this.getLeftUnits() < units) throw new Error('not enough units');
    this.#expenses.push(new Expense({id, title, units}));
  }
  changeExpenseTitle(expenseId, newTitle) {
    const expense = this.#expenses.find(({id}) => id === expenseId);
    expense.title = newTitle;
  }
  changeExpenseUnits(expenseId, newUnits) {
    const expense = this.#expenses.find(({id}) => id === expenseId);
    const difference = newUnits - expense.units;
    if (this.getLeftUnits() < difference) throw new Error('not enough units');
    expense.units = newUnits;
  }
  getLeftUnits() {
    const usedUnits = this.#expenses.reduce((sum, {units}) => sum + units, 0);
    return this.budget - usedUnits;
  }
}
The class BudgetPlan encloses a budget and a list of expenses. Since the
component is also an Entity type, its constructor expects a separate identifier.
Unlike the expense units, the budget attribute is defined as immutable
for the example. The component implementation interprets a budget as an
invariant maximum for the sum of contained expenses. Consequently, the
command addNewExpense() first verifies that creating and adding a new expense
does not exceed the limit. The operations changeExpenseTitle() and
changeExpenseUnits() are facade functions that control the access to contained
Entities. The Aggregate design ensures that expense modifications cannot violate
the budget invariant. Satisfying the constraint is done via the function
getLeftUnits(). This query calculates the difference between the budget and
the sum of expenses.
The next code shows an exemplary usage of the budget plan Aggregate (run
code):
Budget planner: Budget plan with Entities usage
const plan = new BudgetPlan({id: generateId(), budget: 1000});
const rentExpenseId = generateId();
const foodExpenseId = generateId();
console.log(plan.getLeftUnits());
First, a budget plan is created with 1000 as maximum value. Then, two expense
identities are defined. Next, two corresponding expenses are added to the
Aggregate via the command addNewExpense() using the previously defined
identifiers. Afterwards, the units of the second expense are decreased. As a
final step, the remaining units of the budget plan are logged to the console.
When executing the code, an exception is thrown upon the second
addNewExpense() invocation, as there are not enough units left. Before the
second expense can be adjusted to fit the budget, the program exits
unconditionally. This example raises the question of whether exceeding a
budget is really an invariant violation or rather a valid state.
class BudgetPlan {
  #expenses = [];
  constructor({id, budget}) {
    if (typeof budget !== 'number') throw new Error('invalid budget');
    Object.defineProperty(this, 'id', {value: id, writable: false});
    Object.defineProperty(this, 'budget', {value: budget, writable: false});
  }
  addExpense(expenseId) {
    if (typeof expenseId !== 'string') throw new Error('invalid expense id');
    this.#expenses.push(expenseId);
  }
}
The next code shows an exemplary usage of the budget plan with IDs (run code):
Budget planner: Budget plan with IDs usage
const rentExpenseId = generateId(), foodExpenseId = generateId();
const rentExpense = new Expense({id: rentExpenseId, title: 'rent', units: 800});
const foodExpense = new Expense({id: foodExpenseId, title: 'food', units: 250});
The budget planner example illustrates multiple aspects. For one, domain-
specific conditions are not always invariants. Therefore, it is important to
challenge every rule that is defined in a Domain Model. Secondly, components
that are bound by a common invariant must exist in the same Aggregate.
Otherwise, it is impossible to guarantee transactional consistency.
Optimal size
The optimal size for Aggregates depends on various factors. The general
recommendation is to make them as small as possible. [Vernon, p. 357] suggests
that an Aggregate is ideally “just the Root Entity and a minimal number of
attributes and/or Value-typed properties”. While this is valid, the guideline is
very generic. Apart from the necessity for Entities with common invariants to
share a transactional boundary, there are further relevant aspects. One is the
resulting code complexity. Object compositions with multiple levels of elements
are more difficult to understand. Another aspect is the performance, especially
with regard to persistence. Large data sets are slower to interact with. The third
criterion is concurrency. Aggregates with many Entities increase the likelihood of
conflicts. In contrast, smaller structures promote transactional success.
Example: Calendar
Consider the Domain Model for a calendar software that manages appointment
entries. One approach is to form a single Aggregate with the calendar as root and
the entries as children.
entries as children. This makes it easy to read data and to query for specific
dates. However, it bears issues with regard to the previously explained aspects.
Since the Aggregate must act as facade around entries, its code complexity is
higher. The performance implications are best illustrated with a calculation: Two
years with an average of two appointments per day make 1460 calendar entries.
Even with optimizations, always loading and saving all of them is inefficient.
Many use cases only involve single entries anyway. In terms of concurrency, the
conceptual flaw is more crucial than the likelihood of conflicts. Modifications to
one entry can never affect another one, which makes conflicts impossible.
Eventual Consistency
Eventual Consistency describes the circumstance when information is consumed
across transactional boundaries and changes are synchronized non-
transactionally. This can affect identical representations, derived data structures
and even consequential actions. For example, when charging a credit card, its
balance increases immediately. However, in an online account, the information
may only be reflected later. Furthermore, the change might trigger a deferred
fraud detection mechanism. Eventual Consistency causes distributed data to
become temporarily stale after changes. Many times this is acceptable, especially
when the synchronization delay is a matter of milliseconds. The consistency
model is useful for parts that relate to each other, but cannot exist within the
same transactional boundary. The term “eventual” describes that synchronization
may take time. However, an update mechanism must never be unreliable.
Synchronization strategies
There are multiple options for synchronizing value updates. One is to let
dependent components query for changes. As analogy, consider the website of a
popular conference. People reload it repeatedly until the ticket sales start. Given
enough requests, the website crashes. While the approach has a low complexity
for the producer, it performs poorly when dealing with many consumers.
Alternatively, the initiator of a modification can actively notify others. For the
conference website, all currently connected browsers can be instructed to reload
automatically when tickets become available. While this can improve the
performance, it requires an according notification mechanism. Independent of
the respective approach, the synchronization of data demands meaningful
structures to enclose value changes. Domain Events are a fitting candidate for
this purpose.
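The two strategies can be contrasted in a small sketch; the names and components are illustrative, not from the book:

```javascript
// Illustrative sketch of the two synchronization strategies: consumers
// repeatedly polling a producer for changes versus the producer actively
// notifying subscribed consumers.
let ticketSalesOpen = false;
const subscribers = [];

// strategy 1: polling - the consumer queries for changes repeatedly
const pollUntilSalesOpen = (onOpen, intervalMs = 10) => {
  const timer = setInterval(() => {
    if (ticketSalesOpen) { clearInterval(timer); onOpen(); }
  }, intervalMs);
};

// strategy 2: notification - the producer pushes the change to subscribers
const subscribe = subscriber => subscribers.push(subscriber);
const openTicketSales = () => {
  ticketSalesOpen = true;
  subscribers.forEach(subscriber => subscriber());
};

pollUntilSalesOpen(() => console.log('noticed via polling'));
subscribe(() => console.log('noticed via notification'));
setTimeout(openTicketSales, 25);
```

Polling keeps the producer simple but scales poorly with many consumers, while notification requires a dedicated mechanism such as an Event Bus.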
The first code example shows the player Entity type (run code usage):
Game server: Player Entity
class Player {
  #isOnline = false;
  #currentServerId = null;
  constructor({id}) {
    Object.defineProperty(this, 'id', {value: id, enumerable: true});
  }
  markAsOnline(serverId) {
    this.#isOnline = true;
    this.#currentServerId = serverId;
  }
  markAsOffline() {
    this.#isOnline = false;
    this.#currentServerId = null;
  }
  isOnline() { return this.#isOnline; }
  currentServerId() { return this.#currentServerId; }
}

const playersById = new Map();

const playerDatabase = {
  save: player => playersById.set(player.id, player),
  load: playerId => playersById.get(playerId),
};
The component Player expresses the concept of an individual player, which can
be marked as online and offline. For every new instance, the private field
#isOnline is set to false, since a player is initially considered offline. The
command markAsOnline() alters the state by setting the attribute to true and
saving the passed in server identity. Changing the state back to offline is done
via the command markAsOffline(). For retrieving the current status, the Entity
provides the two accessor functions isOnline() and currentServerId(). The
component playerDatabase represents a simple in-memory database to save
players and load them via an identifier. Behind the scenes, the Entities are put
into a native Map object.
The next examples show the definition of two specialized Domain Event types
and the game server component (run code):
Game server: Domain Event types
const PlayerAddedToServerEvent = createEventType(
  'PlayerAddedToServer', {serverId: 'string', playerId: 'string'},
);
const PlayerRemovedFromServerEvent = createEventType(
  'PlayerRemovedFromServer', {serverId: 'string', playerId: 'string'},
);
class GameServer {
  #playerIds = [];
  #playerCapacity;
  #eventBus;
  constructor({id, playerCapacity, eventBus}) {
    Object.defineProperty(this, 'id', {value: id, writable: false});
    this.#playerCapacity = playerCapacity;
    this.#eventBus = eventBus;
  }
  getLeftSpots() {
    return this.#playerCapacity - this.#playerIds.length;
  }
  addPlayer(playerId) {
    if (this.getLeftSpots() <= 0) throw new Error('limit reached');
    this.#playerIds.push(playerId);
    this.#eventBus.publish(
      new PlayerAddedToServerEvent({serverId: this.id, playerId}));
  }
  removePlayer(playerId) {
    const index = this.#playerIds.indexOf(playerId);
    if (index > -1) this.#playerIds.splice(index, 1);
    this.#eventBus.publish(
      new PlayerRemovedFromServerEvent({serverId: this.id, playerId}));
  }
}
The class GameServer is responsible for managing players on a server and
ensuring that the capacity is not exceeded. Upon construction, a player identifier
collection is initialized with an empty array, and the provided capacity is saved
as attribute. The command addPlayer() expects an identifier and adds it to the
collection, given the server has enough spots left. After the state change, the
component creates and publishes the Domain Event PlayerAddedToServer with
the corresponding identifiers as custom data. The counterpart function
removePlayer() removes a given value from the player identifier collection.
Similar to the other command, it creates and publishes the event
PlayerRemovedFromServer with the affected identifiers as contained data. Both
events represent the means to synchronize each player state and to facilitate
Eventual Consistency.
The next code example provides a component for the synchronization of player
Entities:
Game server: Player state synchronization
const playerStateSynchronization = {
activate(eventBus) {
eventBus.subscribe(PlayerAddedToServerEvent.type, event => {
const player = playerDatabase.load(event.data.playerId);
player.markAsOnline(event.data.serverId);
});
eventBus.subscribe(PlayerRemovedFromServerEvent.type, event => {
const player = playerDatabase.load(event.data.playerId);
player.markAsOffline();
});
},
};
playerStateSynchronization.activate(eventBus);
const testServer =
new GameServer({id: generateId(), playerCapacity: 100, eventBus});
const player1 = new Player({id: generateId()});
playerDatabase.save(player1);
testServer.addPlayer(player1.id);
console.log(`added player ${player1.id} to server ${testServer.id}`);
testServer.removePlayer(player1.id);
console.log(`removed player ${player1.id} from server ${testServer.id}`);
The first step is the activation of the player state synchronization. This is
followed by the registration of two more event subscribers. These functions are
used to log game server events. As example Entities, a server and a player are
created. The player is saved into the database. Afterwards, it is added to the
server and immediately removed again. Both commands are accompanied by a
console.log() call. Executing the code shows that the addition and the removal
messages appear before marking the player as online or offline. The reason is
that the Event Bus notifies subscribers asynchronously. This example is an ideal
demonstration of Eventual Consistency. Although the player state is temporarily
inconsistent, it does not produce errors and is only a matter of milliseconds.
User context
Every user seemingly represents a separate Aggregate with its user Entity as root
element. The associated e-mail address and the role are contained immutable
values. However, all users modify the same e-mail registry when setting or
updating their e-mail address. Assuming that this behavior should be
transactional implies two constraints to satisfy. Atomicity demands that either
both user and registry are updated or none of them. Secondly, isolation dictates
that concurrent e-mail changes do not cause their registry modifications to
overwrite each other. Effectively, this creates one large transaction boundary
across all users. This design issue can be solved by updating the e-mail registry
in a separate transaction after a user modification. In fact, maintaining e-mail
address availability is not the responsibility of a user.
The first code shows the definition of a specialized Domain Event type:
User context: Domain Events
const UserEmailAddressAssignedEvent = createEventType(
'UserEmailAddressAssigned', {userId: 'string', emailAddress: 'string'});
The second example implements a reworked constructor for the User class:
User context: User Entity constructor with Event Bus
constructor(
{id, username, emailAddress, password, role, emailAvailability, eventBus}) {
verify('valid id', id != null);
this.#emailAvailability = emailAvailability;
this.#eventBus = eventBus;
Object.defineProperty(this, 'id', {value: id, writable: false});
Object.assign(this, {username, emailAddress, password, role});
}
The next code shows a refactored e-mail address setter function that publishes a
Domain Event (run code usage):
User context: User Entity e-mail setter
set emailAddress(emailAddress) {
verify('unused e-mail', this.#emailAvailability.isEmailAvailable(emailAddress));
verify('valid e-mail', emailAddress.constructor === EmailAddress);
this.#emailAddress = emailAddress;
this.#eventBus.publish(new UserEmailAddressAssignedEvent(
{userId: this.id, emailAddress: emailAddress.value}));
}
const emailRegistrySynchronization = {
  activate({eventBus}) {
    eventBus.subscribe(UserEmailAddressAssignedEvent.type, event => {
      emailRegistry.setUserEmailAddress(event.data.userId,
        new EmailAddress(event.data.emailAddress));
    });
  },
};
The next example illustrates the usage of the reworked user context
implementation (run code):
User context: Usage
const eventBus = new EventBus();
emailRegistrySynchronization.activate({eventBus});
Project context
The original project context implementation already defines well-designed
transactional consistency boundaries. Each of the three Entity types team
member, team and project is the root of a cleanly separated Aggregate boundary.
An individual team member consists of a role and an associated user identity.
The team Entity represents a mutable collection of member identifiers. Projects
enclose a name and references to a team and a task board. All the relationships
between individual Aggregates are expressed with identities. This design is
feasible, as the referencing components do not require any further details. Still,
one question is whether a larger Aggregate would be more useful than three
smaller ones. In the worst case, it would introduce additional complexity and
risk concurrency conflicts. Consequently, no refactoring is applied.
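The described boundaries can be sketched as follows. The classes are simplified reconstructions based on the description above, not the book's actual code:

```javascript
// Simplified sketch of the project context: three separate Aggregates that
// reference each other exclusively by identity.
class TeamMember {
  constructor({id, userId, role}) { Object.assign(this, {id, userId, role}); }
}

class Team {
  #memberIds = [];
  constructor({id}) { this.id = id; }
  addMember(teamMemberId) { this.#memberIds.push(teamMemberId); }
  getMemberIds() { return this.#memberIds.slice(); }
}

class Project {
  constructor({id, name, teamId, taskBoardId}) {
    // identifier references only, no contained Entities
    Object.assign(this, {id, name, teamId, taskBoardId});
  }
}

const teamMember =
  new TeamMember({id: 'member-1', userId: 'user-1', role: 'developer'});
const team = new Team({id: 'team-1'});
team.addMember(teamMember.id);
const project = new Project(
  {id: 'project-1', name: 'example', teamId: team.id, taskBoardId: 'board-1'});
console.log(project.teamId, team.getMemberIds());
```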
The first example provides the Value Object type for describing the combination
of a task identifier and a status (run code usage):
Task board context: Task summary Value Object
const TaskSummary = function({taskId, status}) {
Object.assign(this, {taskId, status});
Object.freeze(this);
};
The next code shows a reworked task board that works with Value Objects
instead of task Entities (run code usage):
Task board context: Task board with summaries
class TaskBoard {
  #taskSummaries = [];
  constructor({id}) {
    verify('valid id', id != null);
    Object.defineProperty(this, 'id', {value: id, writable: false});
  }
  addTask(taskSummary) {
    verify('valid task summary', taskSummary instanceof TaskSummary);
    verify('task is new', this.#getTaskSummaryIndex(taskSummary.taskId) === -1);
    this.#taskSummaries.push(taskSummary);
  }
  removeTask(taskId) {
    const index = this.#getTaskSummaryIndex(taskId);
    verify('task is on board', index > -1);
    this.#taskSummaries.splice(index, 1);
  }
  containsTask(taskId) {
    return this.#getTaskSummaryIndex(taskId) > -1;
  }
  getTasks(status = '') {
    if (status === '') return this.#taskSummaries.slice();
    return this.#taskSummaries.filter(summary => summary.status === status);
  }
  updateTaskStatus(taskId, status) {
    const index = this.#getTaskSummaryIndex(taskId);
    verify('task is on board', index > -1);
    this.#taskSummaries.splice(index, 1, new TaskSummary({taskId, status}));
  }
  #getTaskSummaryIndex(taskId) {
    return this.#taskSummaries.findIndex(summary => summary.taskId === taskId);
  }
}
The following code defines a specialized Domain Event type for a task status
update:
Task board context: Domain Events
const TaskStatusChangedEvent = createEventType(
'TaskStatusChanged', {taskId: 'string', status: 'string'},
);
The next example extends the task Entity type with the publishing of a Domain
Event after a status update (run code usage):
Task board context: Task constructor
constructor(
{id, title, description = '', status = 'todo', assigneeId, eventBus}) {
verify('valid id', id != null);
this.#eventBus = eventBus;
Object.defineProperty(this, 'id', {value: id, writable: false});
Object.assign(this, {title, description, status, assigneeId});
}
Task board context: Task status setter function with event
set status(status) {
verify('valid status', validStatus.includes(status));
verify('active task assignee', status !== 'in progress' || !!this.assigneeId);
this.#status = status;
this.#eventBus.publish(new TaskStatusChangedEvent({taskId: this.id, status}));
}
const taskBoardSynchronization = {
  activateTaskAssigneeSynchronization({eventBus, taskBoard}) { /* .. */ },
  activateTaskStatusSynchronization({eventBus, taskBoard}) {
    eventBus.subscribe(TaskStatusChangedEvent.type, event => {
      if (taskBoard.containsTask(event.data.taskId))
        taskBoard.updateTaskStatus(event.data.taskId, event.data.status);
    });
  },
};
The class Task is changed in multiple ways. For one, its constructor expects an
Event Bus as additional argument. Secondly, the function set status() creates
and publishes the Domain Event “TaskStatusChanged” after the state change.
The component taskBoardSynchronization is extended with the operation
activateTaskStatusSynchronization() for performing task state
synchronization. As arguments, it expects an Event Bus and a task board. The
function registers a subscriber callback for the event type “TaskStatusChanged”
and conditionally updates the affected task summary status. Note that a received
Domain Event can refer to any working item and therefore affect any task board.
Prior to the execution of the function updateTaskStatus(), the operation
containsTask() is used to check whether a board is affected.
The whole synchronization is best illustrated with a usage example (run code):
Task board context: Usage with task update
const taskBoard = new TaskBoard({id: generateId()});
taskBoardSynchronization.activateTaskStatusSynchronization({eventBus, taskBoard});
The code starts with creating a task board instance. Then, the state
synchronization is activated by executing
activateTaskStatusSynchronization() and passing in an Event Bus and the
task board. Next, an exemplary task is instantiated. Afterwards, a summary is
generated by executing the factory function createFromTask() and passing in
the Entity. The returned Value Object is added to the task board. This is followed
by updating the status of the task to “in progress”. Finally, the operation
console.log() is executed twice to illustrate the asynchronicity of the state
synchronization. Both calls query the board for tasks that are in progress. The
first time, no items are output. Due to the setTimeout() call, the second log
happens after processing the Domain Event and outputs the expected task
summary.
The illustrated implementation is superior to one large Aggregate per task board
together with its tasks. This has multiple reasons. The board component has a
lower complexity, since there is no need for facade functions around tasks. Also,
boards with many working items perform better due to only referencing
summaries. Tasks can be used standalone and can be modified concurrently,
even when belonging to the same board. However, there are some disadvantages.
Eventually consistent task summaries temporarily contain outdated information.
Furthermore, simultaneous status changes to different tasks can indirectly cause
conflicts in one board. This happens when multiple calls of
updateTaskStatus() are concurrently issued to one Entity. Nevertheless, the
solution is more scalable than an Aggregate that contains its tasks, where every state modification
can cause concurrency issues.
The following code example shows a reworked version of the task un-
assignment upon team member removal (run code usage):
Task board context: Task board synchronization
const taskBoardSynchronization = {
activateTaskAssigneeSynchronization({eventBus, taskDatabase}) {
eventBus.subscribe('TeamMemberRemovedFromTeam', ({data: {teamMemberId}}) => {
const tasks = taskDatabase.getTasks();
const assignedTasks = tasks.filter(task => task.assigneeId === teamMemberId);
assignedTasks.forEach(task => {
if (task.status === 'in progress') task.status = 'todo';
task.assigneeId = undefined;
});
});
},
activateTaskStatusSynchronization({eventBus, taskBoard}) { /* .. */ },
};
taskBoardSynchronization.activateTaskAssigneeSynchronization(
{eventBus, taskDatabase});
First, the code defines the Map object taskDatabase. Next, the task assignee
synchronization is activated. Instead of a task board, the reworked version
expects a task database for accessing actual Entities. Then, a Team Entity is
instantiated and a member identifier is added. This is followed by creating a task
and saving it into the database. Afterwards, the team member identifier is
assigned and the description and status are updated. This is illustrated with an
output of the task state. Then, the member identifier is removed from the team.
This action creates and publishes the Domain Event
“TeamMemberRemovedFromTeam”, which in turn triggers the synchronization. As
verification for success, the task state is output again. The usage of
setTimeout() ensures that the un-assignment is completed.
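The described usage can be sketched with minimal stand-ins. The Event Bus and the task objects below are simplified, synchronous substitutes for the book's components (the actual Event Bus works asynchronously, which is why the original usage relies on setTimeout()); only the synchronization logic is taken from the example above.

```javascript
// Minimal synchronous stand-ins for the book's components; only the
// synchronization logic is taken from the surrounding example.
const eventBus = {
  subscribers: new Map(),
  subscribe(type, subscriber) {
    if (!this.subscribers.has(type)) this.subscribers.set(type, []);
    this.subscribers.get(type).push(subscriber);
  },
  publish(event) {
    (this.subscribers.get(event.type) || []).forEach(subscriber => subscriber(event));
  },
};

// task database as a thin wrapper around a Map
const tasks = new Map();
const taskDatabase = {getTasks: () => [...tasks.values()]};

const taskBoardSynchronization = {
  activateTaskAssigneeSynchronization({eventBus, taskDatabase}) {
    eventBus.subscribe('TeamMemberRemovedFromTeam', ({data: {teamMemberId}}) => {
      const assignedTasks = taskDatabase.getTasks()
        .filter(task => task.assigneeId === teamMemberId);
      assignedTasks.forEach(task => {
        if (task.status === 'in progress') task.status = 'todo';
        task.assigneeId = undefined;
      });
    });
  },
};

taskBoardSynchronization.activateTaskAssigneeSynchronization({eventBus, taskDatabase});

const task = {id: 'task-1', assigneeId: 'member-1', status: 'in progress'};
tasks.set(task.id, task);
eventBus.publish(
  {type: 'TeamMemberRemovedFromTeam', data: {teamMemberId: 'member-1'}});
console.log(task); // the task is back in "todo" and unassigned
```

With the synchronous stand-in bus, the un-assignment is visible immediately; with the book's asynchronous Event Bus, it only becomes visible after the event has been processed.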
Basic functionality
The basic functionality of a Repository is to save and to load individual
Aggregate instances through their root Entity. For most components, the save
operation is identical and stores a single element. What the querying part
consists of depends on the Domain Model requirements. One essential
functionality is to load an object via its identifier. Apart from this, there are
endless possibilities. Queries may also respond with computational results such
as counts. Repositories should always explicitly convert objects into data and
vice versa to differentiate between object representation and persisted
information. Generally, it is not advisable to save elements as-is. At a minimum,
a conversion ensures data integrity. In JavaScript, Repository interfaces should
always be asynchronous, even if the underlying persistence mechanism is
synchronous.
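The asynchronous-interface guideline can be illustrated with a minimal in-memory sketch; the class name InMemoryRepository and the converters are illustrative, not part of the book's code.

```javascript
// In-memory Repository sketch with an asynchronous public interface,
// even though the underlying Map store is synchronous
class InMemoryRepository {

  #dataById = new Map(); #convertToData; #convertToEntity;

  constructor({convertToData, convertToEntity}) {
    this.#convertToData = convertToData;
    this.#convertToEntity = convertToEntity;
  }

  async save(entity) {
    // explicit conversion separates object representation from persisted data
    this.#dataById.set(entity.id, this.#convertToData(entity));
  }

  async load(id) {
    if (!this.#dataById.has(id)) throw new Error('not found');
    return this.#convertToEntity(this.#dataById.get(id));
  }
}

const repository = new InMemoryRepository({
  convertToData: entity => ({id: entity.id, value: entity.value}),
  convertToEntity: data => ({id: data.id, value: data.value}),
});

const examplePromise = repository.save({id: 'counter-1', value: 1})
  .then(() => repository.load('counter-1'));
examplePromise.then(entity => console.log(entity));
```

Keeping the interface asynchronous from the start means the in-memory store can later be replaced with a filesystem or database without changing any consumer.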
The first code example provides a function to save a file atomically, as explained
in the previous chapter (run code usage):
Filesystem: Write file helper function
const writeFileAtomically = async (filePath, content) => {
const tempPath = `${filePath}-${generateId()}.tmp`;
await writeFile(tempPath, content);
await rename(tempPath, filePath);
};
The next code shows the filesystem Repository class:
Repository: Filesystem Repository
class FilesystemRepository {

  storageDirectory; #convertToData; #convertToEntity;

  constructor({storageDirectory, convertToData, convertToEntity}) {
    this.storageDirectory = storageDirectory;
    this.#convertToData = convertToData;
    this.#convertToEntity = convertToEntity;
  }

  async save(entity) {
    const data = this.#convertToData(entity);
    await writeFileAtomically(this.getFilePath(entity.id), JSON.stringify(data));
  }

  load(id) {
    return readFile(this.getFilePath(id))
      .then(buffer => this.#convertToEntity(JSON.parse(buffer.toString())));
  }

  getFilePath(id) {
    if (!id) throw new Error('invalid identifier');
    return `${this.storageDirectory}/${id}.json`;
  }
}
The next code shows an exemplary usage of the filesystem Repository (run
code):
Repository: Filesystem Repository usage
class Counter {

  id; #value;

  constructor({id, start}) {
    Object.defineProperty(this, 'id', {value: id, writable: false});
    this.#value = start;
  }

  get value() { return this.#value; }

  increment() { this.#value++; }
}
The code starts with defining the example Entity class Counter. This component
represents the concept of a simple numerical counter. As constructor argument, it
accepts a start value. The only behavior is to increment by one. For a correctly
working Repository, the corresponding converter functions are defined. While
convertToData() is a one-to-one mapping of information, convertToEntity()
underlines the need for an explicit transformation. The function correctly
instantiates the class Counter and uses the field value as parameter start. Both
converters are passed in as arguments to the FilesystemRepository constructor.
As actual use case, a counter is instantiated, incremented once and persisted.
Afterwards, it is loaded again and the retrieved data is output. This illustrates the
ability to use the Repository component for any Entity type.
Example: Consultant profile directory
Consider implementing the Domain Model and the persistence mechanism for a
directory of consultant profiles. The purpose is to provide recruitment services
for companies that need consultants for their projects. The Domain Model
consists of two components. One is the consultant profile, which encloses an
identifier, a name and a list of skills. Both the identity and the name must be
immutable. The skills can be changed by adding new items and removing
existing ones. Each entry is represented as simple string value. The second part
is the directory of available profiles. This component must provide two querying
capabilities. For one, individual instances must be accessible via their identity.
Secondly, it must be possible to find all profiles matching a specific skill.
The first code provides the consultant profile Entity type (run code usage):
Consultant profile directory: Consultant profile Entity
class ConsultantProfile {

  #skills;

  constructor({id, name, skills = []}) {
    Object.defineProperty(this, 'id', {value: id, writable: false});
    Object.defineProperty(this, 'name', {value: name, writable: false});
    this.#skills = skills.slice();
  }

  get skills() { return this.#skills.slice(); }

  addSkill(skill) {
    if (!this.#skills.includes(skill)) this.#skills.push(skill);
  }

  removeSkill(skill) {
    const index = this.#skills.indexOf(skill);
    if (index > -1) this.#skills.splice(index, 1);
  }
}
The next implementation shows the specialized consultant profile Repository
class:
Consultant profile directory: Consultant profile Repository
class ConsultantProfileRepository extends FilesystemRepository {
constructor({storageDirectory}) {
super({storageDirectory,
convertToData: entity =>
({id: entity.id, name: entity.name, skills: entity.skills}),
convertToEntity: data => new ConsultantProfile(
{id: data.id, name: data.name, skills: data.skills})});
}
async findAllWithSkill(skill) {
const files = await readdir(this.storageDirectory);
const ids = files.map(filename => filename.replace('.json', ''));
const entities = await Promise.all(ids.map(id => this.load(id)));
return entities.filter(({skills}) => skills.includes(skill));
}
await Promise.all([repository.save(profile1),
repository.save(profile2), repository.save(profile3)]);
console.log('1st search: ', await repository.findAllWithSkill('development'));
profile1.removeSkill('testing');
await repository.save(profile1);
console.log('2nd search: ', await repository.findAllWithSkill('testing'));
The first example provides the cart item Value Object type (run code usage):
Shopping cart: Cart item
const CartItem = function({productId, quantity}) {
Object.freeze(Object.assign(this, {productId, quantity}));
};
The next code shows the original cart Entity implementation (run code):
Shopping cart: Without initial items
class ShoppingCart {

  #items = [];

  constructor({id}) {
    Object.defineProperty(this, 'id', {value: id, writable: false});
  }

  get items() { return this.#items.slice(); }

  addItem(item) {
    if (!(item instanceof CartItem)) throw new Error('invalid item');
    this.#items.push(item);
  }
}
const initialItems = [
new CartItem({productId: generateId(), quantity: 3}),
new CartItem({productId: generateId(), quantity: 4}),
];
const shoppingCart = new ShoppingCart({id: generateId()});
initialItems.forEach(item => shoppingCart.addItem(item));
console.log(shoppingCart.items);
The last example provides an altered cart constructor with an argument for initial
items (run code):
Shopping cart: Constructor with initial items
constructor({id, initialItems = []}) {
  Object.defineProperty(this, 'id', {value: id, writable: false});
  initialItems.forEach(item => this.addItem(item));
}
Optimistic Concurrency
There are different strategies for the implementation of Concurrency Control.
One is called Pessimistic Locking, which is enforced through exclusive
resource access, as illustrated in Chapter 8. Another approach is Optimistic
Concurrency, which exclusively affects the write operations. Reading a
resource is always allowed, regardless of concurrency aspects. When attempting
to persist changes, the actuality of a modification request is verified first: an
update is only accepted when it is based on the latest state. An
implementation of this approach is the use of version numbers. For every change
request, the accompanied version is compared to the currently persisted one.
Only if they match, the request is processed and the version is incremented.
Otherwise, the request is rejected and an error is reported.
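The version-number scheme can be sketched with a plain record store; the function and store names below are illustrative, not taken from the book.

```javascript
// Minimal sketch of version-based Optimistic Concurrency;
// the record store and names are illustrative
const recordsById = new Map();

const saveWithVersionCheck = ({id, data, baseVersion}) => {
  const currentVersion = recordsById.has(id) ? recordsById.get(id).version : 0;
  // only accept the update when it is based on the latest version
  if (baseVersion !== currentVersion) throw new Error('concurrency conflict');
  recordsById.set(id, {...data, version: currentVersion + 1});
};

saveWithVersionCheck({id: '1', data: {value: 'a'}, baseVersion: 0});
saveWithVersionCheck({id: '1', data: {value: 'b'}, baseVersion: 1});
try {
  // a second writer still based on version 1 is rejected
  saveWithVersionCheck({id: '1', data: {value: 'c'}, baseVersion: 1});
} catch (error) {
  console.log(error.message); // "concurrency conflict"
}
console.log(recordsById.get('1')); // value 'b' with version 2
```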
Timestamps as alternative
Instead of version numbers, it is possible to implement Optimistic Concurrency with timestamps.
However, there are some challenges to this approach. For one, the timestamp values must have a
high precision, potentially even higher than milliseconds. Also, all involved systems must share
the exact same machine time. These aspects make the approach generally more complex and
error-prone.
When facilitating Optimistic Concurrency, one design question is where to put
the concurrency-related information. With regard to transactions, a pragmatic
solution is to store it within the resource records. However, in terms of Domain
Modeling and code design there are other aspects to consider. Generally
speaking, concurrency-specific information has nothing to do with a Domain
Model and should be kept separately. On the other hand, using an attribute
directly inside an Entity is a simple solution. It may even be arguable that for
some components a version number is a meaningful attribute. In any case,
concurrency support should not introduce a lot of additional complexity. In this
book, the Repository implementations directly assign versions to Entities, while
the components themselves stay agnostic of this attribute.
The following code shows the class AsyncQueue:
class AsyncQueue {

  #currentQueuePromise = Promise.resolve();

  enqueueOperation(operation) {
    this.#currentQueuePromise =
      this.#currentQueuePromise.then(operation, operation);
    return this.#currentQueuePromise;
  }
}
The class AsyncQueue enqueues operations and executes them one after the
other. Adding an operation is done by calling the command
enqueueOperation(). The provided function should return a Promise object,
which resolves as soon as its work is completed. The argument is passed as
.then() handler to the currently latest Promise and the result is saved as new
latest Promise. As the argument is used for both fulfillments and rejections, the
queue remains operational in case of an exception. Furthermore, due to the
implementation of Promise.prototype.then(), it is even possible to enqueue
synchronous operations. The return value of the function enqueueOperation()
is the latest queue Promise object. This makes it possible for consumers to wait
until their operation is completed and to handle any exceptions.
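A possible usage sketch, repeating the class from above, illustrates both the sequential execution and the resilience to rejected operations:

```javascript
class AsyncQueue {

  #currentQueuePromise = Promise.resolve();

  enqueueOperation(operation) {
    this.#currentQueuePromise =
      this.#currentQueuePromise.then(operation, operation);
    return this.#currentQueuePromise;
  }
}

const queue = new AsyncQueue();
const executionOrder = [];

const runExample = async () => {
  // the slow operation is enqueued first and still completes before the others
  queue.enqueueOperation(() => new Promise(resolve =>
    setTimeout(() => { executionOrder.push('slow'); resolve(); }, 50)));
  // a rejected operation does not break the queue; the consumer handles it
  queue.enqueueOperation(() => Promise.reject(new Error('failure')))
    .catch(error => executionOrder.push(error.message));
  // synchronous operations can be enqueued as well
  await queue.enqueueOperation(() => { executionOrder.push('fast'); });
  console.log(executionOrder); // ['slow', 'failure', 'fast']
};

const examplePromise = runExample();
```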
The next example provides a simple error class for a concurrency conflict (run
code usage):
Repository: Concurrency conflict error
class ConcurrencyConflictError extends Error {
  constructor({entityToSave, latestEntity}) {
    super('ConcurrencyConflict');
    Object.assign(this, {entityToSave, latestEntity});
  }
}
The next code shows the concurrency-safe filesystem Repository class:
Repository: Concurrency-safe filesystem Repository
class ConcurrencySafeFilesystemRepository {

  storageDirectory; #convertToData; #convertToEntity;
  #saveQueueById = new Map();

  constructor({storageDirectory, convertToData, convertToEntity}) {
    this.storageDirectory = storageDirectory;
    this.#convertToData = convertToData;
    this.#convertToEntity = convertToEntity;
  }

  save(entity) {
if (!this.#saveQueueById.has(entity.id))
this.#saveQueueById.set(entity.id, new AsyncQueue());
return this.#saveQueueById.get(entity.id).enqueueOperation(async () => {
await this.#verifyVersion(entity);
const data = this.#convertToData(entity);
const dataWithVersion = {...data, version: (entity.baseVersion || 0) + 1};
await writeFileAtomically(
this.getFilePath(entity.id), JSON.stringify(dataWithVersion));
});
}
async load(id) {
const data = await readFile(this.getFilePath(id), 'utf-8').then(JSON.parse);
return Object.assign(this.#convertToEntity(data), {baseVersion: data.version});
}
getFilePath(id) {
if (!id) throw new Error('invalid identifier');
return `${this.storageDirectory}/${id}.json`;
}
  async #verifyVersion(entityToSave) {
    const latestEntity = await this.load(entityToSave.id).catch(() => ({}));
    if ((entityToSave.baseVersion || 0) !== (latestEntity.baseVersion || 0))
      throw new ConcurrencyConflictError({entityToSave, latestEntity});
  }
}
The function save() initially retrieves the asynchronous queue for the passed in
Entity. Then, the actual save process is enqueued, which incorporates multiple
steps. First, it compares the base version to the currently persisted value,
both of which default to 0. If the versions mismatch, a ConcurrencyConflictError
is thrown. Otherwise, the passed in Entity is converted to data. Next, an
incremented version is assigned. Finally, the result is persisted. The return value
is a Promise, which resolves after saving, but rejects for any error. While
consumers are able to react to issues, the asynchronous queue ignores them and
continues operating. The function load() is extended with the responsibility of
assigning the baseVersion attribute. This avoids requiring the converter function
convertToEntity() to be aware of versions.
The last code shows an exemplary usage of the concurrency-safe repository (run
code):
Repository: Concurrency-safe filesystem Repository usage
const convertToData = counter => ({id: counter.id, value: counter.value});
const convertToEntity = data => new Counter({id: data.id, start: data.value});
const counterRepository = new ConcurrencySafeFilesystemRepository(
{storageDirectory, convertToData, convertToEntity});
Similar to the example in the previous section, the code uses the example Entity
type Counter. Both the converter functions convertToData() and
convertToEntity() remain identical. This emphasizes the ability for the
converters to stay version-agnostic. The helper function incrementCounter()
loads a counter, increments it once and saves the resulting change. For an
exemplary usage, a counter Entity is created and persisted. Afterwards, the
instance is concurrently modified by executing the operation
incrementCounter() twice. Any occurring errors are caught and logged to the
console. Finally, the counter is loaded again and its current value is output.
Executing the code shows that one increment succeeds, while the other one
produces a concurrency conflict. This demonstrates how to use Optimistic
Concurrency and also how to make conflicts explicit.
The first example provides a Value Object type for the reservation (run code
usage):
Meetup seat reservation: Reservation Value Object
const Reservation = function({creationTime, emailAddress, numberOfSeats}) {
Object.assign(this, {creationTime, emailAddress, numberOfSeats});
Object.freeze(this);
};
The next code implements the actual meetup Entity component (run code usage):
Meetup seat reservation: Meetup Entity
class Meetup {

  #reservations = [];

  constructor({id, seatCapacity}) {
    Object.defineProperty(this, 'id', {value: id, writable: false});
    Object.assign(this, {seatCapacity});
  }

  get reservations() { return this.#reservations.slice(); }

  addReservation(reservation) {
    if (!(reservation instanceof Reservation)) throw new TypeError('reservation');
    if (this.getAvailableSeats() < reservation.numberOfSeats)
      throw new Error('not enough seats available');
    this.#reservations.push(reservation);
  }

  getAvailableSeats() {
    const reservedSeats = this.#reservations.reduce(
      (sumOfSeats, reservation) => sumOfSeats + reservation.numberOfSeats, 0);
    return this.seatCapacity - reservedSeats;
  }
}
The last example in this subsection illustrates an exemplary usage of the meetup
seat reservation (run code):
Meetup seat reservation: Overall usage
const meetupRepository = new ConcurrencySafeFilesystemRepository(
{storageDirectory, convertToData, convertToEntity});
await meetupRepository.save(meetup);
try {
await Promise.all([
addReservation({meetupId, emailAddress: 'jim@example.com', numberOfSeats: 15}),
addReservation({meetupId, emailAddress: 'joe@example.com', numberOfSeats: 15}),
]);
} catch ({entityToSave, latestEntity}) {
console.log('concurrency conflict');
console.log('entity to save reservations:', entityToSave.reservations);
console.log('latest entity reservations:', latestEntity.reservations);
}
Consider extending the Domain Model and the implementation for the meetup
seat reservation. While the initial version works without conflicts for meetups
with moderate demand, the situation is different for popular events. When two or
more persons concurrently attempt to add a reservation, only one of them
succeeds. Given enough simultaneous requests, this leads to an unsatisfying
experience for many users. As mentioned previously, regularly occurring
conflicts can hint at an issue in the Domain Model. However, for this example,
refactoring the consistency boundaries is not an option. The reservations for a
meetup must exist within one Aggregate since they share a common invariant.
Their sum must not exceed the maximum seat capacity. One pragmatic solution
approach is to implement automatic retries for reservation requests.
await meetupRepository.save(meetup);
await Promise.all([
addReservation({meetupId, emailAddress: 'jim@example.com', numberOfSeats: 10}),
addReservation({meetupId, emailAddress: 'ben@example.com', numberOfSeats: 10}),
addReservation({meetupId, emailAddress: 'jane@example.com', numberOfSeats: 10}),
]);
const savedMeetup = await meetupRepository.load(meetupId);
console.log(`reservations: ${JSON.stringify(savedMeetup.reservations, null, 2)}`);
The usage example is similar to the final implementation from the previous
subsection. First, a meetup is created and persisted. Afterwards, there are three
addReservation() calls, of which none exceeds the seat capacity by itself.
Finally, the Entity is loaded again and its reservations are output. The key
difference is the implementation of the addReservation() command. When
adding a reservation fails, the error is caught and analyzed. In case of a
concurrency conflict, the function calls itself recursively. This is repeated until
the operation succeeds or the maximum retry number is reached. Other errors,
such as invariant violation attempts, are re-thrown. Executing the code shows
that one request succeeds immediately, while all others are retried automatically.
This behavior improves the user experience for popular meetups.
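The retry behavior can be sketched as a generic helper; the function withConflictRetries() and the simplified error class are illustrative stand-ins, not the book's actual implementation:

```javascript
// Generic retry sketch for Optimistic Concurrency conflicts;
// the helper name and the simplified error class are illustrative
class ConcurrencyConflictError extends Error {
  constructor() { super('ConcurrencyConflict'); }
}

const withConflictRetries = async (operation, maxRetries = 5) => {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (error) {
      // only concurrency conflicts are retried, other errors are re-thrown
      const isConflict = error instanceof ConcurrencyConflictError;
      if (!isConflict || attempt >= maxRetries) throw error;
    }
  }
};

const runExample = async () => {
  let failuresLeft = 2;
  const result = await withConflictRetries(async () => {
    // the first two attempts simulate a concurrency conflict
    if (failuresLeft-- > 0) throw new ConcurrencyConflictError();
    return 'reservation added';
  });
  console.log(result); // succeeds on the third attempt
  return result;
};

const examplePromise = runExample();
```

Retrying on conflicts is only safe when the operation reloads the latest Entity state before re-applying its change, as the addReservation() helper described above does.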
await meetupRepository.save(meetup);
await addReservation(
{meetupId, emailAddress: 'jim@example.com', numberOfSeats: 15});
await Promise.all([
addReservation({meetupId, emailAddress: 'ben@example.com', numberOfSeats: 1}),
addReservation({meetupId, emailAddress: 'john@example.com', numberOfSeats: 5}),
]);
const savedMeetup = await meetupRepository.load(meetupId);
console.log(`reservations: ${JSON.stringify(savedMeetup.reservations, null, 2)}`);
The following subsections explain the necessary steps and the different
approaches for correctly publishing Domain Events in combination with
persistence. This is preceded by the introduction of an example topic. The
example is used throughout the subsections to illustrate the individual steps and
the different implementation approaches.
The first example shows the bid Value Object type (run code usage):
Auction platform: Bid Value Object
const Bid = function({bidderId, price}) {
Object.freeze(Object.assign(this, {bidderId, price}));
};
The next code provides the definition of the Domain Event for when a bid is
added:
Auction platform: Domain Event
const BidAddedEvent = createEventType('BidAdded',
{auctionId: 'string', newHighestBidderId: 'string', newHighestPrice: 'number',
oldHighestBidderId: 'string', oldHighestPrice: 'number'});
The third example shows the initial code for the auction Entity type:
Auction platform: Auction Entity
class Auction {
bid(newBid) {
const highestBid = this.#bids[this.#bids.length - 1];
this.#verifyBid(newBid, highestBid ? highestBid.price : this.startingPrice);
this.#bids.push(newBid);
this.#eventBus.publish(new BidAddedEvent({
auctionId: this.id,
newHighestBidderId: newBid.bidderId, newHighestPrice: newBid.price,
oldHighestBidderId: highestBid ? highestBid.bidderId : '',
oldHighestPrice: highestBid ? highestBid.price : this.startingPrice,
}));
}
#verifyBids(startingPrice, bids) {
bids.forEach((bid, index) => this.#verifyBid(bid,
index > 0 ? bids[index - 1].price : startingPrice));
}
#verifyBid(bid, priceToOutbid) {
if (!(bid instanceof Bid)) throw new Error('invalid bid');
if (bid.price <= priceToOutbid) throw new Error('bid too low');
}
The class Bid expresses the bid concept as Value Object. For describing the
occurrence of a bid addition, the specialized Domain Event type BidAddedEvent
is defined. The auction concept is implemented as the Entity class Auction. Its
constructor accepts initial bids for reconstituting persisted instances. Also, it
expects an Event Bus, which is saved as private attribute. Adding a new bid is
done via the command bid(). Every submission is first verified by checking its
type and validating its price. The first bid is compared to the auction starting
price, all other ones to the respective previous bid. In case of success, the
function creates and publishes a “BidAdded” Domain Event. This message is
used to trigger a notification for telling users they were outbid.
Collecting events
The creation and the publishing of Domain Events must be separated when
working with persistent components. Therefore, all occurring events need to be
collected until the point at which they can be distributed. In its simplest form, this can be
implemented with a plain collection. Similar to the topic of concurrency-related
information, the question is where to best place this data. When treating it as
technical information, it is clear that it should not be contained inside Domain
Model components. However, Domain Events are in fact parts of the domain-
related abstractions. Consequently, it can be acceptable, or even useful, to
maintain them as a direct attribute. In the following code examples and the
sample implementations, all occurring events are kept as a collection inside the
Entities.
The first example shows the bid() function of a reworked Auction class (run
code usage):
Auction platform: Auction Entity with events collection
bid(newBid) {
const highestBid = this.#bids[this.#bids.length - 1];
this.#verifyBid(newBid, highestBid ? highestBid.price : this.startingPrice);
this.#bids.push(newBid);
this.#newDomainEvents.push(new BidAddedEvent({
auctionId: this.id,
newHighestBidderId: newBid.bidderId, newHighestPrice: newBid.price,
oldHighestBidderId: highestBid ? highestBid.bidderId : '',
oldHighestPrice: highestBid ? highestBid.price : this.startingPrice,
}));
}
#eventBus;
async save(entity) {
await super.save(entity);
for (const event of entity.newDomainEvents) await this.#eventBus.publish(event);
}
The following code implements a helper function to load an auction, add a bid
and persist the change:
Auction platform: Add bid helper
const addBid = async ({repository, auctionId, bid}) => {
const auction = await repository.load(auctionId);
console.log(`adding a bid of ${bid.price}`);
auction.bid(bid);
console.log(`saving bid addition of ${bid.price}`);
await repository.save(auction);
};
await repository.save(auction);
await addBid({repository, auctionId, bid: new Bid({bidderId, price: 110})});
await addBid({repository, auctionId, bid: new Bid({bidderId, price: 120})});
const savedAuction = await repository.load(auctionId);
console.log('final loaded bids:', savedAuction.bids);
#convertToData;
async save(entity) {
const data = {...this.#convertToData(entity), newDomainEvents: undefined};
const savedEvents = await this.loadDomainEvents(entity.id);
const allEvents = savedEvents.concat(entity.newDomainEvents);
const fullData = {...data, domainEvents: allEvents};
const filePath = this.getFilePath(entity.id);
await writeFileAtomically(filePath, JSON.stringify(fullData));
}
loadDomainEvents(id) {
return readFile(this.getFilePath(id), 'utf-8')
.then(JSON.parse).then(data => data.domainEvents || []).catch(() => []);
}
#publishedEventIdsDirectory; #eventBus;
async save(entity) {
await super.save(entity);
await Promise.all(entity.newDomainEvents.map(
domainEvent => this.#eventBus.publish(domainEvent)));
await this.#saveLastPublishedEventId(entity.id, entity.newDomainEvents);
}
async publishOverdueDomainEvents(entityId) {
const allEvents = await super.loadDomainEvents(entityId);
const lastPublishedEventId = await this.#getLastPublishedEventId(entityId);
const eventsToPublish = allEvents.slice(
allEvents.findIndex(event => event.id === lastPublishedEventId) + 1);
for (const event of eventsToPublish) await this.#eventBus.publish(event);
await this.#saveLastPublishedEventId(entityId, eventsToPublish);
}
#getLastPublishedEventId(entityId) {
const filePath = `${this.#publishedEventIdsDirectory}/${entityId}`;
return readFile(filePath, 'utf-8').catch(() => '');
}
As preparation for the usage example, the following code provides a function to
simulate an Event Bus publishing failure:
Auction platform: Make Event Bus publish fail helper
const makeEventBusPublishFailOnce = eventBus => {
const originalPublish = eventBus.publish;
eventBus.publish = () => {
eventBus.publish = originalPublish;
throw new Error('publish failed');
};
};
await repository.save(auction);
await addBid({repository, auctionId, bid: new Bid({bidderId, price: 110})});
makeEventBusPublishFailOnce(eventBus);
try {
await addBid({repository, auctionId, bid: new Bid({bidderId, price: 120})});
} catch (error) {
console.log(error.message);
}
const savedAuction = await repository.load(auctionId);
console.log('current bids:', savedAuction.bids);
console.log('publishing overdue events');
await repository.publishOverdueDomainEvents(auctionId);
The next example provides a component to consume Domain Events and trigger
their publishing via the Event Bus:
Domain Event: Domain Event Publisher
class DomainEventPublisher {
activate() {
this.#repository.subscribeToEntityChanges(
id => this.publishDueDomainEvents(id));
}
publishDueDomainEvents(entityId) {
this.#publishingQueue.enqueueOperation(async () => {
const allEvents = await this.#repository.loadDomainEvents(entityId);
const lastPublishedEventId = await this.#getLastPublishedEventId(entityId);
const eventsToPublish = allEvents.slice(
allEvents.findIndex(event => event.id === lastPublishedEventId) + 1);
await Promise.all(eventsToPublish.map(event => this.#eventBus.publish(event)));
await this.#saveLastPublishedEventId(entityId, eventsToPublish);
});
}
  #getLastPublishedEventId(entityId) { /* .. */ }
  #saveLastPublishedEventId(entityId, publishedEvents) { /* .. */ }
}
domainEventPublisher.activate();
await repository.save(auction);
await addBid({repository, auctionId, bid: new Bid({bidderId, price: 110})});
await addBid({repository, auctionId, bid: new Bid({bidderId, price: 120})});
console.log('adding bids completed');
Again, the code starts with creating an instance of the event-saving filesystem
Repository component. Afterwards, the class DomainEventPublisher is
instantiated and enabled by executing activate(). Also, a subscriber function
for the “BidAdded” event is registered. Then, all necessary identifiers are
defined and an example auction is created and persisted. The actual use case
consists of two subsequent invocations of the addBid() helper. Running the code
can produce different results, depending on the execution environment. This is
due to using fs.watch() for Entity change notifications. The event publishing
may only happen after both bid additions are completed. In this case, Node.js
merges the filesystem changes into one notification. Overall, this example
demonstrates how to combine persistence and reliable Domain Event publishing
without polluting Repository implementations.
The first code shows the loadAll() function of the combined concurrency-safe
filesystem Repository class (run code usage):
Infrastructure code: Additional Repository function
async loadAll() {
const files = await readdir(this.storageDirectory);
const jsonFiles = files.filter(filename => filename.endsWith('.json'));
const ids = jsonFiles.map(filename => filename.replace('.json', ''));
return Promise.all(ids.map(id => this.load(id)));
}
As helper for subsequent usage examples, the next code provides a function that
resolves after a given amount of time (run code usage):
Infrastructure code: Timeout function
const timeout =
duration => new Promise(resolve => setTimeout(resolve, duration));
console.log('start');
await timeout(1000);
console.log('continue');
Project context
The project context implementation requires the least complex source code
modifications and additions to be made for enabling persistence. Both the Entity
classes TeamMember and Project and the Value Object type Role remain
unchanged. In contrast, the class Team is extended with multiple aspects. For
one, it defines the collection #newDomainEvents and the getter
newDomainEvents() for event persistence and distribution. The constructor
argument for the Event Bus is removed. Furthermore, the component accepts
initial member identifiers for Entity reconstitution. When removing a team
member, the Domain Event “TeamMemberRemoved” is created and added to the
event collection. For each of the Entity types of the project context, a Repository
is implemented as subclass of the ConcurrencySafeFilesystemRepository
component.
The first example shows the refactored parts of the Team Entity:
Project context: Team constructor
constructor({id, initialTeamMemberIds = []}) {
verify('valid id', id != null);
Object.defineProperty(this, 'id', {value: id, writable: false});
this.#teamMemberIds = [];
initialTeamMemberIds.forEach(teamMemberId => this.addMember(teamMemberId));
}
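The described event recording can be sketched as follows; the removeMember() implementation and the plain event objects are simplified stand-ins for the book's actual code, which uses the createEventType() helper:

```javascript
// Sketch of the Team Entity's event recording; the event objects are
// simplified stand-ins for the book's createEventType() helper
class Team {

  #teamMemberIds = []; #newDomainEvents = [];

  constructor({id, initialTeamMemberIds = []}) {
    Object.defineProperty(this, 'id', {value: id, writable: false});
    initialTeamMemberIds.forEach(teamMemberId => this.addMember(teamMemberId));
  }

  get newDomainEvents() { return this.#newDomainEvents.slice(); }

  addMember(teamMemberId) { this.#teamMemberIds.push(teamMemberId); }

  removeMember(teamMemberId) {
    const index = this.#teamMemberIds.indexOf(teamMemberId);
    if (index === -1) throw new Error('no such member');
    this.#teamMemberIds.splice(index, 1);
    // instead of publishing directly, the event is recorded for later publishing
    this.#newDomainEvents.push({type: 'TeamMemberRemovedFromTeam',
      data: {teamId: this.id, teamMemberId}});
  }
}

const team = new Team({id: 'team-1', initialTeamMemberIds: ['member-1']});
team.removeMember('member-1');
console.log(team.newDomainEvents);
```

The Repository later reads the newDomainEvents collection after saving and hands the recorded events to the Event Bus.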
The next examples provide all Repository classes for the project context:
Project context: Team member Repository
class TeamMemberRepository extends ConcurrencySafeFilesystemRepository {
constructor({storageDirectory}) {
super({storageDirectory,
convertToData: entity => ({id: entity.id, userId: entity.userId,
role: entity.role.name}),
convertToEntity: data => new TeamMember(
{id: data.id, userId: data.userId, role: new Role(data.role)})});
}
async findTeamMembersByUser(userId) {
const members = await this.loadAll();
return members.filter(
member => member.userId === userId).map(member => member.id);
}
constructor({storageDirectory}) {
super({storageDirectory,
convertToData: ({id, teamMemberIds}) => ({id, teamMemberIds}),
convertToEntity: data => new Team(
{id: data.id, initialTeamMemberIds: data.teamMemberIds})});
}
async findTeamByTeamMember(teamMemberId) {
const teams = await this.loadAll();
return teams.find(team => team.teamMemberIds.includes(teamMemberId));
}
constructor({storageDirectory}) {
super({storageDirectory,
convertToData: ({id, name, ownerId, teamId, taskBoardId}) =>
({id, name, ownerId, teamId, taskBoardId}),
convertToEntity: ({id, name, ownerId, teamId, taskBoardId}) =>
new Project({id, name, ownerId, teamId, taskBoardId})});
}
async findProjectsByOwner(ownerId) {
  const projects = await this.loadAll();
  return projects.filter(project => project.ownerId === ownerId);
}
}
async findProjectByTeam(teamId) {
const projects = await this.loadAll();
return projects.find(project => project.teamId === teamId);
}
The next code shows an exemplary usage of the refactored context together with
the DomainEventPublisher component for publishing Domain Events (run code
usage):
Project context: Overall usage
const teamMemberRepository = new TeamMemberRepository(
{storageDirectory: `${rootStorageDirectory}/team-member`});
const teamRepository = new TeamRepository(
{storageDirectory: `${rootStorageDirectory}/team`});
const projectRepository = new ProjectRepository(
{storageDirectory: `${rootStorageDirectory}/project`});
publisher.activate();
eventBus.subscribe(TeamMemberRemovedFromTeamEvent.type, console.log);
User context
Although the user context implementation contains only one Aggregate, it
requires more diverse adaptations compared to the previous context. For design
reasons, the e-mail registry object is replaced with a class. The Aggregate Root
Entity class User is changed in multiple ways. Its constructor also removes the
Event Bus argument and introduces an event collection. Furthermore, it accepts
the parameter isExistingUser to differentiate between new and existing
Entities. For each new user, the e-mail setter operation is executed, which
creates a "UserEmailAddressAssigned" event and adds it to the collection. For existing users, the
attribute value is set directly. Both the types EmailAddress and Role remain
unchanged. The factory UserFactory is introduced for encapsulating creation
and reconstitution. For the persistence, the component UserRepository is
implemented as subclass of ConcurrencySafeFilesystemRepository.
The first examples show the refactored constructor and the e-mail setter function
of the User Entity:
User context: User constructor
constructor({id, username, emailAddress, password, role,
emailAvailability, isExistingUser}) {
verify('valid id', id != null);
this.#emailAvailability = emailAvailability;
Object.defineProperty(this, 'id', {value: id, writable: false});
Object.assign(this, {username, password, role});
if (isExistingUser) this.#emailAddress = emailAddress;
else this.emailAddress = emailAddress;
}
The next examples provide the class UserFactory and the Repository
component UserRepository:
User context: User factory
class UserFactory {
#emailRegistry;
constructor({emailRegistry}) {
this.#emailRegistry = emailRegistry;
}
constructor({storageDirectory, userFactory}) {
super({
storageDirectory,
convertToData: ({id, username, emailAddress, password, role: {name: role}}) =>
({id, username, emailAddress: emailAddress.value, password, role}),
convertToEntity: ({id, username, emailAddress, password, role}) =>
userFactory.reconstituteUser({id, username, password,
emailAddress: new EmailAddress(emailAddress), role: new Role(role)}),
});
}
async findUserByEmail(emailAddress) {
const users = await this.loadAll();
return users.find(user => user.emailAddress.equals(emailAddress));
}
The class UserFactory defines functions for instantiating User objects in two
different ways. Creating a new and previously unsaved Entity is done with the
operation createNewUser(). Internally, it invokes the user constructor with the
parameter isExistingUser set to false. This ensures that the Domain Event
"UserEmailAddressAssigned" is created and added to the event collection. In
contrast, the function reconstituteUser() is responsible for re-creating an
existing Entity. In this case, the parameter isExistingUser is set to true, which
causes no new Domain Event to be added. The component UserRepository
extends the class ConcurrencySafeFilesystemRepository and passes custom
converters to the parent constructor. Also, it defines the specialized query
findUserByEmail(). This enables the identification of an Entity via its
assigned e-mail address, for example when authenticating a user.
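The two factory operations described above can be sketched as follows. This is a simplified, self-contained version; the User stand-in and the inline event object are placeholders for the book's actual Entity and event type:

```javascript
// Simplified stand-in for the book's User Entity
class User {
  newDomainEvents = [];
  constructor({id, emailAddress, isExistingUser}) {
    this.id = id;
    this.emailAddress = emailAddress;
    // only a newly created user records a Domain Event
    if (!isExistingUser) this.newDomainEvents.push(
      {type: 'UserEmailAddressAssigned', data: {userId: id, emailAddress}});
  }
}

class UserFactory {
  // createNewUser() yields an Entity that records a creation-time event
  createNewUser({id, emailAddress}) {
    return new User({id, emailAddress, isExistingUser: false});
  }
  // reconstituteUser() re-creates an existing Entity without new events
  reconstituteUser({id, emailAddress}) {
    return new User({id, emailAddress, isExistingUser: true});
  }
}
```

The single flag keeps creation and reconstitution in one constructor while making the event-recording difference explicit at the factory level.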
The following code shows an exemplary usage of the User context with regard
to persistence and event publishing (run code usage):
User context: Overall usage
const emailRegistry = new EmailRegistry();
const userFactory = new UserFactory({emailRegistry});
const userRepository = new UserRepository({storageDirectory, userFactory});
const publisher = new DomainEventPublisher({repository: userRepository, eventBus,
publishedEventIdsDirectory: `${storageDirectory}/published-event-ids`});
publisher.activate();
eventBus.subscribe(UserEmailAddressAssignedEvent.type, console.log);
await setupEmailRegistry(emailRegistry);
const user = userFactory.createNewUser({id, username, role,
password: createMd5Hash(password), emailAddress});
await userRepository.save(user);
const savedUser = await userRepository.load(id);
savedUser.emailAddress = newEmailAddress;
await userRepository.save(savedUser);
console.log(await userRepository.findUserByEmail(newEmailAddress));
The first examples show the reworked constructor and the status setter of the
task Entity:
Task board context: Task constructor
constructor(
{id, title, description = '', status = 'todo', assigneeId, isExistingTask}) {
verify('valid id', id != null);
Object.defineProperty(this, 'id', {value: id, writable: false});
Object.assign(this, {title, description, assigneeId});
if (isExistingTask) this.#status = status;
else this.status = status;
}
The following code shows the refactored version of the task board constructor
that accepts initial task summaries for Entity reconstitution:
Task board context: Task board constructor
constructor({id, initialTaskSummaries = []}) {
verify('valid id', id != null);
Object.defineProperty(this, 'id', {value: id, writable: false});
initialTaskSummaries.forEach(taskSummary => this.addTask(taskSummary));
}
constructor({storageDirectory}) {
super({
storageDirectory,
convertToData: ({id, title, description, status, assigneeId}) =>
({id, title, description, status, assigneeId}),
convertToEntity: ({id, title, description, status, assigneeId}) => new Task(
{id, title, description, status, assigneeId, isExistingTask: true}),
});
}
async findTasksByAssigneeId(assigneeId) {
const tasks = await this.loadAll();
return tasks.filter(task => task.assigneeId === assigneeId);
}
async findTaskBoardByTaskId(taskId) {
const taskBoards = await this.loadAll();
return taskBoards.find(taskBoard => taskBoard.containsTask(taskId));
}
The next code shows a refactored version of the task board synchronization
component:
Task board context: Task board synchronization
const taskBoardSynchronization = {
};
The last example illustrates the usage of the task board context with persistence
(run code usage):
Task board context: Overall usage
const taskRepository = new TaskRepository({storageDirectory: taskStorageDirectory});
const taskBoardRepository = new TaskBoardRepository(
{storageDirectory: taskBoardStorageDirectory});
Instead of replicating and synchronizing task state, the statuses of all tasks on a
board could be determined via Repositories. While it is an essential functionality,
it is not necessarily the responsibility of the TaskBoard Entity type. The behavior
could be extracted, as it is read-only and therefore not involved in any invariant.
This approach would require loading a task board and then retrieving all Entities
for the referenced task identifiers. Ultimately, it could eliminate the need for
state synchronization. However, the complexity would only be shifted.
Furthermore, the result would still be eventually consistent, as it spans across
multiple transactional boundaries. This design issue is solved automatically
when introducing CQRS. Therefore, the code currently requires no refactoring.
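A read-only helper along these lines could look roughly as follows. The sketch is hypothetical: the taskIds attribute and the in-memory Repository stand-in are illustrative, not the book's actual components:

```javascript
// Hypothetical read-side helper: loads a board and resolves its task Entities
// through the task Repository instead of relying on synchronized summaries
const loadTaskBoardWithTasks =
  async ({taskBoardRepository, taskRepository, taskBoardId}) => {
    const taskBoard = await taskBoardRepository.load(taskBoardId);
    const tasks = await Promise.all(
      taskBoard.taskIds.map(taskId => taskRepository.load(taskId)));
    return {id: taskBoard.id, tasks};
  };

// Minimal in-memory Repository stand-in for demonstration purposes
const createInMemoryRepository = entities => ({
  load: async id => entities.find(entity => entity.id === id),
});
```

Note that the resolved tasks still stem from separate transactional boundaries, so the result remains eventually consistent, as described above.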
Service design
There are multiple options for the design and the implementation of Application
Services. [Vernon, p. 143] describes three different implementation styles. All of
them share the principle that every use case is expressed as an individual action.
The "dedicated style" implements each service as a separate structure with one
main function. Ideally, the name of this operation is identical across components.
The “categorized style” gathers multiple related Application Services into one
common structure and exposes each of them as individual operation. While the
first approach produces small units with single responsibilities, the second
variant is typically easier to understand. Finally, the “messaging style” expects to
receive service execution requests as asynchronous notifications. As this
approach is more of an infrastructural topic, it is not further discussed at this
point.
The following code shows an abstract example for dedicated Application Service
classes:
Application Services: Dedicated style
class FirstUseCaseService {
execute() { /* .. */ }
}
class SecondUseCaseService {
execute() { /* .. */ }
}
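For comparison, the categorized style gathers the same use cases into one common structure, with each exposed as an individual operation (illustrative names only):

```javascript
// Categorized style: related Application Services in one common structure
class UseCaseServices {
  firstUseCase() { /* .. */ }
  secondUseCase() { /* .. */ }
}
```

The trade-off described above applies: the dedicated variant yields small units with single responsibilities, while the categorized variant keeps related use cases discoverable in one place.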
Example: Notepad
Consider implementing the Domain Model, the Repositories and the Application
Services for a notepad software. The goal is to be able to create and update text
notes. Each note incorporates a title and arbitrary content. Apart from creating
new instances, the main required functionality is to update both of the attributes
individually. This requires them to be mutable. Also, notes need to be
distinguishable independently of their content. Consequently, they must be
implemented as an Entity type with a conceptual identity. Unlike in previous
example implementations, the Application Layer is also required in this case.
Consumers must be able to create notes, update their attributes and to retrieve
the data of specific instances. All this must be possible without directly
depending on the Domain Layer.
The first code shows the note Entity implementation (run code usage):
Notepad: Note Entity
class Note {
The second code provides a specialized data structure to be used as return value
from Application Services (run code usage):
Notepad: Note values
const NoteValues = function({title, content}) {
Object.freeze(Object.assign(this, {title, content}));
};
// re-creates the exposed values from a note Entity
NoteValues.createFromNote = note =>
new NoteValues({title: note.title, content: note.content});
#noteRepository;
constructor({noteRepository}) {
this.#noteRepository = noteRepository;
}
async getNote({noteId}) {
return NoteValues.createFromNote(await this.#noteRepository.load(noteId));
}
The first example shows the service operation for generating a title based on
given content:
Notepad: Generate title function
const generateNoteTitle = content => content.split(' ').slice(0, 3).join(' ');
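As an illustration, content with more than three words is truncated to its first three (the helper is repeated here so the example is self-contained):

```javascript
const generateNoteTitle = content => content.split(' ').slice(0, 3).join(' ');

// the first three words of the content become the title
const title = generateNoteTitle('shopping list for the weekend');
// title === 'shopping list for'
```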
Processes
Every use case that incorporates multiple transactional steps is a so-called
Process. The implementation of such a procedure can be done in different ways.
One option is to execute everything within a single service. While this approach
has a low complexity, it is only feasible when an in-between interruption cannot
produce corrupted states. The second possibility is to expose standalone services
for each step of a process. This method makes sense when the orchestration is
the responsibility of the consumer. Another option is to connect individual steps
via event notifications. With this approach, a single entry point is exposed that
initiates the process by executing its first step. The consequential state change
triggers an asynchronous event notification, which causes the process to advance
further.
Process implementation approaches
Approach Advantage Disadvantage
Single Service Low complexity Interruption leads to in-between states
Individual Services Low complexity Responsibility is shifted to consumers
Event notifications High resilience Additional complexity & infrastructure
Orchestration vs. Choreography
Processes can be managed in two different ways. They can be orchestrated through a centralized
Process Manager that controls and tracks their progress. Alternatively, multiple individual parts
can perform a Choreography, where each reacts to a trigger and performs a single step. Typically,
the Orchestration approach makes most sense for complex processes with many steps or non-
linear paths.
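As a rough illustration of the Orchestration approach, a centralized Process Manager could invoke each step directly and track progress. This is a minimal hypothetical sketch, not a component from the book:

```javascript
// Minimal Process Manager sketch: executes steps in order, records progress
class ProcessManager {
  #steps; completedSteps = [];
  constructor({steps}) {
    this.#steps = steps;
  }
  async run(input) {
    // each step is awaited before the next one starts
    for (const [name, step] of Object.entries(this.#steps)) {
      await step(input);
      this.completedSteps.push(name);
    }
  }
}
```

With a Choreography, no such central component exists; each part reacts to an event and performs its single step, as the classified ads example below demonstrates.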
The relevant Domain Model part consists of two components. One is the user
account, which contains an immutable identifier, an e-mail address and a list of
classified ad references. The second part is the classified ad, which encloses
attributes for identifier, seller identity, title, description and price. Both the
identifier and the reference to the seller must be constant values. For simplicity
reasons, the price is implemented as plain numerical value. The required
functionalities are to create user accounts and to place classified ads, which itself
consists of multiple individual steps. First, an ad Entity must be created. Then,
its identity must be added to the collection of the respective user account.
Finally, a confirmation message should be triggered to report the operational
success.
The first two examples provide the Entity type for the user account and the
classified ad (run code usage):
Classified ads platform: User account Entity
class UserAccount {
id; emailAddress; #classifiedAdIds;
addClassifiedAd(classifiedAdId) {
if (!this.canCreateClassifiedAd()) throw new Error('not allowed');
this.#classifiedAdIds.push(classifiedAdId);
}
The third example shows the initial implementation of the Application Services
component:
Classified ads platform: Application Services with multiple transactions
class ApplicationServices {
#userAccountRepository; #classifiedAdRepository;
constructor({userAccountRepository, classifiedAdRepository}) {
this.#userAccountRepository = userAccountRepository;
this.#classifiedAdRepository = classifiedAdRepository;
}
const applicationServices =
new ApplicationServices({userAccountRepository, classifiedAdRepository});
await applicationServices.createUserAccount(
{userAccountId, emailAddress: 'test@example.com'});
await applicationServices.createClassifiedAd(
{classifiedAdId, sellerId: userAccountId, title: 'Some ad', price: 1000});
clearInterval(interval);
logPersistedEntities();
The code starts with creating one Repository for user accounts and one for
classified ads. Next, the class ApplicationServices is instantiated with the
Repositories as arguments. This is followed by defining two identifiers.
Afterwards, the service createUserAccount() is executed. Then, an interval is
created to periodically log all persisted Entities using the helper
logPersistedEntities(). As next step, the service createClassifiedAd() is
invoked. After its successful completion, the interval is stopped. Finally, the state
of both Entities is logged again. Executing the code eventually produces an
output where a classified ad exists without being referenced from a user account.
This circumstance itself merely demonstrates the Eventual Consistency of the
process. The actual issue is that a service interruption can cause this in-between
state to exist permanently.
sendClassifiedAdConfirmation({userAccountId, classifiedAdId}) {
console.log(`classified ad ${classifiedAdId} published by ${userAccountId}`);
}
As preparation for the third approach, the first examples introduce Domain
Events to the Entities (run code usage):
Classified ads platform: Domain Events
const ClassifiedAdCreatedEvent = createEventType('ClassifiedAdCreated', {
classifiedAdId: 'string', sellerId: 'string', title: 'string',
description: 'string', price: 'number',
});
addClassifiedAd(classifiedAdId) {
if (!this.canCreateClassifiedAd()) throw new Error('not allowed');
this.#classifiedAdIds.push(classifiedAdId);
this.#newDomainEvents.push(new ClassifiedAdAddedToUserAccountEvent(
{userAccountId: this.id, classifiedAdId}));
}
#userAccountRepository; #classifiedAdRepository;
constructor({userAccountRepository, classifiedAdRepository}) {
this.#userAccountRepository = userAccountRepository;
this.#classifiedAdRepository = classifiedAdRepository;
}
}
This is complemented with an event handlers component that connects the steps
of the ad creation process via event notifications:
Classified ads platform: Domain Event handlers
class DomainEventHandlers {
#userAccountRepository; #eventBus;
constructor({userAccountRepository, eventBus}) {
this.#userAccountRepository = userAccountRepository;
this.#eventBus = eventBus;
}
activate() {
this.#eventBus.subscribe(ClassifiedAdCreatedEvent.type, async event => {
const {data: {classifiedAdId, sellerId}} = event;
const userAccount = await this.#userAccountRepository.load(sellerId);
userAccount.addClassifiedAd(classifiedAdId);
await this.#userAccountRepository.save(userAccount);
});
this.#eventBus.subscribe(ClassifiedAdAddedToUserAccountEvent.type, event => {
const {data: {userAccountId, classifiedAdId}} = event;
console.log(`classified ad ${classifiedAdId} published by ${userAccountId}`);
});
}
Finally, the next code demonstrates the use of the refactored implementation (run
code):
Classified ads platform: Usage with Eventual Consistency
const userAccountRepository = new EventPublishingFilesystemRepository({
storageDirectory: `${rootStorageDirectory}/user-account`, eventBus,
convertToData: entity => ({...entity}),
convertToEntity: data => new UserAccount(data),
});
const classifiedAdRepository = new EventPublishingFilesystemRepository({
storageDirectory: `${rootStorageDirectory}/classified-ad`, eventBus,
convertToData: entity => ({...entity}),
convertToEntity: data => new ClassifiedAd({...data, isNewAd: false}),
});
The code starts with creating one Repository for user accounts and one for
classified ads using the class EventPublishingFilesystemRepository. While
publishing Domain Events directly from a Repository is not recommended, doing
so avoids additional complexity in this example. Afterwards, the class
ApplicationServices is instantiated. Also, an instance of the component
DomainEventHandlers is created and its command activate() is invoked.
Finally, the services createUserAccount() and createClassifiedAd() are
executed. The code looks similar to the first approach. What is more, the system
is still temporarily accessible in an in-between state. The difference is that the ad
creation is implemented as a chain of asynchronous steps with a transactional
entry point. Furthermore, the risk of permanent state corruption can be
eliminated through a guaranteed event distribution.
Cross-cutting concerns
In addition to its main responsibility, the Application Layer commonly also
manages generic aspects that spread across multiple use cases. These cross-
cutting concerns are secondary functionalities that belong either to the
Application part itself or to the Infrastructure Layer. Typical examples are
security, validation, error detection, caching and logging. Directly integrating
such aspects into multiple individual services can cause them to scatter across a
software. Keeping the Application Layer clean and free from concrete
dependencies can be achieved through a unified and abstract integration. There
are two common design patterns that help with implementing this approach.
Both the Decorator and the Middleware concept make it possible to enrich or modify existing
functionality in an abstract way. Although they have slightly different
characteristics, the patterns aim for the same goal.
The next example shows a usage of the decorator utility (run code):
Decoration: Decorate function utility usage
const sum = (a, b) => a + b;
sumWithLogging(3, 5);
const sumWithLoggingAndValidation =
decorateFunction(sumWithLogging, validationDecorator);
sumWithLoggingAndValidation(1, 'a');
The code starts with defining the mathematical computation sum(). Next, it
creates the decorator loggingDecorator(), which consists of multiple steps.
First, it logs the name of a function and the provided arguments. Then, it
executes the original and logs the result. Finally, the value is returned to the
caller. The sum() calculation is decorated by invoking decorateFunction() and
passing in the logging mechanism. After an exemplary execution of
sumWithLogging(), the code defines the operation validationDecorator().
This decorator ensures that all given arguments are numbers before executing the
original. If a parameter has another type, it throws an exception. The original
calculation is further extended with the validation aspect. As last step, the
operation sumWithLoggingAndValidation() is executed. This example
illustrates various scenarios for the helper decorateFunction().
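Since only the usage of the utility appears above, the following self-contained sketch shows one possible shape of decorateFunction() together with the two decorators the text describes. The signatures are assumptions for illustration, not necessarily the book's exact implementation:

```javascript
// One possible shape of the utility: wraps an original function so that
// the decorator controls its execution
const decorateFunction = (originalFunction, decorator) =>
  (...args) => decorator({originalFunction, args});

// logs name and arguments, executes the original and logs the result
const loggingDecorator = ({originalFunction, args}) => {
  console.log(`calling ${originalFunction.name} with`, args);
  const result = originalFunction(...args);
  console.log('result:', result);
  return result;
};

// ensures all arguments are numbers before executing the original
const validationDecorator = ({originalFunction, args}) => {
  if (args.some(arg => typeof arg !== 'number'))
    throw new Error('invalid arguments');
  return originalFunction(...args);
};
```

With these definitions, sumWithLogging(3, 5) logs and returns 8, while sumWithLoggingAndValidation(1, 'a') throws before the calculation runs.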
The next code combines the Middleware pattern with a Proxy for adding cross-
cutting concerns to all functions of an object:
Decoration: Middleware Proxy
const MiddlewareProxy = function(target) {
const executeOriginal =
({originalFunction, args}) => originalFunction.apply(target, args);
The following example illustrates the usage of the middleware proxy component
(run code):
Decoration: Middleware proxy usage
const calculator = {
storedValue: 42,
sum: (a, b) => a + b,
product: (a, b) => a * b,
};
calculatorWithMiddleware.addMiddleware(loggingMiddleware);
calculatorWithMiddleware.addMiddleware(validationMiddleware);
console.log(calculatorWithMiddleware.sum(3, 5));
console.log(calculatorWithMiddleware.storedValue);
console.log(calculatorWithMiddleware.product('a', 5));
The usage code initially defines the object calculator. This component consists
of the functions sum() and product() as well as a stored value. Next, the class
MiddlewareProxy is instantiated with the calculator as argument. Then, the
function loggingMiddleware() is defined. Again, this operation consists of
multiple steps. First, it logs the name of the original function together with the
respective arguments. Then, it continues the execution of the chain. Afterwards,
it logs the result and returns it back to the caller. The second middleware is the
function validationMiddleware(), which verifies that all given arguments are
numbers. Whenever it encounters another type, it throws an error. Both
decorating operations are added to the proxy via the command
addMiddleware(). Finally, multiple example invocations demonstrate the overall
behavior.
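Since only a fragment of the MiddlewareProxy component appears above, the following self-contained sketch shows one possible shape of it. The assumed behavior is that middlewares run in registration order and non-function properties pass through untouched:

```javascript
// Possible sketch of the middleware proxy: wraps every function of a target
// object in a chain of middlewares, added via addMiddleware()
const MiddlewareProxy = function(target) {
  const middlewares = [];
  const executeOriginal =
    ({originalFunction, args}) => originalFunction.apply(target, args);
  // runs middleware at the given index; the last link executes the original
  const executeChain = (index, {originalFunction, args}) => {
    if (index >= middlewares.length)
      return executeOriginal({originalFunction, args});
    const next = context => executeChain(index + 1, context);
    return middlewares[index]({originalFunction, args, next});
  };
  return new Proxy(target, {
    get(object, property) {
      if (property === 'addMiddleware')
        return middleware => middlewares.push(middleware);
      const member = object[property];
      if (typeof member !== 'function') return member;
      return (...args) => executeChain(0, {originalFunction: member, args});
    },
  });
};
```

The get trap is the key design choice: it intercepts every property access, so decorating works without modifying the target object itself.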
Example: Chat
Consider extending and improving the implementation of a chat software. The
underlying Domain Model contains two components. One part is the chat
message, which consists of an identifier, an author reference, a timestamp and a
text. All of its attributes are defined as immutable. Secondly, there is the chat
itself, which encloses a constant identifier and a collection of messages. New
entries can be appended and existing ones can be deleted. Both components
require a conceptual identity and are therefore implemented as Entity type. With
regard to persistence and consistency, they are combined into a single Aggregate
with the chat as root Entity. Consequently, there is one single Repository
component. For orchestrating and executing the actual use cases, the software
provides the according Application Services.
The first two examples show the implementations of the Entities for chat and
chat message (run code usage):
Chat: Chat message Entity
class ChatMessage {
id; authorId; time; text;
id; #messages;
appendMessage(message) {
this.#messages.push(message);
}
deleteMessage(messageId) {
const index = this.#messages.findIndex(message => message.id === messageId);
if (index === -1) throw new Error('invalid message');
this.#messages.splice(index, 1);
}
The third example provides the Repository component for the chat Aggregate:
Chat: Chat Repository
class ChatRepository extends ConcurrencySafeFilesystemRepository {
constructor({storageDirectory}) {
super({storageDirectory,
convertToEntity: data => new Chat({id: data.id,
messages: data.messages.map(messageData => new ChatMessage(messageData))}),
convertToData: ({id, messages}) => ({id, messages})});
}
#chatRepository;
constructor({chatRepository}) {
this.#chatRepository = chatRepository;
}
async createChat({chatId}) {
await this.#chatRepository.save(new Chat({id: chatId}));
}
async getChatMessages({chatId}) {
const chat = await this.#chatRepository.load(chatId);
return chat.messages.map(message => {
const formattedTime = new Date(message.time).toLocaleString();
return `${message.authorId} (${formattedTime}): ${message.text}`;
}).join('\n');
}
The next example demonstrates a basic usage of the chat software (run code):
Chat: Usage
const chatRepository = new ChatRepository({storageDirectory});
const applicationServices = new ApplicationServices({chatRepository});
The code starts with creating an object of the ChatRepository component. Next,
the class ApplicationServices is instantiated with the Repository as
constructor argument. Afterwards, the identifiers for a chat, an exemplary
message and an author are generated. This is followed by creating a chat
Aggregate via the service createChat(). Then, a single message is written and
appended through the execution of writeChatMessage(). Verifying the
operational success is done with the help of the query getChatMessages(). The
appended message is removed again by invoking deleteChatMessage(). Finally,
the removal is verified with another call to getChatMessages(). This example
illustrates the basic usage of the chat software. The implementation fulfills the
described functional requirements. However, there are some scenarios that can
potentially cause problems with regard to concurrency.
The next example illustrates a usage that produces concurrency conflicts (run
code):
Chat: Usage with conflicts
const chatId = generateId();
const authorId1 = generateId(), messageId1 = generateId();
const authorId2 = generateId(), messageId2 = generateId();
await applicationServices.createChat({chatId});
await Promise.all([
applicationServices.writeChatMessage({chatId, messageId: messageId1,
authorId: authorId1, text: 'Hello!'}),
applicationServices.writeChatMessage({chatId, messageId: messageId2,
authorId: authorId2, text: 'Hello!'}),
]);
console.log(await applicationServices.getChatMessages({chatId}));
Similar to the previous example, the code initially defines all required
identifiers. In this case, two pairs of identities for an author and a message are
generated. Then, a chat Aggregate is created by executing the command
createChat(). This is followed by concurrently writing two messages.
Afterwards, the current status is output using the query getChatMessages().
Executing the code produces a concurrency conflict during the attempt to write
the messages. The crucial question is whether this circumstance hints at a
design issue. For one, grouping messages together as a common Aggregate
ensures a definite order of items. At the same time, writing messages
concurrently should be supported and must not produce a conflict. One possible
solution for this is to implement an automatic retry behavior.
The next code provides a factory for creating a middleware to automatically
retry failed service executions:
Chat: Create retry middleware
const createRetryMiddleware = ({retries}) =>
async ({originalFunction, args, next}) => {
// copy the configured count so each invocation has its own retry budget
let remainingRetries = retries;
while (remainingRetries-- > 0)
try {
return await next({originalFunction, args});
} catch (error) {
const isConcurrencyConflict = error.message === 'ConcurrencyConflict';
if (!isConcurrencyConflict || remainingRetries <= 0) throw error;
await timeout(15);
}
};
The last example shows a usage of the previously introduced middleware proxy
and the retry middleware (run code):
Chat: Usage with retry
const applicationServices =
new MiddlewareProxy(new ApplicationServices({chatRepository}));
applicationServices.addMiddleware(createRetryMiddleware({retries: 5}));
#storageDirectory;
constructor({storageDirectory}) {
this.#storageDirectory = storageDirectory;
mkdirSync(this.#storageDirectory, {recursive: true});
}
isTokenValid(subjectId, token) {
return access(this.#getFilePath(subjectId, token))
.then(() => true).catch(() => false);
}
#getFilePath(subjectId, token) {
return `${this.#storageDirectory}/${subjectId}.${token}`;
}
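The assignToken() counterpart to isTokenValid() is not shown above. The same registry idea can be sketched as an in-memory variant, which also makes the contract between the two operations explicit (a hypothetical simplification of the filesystem-based component):

```javascript
// In-memory variant of the token registry idea: assignToken() records one
// token per subject, isTokenValid() checks for an exact match
class InMemoryTokenRegistry {
  #tokens = new Map();
  async assignToken(subjectId, token) {
    this.#tokens.set(subjectId, token);
  }
  async isTokenValid(subjectId, token) {
    return this.#tokens.get(subjectId) === token;
  }
}
```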
The next example shows a registry component for access paths (run code usage):
Security: Access registry
class AccessRegistry {
#storageDirectory;
constructor({storageDirectory}) {
this.#storageDirectory = storageDirectory;
}
#convertToDirectory(accessPath) {
this.#verifyAccessPath(accessPath);
return `${this.#storageDirectory}/${accessPath}`;
}
#verifyAccessPath(accessPath) {
if (/[^a-z0-9/-]/gi.test(accessPath)) throw new Error('invalid path');
}
The first examples show the implementations of the Entities for users and files
(run code usage):
File sharing: User Entity
const User = function({id, username, password}) {
Object.freeze(Object.assign(this, {id, username, password}));
};
addUserAccess(userId) {
this.#usersWithAccess.push(userId);
}
constructor({userRepository, fileRepository,
authenticationTokenRegistry, accessRegistry}) {
this.#userRepository = userRepository;
this.#fileRepository = fileRepository;
this.#authenticationTokenRegistry = authenticationTokenRegistry;
this.#accessRegistry = accessRegistry;
this.createAndLoginUser.bypassAuthentication = true;
}
The class ApplicationServices implements all the described use cases for the
file sharing software. Its constructor requires Repositories for both files and
users, an authentication token registry and an access registry. For brevity
reasons, the creation of a user and its login are combined into a single use case.
Overall, the component demonstrates different levels of Authentication and
Authorization. Creating a new user can be done anonymously. Uploading a file
as well as granting access to others is only allowed for the owner of a document.
For both of these actions, the authorization consists of comparing authentication
information to Domain Model knowledge. The third scenario is downloading a
file. In this case, the functionality is secured through checking the access registry
for an according path.
The final example demonstrates the usage of all components (run code):
File sharing: Usage code
const userRepository = new FilesystemRepository({
storageDirectory: `${rootDirectory}/user`,
convertToData: entity => entity, convertToEntity: data => new User(data)});
const fileRepository = new FilesystemRepository({
storageDirectory: `${rootDirectory}/file`,
convertToData: entity => entity, convertToEntity: data => new File(data)});
The code starts with creating one Repository for users and another one for file
Entities. Next, both the classes AuthenticationTokenRegistry and
AccessRegistry are instantiated. Afterwards, an instance of the component
ApplicationServices is created and decorated with a middleware proxy. This is
followed by defining an extractor operation that retrieves authentication
information from a metadata argument. As next step, an authentication
middleware function is created and added to the Application Services instance.
As preparation for the actual usage example, the identifiers and the metadata for
two users and one file are defined. Also, the helper function tryDownloadFile()
is implemented. This operation attempts to download a file and outputs an
informational message to the console.
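The authentication middleware mentioned in the description could be sketched roughly as follows. The factory name, the extractor contract and the bypassAuthentication check are assumptions based on the surrounding text, not the book's verbatim code:

```javascript
// Hypothetical authentication middleware: verifies a token before executing
// a service, unless the operation is marked with bypassAuthentication
const createAuthenticationMiddleware =
  ({tokenRegistry, extractAuthentication}) =>
    async ({originalFunction, args, next}) => {
      if (!originalFunction.bypassAuthentication) {
        const {subjectId, token} = extractAuthentication(args);
        if (!(await tokenRegistry.isTokenValid(subjectId, token)))
          throw new Error('invalid authentication');
      }
      return next({originalFunction, args});
    };
```

Combined with the middleware proxy, this secures every Application Service in one place instead of repeating the check inside each use case.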
The following code shows the additional functions for the access registry
component:
Infrastructure code: Additional access registry functions
grantFullAccess(subjectId) {
return this.grantAccess(subjectId, 'full-access');
}
async revokeImplicitAccess(implicitAccessPath) {
const symlinkDirectory = this.#convertToDirectory(implicitAccessPath);
await rmdir(symlinkDirectory);
}
Admin access paths
Executing the command grantFullAccess() makes the operation verifyAccess() always
succeed for a given subject identifier. This is even true for paths to which no subject has explicit
access. This behavior enables the use of separate values for different admin actions, such as
"admin/create-user". The approach can be beneficial when aiming to introduce the values as real
paths in the future.
User context
Besides its domain-specific purpose, the user context implementation also
contains the foundation for authentication through user creation and identity
verification. Although access control is mostly an infrastructural topic, a Domain
Model is typically required to provide a mechanism for validating credentials.
The user context code provides two functionalities for this purpose. One is the
Repository query findUserByEmail(), which makes it possible to identify a user via its e-
mail address. Secondly, there is the Entity function isPasswordMatching(),
which determines the validity of a password. These operations can be used for
the initial authentication process. In case of valid credentials, an artificial token
is assigned and used for subsequent identity verification. Note that this token
only exists in the Infrastructure Layer and is not part of the Domain Model.
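The described authentication flow can be sketched as an infrastructure-level helper. The names and the token generation are hypothetical; the Repository and registry arguments are stand-ins for the book's components:

```javascript
// Hypothetical login flow: validate credentials via Domain Model operations,
// then assign an infrastructure-level token for subsequent verification
const login =
  async ({userRepository, tokenRegistry, emailAddress, password}) => {
    const user = await userRepository.findUserByEmail(emailAddress);
    if (!user || !user.isPasswordMatching(password))
      throw new Error('invalid credentials');
    // simplified token generation for illustration purposes only
    const token = Math.random().toString(36).slice(2);
    await tokenRegistry.assignToken(user.id, token);
    return token;
  };
```

The token never enters the Domain Model; it is created and stored purely in the Infrastructure Layer, as the text describes.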
The following code shows the most relevant parts of the Application Services
class for the user context:
User context: Application Services
class UserApplicationServices {
async initializeEmailRegistry() {
(await this.#userRepository.loadAll()).forEach(user =>
this.#emailRegistry.setUserEmailAddress(user.id, user.emailAddress));
}
constructor({emailRegistry, eventBus}) {
this.activate = () => {
eventBus.subscribe(UserEmailAddressAssignedEvent.type, ({data}) => {
emailRegistry.setUserEmailAddress(
data.userId, new EmailAddress(data.emailAddress));
});
};
}
/* .. */
}
User context: Overall usage
await authenticationTokenRegistry.assignToken(
authenticationMetadata.system.subjectId, authenticationMetadata.system.token);
await accessRegistry.grantFullAccess(systemUserId);
await userApplicationServices.initializeEmailRegistry();
The usage code illustrates how to create an initial admin user. First, a system-
owned subject identifier and an authentication token are created. Then, these
values are passed to both the token registry and the access registry. Afterwards,
the service function createUser() can be executed with the system
authentication information passed in as metadata.
Project context
The project context incorporates two use cases that affect multiple consistency
boundaries. For one, adding a new team member includes creating a member and
updating the target team. For this part, it is acceptable to combine both
transactions in a single service. In the worst case, creating a member without
adding it to a team produces an orphaned Aggregate. The second process is the
introduction of a new project, which includes the creation of a team and a task
board. Unlike the local team component, the task board is part of another
conceptual boundary. In this case, it makes the most sense to connect the individual
steps via event notifications. Consequently, the project Entity implementation
must yield a Domain Event whenever a new instance is created.
The first example introduces a Domain Event that represents the creation of a
new project:
Project context: Domain Event
const ProjectCreatedEvent = createEventType('ProjectCreated',
{projectId: 'string', name: 'string', teamId: 'string', taskBoardId: 'string'});
The second code shows a reworked version of the constructor for the project
Entity:
Project context: Project constructor
constructor({id, name, ownerId, teamId, taskBoardId, isExistingProject = false}) {
verify(id && name && ownerId && teamId && taskBoardId, 'valid data');
Object.defineProperties(this, {
id: {value: id, writable: false},
ownerId: {value: ownerId, writable: false},
teamId: {value: teamId, writable: false},
taskBoardId: {value: taskBoardId, writable: false},
});
this.name = name;
if (!isExistingProject) this.#newDomainEvents.push(
new ProjectCreatedEvent({projectId: id, name, teamId, taskBoardId}));
}
The next example introduces a factory for creating and reconstituting project
Entities:
Project context: Project factory
class ProjectFactory {
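The factory body is not reproduced here. As a hedged sketch, not the book’s actual implementation, such a factory could distinguish between new and reconstituted instances via the isExistingProject flag of the constructor shown above; the Project class below is a simplified stand-in:

```javascript
// Sketch only: a stand-in Project class mimicking the constructor shown above.
// The isExistingProject flag suppresses the creation event on reconstitution.
class Project {
  constructor({id, name, ownerId, teamId, taskBoardId, isExistingProject = false}) {
    Object.assign(this, {id, name, ownerId, teamId, taskBoardId});
    this.newDomainEvents = isExistingProject ? [] : [{type: 'ProjectCreated'}];
  }
}

class ProjectFactory {
  createProject({id, name, ownerId, teamId, taskBoardId}) {
    // new instance: the constructor records a ProjectCreatedEvent
    return new Project({id, name, ownerId, teamId, taskBoardId});
  }
  reconstituteProject(data) {
    // loaded from persistence: no creation event may be emitted again
    return new Project({...data, isExistingProject: true});
  }
}
```

The essential design aspect is that only genuinely new instances yield a Domain Event.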
Overall, the project context contains use cases for projects, teams and individual
members. Unlike the user part, the Domain Model consists of multiple
separate Aggregate types. One is for team members, a second one is for teams
and the third is for projects. Therefore, the design question is how to structure
the corresponding Application Services implementation. One approach is to
introduce a separate class per Aggregate type. This aims to create a clear
mapping between consistency boundaries and service functionalities. Another
option is to gather all services in a single component. This approach helps to
avoid an artificial separation of logically related use cases. Due to the overall
low complexity of this context, the second approach is likely to be more useful.
The following code provides the relevant parts of the Application Services for
the project context:
Project context: Application Services
class ProjectApplicationServices {
async findProjectsByCollaboratingUser({userId}) {
const members = await this.#teamMemberRepository.findTeamMembersByUser(userId);
const projects = await Promise.all(members.map(async teamMemberId => {
const team = await this.#teamRepository.findTeamByTeamMember(teamMemberId);
return this.#projectRepository.findProjectByTeam(team.id);
}));
return projects.map(({id, name, ownerId, teamId, taskBoardId}) =>
({id, name, ownerId, teamId, taskBoardId}));
}
async findProjectsByOwner({userId}) {
const projects = await this.#projectRepository.findProjectsByOwner(userId);
return projects.map(({id, name, ownerId, teamId, taskBoardId}) =>
({id, name, ownerId, teamId, taskBoardId}));
}
/* .. removeTeamMemberFromTeam(), updateTeamMemberRole() .. */
The next example provides a Domain Event Handlers component that creates a
new team for every new project:
Project context: Domain Event handlers
class ProjectDomainEventHandlers {
/* .. */
}
Project context: Overall usage
await authenticationTokenRegistry.assignToken(
authenticationMetadata.user.subjectId, authenticationMetadata.user.token);
await authenticationTokenRegistry.assignToken(
authenticationMetadata.admin.subjectId, authenticationMetadata.admin.token);
await accessRegistry.grantFullAccess(adminId);
await projectApplicationServices.createProject(
{projectId, name: 'Test Project', ownerId: adminId, teamId, taskBoardId},
{authentication: authenticationMetadata.admin});
await timeout(125);
await projectApplicationServices.addTeamMemberToTeam({teamId, teamMemberId,
userId, role: 'developer'}, {authentication: authenticationMetadata.admin});
await projectApplicationServices.updateProjectName(
{projectId, name: 'Test Project v2'},
{authentication: authenticationMetadata.user});
console.log(await projectApplicationServices.findProjectsByCollaboratingUser(
{userId}, {authentication: authenticationMetadata.user}));
Task board context
The first example shows the most important use case operations of the
Application Services for the task board context:
Task board context: Application Services
class TaskBoardApplicationServices {
The next code implements a component for all necessary Domain Event
handlers:
Task board context: Domain Event handlers
class TaskBoardDomainEventHandlers {
this.activate = () => {
eventBus.subscribe('ProjectCreated', async ({data}) => {
await taskBoardRepository.save(new TaskBoard({id: data.taskBoardId}));
await accessRegistry.grantImplicitAccess(
`team/${data.teamId}`, `task-board/${data.taskBoardId}`);
});
eventBus.subscribe(TaskStatusChangedEvent.type, async ({data}) => {
const taskBoard =
await taskBoardRepository.findTaskBoardByTaskId(data.taskId);
if (!taskBoard) return;
taskBoard.updateTaskStatus(data.taskId, data.status);
await taskBoardRepository.save(taskBoard);
});
eventBus.subscribe('TeamMemberRemovedFromTeam', async event => {
const {data: {teamMemberId}} = event;
const tasks = await taskRepository.findTasksByAssigneeId(teamMemberId);
await Promise.all(tasks.map(async task => {
if (task.status === 'in progress') task.status = 'todo';
task.assigneeId = undefined;
await taskRepository.save(task);
}));
});
};
The class TaskBoardApplicationServices implements all use cases for the task
board context. Creating a new task and adding it to a board is done via the
command addNewTaskToTaskBoard(). The operation first verifies the required
access. In case of success, it persists a new task and attaches an according
summary to the target board. Also, the service grants implicit access to all
subjects with board access. The operation updateTaskTitle() is one of multiple
services for task modifications. The command removeTaskFromTaskBoard() is
an example of using the operation revokeImplicitAccess(). Domain Event
notifications are processed by the component TaskBoardDomainEventHandlers.
For every new project, a task board is created and persisted. Team member
removals cause affected tasks to be unassigned. Finally, every task status change
triggers a task summary update.
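Based on this description, the command addNewTaskToTaskBoard() could be sketched roughly as follows; the registry and repository objects are in-memory stand-ins, not the book’s actual components:

```javascript
// Stand-in access registry: records granted paths and implicit access rules.
const accessRegistry = {
  granted: new Set(),
  verifyAccess(subjectId, path) {
    if (!this.granted.has(`${subjectId}:${path}`)) throw new Error('access denied');
  },
  grantImplicitAccess(parentPath, path) { this.granted.add(`${parentPath}->${path}`); },
};
// Stand-in repositories backed by Maps instead of the filesystem.
const taskRepository = {tasks: new Map(), save(task) { this.tasks.set(task.id, task); }};
const taskBoardRepository = {
  boards: new Map(),
  load(id) { return this.boards.get(id); },
  save(board) { this.boards.set(board.id, board); },
};

function addNewTaskToTaskBoard({taskBoardId, taskId, title}, {subjectId}) {
  // 1. verify that the subject may modify the target board
  accessRegistry.verifyAccess(subjectId, `task-board/${taskBoardId}`);
  // 2. persist the new task
  taskRepository.save({id: taskId, title, status: 'todo'});
  // 3. attach an according task summary to the target board
  const board = taskBoardRepository.load(taskBoardId);
  board.taskSummaries.push({taskId, status: 'todo'});
  taskBoardRepository.save(board);
  // 4. grant implicit access to all subjects with board access
  accessRegistry.grantImplicitAccess(`task-board/${taskBoardId}`, `task/${taskId}`);
}
```

The four steps mirror the description above; only the operation name addNewTaskToTaskBoard() is taken from the text.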
The final code example shows an exemplary usage of the Application Services
for the task board context (run code):
Task board context: Overall usage
const taskBoardApplicationServices =
new MiddlewareProxy(new TaskBoardApplicationServices(
{taskRepository, taskBoardRepository, accessRegistry}));
const authenticationExtractor = (_, metadata = {}) => metadata.authentication;
const authenticationMiddleware = createAuthenticationMiddleware(
{authenticationTokenRegistry, authenticationExtractor});
taskBoardApplicationServices.addMiddleware(authenticationMiddleware);
const taskBoardDomainEventHandlers = new TaskBoardDomainEventHandlers(
{taskRepository, taskBoardRepository, accessRegistry, eventBus});
taskBoardDomainEventHandlers.activate();
await authenticationTokenRegistry.assignToken(
authenticationMetadata.subjectId, authenticationMetadata.token);
await accessRegistry.grantFullAccess(adminUserId);
Architectural overview
CQRS divides a software into a write side and a read side. Typically, the pattern
is applied to a selected area, but not to a complete system. For DDD-based
software, this can map to a context implementation. The concept itself does not
dictate specific technological decisions, but only implies that write and read
concerns are somehow separated. Still, in many cases the two sides employ
individual data storages and establish a synchronization mechanism. On top of
that, they can apply custom concepts, run in different processes and even use
distinct technologies. While a one-to-one alignment of both sides can be useful,
it is not mandatory. One write side may affect multiple read aspects and one
query functionality can aggregate information from several write sides.
High-level picture
The User Interface Layer is responsible for translating each meaningful user
action into either a Command or a Query. The resulting messages are delivered
to their responsible handler, which belongs to the Application Layer. Command
Handlers load data via Repositories, construct Write Model components and
execute a target action with the provided arguments. When encountering an
error, the overall process is aborted. In case of success, the resulting state change
is persisted. The synchronization mechanism consumes Write Model state and
forwards it to domain-specific transformations that produce Read Model data.
This information can be persisted with Repositories, but may also be saved with
other storage mechanisms. Query Handlers load data based on given parameters,
optionally construct domain-specific components and return a result.
Implicit CQRS
Many software projects implicitly apply CQRS without consciously aiming for
it. This is because it often makes sense to execute different types of use cases in
different ways. As explained earlier, it can be about data structures, consistency
models, performance aspects or associated code complexity. Even the sole use of
a caching mechanism can be interpreted as some form of CQRS. While
Command Handlers operate on current state, the according Query Handlers
respond with cached data. Another common scenario is when a software
provides reporting functionalities. Typically, this is achieved by exposing
preprocessed and aggregated data from a specialized storage. As an example,
consider a large online software platform that determines its number of users.
Most certainly, this is not done by querying a production database.
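The caching scenario can be illustrated with a small sketch (not from the book): the command path always writes to the primary storage, while the query path serves a possibly stale cached value.

```javascript
// Illustrative sketch: a cache in front of a query makes the read side serve
// potentially stale data, while commands operate on current state.
const database = new Map(); // stand-in for the primary storage
const cache = new Map();

// command: modifies current state directly
function registerUser(id) { database.set(id, {id}); }

// query: answers from the cache; the value is only computed once until cleared
function getUserCount() {
  if (!cache.has('userCount')) cache.set('userCount', database.size);
  return cache.get('userCount');
}
```

Until the cache is invalidated, the query keeps returning the previously computed count, which is exactly the separation of write and read concerns described above.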
Different terminologies
The Write Model and the Read Model can alternatively be called Command Model and Query
Model. However, the terms Command and Query describe concepts that are part of the
Application Layer. As the model implementation belongs to the Domain Layer, the use of distinct
terms helps to differentiate between those areas.
Write Model
The Write Model and its implementation consist of all the actions that result in
state changes. Effectively, a Write Model component is an Aggregate that
contains only command operations. Consequently, the enclosed structure can be
an arbitrary combination of Entities and Value Objects with one designated root.
The associated Repository component must exclusively provide the means to
load an instance via identifier and to save an Aggregate. Additional querying
capabilities are not required. Overall, the write part of a Domain Model
implementation deals with transactions, consistency and invariants. All the
contained actions always operate on current state. For many systems, this is the
more critical part of a Domain Model. Therefore, it can make sense to focus the
majority of development efforts on this area.
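The minimal load/save interface described above can be sketched as follows; this is an in-memory stand-in, not the book’s filesystem-based implementation:

```javascript
// Sketch of the minimal interface a write-side Repository needs: loading an
// instance via identifier and saving an Aggregate, without further queries.
class WriteModelRepository {
  #entries = new Map();
  async load(id) {
    if (!this.#entries.has(id)) throw new Error(`no aggregate with id ${id}`);
    return this.#entries.get(id);
  }
  async save(aggregate) { this.#entries.set(aggregate.id, aggregate); }
}
```

Any additional querying capability would belong to the read side instead.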
The first two examples show the code for the review and the product Entity
classes:
Product reviews: Product Entity
class Product {
Product reviews: Review Entity
class Review {
set rating(rating) {
if (typeof rating != 'number' || rating < 0 || rating > 5)
throw new Error('invalid rating');
this.#rating = rating;
}
The next code implements a dedicated service for calculating the average rating
of a product (run code usage):
Product reviews: Rating calculator service
const ratingCalculator = {
getAverageRating: ratings => {
if (ratings.length === 0) return 0;
return ratings.reduce((sum, rating) => sum + rating, 0) / ratings.length;
},
};
The class Product represents the product Entity type. Its constructor accepts
arguments for an identifier, a name and a category. All variables are assigned to
according properties. The class Review expresses the Domain Model concept of
a review. Again, all the passed in constructor arguments are assigned to their
corresponding fields. However, the rating is defined as a private property
together with according accessors. The setter function ensures that every passed
in value is a valid number within the allowed range. The Domain Service
ratingCalculator implements the operation getAverageRating() to calculate
the average rating from a list of ratings. The combination of the three
components makes it possible to cover all previously defined use cases. Consequently,
they represent a complete expression of the Domain Model.
The next examples implement the Repository components for both Entity types:
Product reviews: Product Repository
class ProductRepository extends ConcurrencySafeFilesystemRepository {
constructor({storageDirectory}) {
super({storageDirectory,
convertToData: ({id, name, category}) => ({id, name, category}),
convertToEntity: data => new Product(data)});
}
}
Product reviews: Review Repository
class ReviewRepository extends ConcurrencySafeFilesystemRepository {
constructor({storageDirectory}) {
super({storageDirectory,
convertToData:
({id, productId, comment, rating}) => ({id, productId, comment, rating}),
convertToEntity: data => new Review(data)});
}
async findReviewsForProduct(productId) {
const files = await readdir(this.storageDirectory);
const ids = files.map(filename => filename.replace('.json', ''));
const reviews = await Promise.all(ids.map(id => this.load(id)));
return reviews.filter(review => review.productId === productId);
}
The code starts with instantiating Repository components for products and for
reviews. This is followed by creating an exemplary product with the category
“Laptop”. As next step, two new reviews are added. One of them has a low
rating and an additional comment. Afterwards, all existing reviews for the
product are retrieved via the query findReviewsForProduct(). The returned
items are used to calculate the average rating using the service operation
getAverageRating(). Then, the review with the low rating is updated and
saved. Finally, the query findReviewsForProduct() is invoked again, and the
average rating is recalculated and logged. While this implementation works, it
performs poorly, as it always requires loading all reviews for a product.
This problem can be mitigated by applying CQRS.
The next example shows a Read Model storage component for product ratings
(run code usage):
Product reviews: Product ratings
class ProductRatings {
#storage;
constructor({storageDirectory}) {
this.#storage = new JSONFileStorage(storageDirectory);
}
async addProduct(productId) {
await this.#storage.save(productId, {ratings: {}});
}
async getAverageRating(productId) {
return (await this.#storage.load(productId)).averageRating;
}
The final code shows an exemplary usage of the implementation with CQRS
(run code):
Product reviews: Usage of CQRS variant
const productRepository = new ProductRepository(productRepositoryConfig);
const reviewRepository = new ReviewRepository(reviewRepositoryConfig);
const productRatings = new ProductRatings(productRatingsConfig);
The code starts with creating Repository instances for products and for reviews.
Both components are reduced to their constructor without custom queries. Next,
the class ProductRatings is instantiated. Then, the helper functions
createProduct() and createReview() are defined. Each of them is responsible
for creating a respective Entity, persisting it and updating the Read Model. The
actual use case code is equivalent to the first approach. Determining the average
product rating is done by executing the function getAverageRating() of the
product ratings component. The performance issue from the previous
implementation is eliminated. For determining the average rating, only a single
file must be loaded. The manual update of a Read Model is one of multiple
synchronization approaches. The most common ones are explained in the next
section.
Read Model synchronization
There are multiple possibilities for triggering Read Model updates in response to
Write Model changes. The following subsections describe and illustrate the most
common technical synchronization approaches. This is preceded by the
introduction of an example topic, which is used for the implementation of each
individual approach.
The first examples show the implementations of the Entity types for songs and
users (run code usage):
Song recommendation: Song Entity
class Song {
Song recommendation: User Entity
class User {
constructor({id, email}) {
Object.defineProperty(this, 'id', {value: id, writable: false});
Object.defineProperty(this, 'email', {value: email, writable: false});
}
addLike(songId) { this.#likedSongs.push(songId); }
removeLike(songId) {
this.#likedSongs.splice(this.#likedSongs.indexOf(songId), 1);
}
The next code provides a usage example that serves as a blueprint for the
interfaces of the subsequent synchronization approaches:
Song recommendation: Abstract usage example
const userId1 = generateId(), userId2 = generateId();
const songId1 = generateId(), songId2 = generateId();
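The remaining blueprint steps are described in the following paragraph. As a hedged reconstruction, they could look roughly like this; the use case operations are minimal synchronous in-memory stand-ins, and all names, e-mail addresses and genres are hypothetical values:

```javascript
// Stand-in implementations so the blueprint's expected outcome is observable.
let counter = 0;
const generateId = () => `id-${++counter}`;
const songs = new Map(), users = new Map();
const createUser = ({id, email}) => users.set(id, {id, email, likedSongs: []});
const createSong = ({id, name, genre}) => songs.set(id, {id, name, genre, likes: 0});
const addLike = ({userId, songId}) => {
  users.get(userId).likedSongs.push(songId);
  songs.get(songId).likes += 1;
};
const removeLike = ({userId, songId}) => {
  const liked = users.get(userId).likedSongs;
  liked.splice(liked.indexOf(songId), 1);
  songs.get(songId).likes -= 1;
};
const getMostLikedSong = genre =>
  [...songs.values()].filter(song => song.genre === genre)
    .sort((a, b) => b.likes - a.likes)[0];

const userId1 = generateId(), userId2 = generateId();
const songId1 = generateId(), songId2 = generateId();
createUser({id: userId1, email: 'user1@example.com'}); // hypothetical values
createUser({id: userId2, email: 'user2@example.com'});
createSong({id: songId1, name: 'First Song', genre: 'Rock'});
createSong({id: songId2, name: 'Second Song', genre: 'Rock'});
addLike({userId: userId1, songId: songId1});
addLike({userId: userId2, songId: songId1});
addLike({userId: userId1, songId: songId2});
addLike({userId: userId2, songId: songId2});
removeLike({userId: userId1, songId: songId1});
// after the consistency delay, the second song is the most liked "Rock" song
```

With the like for the first song removed again, the query yields the second song, which matches the expectation stated below.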
The code starts with generating identifiers for two users and two songs. Next, it
creates and saves the according user objects via the function createUser(). This
is followed by instantiating and saving two song Entities with the operation
createSong(). Afterwards, the command addLike() is invoked four times,
resulting in adding one like per user to each song. Then, the operation
removeLike() is invoked to remove a single like again. Finally, a timeout is
requested, after which the most liked song of the corresponding genre is logged.
When executing this code with one of the following synchronization approaches,
the query getMostLikedSong() should return the second song. While both
Entities are equally popular at some point, the number of likes for the first song
is decreased again.
Pushing updates
One synchronization approach is to actively trigger Read Model updates from
the write side. Whenever a state change is persisted successfully, all derived data
structures are requested to be updated. Typically, this is done by forwarding all
modified values. As explained earlier, the process should only happen
asynchronously. In general, this approach makes sense for smaller systems and
when first introducing CQRS. Its implementation is straightforward and results
in a low consistency delay. However, for most software products, it is not a long-
term solution. The key disadvantage is that the write side becomes dependent on
each downstream read part. This is because it must know which data structures
to update and how this is achieved. Also, the synchronization requires custom
efforts to ensure reliability.
The following code example provides a component that maintains Read Models
of songs and their likes (run code usage):
Song recommendation: Most liked songs component
class MostLikedSongs {
addLike(songId) {
const song = this.#songs.get(songId);
song.likes += 1;
const mostLikedSong = this.#mostLikedSongByGenre.get(song.genre);
if (!mostLikedSong || song.likes >= mostLikedSong.likes)
this.#mostLikedSongByGenre.set(song.genre, song);
}
removeLike(songId) {
const song = this.#songs.get(songId);
song.likes -= 1;
const mostLikedSong = this.#mostLikedSongByGenre.get(song.genre);
if (song.id === mostLikedSong.id) {
const songList = Array.from(this.#songs.values());
const songsWithSameGenre = songList.filter(({genre}) => genre === song.genre);
const newMostLiked = songsWithSameGenre.sort((a, b) => b.likes - a.likes)[0];
this.#mostLikedSongByGenre.set(song.genre, newMostLiked);
}
}
The class MostLikedSongs makes it possible to determine the most liked song per
genre. For this purpose, it maintains two Read Models, both implemented as Map
instances. While the Map songs contains all songs together with their likes, the
most popular ones are stored redundantly in mostLikedSongByGenre. Adding a
new entry is done with the operation addSong(). The command addLike()
retrieves the song for a given identifier and adds a like. Also, it updates the most
popular one if the changed song has at least the same like count. The command
removeLike() retrieves a song entry and decrements its likes. Whenever this
affects the most liked item in a genre, the operation re-evaluates this aspect.
Finally, the query getMostLikedSong() returns the current favorite for a given
genre.
Mixing of Infrastructure and Domain
The component for maintaining Read Models in the example partially mixes infrastructural
concerns with domain-related aspects. Generally speaking, the two parts should be separated from
each other. At the same time, it is advisable to be pragmatic when dealing with rather trivial or
generic Read Model transformations.
The next implementation provides the operations for all use cases with interfaces
that match the previously shown abstract usage code (run code usage):
Song recommendation: Push updates
const songDatabase = new Map();
const userDatabase = new Map();
const mostLikedSongs = new MostLikedSongs();
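The use case operations described below can be sketched as follows; MostLikedSongs is a compact stand-in for the class shown earlier, and the wiring illustrates how each command pushes its update to the read side:

```javascript
// Compact stand-in for the MostLikedSongs class shown earlier.
class MostLikedSongs {
  #songs = new Map();
  #mostLikedSongByGenre = new Map();
  addSong({id, name, genre}) { this.#songs.set(id, {id, name, genre, likes: 0}); }
  addLike(songId) {
    const song = this.#songs.get(songId);
    song.likes += 1;
    const mostLiked = this.#mostLikedSongByGenre.get(song.genre);
    if (!mostLiked || song.likes >= mostLiked.likes)
      this.#mostLikedSongByGenre.set(song.genre, song);
  }
  removeLike(songId) {
    const song = this.#songs.get(songId);
    song.likes -= 1;
    if (song.id === this.#mostLikedSongByGenre.get(song.genre).id) {
      const sameGenre = [...this.#songs.values()].filter(s => s.genre === song.genre);
      this.#mostLikedSongByGenre.set(
        song.genre, sameGenre.sort((a, b) => b.likes - a.likes)[0]);
    }
  }
  getMostLikedSong(genre) { return this.#mostLikedSongByGenre.get(genre); }
}

const songDatabase = new Map(), userDatabase = new Map();
const mostLikedSongs = new MostLikedSongs();

const createUser = user => userDatabase.set(user.id, {...user, likedSongs: []});
const createSong = song => { // persists the song and pushes it to the read side
  songDatabase.set(song.id, song);
  mostLikedSongs.addSong(song);
};
const addLike = ({userId, songId}) => {
  userDatabase.get(userId).likedSongs.push(songId);
  mostLikedSongs.addLike(songId); // push-based Read Model update
};
const removeLike = ({userId, songId}) => {
  const liked = userDatabase.get(userId).likedSongs;
  liked.splice(liked.indexOf(songId), 1);
  mostLikedSongs.removeLike(songId);
};
const getMostLikedSong = genre => mostLikedSongs.getMostLikedSong(genre);
```

Note how every command knows the read side component and which of its operations to call; this is the coupling the text identifies as the key disadvantage.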
The code starts with creating Map objects as databases for songs and users. This
is followed by instantiating the MostLikedSongs class. Next, all use case
operations are implemented. The function createUser() instantiates and persists
a user Entity. Creating and saving a song is done with the command
createSong(). This function also executes the operation addSong() of the
MostLikedSongs instance. The commands addLike() and removeLike() add or
remove a liked song for a user. Internally, they both call the matching update
action on the MostLikedSongs instance. Finally, the query getMostLikedSong()
returns the most liked item in a genre. Overall, this approach provides a good
performance. One disadvantage is that the write side depends on the read side.
Also, the implementation fails to guarantee a reliable synchronization.
On-demand computation
Another possibility for synchronization is to lazily compute Read Models. Upon
request, the required write parts are accessed and all derived data is
computed. The amount of total computations can be reduced by caching the
results. This approach has multiple advantages. For one, any Read-Model-related
processing is deferred until needed. Secondly, when combined with caching, the
consistency delay can be configured. One disadvantage is that every first request
for a data set has an increased latency. For complex computations, this might not
even be feasible. Compared to the first approach, the write side stays
independent of the read part. However, it must provide querying capabilities to
expose required information. Also, the reliability is slightly better, as Read
Model computations can be retried when failing.
The next code shows the implementation of the defined use cases with an on-
demand Read Model computation (run code usage):
Song recommendation: On-demand computation
const songDatabase = new Map();
const userDatabase = new Map();
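The on-demand variant described below can be sketched like this; createCachedFunction is a simplified stand-in for the caching helper mentioned in the text, here with a time-to-live in milliseconds:

```javascript
// State changes only touch the write model; the query computes lazily.
const songDatabase = new Map(), userDatabase = new Map();
const createUser = user => userDatabase.set(user.id, user);
const createSong = song => songDatabase.set(song.id, {...song, likes: 0});
const addLike = ({songId}) => { songDatabase.get(songId).likes += 1; };
const removeLike = ({songId}) => { songDatabase.get(songId).likes -= 1; };

// Simplified stand-in for the caching decorator mentioned in the text.
const createCachedFunction = (fn, timeToLive = 100) => {
  const cache = new Map();
  return arg => {
    const entry = cache.get(arg);
    if (entry && Date.now() - entry.time < timeToLive) return entry.result;
    const result = fn(arg);
    cache.set(arg, {result, time: Date.now()});
    return result;
  };
};

const getMostLikedSong = createCachedFunction(genre =>
  [...songDatabase.values()].filter(song => song.genre === genre)
    .sort((a, b) => b.likes - a.likes)[0]);
```

The time-to-live value is effectively the configurable consistency delay the text refers to.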
The code also starts with creating two Map objects as databases for both Entity
types. All the use case operations related to state changes exclusively update the
Write Model. In contrast, the query getMostLikedSong() computes the most
liked song for a given genre on demand. The operation retrieves all songs from
the database, filters by affected genre, sorts by likes and returns the first item.
Caching is enabled by decorating the operation with the helper
createCachedFunction(). Overall, this approach has a low complexity. The
main disadvantage is the poor read performance. Determining the most liked
song for a genre requires looping through all song Entities. While the caching
helps to reduce subsequent computations, the approach is not feasible for a larger
number of persistent songs.
Event notifications
The third synchronization approach is to use Event notifications for triggering
Read Model updates. This especially makes sense when the implementation is
based on DDD and incorporates Domain Events or applies Event Sourcing. For
each state change that is successfully persisted, all related events are distributed.
The synchronization mechanism for associated Read Models receives the event
notifications and requests to update the derived data structures. This approach
has multiple advantages. For one, the write part stays completely agnostic of any
read concerns. Also, every read part receives all required information without
the need to actively query for it. Furthermore, since the event distribution should
provide a guaranteed delivery, the synchronization can also be considered
reliable. One potential downside of this approach is the additional complexity.
The following code implements the specified use case operations with Domain
Event notifications to trigger the Read Model synchronization (run code usage):
Song recommendation: Event notifications
const SongCreatedEvent = createEventType(
'SongCreated', {songId: 'string', name: 'string', genre: 'string'});
const UserLikedSongEvent = createEventType(
'UserLikedSong', {userId: 'string', songId: 'string'});
const UserRemovedSongLikeEvent = createEventType(
'UserRemovedSongLike', {userId: 'string', songId: 'string'});
const eventHandlers = {
SongCreated: ({data: {songId, name, genre}}) =>
mostLikedSongs.addSong({id: songId, name, genre}),
UserLikedSong: ({data}) => mostLikedSongs.addLike(data.songId),
UserRemovedSongLike: ({data}) => mostLikedSongs.removeLike(data.songId),
};
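The subscription wiring for these handlers is not shown above. As a hedged sketch, the handlers could be registered on an Event Bus like this; the bus is a minimal synchronous stand-in and mostLikedSongs merely records the calls:

```javascript
// Minimal synchronous Event Bus stand-in.
const subscriptions = new Map();
const eventBus = {
  subscribe(type, handler) { subscriptions.set(type, handler); },
  publish(event) { subscriptions.get(event.type)?.(event); },
};
// Recording stand-in for the MostLikedSongs component.
const calls = [];
const mostLikedSongs = {
  addSong: song => calls.push(['addSong', song]),
  addLike: songId => calls.push(['addLike', songId]),
  removeLike: songId => calls.push(['removeLike', songId]),
};
const eventHandlers = {
  SongCreated: ({data: {songId, name, genre}}) =>
    mostLikedSongs.addSong({id: songId, name, genre}),
  UserLikedSong: ({data}) => mostLikedSongs.addLike(data.songId),
  UserRemovedSongLike: ({data}) => mostLikedSongs.removeLike(data.songId),
};
// register one subscription per event type
Object.entries(eventHandlers).forEach(
  ([type, handler]) => eventBus.subscribe(type, handler));
```

With this wiring, the write side only publishes events and stays agnostic of all read concerns.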
Rebuilds
Depending on the synchronization mechanism and the use of persistence, Read
Models may need to be rebuilt at certain points. Especially when derived data
structures are exclusively kept in memory, there must be a possibility to recreate
them completely. One common scenario is when a software needs to be
restarted. Even more, there are scenarios where persistent Read Models need to
be recomputed, for example when the transformation mechanism for a
denormalized data structure contains errors. Also, when additional information
is added retroactively to Write Model components, rebuilding related Read Models
can be necessary. In general, there is no silver bullet for implementing an
adequate mechanism to enable this functionality. However, it commonly requires
exposing querying capabilities on the related write parts.
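A rebuild can be sketched as follows (not from the book): the write side exposes a query to enumerate all songs, and the derived data structure is recreated from scratch; the database contents are hypothetical values:

```javascript
// Stand-in write-side storage with two hypothetical songs.
const songDatabase = new Map([
  ['s1', {id: 's1', genre: 'Rock', likes: 1}],
  ['s2', {id: 's2', genre: 'Rock', likes: 3}],
]);
// The querying capability the write side must expose for rebuilds.
const loadAllSongs = () => [...songDatabase.values()];

// Recreates the in-memory Read Model completely from the write side.
const rebuildMostLikedSongs = () => {
  const mostLikedSongByGenre = new Map();
  for (const song of loadAllSongs()) {
    const current = mostLikedSongByGenre.get(song.genre);
    if (!current || song.likes > current.likes)
      mostLikedSongByGenre.set(song.genre, song);
  }
  return mostLikedSongByGenre;
};
```

The same function can serve both for software restarts and for recomputing an erroneous derived structure.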
Naming conventions
The type names for Commands and Queries must be chosen with care. In
general, each one should describe the respective use case to execute.
Furthermore, the selected terms should be expressive and ideally apply the
prevailing Ubiquitous Language. One useful pattern is to combine the action to
execute in present tense with the primarily affected subject. For Commands, the
subject is the associated Write Model component and the action is the operation
to invoke. For Queries, the subject is typically the type of data to be retrieved.
The action is either a single operation or a process, which can be substituted with
verbs like “get” or “find”. Whenever a use case does not align one-to-one with
Domain Model components, the naming should express this circumstance.
The following tables show Command and Query naming examples, of which
some adhere to the general recommendation:
Exemplary Command names
Component     Action              Potential Command type name
browser tab   reload()            ReloadBrowserTab
meeting       addParticipant()    AddMeetingParticipant
guestbook     writeMessage()      WriteGuestbookMessage
user          login()             LoginUser
newsletter    subscribe()         SubscribeToNewsletter
const messageTypeFactory =
{createMessageType, setIdGenerator, setDefaultMetadataProvider};
The usage of this component can be seen in the next example (run code):
Messages: Message Type factory usage
const CreateUserCommand = createMessageType('CreateUser',
{userId: 'string', email: 'string', username: ['string', 'undefined']});
const GetUserQuery = createMessageType('GetUser', {userId: 'string'});
messageTypeFactory.setIdGenerator(generateId);
messageTypeFactory.setDefaultMetadataProvider(() => ({creationTime: new Date()}));
console.log(new CreateUserCommand(
{data: {userId: '1', email: 'alex@example.com'}, metadata: {}}));
console.log(new CreateUserCommand(
{data: {userId: '1', email: 'james@example.com', username: 'james'}}));
console.log(new GetUserQuery({data: {userId: '1'},
metadata: {authentication: {subjectId: '123'}}}));
Passive-aggressive Command
When a Command is mistakenly modeled as Domain Event, its type name typically reflects this
circumstance. [Fowler] calls such a message a “passive-aggressive command”, as it seemingly
describes an occurrence, but in fact is an instruction. This also means that the type name itself can
serve as a hint to an underlying design issue.
The usage of this functionality can be seen in the next example (run code):
Messages: Message forwarder factory usage
class CommandHandlers {
handleSecondUseCaseCommand(command) {
console.log('executing second use case with', command.data);
}
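The forwarder itself is not shown above. As a hedged sketch of the idea, it could derive the handler function name from the message type; the naming scheme handle&lt;Type&gt;Command and the factory name createMessageForwarder are assumptions based on the usage shown:

```javascript
// Sketch: forwards a message to the matching handler function on a target.
const createMessageForwarder = (target, {messageSuffix}) => message => {
  const handlerName = `handle${message.type}${messageSuffix}`;
  if (typeof target[handlerName] !== 'function')
    throw new Error(`no handler for message type ${message.type}`);
  return target[handlerName](message);
};

class CommandHandlers {
  handleFirstUseCaseCommand(command) { return ['first', command.data]; }
  handleSecondUseCaseCommand(command) { return ['second', command.data]; }
}
const forwardCommand = createMessageForwarder(
  new CommandHandlers(), {messageSuffix: 'Command'});
```

This keeps the mapping between message types and handler functions purely conventional, without any explicit registration.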
The first example defines Domain Event types for starting and stopping an
individual activity:
Time Tracking: Domain Events
const TimeTrackingActivityStartedEvent = createEventType(
'TimeTrackingActivityStarted', {timeTrackingId: 'string',
activityId: 'string', activityLabel: 'string', startTime: 'number'});
const TimeTrackingActivityStoppedEvent = createEventType(
'TimeTrackingActivityStopped',
{timeTrackingId: 'string', activityId: 'string', endTime: 'number'});
The next code implements the time tracking Entity with activities as contained
anonymous objects (run code):
Time Tracking: Time Tracking Entity
class TimeTracking {
constructor({id, eventBus}) {
Object.defineProperty(this, 'id', {value: id, writable: false});
this.#eventBus = eventBus;
}
stopActivity({activityId, endTime}) {
this.#activitiesById[activityId].endTime = endTime;
this.#eventBus.publish(new TimeTrackingActivityStoppedEvent(
{timeTrackingId: this.id, activityId, endTime}));
}
The following example provides the time sheet report Read Model (run code):
Time Tracking: Time Sheet
class TimeSheet {
recordActivityStop({activityId, endTime}) {
const activity = this.#activitiesById[activityId];
Object.assign(activity, {endTime, duration: endTime - activity.startTime});
const indexToInsert = this.#completedActivities.findIndex(
completedActivity => completedActivity.duration < activity.duration);
if (indexToInsert === -1) this.#completedActivities.push(activity);
else this.#completedActivities.splice(indexToInsert, 0, activity);
}
}
}
The last code implements a class for event handlers that synchronizes the Read
Model data upon Domain Event notifications:
Time Tracking: Read Model synchronization
class ReadModelSynchronization {
#timeSheetStorage;
constructor({timeSheetStorage, eventBus}) {
this.#timeSheetStorage = timeSheetStorage;
this.activate = () =>
['TimeTrackingActivityStarted', 'TimeTrackingActivityStopped'].forEach(
type => eventBus.subscribe(type, event => this[`handle${event.type}Event`](event)));
}
async handleTimeTrackingActivityStartedEvent(event) {
const {timeTrackingId, activityId, activityLabel, startTime} = event.data;
if (!this.#timeSheetStorage.has(timeTrackingId))
this.#timeSheetStorage.set(timeTrackingId, new TimeSheet());
const timeSheet = this.#timeSheetStorage.get(timeTrackingId);
timeSheet.recordActivityStart({activityId, activityLabel, startTime});
}
async handleTimeTrackingActivityStoppedEvent(event) {
const {timeTrackingId, activityId, endTime} = event.data;
const timeSheet = this.#timeSheetStorage.get(timeTrackingId);
timeSheet.recordActivityStop({activityId, endTime});
}
The illustrated code provides the implementation for the write side and the read
side together with a synchronization mechanism. This example functionality is
extended and completed throughout the following subsections that describe and
illustrate Command and Query handling. For simplicity reasons, the following
code uses plain objects for Commands and Queries instead of defining
specialized message types.
Command handling
Command Handlers process Commands and affect the write side of a software.
Typically, they load selected Write Model components via Repositories, operate
on them with given inputs and persist all state changes. When encountering an
error, they should throw an exception. This can be due to an infrastructural
aspect or a domain-specific concern, such as an invariant protection. Other than
that, Command Handlers should have no return value. Nevertheless, simple
operational status codes can be acceptable. As with any Application Service
implementation, one individual service should at most affect a single transaction.
There may be situations where processing a Command requires information
from a Query Handler. This type of dependency should ideally be avoided, as it
makes a write side dependent on a read side.
The following code shows a class for the categorized Command Handlers of the
previously introduced time tracking functionality:
Time Tracking: Command Handlers
class CommandHandlers {
#timeTrackingStorage; #eventBus;
constructor({timeTrackingStorage, eventBus}) {
this.#timeTrackingStorage = timeTrackingStorage;
this.#eventBus = eventBus;
}
async handleCreateTimeTrackingCommand(command) {
const {timeTrackingId} = command.data;
if (this.#timeTrackingStorage.has(timeTrackingId)) return;
this.#timeTrackingStorage.set(timeTrackingId,
new TimeTracking({id: timeTrackingId, eventBus: this.#eventBus}));
}
async handleStartTimeTrackingActivityCommand(command) {
const {timeTrackingId, activityId, activityLabel, startTime} = command.data;
const timeTracking = this.#timeTrackingStorage.get(timeTrackingId);
timeTracking.startActivity({activityId, activityLabel, startTime});
}
async handleStopTimeTrackingActivityCommand(command) {
const {timeTrackingId, activityId, endTime} = command.data;
const timeTracking = this.#timeTrackingStorage.get(timeTrackingId);
timeTracking.stopActivity({activityId, endTime});
}
}
The usage of this part can be seen in the next example (run code):
Time Tracking: Command Handlers usage
const timeTrackingStorage = new Map();
const commandHandlers = new CommandHandlers({timeTrackingStorage, eventBus});
const commands = [
{type: 'CreateTimeTracking', data: {timeTrackingId}},
{type: 'StartTimeTrackingActivity',
data: {timeTrackingId, activityId, activityLabel, startTime: Date.now()}},
{type: 'StopTimeTrackingActivity',
data: {timeTrackingId, activityId, endTime: Date.now() + 1000 * 3600}},
];
for (const command of commands)
await commandHandlers.handleCommand(command);
The class CommandHandlers implements all write-related use cases for the time
tracking functionality. Its constructor expects a storage component and an Event
Bus. Also, it creates a message forwarder mechanism and assigns it to the field
handleCommand. The operation handleCreateTimeTrackingCommand() is
responsible for creating a time tracking and saving it. Starting and stopping an
activity is the responsibility of the functions
handleStartTimeTrackingActivityCommand() and
handleStopTimeTrackingActivityCommand(). The usage example starts with
instantiating a Map for saving time tracking instances. Next, it creates an instance
of the component CommandHandlers with the storage component and an Event
Bus as arguments. Afterwards, the code defines two identifiers and an activity
label. Finally, an array of exemplary commands is created and issued to the
Command Handlers in an asynchronous loop.
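The mentioned message forwarder mechanism itself is not part of the excerpt. One possible sketch, assuming handler methods follow a handle&lt;Type&gt;Command naming convention (the helper name createMessageForwarder is hypothetical and not the book's actual implementation):

```javascript
// Minimal sketch of a message forwarder: it derives the handler method name
// from the message type and dispatches the message accordingly
const createMessageForwarder = (target, {messageSuffix}) => message => {
  const handlerName = `handle${message.type}${messageSuffix}`;
  if (typeof target[handlerName] !== 'function')
    throw new Error(`invalid message type: ${message.type}`);
  return target[handlerName](message);
};

// exemplary handler collection using the forwarder
class GreetingCommandHandlers {
  constructor() {
    this.handleCommand = createMessageForwarder(this, {messageSuffix: 'Command'});
  }
  async handleSayHelloCommand(command) {
    return `hello, ${command.data.name}`;
  }
}

const handlers = new GreetingCommandHandlers();
handlers.handleCommand({type: 'SayHello', data: {name: 'world'}})
  .then(greeting => console.log(greeting));
```

The same forwarder shape works for Queries by passing a different message suffix.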
Misuse of commands
There are resources and technologies that promote the idea of passing
Commands directly as arguments to Domain Layer components. In my opinion,
this is problematic, especially with regard to Software Architecture. For one,
Commands are a concept of the Application Layer and not the Domain part.
Domain Model components should not be aware of their existence. Secondly,
they are a description of a request to execute a use case. Consequently,
Commands seem ill-suited as arguments for functions of domain-related
components. Also, a Domain Model implementation can use rich data structures
for function arguments, such as Value Objects. In contrast, a Command must
contain simple data independent of such structures. Finally, a Command may not
even map to a specific Domain Layer component at all.
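As an illustration of this separation, an Application Service can translate plain Command data into richer domain arguments before invoking the Domain Model. The following sketch uses simplified stand-in types:

```javascript
// simplified Value Object stand-in with basic validation
class EmailAddress {
  constructor(value) {
    if (typeof value != 'string' || !value.includes('@'))
      throw new Error('invalid e-mail address');
    this.value = value;
  }
}

// simplified Entity stand-in; it only accepts domain-level arguments
class User {
  changeEmailAddress(emailAddress) {
    if (!(emailAddress instanceof EmailAddress))
      throw new Error('invalid e-mail address argument');
    this.emailAddress = emailAddress;
  }
}

// the handler unpacks plain Command data and constructs a rich Value Object;
// the Entity never sees the Command itself
const handleChangeUserEmailAddressCommand = (user, command) => {
  const {emailAddress} = command.data;
  user.changeEmailAddress(new EmailAddress(emailAddress));
};

const user = new User();
handleChangeUserEmailAddressCommand(
  user, {type: 'ChangeUserEmailAddress', data: {emailAddress: 'jane@example.com'}});
console.log(user.emailAddress.value);
```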
Query handling
Query Handlers respond to Queries and access the read part of a software. Their
mechanics can vary depending on the applied patterns and used technologies.
The main responsibility is to retrieve data based on given input criteria.
Typically, this is done by either accessing a persistent storage or an in-memory
representation. The stored data can be structured or unstructured, in which case it
may be transformed into specialized Read Model components. The final result is
returned to the Query sender. Errors that occur during the process should not be
forwarded as exceptions. Ideally, they are translated into meaningful return
values. Generally, Query Handlers should be free of side effects and must
therefore not perform any state changes. Also, they should not depend on
Command Handlers.
The following code shows a class for categorized Query Handlers of the time
tracking implementation:
Time Tracking: Query Handlers
class QueryHandlers {
#timeSheetStorage;
constructor({timeSheetStorage}) {
this.#timeSheetStorage = timeSheetStorage;
}
async handleGetTimeSheetQuery(query) {
const {timeTrackingId} = query.data;
const timeSheet = this.#timeSheetStorage.get(timeTrackingId);
return timeSheet.getCompletedActivities();
}
}
The next example shows an exemplary combined usage of the time tracking
functionality (run code):
Time Tracking: Overall usage
const readModelSynchronization =
new ReadModelSynchronization({timeSheetStorage, eventBus});
const queryHandlers = new QueryHandlers({timeSheetStorage});
readModelSynchronization.activate();
const commands = [
{type: 'CreateTimeTracking', data: {timeTrackingId}},
{type: 'StartTimeTrackingActivity', data: {timeTrackingId,
activityId: activity1Id, activityLabel: 'coding', startTime: time1}},
{type: 'StopTimeTrackingActivity', data: {timeTrackingId,
activityId: activity1Id, endTime: time2}},
{type: 'StartTimeTrackingActivity', data: {timeTrackingId,
activityId: activity2Id, activityLabel: 'testing', startTime: time2}},
{type: 'StopTimeTrackingActivity', data: {timeTrackingId,
activityId: activity2Id, endTime: time3}},
];
for (const command of commands)
await commandHandlers.handleCommand(command);
await timeout(125);
const query = {type: 'GetTimeSheet', data: {timeTrackingId}};
console.log(await queryHandlers.handleQuery(query));
class CommandHandlers {
handleUseCaseCommand(command) { /* .. */ }
}
The following code shows an in-memory storage component with index support,
which is used for the Read Model implementations in this chapter:
class InMemoryIndexedStorage {
#recordsById = new Map(); #indexes; #indexMaps;
constructor({indexes = []}) {
this.#indexes = indexes;
this.#indexMaps = Object.fromEntries(indexes.map(index => [index, new Map()]));
}
update(id, updates) {
const oldRecord = this.#recordsById.get(id) || {};
const newRecord = {...oldRecord, ...updates};
this.#recordsById.set(id, newRecord);
const indexesToUpdate = this.#indexes.filter(index => index in updates);
for (const index of indexesToUpdate) {
if (index in oldRecord) {
const oldIndexIds = this.#indexMaps[index].get(`${oldRecord[index]}`);
const idIndex = oldIndexIds.indexOf(id);
if (idIndex > -1) oldIndexIds.splice(idIndex, 1);
}
const newIndexIds = this.#indexMaps[index].get(`${newRecord[index]}`) || [];
this.#indexMaps[index].set(`${newRecord[index]}`, newIndexIds.concat(id));
}
}
load(id) {
return this.#recordsById.get(id);
}
findByIndex(index, indexValue) {
return (this.#indexMaps[index].get(`${indexValue}`) || [])
.map(id => this.#recordsById.get(id));
}
}
Project context
The Application Services component of the project context is split into write and
read concerns. All the write-related behavior is transformed into Commands and
corresponding Command Handlers. This incorporates the use cases for creating
projects, updating their names, adding and removing team members, and
updating member roles. In contrast, the read-related service for finding all
projects of individual users is implemented as a Query functionality. The
Domain Layer of the context is adapted in different ways. For one, specialized
Domain Event types are introduced for all relevant state changes. Secondly, the
Write Model implementation is extended to create the corresponding event
instances. For the synchronization of Read Model data, the Application Layer is
extended with a component that processes selected event notifications.
The first examples provide the definitions for all Commands and Queries:
Project context: Commands
const CreateProjectCommand = createMessageType('CreateProject', {name: 'string',
projectId: 'string', ownerId: 'string', teamId: 'string', taskBoardId: 'string'});
const AddTeamMemberToTeamCommand = createMessageType('AddTeamMemberToTeam',
{teamId: 'string', teamMemberId: 'string', userId: 'string', role: 'string'});
const RemoveTeamMemberFromTeamCommand = createMessageType(
'RemoveTeamMemberFromTeam', {teamId: 'string', teamMemberId: 'string'});
const UpdateTeamMemberRoleCommand = createMessageType(
'UpdateTeamMemberRole', {teamMemberId: 'string', role: 'string'});
const UpdateProjectNameCommand = createMessageType(
'UpdateProjectName', {projectId: 'string', name: 'string'});
The next code shows the full set of Domain Event types:
Project context: Domain Events
const ProjectCreatedEvent = createEventType('ProjectCreated', {projectId: 'string',
name: 'string', ownerId: 'string', teamId: 'string', taskBoardId: 'string'});
const ProjectRenamedEvent = createEventType(
'ProjectRenamed', {projectId: 'string', name: 'string'});
const TeamMemberCreatedEvent = createEventType(
'TeamMemberCreated', {teamMemberId: 'string', userId: 'string', role: 'string'});
const TeamMemberAddedToTeamEvent = createEventType(
'TeamMemberAddedToTeam', {teamId: 'string', teamMemberId: 'string'});
const TeamMemberRemovedFromTeamEvent = createEventType(
'TeamMemberRemovedFromTeam', {teamId: 'string', teamMemberId: 'string'});
The following three examples illustrate the required changes to the Write Model
implementation:
Project context: Project Entity name setter
set name(name) {
verify('valid name', typeof name == 'string' && !!name);
this.#name = name;
this.#newDomainEvents.push(
new ProjectRenamedEvent({projectId: this.id, name: this.name}));
}
The Command definitions represent all message types that are related to write
use cases. For the read side, two specialized Query types are introduced. The
Domain Event definitions extend the previously existing types
ProjectCreatedEvent and TeamMemberRemovedFromTeamEvent. For the project
Entity, the function set name() is adapted to create a “ProjectRenamed” event.
The constructor of the TeamMember class is extended with a flag for
reconstitution and the creation of a “TeamMemberCreated” event. Furthermore,
the Entity type is complemented with the factory TeamMemberFactory. The
operation addMember() of the Team component is refactored to create an instance
of the TeamMemberAddedToTeamEvent type. Finally, all Repository components
are reduced to their constructor. The previously introduced custom query
operations are obsolete due to the introduction of a Read Model.
The next code shows the relevant parts of the Read Model synchronization
component for the project context:
Project context: Read Model synchronization
class ProjectReadModelSynchronization {
#projectReadModelStorage; #teamMemberReadModelStorage; #eventBus;
constructor({projectReadModelStorage, teamMemberReadModelStorage, eventBus}) {
this.#projectReadModelStorage = projectReadModelStorage;
this.#teamMemberReadModelStorage = teamMemberReadModelStorage;
this.#eventBus = eventBus;
}
activate() {
const domainEventsToHandle = [ProjectCreatedEvent,
ProjectRenamedEvent, TeamMemberCreatedEvent,
TeamMemberAddedToTeamEvent, TeamMemberRemovedFromTeamEvent];
domainEventsToHandle.forEach(Event =>
this.#eventBus.subscribe(Event.type, event => this.handleEvent(event)));
}
async handleProjectCreatedEvent(event) {
const {data: {projectId, name, ownerId, teamId, taskBoardId}} = event;
const updates = {id: projectId, name, ownerId, teamId, taskBoardId};
await this.#projectReadModelStorage.update(projectId, updates);
}
/* .. further event handlers .. */
}
The usage of the full project context implementation with CQRS can be seen in
the next example (run code):
Project context: Overall usage
const projectCommandHandlers = new ProjectCommandHandlers({teamRepository,
teamMemberRepository, projectRepository, teamMemberFactory, projectFactory});
const projectDomainEventHandlers = new ProjectDomainEventHandlers(
{teamRepository, eventBus});
projectDomainEventHandlers.activate();
const projectReadModelStorage =
new IndexedStorage({indexes: ['ownerId', 'teamId']});
const teamMemberReadModelStorage = new IndexedStorage({indexes: ['userId']});
const projectReadModelSynchronization = new ProjectReadModelSynchronization(
{projectReadModelStorage, teamMemberReadModelStorage, eventBus});
projectReadModelSynchronization.activate();
const projectQueryHandlers = new ProjectQueryHandlers(
{projectReadModelStorage, teamMemberReadModelStorage});
const teamId = generateId(), teamMemberId = generateId(), userId = generateId();
const projectId = generateId(), taskBoardId = generateId();
projectCommandHandlers.handleCommand(new CreateProjectCommand(
{data: {projectId, name: 'Test Project', ownerId: userId, teamId, taskBoardId}}));
await timeout(125);
projectCommandHandlers.handleCommand(new AddTeamMemberToTeamCommand(
{data: {teamId, teamMemberId, userId, role: 'developer'}}));
projectCommandHandlers.handleCommand(new UpdateProjectNameCommand(
{data: {projectId, name: 'Test Project v2'}}));
await timeout(125);
console.log(await projectQueryHandlers.handleQuery(
new FindProjectsByOwnerQuery({data: {userId}})));
console.log(await projectQueryHandlers.handleQuery(
new FindProjectsByCollaboratingUserQuery({data: {userId}})));
User context
At first glance, the Application Services for the user context exclusively consist
of write-related behavior. Creating a user, performing a login and updating
existing fields are all Command operations. However, there are two implicit read
concerns. One is that the login service queries the user Repository by e-mail
addresses. This aspect is extracted into a Query functionality for retrieving
individual user data. For that, the Domain Model implementation is extended
with new Domain Event types and Entities are adapted accordingly. Creating and
updating the required Read Model structure is done by a synchronization
component. The second read aspect is the e-mail registry, which contains
eventually consistent data. Strictly speaking, this part is already a Read Model,
despite the fact that it exists within the write side.
The changes to the user Entity can be seen in the next examples:
User context: User constructor
constructor({id, username, emailAddress, password, role,
emailAvailability, isExistingUser}) {
verify('valid id', id != null);
verify('valid username', typeof username == 'string' && !!username);
verify('valid e-mail address', emailAddress instanceof EmailAddress);
verify('unused e-mail',
isExistingUser || emailAvailability.isEmailAvailable(emailAddress));
verify('valid role', role.constructor === Role);
Object.defineProperty(this, 'id', {value: id, writable: false});
this.#emailAvailability = emailAvailability;
this.#username = username;
this.#emailAddress = emailAddress;
this.#role = role;
this.password = password;
if (!isExistingUser) this.#newDomainEvents.push(new UserCreatedEvent(
{userId: id, username, emailAddress: emailAddress.value, role: role.name}));
}
The Command types define all the messages related to write use cases. For the
functionality of retrieving a user profile, the Query type
FindUserByEmailAddressQuery is introduced. The updated Domain Event
definitions contain a total of four types that represent relevant state changes. As
replacement for the previously existing UserEmailAddressAssignedEvent, the
two more explicit types “UserCreated” and “UserEmailAddressChanged” are
added. The user Entity is extended to create corresponding events in its
constructor and in the operations set username() and set role(). As there are no changes
to arguments or attributes, the user factory remains unchanged. In contrast, the
Repository component is reduced to its constructor by removing the custom
query findUserByEmailAddress(). The e-mail registry Read Model component
requires no refactoring.
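As an illustration of the described Entity extension, the following self-contained sketch shows how a setter can record a Domain Event. The verify helper and the plain event object are simplified stand-ins for the book's actual components:

```javascript
// simplified verification helper; throws when a condition is violated
const verify = (description, condition) => {
  if (!condition) throw new Error(`verification failed: ${description}`);
};

// simplified user Entity showing how a setter records a Domain Event
class User {
  #username; #newDomainEvents = [];
  constructor({id, username}) {
    this.id = id;
    this.#username = username;
  }
  set username(username) {
    verify('valid username', typeof username == 'string' && !!username);
    this.#username = username;
    this.#newDomainEvents.push(
      {type: 'UsernameChanged', data: {userId: this.id, username}});
  }
  get newDomainEvents() { return this.#newDomainEvents.slice(); }
}

const user = new User({id: 'user-1', username: 'john'});
user.username = 'johndoe';
console.log(user.newDomainEvents[0].type);
```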
The next code shows the relevant parts of the Read Model synchronization
component for the user context:
User context: Read Model synchronization
class UserReadModelSynchronization {
#userReadModelStorage; #eventBus;
constructor({userReadModelStorage, eventBus}) {
this.#userReadModelStorage = userReadModelStorage;
this.#eventBus = eventBus;
}
activate() {
const domainEventsToHandle = [UserCreatedEvent, UsernameChangedEvent,
UserEmailAddressChangedEvent, UserRoleChangedEvent];
domainEventsToHandle.forEach(Event =>
this.#eventBus.subscribe(Event.type, event => this.handleEvent(event)));
}
/* .. handleUserCreatedEvent(), handleUsernameChangedEvent(), .. */
/* .. handleUserEmailAddressChangedEvent(), handleUserRoleChangedEvent() .. */
}
The next example shows the Query Handlers component for the user context:
User context: Query Handlers
class UserQueryHandlers {
#userReadModelStorage;
constructor({userReadModelStorage}) {
this.#userReadModelStorage = userReadModelStorage;
}
/* .. handleFindUserByEmailAddressQuery() .. */
}
The last example of this subsection shows an exemplary usage of the user
context (run code):
User context: Overall usage
const userCommandHandlers = new UserCommandHandlers(
{userRepository, userFactory, hashPassword: createMd5Hash});
const userDomainEventHandlers = new UserDomainEventHandlers(
{emailRegistry, eventBus});
userDomainEventHandlers.activate();
Task board context
The first examples provide the types for all Commands, Queries and Domain
Events:
Task board context: Commands
const AddNewTaskToTaskBoardCommand = createMessageType('AddNewTaskToTaskBoard', {
taskId: 'string', taskBoardId: 'string', title: 'string', description: 'string',
status: ['string', 'undefined'], assigneeId: ['string', 'undefined'],
});
const UpdateTaskTitleCommand = createMessageType(
'UpdateTaskTitle', {taskId: 'string', title: 'string'});
const UpdateTaskDescriptionCommand = createMessageType(
'UpdateTaskDescription', {taskId: 'string', description: 'string'});
const UpdateTaskStatusCommand = createMessageType(
'UpdateTaskStatus', {taskId: 'string', status: 'string'});
const UpdateTaskAssigneeCommand = createMessageType(
'UpdateTaskAssignee', {taskId: 'string', assigneeId: 'string'});
const RemoveTaskFromTaskBoardCommand = createMessageType(
'RemoveTaskFromTaskBoard', {taskBoardId: 'string', taskId: 'string'});
The next code snippets show the required changes to the Task Entity class:
Task Board context: Task constructor
constructor(
{id, title, description = '', status = 'todo', assigneeId, isExistingTask}) {
verify('valid id', id != null);
verify('valid title', typeof title == 'string' && !!title);
verify('valid status', validStatus.includes(status));
verify('active task assignee', status !== 'in progress' || assigneeId);
Object.defineProperty(this, 'id', {value: id, writable: false});
this.#title = title;
this.#description = description;
this.#status = status;
this.#assigneeId = assigneeId;
if (!isExistingTask) this.#newDomainEvents.push(
new TaskCreatedEvent({taskId: id, title, description, status, assigneeId}));
}
The following example provides a reworked version of the Task Board Entity
type:
Task Board context: Task Board
class TaskBoard {
addTask(taskId) {
verify('valid task id', taskId != null);
this.#taskIds.push(taskId);
this.#newDomainEvents.push(
new TaskAddedToTaskBoardEvent({taskBoardId: this.id, taskId}));
}
removeTask(taskId) {
const index = this.#taskIds.indexOf(taskId);
verify('task is on board', index > -1);
this.#taskIds.splice(index, 1);
this.#newDomainEvents.push(
new TaskRemovedFromTaskBoardEvent({taskBoardId: this.id, taskId}));
}
}
The next code shows an extended variant of the Domain Event handlers
component:
Task Board context: Domain Event Handlers
class TaskBoardDomainEventHandlers {
constructor({taskRepository, taskBoardRepository,
taskAssigneeReadModelStorage, eventBus}) {
this.activate = () => {
eventBus.subscribe('ProjectCreated', () => {/* .. */});
eventBus.subscribe(TaskCreatedEvent.type, ({data: {taskId, assigneeId}}) => {
taskAssigneeReadModelStorage.update(taskId, {id: taskId, assigneeId});
});
eventBus.subscribe(TaskAssigneeChangedEvent.type, event => {
const {data: {taskId, assigneeId}} = event;
taskAssigneeReadModelStorage.update(taskId, {id: taskId, assigneeId});
});
eventBus.subscribe('TeamMemberRemovedFromTeam', async ({data}) => {
const taskAssignees =
taskAssigneeReadModelStorage.findByIndex('assigneeId', data.teamMemberId);
await Promise.all(taskAssignees.map(async ({id}) => {
const task = await taskRepository.load(id);
if (task.status === 'in progress') task.status = 'todo';
task.assigneeId = undefined;
await taskRepository.save(task);
}));
});
};
}
}
The Command types define the messages for all write-related use cases. Finding
tasks on a board is represented by the Query type “FindTasksOnTaskBoard”.
The Domain Event types define all relevant state changes that can occur within
the Domain Model. For the task Entity, the constructor and the setters for title,
description and assignee are extended to create corresponding events. The task
board Entity is completely refactored. Instead of working with specialized Value
Objects, the component exclusively references task identifiers. The Task class
accepts an additional constructor argument for Entity reconstitution. For this
reason, a corresponding factory is implemented. The custom query
findTasksByAssigneeId() is removed from the task Repository. As
replacement, the component TaskBoardDomainEventHandlers is adapted to
maintain a Read Model of task assignees inside an InMemoryIndexedStorage
instance.
The next code provides a synchronization component for task Read Models:
Task board context: Read Model synchronization
class TaskBoardReadModelSynchronization {
#taskReadModelStorage; #eventBus;
constructor({taskReadModelStorage, eventBus}) {
this.#taskReadModelStorage = taskReadModelStorage;
this.#eventBus = eventBus;
}
activate() {
const domainEventsToHandle = [TaskCreatedEvent, TaskAddedToTaskBoardEvent,
TaskRemovedFromTaskBoardEvent, TaskDescriptionChangedEvent,
TaskTitleChangedEvent, TaskAssigneeChangedEvent, TaskStatusChangedEvent];
domainEventsToHandle.forEach(Event =>
this.#eventBus.subscribe(Event.type, event => this.handleEvent(event)));
}
async handleTaskCreatedEvent({data}) {
const {taskId, title, description, status, assigneeId} = data;
const updates = {id: taskId, title, description, status, assigneeId};
await this.#taskReadModelStorage.update(taskId, updates);
}
/* .. handleTaskRemovedFromTaskBoardEvent() .. */
/* .. handleTaskDescriptionChangedEvent() .. */
/* .. handleTaskAssigneeChangedEvent() .. */
/* .. handleTaskStatusChangedEvent() .. */
}
The next code shows the Query Handlers component for the task board context:
Task board context: Query Handlers
class TaskBoardQueryHandlers {
#taskReadModelStorage;
constructor({taskReadModelStorage}) {
this.#taskReadModelStorage = taskReadModelStorage;
}
/* .. handleFindTasksOnTaskBoardQuery() .. */
}
The final example of this chapter shows an exemplary usage of the task board
context (run code):
Task board context: Overall usage
const taskAssigneeReadModelStorage =
new InMemoryIndexedStorage({indexes: ['assigneeId']});
const taskBoardCommandHandlers = new TaskBoardCommandHandlers(
{taskRepository, taskBoardRepository, taskFactory});
const taskBoardDomainEventHandlers = new TaskBoardDomainEventHandlers(
{taskRepository, taskBoardRepository, taskAssigneeReadModelStorage, eventBus});
taskBoardDomainEventHandlers.activate();
The Sample Application implementation with CQRS cleanly separates the write
side and the read side of each individual context. Also, it eliminates performance
problems due to expensive Repository queries through optimized Read Model
structures. However, for productive use, the in-memory storage component
must be complemented with a rebuild mechanism. The next step is to transform
the Domain Models and their implementations into an event-based state
representation.
Chapter 12: Event Sourcing
Event Sourcing is an architectural pattern where state is represented as a
sequence of change events. The events are treated as immutable and get
persisted in an append-only log. Any current data is computed on demand and
generally considered transient. Capturing state as a set of transitions makes it
possible to emphasize intentions and behavior instead of focusing primarily on
data structures. According to [Young], another benefit is that “time becomes an
important factor” of a system. Yet, the most distinct advantage is that historical
event logs allow deriving specialized data for read-related scenarios, even
retroactively. Event Sourcing can only be applied adequately together with some
form of CQRS. Independent of this constraint, the pattern is commonly used in
combination with both Domain Events and Event-Driven Architecture.
Architectural overview
Event Sourcing models state as a series of transitioning events. As with CQRS,
the pattern typically targets subordinate areas or specific contexts, but rarely a
complete software. Even more, individual parts of a system should not rely on
the fact that other areas apply this pattern. Instead, it should be treated as an
internal implementation detail. However, it is acceptable to provide query
functionalities that return historical or event-related information in a controlled
manner. Other than with CQRS, Event Sourcing only affects certain architectural
parts. This primarily includes the Write Model implementation, its associated
infrastructural functionalities and the synchronization mechanism for Read
Model data. In contrast, most parts of the read side, its storage components and
the User Interface Layer largely remain unaffected.
The write side retrieves events from the Event Store, rebuilds state
representations for Write Model components and executes target behavior. All
new events are appended to their according stream inside the store. For DDD-
based software, the event streams are typically separated by Aggregates. After
persisting, all interested subscribers are informed about the state changes. The
read side operates Projection components, which subscribe at the Event Store to
update Read Model structures. Upon notification, they access data from their
associated storage, optionally construct Read Model components and perform
necessary updates. For specialized use cases, the read side may even expose
direct event stream access. In terms of CQRS, the Event Store is both the
persistence mechanism of the write side and part of the synchronization
mechanism.
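The interplay described above can be sketched with a minimal in-memory Event Store. This is a deliberate simplification: a production-grade store additionally provides concurrency control, stream versioning and durable persistence:

```javascript
// minimal Event Store sketch with per-Aggregate streams and subscribers
class InMemoryEventStore {
  #streams = new Map(); #subscribers = [];

  // appends events to the stream of one Aggregate and notifies subscribers,
  // such as Projection components on the read side
  append(streamId, events) {
    const stream = this.#streams.get(streamId) || [];
    this.#streams.set(streamId, stream.concat(events));
    for (const event of events)
      this.#subscribers.forEach(subscriber => subscriber(event));
  }

  // returns all events of a stream for rebuilding Write Model state
  load(streamId) { return this.#streams.get(streamId) || []; }

  subscribe(subscriber) { this.#subscribers.push(subscriber); }
}

const eventStore = new InMemoryEventStore();
eventStore.subscribe(event => console.log('projection saw:', event.type));
eventStore.append('ticket-1', [{type: 'SupportTicketOpened', data: {}}]);
console.log(eventStore.load('ticket-1').length);
```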
Event anatomy
The recommendations for Domain Events with regard to types, structure and
content largely also apply to Event Sourcing. Every event should have an
expressive and unambiguous type name that adequately describes the enclosed
change. The term should be a combination of the affected subject and the
executed action in past tense. For each event, the contents should consist of both
data and metadata, where individual fields are represented as simple attributes.
The data part should include the type name, the identity of the affected subject
and all changed values. Other than with Domain Events, there should be no
derived or extraneous data. The metadata section should contain generic
information independent of event types. When using Event Sourcing, this part
often includes additional persistence-related fields.
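As an illustration of this anatomy, a single persisted event could have the following shape. All field names are exemplary:

```javascript
// exemplary shape of a single persisted event
const event = {
  type: 'SupportTicketOpened',
  // data: identity of the affected subject and all changed values
  data: {ticketId: 'ticket-1', authorId: 'user-1', title: 'Login fails'},
  // metadata: generic information independent of the event type
  metadata: {
    id: 'event-1',             // unique event identity
    creationTime: Date.now(),  // occurrence time
    streamVersion: 1,          // persistence-related field used by Event Stores
  },
};
console.log(event.type, Object.keys(event.metadata).length);
```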
Consider implementing the Event Sourcing events and the Domain Event
creation mechanism for channel subscriptions of a video platform. The goal is to
be able to inform channel owners whenever a subscription is added or removed.
Overall, the possible state transitions for an individual subscription consist of
adding it, updating its notification settings and removing it. Both the addition
and the removal must be treated as public Domain Events in order to notify
channel owners. In contrast, the notification settings update is considered a
private detail. Even though this occurrence may also be useful for external
consumption, there is initially no need for exposing it. The mechanism for
yielding Domain Events should only forward the public event types and filter out
all private items.
The following code provides the event definitions together with an operation for
yielding Domain Events (run code):
Video platform: Channel subscription events
const SubscriptionAddedEvent = createEventType('SubscriptionAdded',
{subscriptionId: 'string', userId: 'string', channelId: 'string'});
const SubscriptionNotificationsSetEvent = createEventType(
'SubscriptionNotificationsSet', {subscriptionId: 'string', notifications: 'boolean'});
const SubscriptionRemovedEvent = createEventType(
'SubscriptionRemoved', {subscriptionId: 'string'});
const events = [
new SubscriptionAddedEvent({subscriptionId, userId, channelId}),
new SubscriptionNotificationsSetEvent({subscriptionId, notifications: true}),
new SubscriptionRemovedEvent({subscriptionId}),
];
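The filtering mechanism for yielding Domain Events is not shown above. One possible sketch, in which the list of public types and the helper name getNewDomainEvents are assumptions:

```javascript
// only these event types are meant for external consumption
const publicEventTypes = ['SubscriptionAdded', 'SubscriptionRemoved'];

// yields only public events as Domain Events and filters out private items
const getNewDomainEvents = events =>
  events.filter(event => publicEventTypes.includes(event.type));

const allEvents = [
  {type: 'SubscriptionAdded', data: {subscriptionId: 's1'}},
  {type: 'SubscriptionNotificationsSet', data: {subscriptionId: 's1'}},
  {type: 'SubscriptionRemoved', data: {subscriptionId: 's1'}},
];
console.log(getNewDomainEvents(allEvents).map(event => event.type));
```

Executing the code shows that the private "SubscriptionNotificationsSet" event is filtered out.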
The following code provides the definitions for the events of the Domain Model:
Support ticket system: Event definitions
const SupportTicketOpenedEvent = createEventType('SupportTicketOpened',
{ticketId: 'string', authorId: 'string', title: 'string', description: 'string'});
const SupportTicketClosedEvent = createEventType(
'SupportTicketClosed', {ticketId: 'string', resolution: 'string'});
const SupportTicketReOpenedEvent = createEventType(
'SupportTicketReOpened', {ticketId: 'string'});
/* .. SupportTicketCommentedEvent .. */
Object-oriented approach
The implementation of an event-sourced Write Model using an object-oriented
approach is seemingly the more common variant. One possible reason is the
overall popularity of the programming paradigm. The idea is to combine both
behavior and state into a single class. Typically, the constructor expects a list of
events to build an initial state representation. Executing a behavioral function
adds the resulting events to an internal collection. As a side effect, the
associated state representation is updated. All new events must be accessed
manually on the outside. One advantage of this approach is that the continuous
state update allows the execution of multiple consecutive actions. One downside
is the need to maintain an internal event collection and to provide a
corresponding retrieval mechanism.
The following code shows the relevant parts of the class:
Support ticket system: Support Ticket
class SupportTicket {
#ticketId; #isOpen = false; #newEvents = [];
constructor(events = []) {
events.forEach(event => this.#applyEvent(event));
}
#applyEvent({type, data}) {
if (type === SupportTicketOpenedEvent.type) {
this.#ticketId = data.ticketId;
this.#isOpen = true;
}
if (type === SupportTicketClosedEvent.type) this.#isOpen = false;
if (type === SupportTicketReOpenedEvent.type) this.#isOpen = true;
}
/* .. open(), comment() .. */
close({resolution}) {
if (!['solved', 'closed'].includes(resolution))
throw new Error('invalid resolution');
this.#applyNewEvent(
new SupportTicketClosedEvent({ticketId: this.#ticketId, resolution}));
}
#applyNewEvent(event) {
this.#newEvents.push(event);
this.#applyEvent(event);
}
get newEvents() { return this.#newEvents.slice(); }
}
The class SupportTicket expresses the support ticket concept using the object-
oriented paradigm. As constructor argument, it expects an event collection for
building its state representation. This is done by consecutively invoking the
function #applyEvent(), which updates internal data accordingly. The
operations open(), comment() and close() implement the defined use cases. All
of them follow a similar pattern. They perform invariant protection, create
events and execute the operation #applyNewEvent(). One exception is the
function comment(), which also conditionally produces a
“SupportTicketReOpened” event. The operation #applyNewEvent() updates the
event collection and invokes the function #applyEvent(). Retrieving new events
on the outside is done via the accessor get newEvents(). One notable aspect is
that the component only maintains the state information it requires, without
accommodating for persistence purposes.
Ticket resolutions
The Domain Model description states that support tickets can either be solved or considered
invalid. This aspect is implemented with the resolution attribute that is expected as parameter
when closing a ticket. Its possible values are “solved” and “closed”.
The next example illustrates the usage of the component (run code):
Support ticket system: Usage of object-oriented approach
const ticketId = generateId();
const issuerId = generateId(), supporterId = generateId();
The code starts with defining identifiers for a ticket, an issuer and a supporter.
Then, the class SupportTicket is instantiated. This is followed by executing the
use cases of opening a ticket, adding two comments and closing the ticket again.
As previously explained, one advantage of the object-oriented approach is the
continuous state update that allows consecutive actions. Next, all new events are
retrieved and the class SupportTicket is instantiated again using them as
argument. Another comment is added to the rebuilt ticket instance. Finally, all
new events are retrieved and logged to the console. Executing this code shows
that the last comment addition produces both a “SupportTicketReOpened” and a
“SupportTicketCommented” event. Overall, the component usage is almost
identical to an implementation without Event Sourcing.
Separated state
As a variation of the first approach, the behavioral parts and the state
representation can be strictly separated from each other. [Vernon, p. 547]
suggests to “split the [..] implementation into two distinct classes [..], with the
state object being held by the behavioral”. When executing a use case operation,
the state object is replaced with a newer version that incorporates all required
updates. The advantage of this variation is that it provides a clean separation of
behavior and state. In fact, it represents a stepping stone towards a more
functional approach. However, this aspect can also be considered a disadvantage.
While the implementation seems more functional due to immutable state objects,
it faces the same issues as the previous approach.
The next code shows the relevant parts of a reworked support ticket Aggregate
type (run code usage):
Support ticket system: Support Ticket state separation
#state = {}; #newEvents = [];
constructor(events = []) {
events.forEach(event => this.#applyEvent(event));
}
#applyEvent({type, data}) {
if (type === SupportTicketOpenedEvent.type)
this.#state = {...this.#state, ticketId: data.ticketId, isOpen: true};
if (type === SupportTicketClosedEvent.type)
this.#state = {...this.#state, isOpen: false};
if (type === SupportTicketReOpenedEvent.type)
this.#state = {...this.#state, isOpen: true};
}
The reworked class SupportTicket declares the private variable #state and
initializes it with an empty object. The function #applyEvent() replaces the
state object in response to events instead of mutating individual private fields.
This is done by creating a new object, cloning the existing state representation
and updating selected properties. All behavioral functions that need to
access state information are adjusted accordingly. As explained earlier, this
variant can be considered a stepping stone towards a functional approach. The
immutable state objects suggest that the implementation is less error-prone due
to a lower chance of unintended side effects. However, since this aspect is
mainly an internal implementation detail, it has no noticeable effect on the actual
component usage.
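As an illustration, a behavioral function in this variant reads the replaced state object instead of individual private fields. The following condensed sketch is an assumption-based illustration, not the book's verbatim class:

```javascript
// Condensed sketch: behavior reads the immutable #state object,
// #applyEvent() replaces it. Invariant details are assumptions.
class SupportTicket {
  #state = {}; #newEvents = [];

  constructor(events = []) {
    events.forEach(event => this.#applyEvent(event));
  }

  open(ticketId) {
    if (this.#state.isOpen) throw new Error('ticket is already open');
    this.#applyNewEvent({type: 'SupportTicketOpened', data: {ticketId}});
  }

  close(resolution) {
    if (!this.#state.isOpen) throw new Error('ticket is not open');
    this.#applyNewEvent({type: 'SupportTicketClosed',
      data: {ticketId: this.#state.ticketId, resolution}});
  }

  #applyNewEvent(event) {
    this.#newEvents.push(event);
    this.#applyEvent(event);
  }

  #applyEvent({type, data}) {
    if (type === 'SupportTicketOpened')
      this.#state = {...this.#state, ticketId: data.ticketId, isOpen: true};
    if (type === 'SupportTicketClosed')
      this.#state = {...this.#state, isOpen: false};
  }

  get newEvents() { return this.#newEvents.slice(); }
}
```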
Functional approach
The following example provides a functional approach for the support ticket
Aggregate type:
Support ticket system: Functional Support Ticket
const applySupportTicketEvents = (state = {}, events = []) =>
events.reduce(applyEvent, state);
This next code shows an exemplary usage of the functional component (run
code):
Support ticket system: Usage of functional approach
const userId = generateId(), supporterId = generateId();
console.log(reopeningCommentEvents);
Similar to the usage of the object-oriented variant, the code starts with defining
all required identifiers. Next, a mutable state variable is declared and initialized
with an empty object. This is followed by a use case sequence of opening a
ticket, adding comments, closing the ticket and adding another comment. As
additional step after each behavior execution, the function
applySupportTicketEvents() is invoked and the state variable is re-assigned.
This is required for executing multiple consecutive actions. Unlike a stateful
component, the functional approach shifts the responsibility of state
management to the consumer. As a final step, the events returned from the
last comment addition are logged to the console. Again, this consists of both a
“SupportTicketReOpened” and a “SupportTicketCommented” event.
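The behavioral part of the functional variant can be sketched as pure functions that accept a state representation and return new events. Function and event names follow the surrounding description; the exact shapes are assumptions:

```javascript
// Functional sketch: pure functions receive a state and return new events.
const applyEvent = (state, {type, data}) => {
  if (type === 'SupportTicketOpened')
    return {...state, ticketId: data.ticketId, isOpen: true};
  if (type === 'SupportTicketReOpened') return {...state, isOpen: true};
  if (type === 'SupportTicketClosed') return {...state, isOpen: false};
  return state;
};

const applySupportTicketEvents = (state = {}, events = []) =>
  events.reduce(applyEvent, state);

const openSupportTicket = (state, ticketId) => {
  if (state.isOpen) throw new Error('ticket is already open');
  return [{type: 'SupportTicketOpened', data: {ticketId}}];
};

const commentSupportTicket = (state, comment) => {
  const events = [];
  if (!state.isOpen) events.push({type: 'SupportTicketReOpened', data: {}});
  events.push({type: 'SupportTicketCommented', data: {comment}});
  return events;
};

const closeSupportTicket = (state, resolution) => {
  if (!state.isOpen) throw new Error('ticket is not open');
  return [{type: 'SupportTicketClosed', data: {resolution}}];
};

// the consumer manages state explicitly after each behavior execution
let state = {};
let events = openSupportTicket(state, 'ticket-1');
state = applySupportTicketEvents(state, events);
events = closeSupportTicket(state, 'solved');
state = applySupportTicketEvents(state, events);
const reopeningCommentEvents = commentSupportTicket(state, 'still broken');
```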
Event Store
The Event Store is responsible for the persistence of event-sourced state. For
that, it must provide the possibility to save new events and to retrieve existing
ones in the order they were persisted. Its usage makes Repositories obsolete. The
stored items are categorized in Event Streams. As mentioned earlier, for
software that applies DDD, the streams are separated by Aggregate identity. This
means, one individual stream contains the state for one Aggregate. The Event
Store must be accessible to both the write side and the read side of a context.
However, it should not be exposed to the outside. For write use cases, events are
loaded to rebuild state representations and new entries are saved. On the read
side, events are consumed to build Read Models.
Global Event Store
Event Sourcing is typically applied to selected context implementations. Consequently, an Event
Store is a local component that should be encapsulated within its respective boundaries. However,
if a system is applying Event Sourcing as part of its macro-architecture, there can be a global
store.
Basic functionality
At a minimum, an Event Store must provide one operation for reading a stream
and another one for appending new events. Each stream has a unique identifier
that is represented as a simple string. The command to append new items must
be able to persist multiple events in a transactional way. This is because a single
write use case may produce more than one event. The collection of items to be
saved within one transaction is typically referred to as Commit. In addition to
existing metadata, an Event Store may add persistence-related information to
individual records. Most commonly, each event is complemented with a number
that represents the current version of the affected stream. This information can
be used for Concurrency Control mechanisms.
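Before looking at the filesystem-based component, the described minimum interface can be illustrated with an in-memory sketch (an assumption-based illustration, not the book's implementation):

```javascript
// Minimal in-memory sketch of the described Event Store interface;
// the book's component persists to the filesystem instead.
class InMemoryEventStore {
  #streamsById = new Map();

  async save(streamId, events) {
    // all events of one save() call form a single commit
    const stream = this.#streamsById.get(streamId) || [];
    const numberedEvents = events.map((event, index) =>
      ({...event, number: stream.length + index + 1}));
    this.#streamsById.set(streamId, [...stream, ...numberedEvents]);
  }

  async load(streamId) {
    const events = this.#streamsById.get(streamId) || [];
    return {events, currentVersion: events.length};
  }
}
```

Each persisted record is complemented with a number representing the stream version at which it was appended, mirroring the metadata described above.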
storageDirectory;
constructor({storageDirectory}) {
mkdirSync(storageDirectory, {recursive: true});
Object.defineProperty(
this, 'storageDirectory', {value: storageDirectory, writable: false});
}
#getStreamVersion(streamDirectory) {
return readFileWithFallback(`${streamDirectory}/_version`, '0').then(Number);
}
The next example provides a simplified usage of the Event Store component (run
code):
Event Store: Filesystem Event Store usage
const eventStore = new FilesystemEventStore({storageDirectory});
await eventStore.save(`browser-tab/${browserTabId}`,
[{id: generateId(), type: 'BrowserTabOpened', data: {browserTabId}}]);
await eventStore.save(`browser-tab/${browserTabId}`,
[{id: generateId(), type: 'BrowserTabClosed', data: {browserTabId}}]);
console.log(await eventStore.load(`browser-tab/${browserTabId}`));
Optimistic concurrency
Independent of further functionalities, an Event Store should support concurrent
resource access. Similar to the Repository components introduced in Chapter 9,
one option is to enable Optimistic Concurrency. With this approach, read
operations are always allowed, while write operations must be based on the
latest state. Since most Event Store implementations already incorporate stream
version numbers, it makes sense to re-use this information for Concurrency
Control. Whenever new events are requested to be added, an expected version
can be provided as additional argument. Only if the value matches with the
actual stream version, the new records are stored. Otherwise, a concurrency
conflict is reported. Ideally, this mechanism is optional, as there are valid use
cases where stream versions are counterproductive.
The first example shows a simple error class for concurrency conflicts:
Event Store: Stream version mismatch error
class StreamVersionMismatchError extends Error {
constructor({expectedVersion, currentVersion}) {
super('StreamVersionMismatch');
Object.assign(this, {expectedVersion, currentVersion});
}
constructor({storageDirectory}) {
mkdirSync(storageDirectory, {recursive: true});
Object.defineProperty(
this, 'storageDirectory', {value: storageDirectory, writable: false});
}
async save(streamId, events, {expectedVersion = undefined} = {}) {
if (!this.#saveQueueByStreamId.has(streamId))
this.#saveQueueByStreamId.set(streamId, new AsyncQueue());
const saveQueue = this.#saveQueueByStreamId.get(streamId);
const streamDirectory = `${this.storageDirectory}/${streamId}`;
await mkdir(streamDirectory, {recursive: true});
return saveQueue.enqueueOperation(async () => {
const currentVersion = await this.#getStreamVersion(streamDirectory);
if (expectedVersion != null && expectedVersion !== currentVersion)
throw new StreamVersionMismatchError({expectedVersion, currentVersion});
await Promise.all(events.map((event, index) => {
const numberedEvent = {...event, number: currentVersion + index + 1};
const eventFilename = `${streamDirectory}/${numberedEvent.number}.json`;
return writeFileAtomically(eventFilename, JSON.stringify(numberedEvent));
}));
const newVersion = currentVersion + events.length;
await writeFile(`${streamDirectory}/_version`, `${newVersion}`);
});
}
#getStreamVersion(streamDirectory) {
return readFileWithFallback(`${streamDirectory}/_version`, '0').then(Number);
}
The next example illustrates the usage of the concurrency-safe Event Store (run
code):
Event Store: Concurrency-safe Event Store usage
const eventStore = new ConcurrencySafeFilesystemEventStore({storageDirectory});
await eventStore.save(`meeting/${meetingId}`,
[{id: generateId(), type: 'MeetingScheduled', data: {meetingId}}]);
const {currentVersion} = await eventStore.load(`meeting/${meetingId}`);
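The conflict case can be illustrated with a self-contained in-memory sketch that re-implements only the expected-version check; the filesystem persistence and the per-stream save queue are omitted:

```javascript
// In-memory sketch of the expected-version check; persistence and the
// per-stream save queue of the filesystem component are omitted.
class StreamVersionMismatchError extends Error {
  constructor({expectedVersion, currentVersion}) {
    super('StreamVersionMismatch');
    Object.assign(this, {expectedVersion, currentVersion});
  }
}

class InMemoryConcurrencySafeEventStore {
  #streamsById = new Map();

  async save(streamId, events, {expectedVersion = undefined} = {}) {
    const stream = this.#streamsById.get(streamId) || [];
    const currentVersion = stream.length;
    if (expectedVersion != null && expectedVersion !== currentVersion)
      throw new StreamVersionMismatchError({expectedVersion, currentVersion});
    this.#streamsById.set(streamId, [...stream, ...events]);
  }
}

(async () => {
  const eventStore = new InMemoryConcurrencySafeEventStore();
  await eventStore.save('meeting/1',
    [{type: 'MeetingScheduled', data: {}}], {expectedVersion: 0});
  // a second writer that still assumes version 0 is rejected
  await eventStore.save('meeting/1',
    [{type: 'MeetingRescheduled', data: {}}], {expectedVersion: 0})
    .catch(error => console.log(error.message)); // logs "StreamVersionMismatch"
})();
```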
Stream subscriptions
On top of saving and loading events, an Event Store should also support stream
subscriptions. The idea is to treat streams as a continuous source of events
without differentiating between existing records and future items. This
mechanism is especially useful for the creation of Read Models. In theory, the
desired behavior can be emulated with periodic loading. However, it makes
more sense for the Event Store to support it as native functionality. This keeps
the responsibility within the component and avoids manual polling, which can be
wasteful in terms of resources. The subscription mechanism can also support
asynchronous subscribers by queuing and awaiting callback executions.
However, this functionality must not be misused for outbound event distribution.
Subscriptions should only be used inside the owning context.
async initialize() {
await mkdir(this.#streamDirectory, {recursive: true});
await writeFile(this.#versionFilePath, '', {flag: 'wx'}).catch(() => {});
this.#watcher = watch(this.#versionFilePath, () => this.#processNewEvents());
this.#processNewEvents();
}
#processNewEvents() {
this.#callbackExecutionQueue.enqueueOperation(async () => {
const availableVersion = Number(
await readFileWithFallback(this.#versionFilePath, '0'));
while (this.#processedVersion < availableVersion) {
const event = JSON.parse(await readFile(
`${this.#streamDirectory}/${++this.#processedVersion}.json`, 'utf-8'));
await this.#callback(event);
}
});
}
destroy() { this.#watcher.close(); }
unsubscribe(callback) {
this.#subscriptionByCallback.get(callback).destroy();
this.#subscriptionByCallback.delete(callback);
}
The usage of the Event Store with stream subscriptions can be seen in the
following example (run code):
Event Store: Event Store with subscriptions usage
const eventStore = new FilesystemEventStoreWithSubscription({storageDirectory});
await eventStore.save(`guestbook/${guestbookId}`,
[{id: generateId(), type: 'GuestbookMessageWritten',
data: {guestbookId, entryId: generateId(), message: 'Hello!'}}]);
await eventStore.subscribe(`guestbook/${guestbookId}`, console.log);
await timeout(1000);
await eventStore.save(`guestbook/${guestbookId}`,
[{id: generateId(), type: 'GuestbookMessageWritten',
data: {guestbookId, entryId: generateId(), message: 'Great page'}}]);
The next example implements a class for an Event Store component with a
global stream:
Event Store: Event Store with global stream
class FilesystemEventStoreWithGlobalStream
extends FilesystemEventStoreWithSubscription {
#subscribedStreams = [];
async #linkStreamToGlobalStream(streamId) {
this.#subscribedStreams.push(streamId);
const streamVersionDirectory =
`${this.storageDirectory}/$global/stream-versions/${streamId}`;
await mkdir(streamVersionDirectory, {recursive: true});
const processedVersion = Number(
await readFileWithFallback(`${streamVersionDirectory}/_version`, '0'));
await this.subscribe(streamId, async event => {
const metadata = {...event.metadata,
originStreamId: streamId, originNumber: event.number};
const events = [{...event, number: undefined, metadata}];
await this.save('$global', events);
await writeFile(`${streamVersionDirectory}/_version`, `${event.number}`);
}, {startVersion: processedVersion + 1});
}
eventStore.subscribe('$global', console.log);
await eventStore.save(`user/${userId}`,
[{id: generateId(), type: 'UserLoggedIn', data: {userId}}]);
await eventStore.save(`user/${userId}`,
[{id: generateId(), type: 'UserLoggedOut', data: {userId}}]);
On-demand projections
Projections can be on-demand one-time operations that retrieve events, perform
transformations and respond with derived data. Upon execution, the events from
all affected streams are loaded, the domain-specific transformations are applied
and the result is returned. Typically, the computed Read Model data is
considered transient and therefore not retained. This approach is useful for
specialized read use cases that require up-to-date information and are executed
infrequently. For a generic and re-usable projection mechanism, selected parts of
the transformation logic can even be parametrized. One potential downside of
this approach is its performance implications. All required events must be loaded
from the Event Store and the transformation logic must be executed for all of
them. Most commonly, on-demand projections are used for static reporting data.
Example: Playtime statistics
The first example provides the relevant event definitions for the player
component:
Playtime statistics: Event definitions
const PlayerStartedPlayingGameEvent = createEventType('PlayerStartedPlayingGame',
{playerId: 'string', gameId: 'string', time: 'number'});
#eventStore;
constructor({eventStore}) {
this.#eventStore = eventStore;
}
async getPlaytime(playerId) {
let result = 0, lastStart = 0;
const {events} = await this.#eventStore.load(`player/${playerId}`);
events.forEach(({data, type}) => {
if (type === PlayerStartedPlayingGameEvent.type) lastStart = data.time;
if (type === PlayerStoppedPlayingEvent.type) result += data.time - lastStart;
});
return result;
}
The last example shows an exemplary usage code of the previous components
(run code):
Playtime statistics: Usage
const eventStore = new FilesystemEventStore({storageDirectory});
const events = [
new PlayerStartedPlayingGameEvent({playerId, gameId: game1Id, time: 0}),
new PlayerStoppedPlayingEvent({playerId, time: 1000}),
new PlayerStartedPlayingGameEvent({playerId, gameId: game2Id, time: 4000}),
new PlayerStoppedPlayingEvent({playerId, time: 5000}),
new PlayerStartedPlayingGameEvent({playerId, gameId: game1Id, time: 6000}),
new PlayerStoppedPlayingEvent({playerId, time: 10000}),
];
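For the events listed above, the projection's transformation can be traced with a self-contained sketch of the reduction logic; the event type strings are assumed from the definitions, and with the given timestamps the accumulated playtime is 6000 milliseconds:

```javascript
// Self-contained sketch of the on-demand transformation applied to the
// events listed above; event objects are plain literals here.
const computePlaytime = events => {
  let result = 0, lastStart = 0;
  events.forEach(({type, data}) => {
    if (type === 'PlayerStartedPlayingGame') lastStart = data.time;
    if (type === 'PlayerStoppedPlaying') result += data.time - lastStart;
  });
  return result;
};

const events = [
  {type: 'PlayerStartedPlayingGame', data: {gameId: 'game-1', time: 0}},
  {type: 'PlayerStoppedPlaying', data: {time: 1000}},
  {type: 'PlayerStartedPlayingGame', data: {gameId: 'game-2', time: 4000}},
  {type: 'PlayerStoppedPlaying', data: {time: 5000}},
  {type: 'PlayerStartedPlayingGame', data: {gameId: 'game-1', time: 6000}},
  {type: 'PlayerStoppedPlaying', data: {time: 10000}},
];

console.log(computePlaytime(events)); // 1000 + 1000 + 4000 = 6000
```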
Continuous projections
Projections can also be long-running operations that consume event notifications
and continuously update Read Model data. Upon initialization, they create
subscriptions for all associated streams of an Event Store. The registered
callback operations are responsible for invoking the transformation logic that
adjusts the derived data structures. Effectively, this mechanism facilitates
eventually consistent Read Models that are continuously updated in response to
state changes. When a read use case is executed, the data is directly retrieved
from its storage without interacting with the projection. This approach is useful
for frequently accessed information and for large event streams, where on-
demand computations can be too expensive. Typically, continuous projections
subscribe to categorized or global streams and maintain Read Models that are
part of a running software.
Example: Warehouse stock
The first example provides the event definitions for the good component:
Warehouse stock: Event definitions
const GoodRegisteredEvent = createEventType('GoodRegistered',
{goodId: 'string', title: 'string'});
#eventStore; #stockStorage;
constructor({eventStore, stockStorage}) {
this.#eventStore = eventStore;
this.#stockStorage = stockStorage;
}
activate() {
this.#eventStore.subscribe('$global', event =>
this.#stockStorage.set(event.data.goodId,
updateStock(this.#stockStorage.get(event.data.goodId) || {}, event)));
}
await projection.activate();
const {currentVersion} = await eventStore.load(streamId);
if (currentVersion === 0) await eventStore.save(
streamId, [new GoodRegisteredEvent({goodId, title: 'USB Charger (5v / 2A)'})]);
await eventStore.save(streamId, [new GoodStockedEvent({goodId, units: 100}),
new GoodDestockedEvent({goodId, units: 95})]);
await timeout(125);
console.log(stockStorage.get(goodId));
The following example shows a persistent projection component for stock Read
Models:
Warehouse stock: Persistent projection
class PersistentGoodStockProjection {
async activate() {
const processedVersion =
Number(await readFileWithFallback(this.#versionFilePath, '0'));
this.#eventStore.subscribe('$global', async event => {
const {number, data} = event;
if (!data.goodId) return;
const stock = await this.#stockStorage.load(data.goodId).catch(() => ({}));
await this.#stockStorage.save(data.goodId, updateStock(stock, event));
await writeFile(this.#versionFilePath, `${number}`);
}, {startVersion: processedVersion + 1});
}
The last example shows an exemplary usage code of the previous components
(run code):
Warehouse stock: Usage with persistent projection
const eventStore = new FilesystemEventStoreWithGlobalStream(
{storageDirectory: `${storageDirectory}/event-store`});
const stockStorage = new JSONFileStorage(`${storageDirectory}/stock`);
const projection = new PersistentGoodStockProjection(
{eventStore, stockStorage, versionDirectory: `${storageDirectory}/projection`});
await projection.activate();
const {currentVersion} = await eventStore.load(streamId);
if (currentVersion === 0) await eventStore.save(
streamId, [new GoodRegisteredEvent({goodId, title})]);
await eventStore.save(streamId, [new GoodStockedEvent({goodId, units: 100}),
new GoodDestockedEvent({goodId, units: 95})]);
await timeout(125);
console.log(await stockStorage.load(goodId));
The code is very similar to the in-memory projection approach. Two notable
differences are the usages of the component JSONFileStorage and the class
PersistentGoodStockProjection as projection. Other than that, the overall
flow is almost identical. Also, executing the code produces the same output as
with the previous approach. Moreover, additional executions have the same
accumulating effect on the stock value. Overall, both approaches have their
advantages and disadvantages. The in-memory variant can have a better
performance when there are not many stored events. This is because there is no
computational overhead due to persistence. However, as soon as a warehouse
reaches a certain size, it makes more sense to use persistent projections. They
help to ensure faster startup times.
async activate() {
const processedVersion =
Number(await readFileWithFallback(this.#versionFilePath, '0'));
this.#eventStore.subscribe('$global', async event => {
const shouldPublishEvent = this.#publicEventTypes.includes(event.type);
if (shouldPublishEvent) await this.#eventBus.publish(event);
await writeFile(this.#versionFilePath, `${event.number}`);
}, {startVersion: processedVersion + 1});
}
}
The class DomainEventPublisher is a component that selectively forwards
Event Sourcing records to an Event Bus. Its name is identical to a related
functionality introduced in Chapter 9. As arguments, it expects a storage
directory, an Event Store, an Event Bus and an array of public event types. Its
function activate() configures the forwarding mechanism. First, it retrieves the
last processed stream version with 0 as fallback value. Then, it subscribes to the
global event stream. For each incoming event, the subscription callback tests
whether its type is considered public. If that is the case, the event is published
via the Event Bus. Finally, the processed stream version is persisted. Similar to
previously illustrated persistent projections, the component establishes an “at
least once” delivery guarantee.
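The core of the described forwarding mechanism can be condensed into an in-memory sketch; the version-file persistence for restarts is omitted, and the collaborator interfaces are assumptions:

```javascript
// Condensed sketch of selective event forwarding; the book's component
// additionally persists the processed stream version for restarts.
class DomainEventPublisher {
  #eventStore; #eventBus; #publicEventTypes;

  constructor({eventStore, eventBus, publicEventTypes}) {
    this.#eventStore = eventStore;
    this.#eventBus = eventBus;
    this.#publicEventTypes = publicEventTypes;
  }

  activate() {
    this.#eventStore.subscribe('$global', async event => {
      // only forward event types that are considered public
      if (this.#publicEventTypes.includes(event.type))
        await this.#eventBus.publish(event);
    });
  }
}
```

In the usage that follows, only the “UserCreated” and “UserRemoved” notifications would consequently reach the Event Bus subscribers.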
eventBus.subscribe('UserCreated', console.log);
eventBus.subscribe('UserRemoved', console.log);
await eventStore.save(`user/${userId}`,
[{id: generateId(), type: 'UserCreated', data: {userId}}]);
await eventStore.save(`user/${userId}`,
[{id: generateId(), type: 'UserLoggedIn', data: {userId}}]);
await eventStore.save(`user/${userId}`,
[{id: generateId(), type: 'UserRemoved', data: {userId}}]);
User context
The user context implementation requires the least complex modifications to
correctly apply Event Sourcing. Overall, this part contains four
Domain components, of which only the user represents an Aggregate type. The
other components are Value Objects. While the user Aggregate contains
numerous behavioral operations, most of them are similar and exclusively
update an individual attribute. The event types of the context represent an almost
complete set of possible state transitions. In addition to that, none of them
needs to be forwarded as a Domain Event to an Event Bus. One distinct aspect of the
context is the e-mail registry component and its according synchronization
mechanism. However, refactoring this part to work with Event Sourcing is
straightforward and requires only minimal changes.
The next code shows the event-sourced implementation of the user Entity type:
User context: User Write Model
const applyUserEvents = (state, events) => events.reduce(applyEvent, state);
The event definitions are extended with two aspects. First, the type
UserCreatedEvent expects an additional password field. Second, the type
UserPasswordChangedEvent is introduced. For the Write Model part, the
previously existing class User is replaced with a functional event-sourced
variant. The operation applyUserEvents() is responsible for building a state
representation and internally executes the function applyEvent() repeatedly.
Creating a new user is done with the command createUser(). Modifying an
attribute of an existing Aggregate is achieved with the respective update
command. In fact, the event-sourced user component is shorter compared to the
implementation without Event Sourcing. One reason is that there is no stateful
structure involved. Also, the Domain Model component does not require any
accommodations for persistence such as getter functions.
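A condensed sketch of such a functional Write Model might look as follows; the event shapes and the username update command are simplifying assumptions, as the full definitions are part of the accompanying code:

```javascript
// Functional Write Model sketch; event shapes and the update command
// are simplifying assumptions based on the surrounding description.
const applyEvent = (state, {type, data}) => {
  if (type === 'UserCreated')
    return {...state, id: data.userId, username: data.username};
  if (type === 'UsernameChanged') return {...state, username: data.username};
  return state;
};

const applyUserEvents = (state = {}, events = []) =>
  events.reduce(applyEvent, state);

const createUser = ({id, username}) =>
  [{type: 'UserCreated', data: {userId: id, username}}];

const updateUsername = (state, username) => {
  if (!state.id) throw new Error('user does not exist');
  return [{type: 'UsernameChanged', data: {userId: state.id, username}}];
};
```

Since every command returns plain events and no stateful structure is involved, no getter functions or other persistence accommodations are needed.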
The following example shows the most relevant parts of the refactored
Command Handlers class:
User context: Command Handlers
class UserCommandHandlers {
async handleCreateUserCommand(command) {
const {data: {userId, username, emailAddress, password, role}} = command;
const events = createUser({id: userId, username,
password: this.#hashPassword(password), role: new Role(role),
emailAddress: new EmailAddress(emailAddress),
emailAvailability: this.#emailRegistry});
await this.#eventStore.save(`user/${userId}`, events, {expectedVersion: 0});
}
/* .. handleUpdateUsernameCommand() .. */
/* .. handleUpdateUserPasswordCommand() .. */
/* .. handleUpdateUserRoleCommand() .. */
}
The class UserCommandHandlers is adapted to work with an Event Store and an
event-sourced Write Model. Each handler operation follows the same pattern.
First, the affected event stream is loaded. Naturally, the operation
handleCreateUserCommand() skips this step. Next, a state representation is built
by invoking the function applyUserEvents() with the retrieved events. Then,
the target behavior is executed with the state and the command arguments as
input. Finally, the new events are appended to the affected stream. As part of
every change operation, the current stream version is retrieved and later used as
expectedVersion parameter upon saving. This is done to enforce Optimistic
Concurrency. If multiple change operations to the same Aggregate are issued
concurrently, one fails due to a version mismatch.
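The described handler pattern can be condensed into a self-contained sketch; the inlined helper functions are minimal stand-ins for the actual Write Model code:

```javascript
// Sketch of the handler pattern: load events, rebuild state, execute
// behavior, save with expectedVersion. The inlined helpers are minimal
// stand-ins for the actual Write Model functions.
const applyUserEvents = (state, events) => events.reduce(
  (current, {type, data}) =>
    type === 'UserCreated' ? {...current, id: data.userId} : current, state);

const updateUsername = (state, username) =>
  [{type: 'UsernameChanged', data: {userId: state.id, username}}];

class UserCommandHandlers {
  #eventStore;

  constructor({eventStore}) { this.#eventStore = eventStore; }

  async handleUpdateUsernameCommand({data: {userId, username}}) {
    const {events, currentVersion} =
      await this.#eventStore.load(`user/${userId}`);
    const state = applyUserEvents({}, events);
    const newEvents = updateUsername(state, username);
    await this.#eventStore.save(
      `user/${userId}`, newEvents, {expectedVersion: currentVersion});
  }
}
```

Passing the loaded currentVersion as expectedVersion is what makes concurrent changes to the same Aggregate fail with a version mismatch.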
The last example of this subsection provides an example usage of the user
context (run code):
User context: Overall usage
const eventStore = new ConcurrencySafeFilesystemEventStore(
{storageDirectory: `${rootStorageDirectory}/event-store`});
const commandsToExecute = [
new CreateUserCommand({data: {userId: userId1, username: 'johnathan',
emailAddress: emailAddress1, password: 'pw1', role: 'user'}}),
new UpdateUserEmailAddressCommand(
{data: {userId: userId1, emailAddress: emailAddress2}}),
];
for (const command of commandsToExecute)
await userCommandHandlers.handleCommand(command);
await timeout(125);
const result = await userQueryHandlers.handleQuery(
new FindUserByEmailAddressQuery({data: {emailAddress: emailAddress2}}));
console.log(result);
await userCommandHandlers.handleCommand(new CreateUserCommand({data:
{userId: userId2, username: 'john',
emailAddress: emailAddress2, password: 'pw1', role: 'user'}}));
Project context
The project context implementation requires more adaptations in order to work
with Event Sourcing. Compared to the other contexts, its Domain Layer contains
the highest amount of individual parts. There are four separate components, of
which three represent Aggregate types. One is for projects, the second one is for
team members and the third is for teams. The existing event definitions lack
multiple items needed for a complete set of state changes. This
circumstance underlines the difference between Domain Events and Event
Sourcing records. Unlike the user part, the project context must
forward one type to the Event Bus as a Domain Event. The resulting notifications
are consumed both internally and externally by the task board context.
The next three examples show the refactored Write Model components for
projects, teams and team members:
Project context: Project Write Model
const applyProjectEvents = (state, events) => events.reduce(applyEvent, state);
const applyEvent = (state, event) => event.type === ProjectCreatedEvent.type ?
{...state, id: event.data.projectId} : {...state};
#eventStore;
constructor({eventStore}) {
this.#eventStore = eventStore;
}
async handleCreateProjectCommand({data}) {
const {projectId: id, name, ownerId, teamId, taskBoardId} = data;
const events = createProject({id, name, ownerId, teamId, taskBoardId});
await this.#eventStore.save(
`project/${id}`, events, {expectedVersion: 0});
}
async handleAddTeamMemberToTeamCommand(command) {
const {data: {teamId, teamMemberId, userId, role}} = command;
const teamMemberEvents = createTeamMember(
{id: teamMemberId, userId, role: new Role(role)});
await this.#eventStore.save(`team-member/${teamMemberId}`, teamMemberEvents);
const {events, currentVersion} = await this.#eventStore.load(`team/${teamId}`);
const newTeamEvents = addTeamMember(applyTeamEvents({}, events), teamMemberId);
await this.#eventStore.save(
`team/${teamId}`, newTeamEvents, {expectedVersion: currentVersion});
}
/* .. handleRemoveTeamMemberFromTeamCommand() .. */
/* .. handleUpdateTeamMemberRoleCommand() .. */
As with the user context, other components of the Application Layer in the
project context are also adjusted. The class ProjectDomainEventHandlers is
changed to work with an event-sourced team Aggregate type. However, the
subscription to the Event Bus remains, as the creation of a project is considered a
Domain Event. For the read part, the synchronization class
ProjectReadModelSynchronization is converted to a projection component and
named ProjectReadModelProjection. The Query Handlers of the project part
require no changes.
The following example shows a usage of the project context with Event
Sourcing and separated Domain Event publishing (run code):
Project context: Overall usage
const projectCommandHandlers = new ProjectCommandHandlers({eventStore});
const projectDomainEventHandlers = new ProjectDomainEventHandlers(
{eventStore, eventBus});
projectDomainEventHandlers.activate();
const projectReadModelStorage =
new InMemoryIndexedStorage({indexes: ['teamId', 'ownerId']});
const teamMemberReadModelStorage = new InMemoryIndexedStorage({indexes: ['userId']});
const projectReadModelProjection = new ProjectReadModelProjection(
{projectReadModelStorage, teamMemberReadModelStorage, eventStore});
projectReadModelProjection.activate();
const projectQueryHandlers = new ProjectQueryHandlers(
{projectReadModelStorage, teamMemberReadModelStorage});
The first code shows the full event definitions for the task board context:
Task board context: Events
const TaskCreatedEvent = createEventType('TaskCreated',
{taskId: 'string', title: 'string', description: 'string',
status: ['string', 'undefined'], assigneeId: ['string', 'undefined']});
const TaskTitleChangedEvent = createEventType(
'TaskTitleChanged', {taskId: 'string', title: 'string'});
const TaskDescriptionChangedEvent = createEventType(
'TaskDescriptionChanged', {taskId: 'string', description: 'string'});
const TaskAssigneeChangedEvent = createEventType(
'TaskAssigneeChanged', {taskId: 'string', assigneeId: ['string', 'undefined']});
const TaskStatusChangedEvent = createEventType(
'TaskStatusChanged', {taskId: 'string', status: 'string'});
const TaskBoardCreatedEvent = createEventType(
'TaskBoardCreated', {taskBoardId: 'string'});
const TaskAddedToTaskBoardEvent = createEventType(
'TaskAddedToTaskBoard', {taskBoardId: 'string', taskId: 'string'});
const TaskRemovedFromTaskBoardEvent = createEventType(
'TaskRemovedFromTaskBoard', {taskBoardId: 'string', taskId: 'string'});
The following two examples provide the reworked Aggregate types for task and
task board:
Task board context: Task Write Model
const validStatus = ['todo', 'in progress', 'done'];
The next code shows the Command Handlers class for the event-sourced
implementation:
Task board context: Command Handlers
class TaskBoardCommandHandlers {
#eventStore;
constructor({eventStore}) {
this.#eventStore = eventStore;
}
async handleAddNewTaskToTaskBoardCommand({data}) {
const {taskId, taskBoardId, title, description, status, assigneeId} = data;
const events = createTask({id: taskId, title, description, status, assigneeId});
await this.#eventStore.save(`task/${taskId}`, events, {expectedVersion: 0});
const {events: taskBoardEvents, currentVersion} =
await this.#eventStore.load(`task-board/${taskBoardId}`);
const newTaskBoardEvents = addTask(
applyTaskBoardEvents({}, taskBoardEvents), taskId);
await this.#eventStore.save(`task-board/${taskBoardId}`,
newTaskBoardEvents, {expectedVersion: currentVersion});
}
/* .. handleUpdateTaskDescriptionCommand() .. */
/* .. handleUpdateTaskAssigneeCommand() .. */
/* .. handleUpdateTaskStatusCommand() .. */
}
The class TaskBoardCommandHandlers is changed similarly to the counterparts
of the other context implementations. Its operation
handleAddNewTaskToTaskBoardCommand() is another example of an event-sourced
use case that spans multiple transactions. The component
TaskBoardDomainEventHandlers is changed in different ways. For maintaining
the task assignee Read Models, it consumes events from the Event Store instead
of listening to the Event Bus. The other Event Bus subscribers for the event
types “ProjectCreated” and “TeamMemberRemovedFromTeam” remain active.
However, their callback implementations are adjusted to work with event-
sourced Write Model components. Furthermore, un-assigning tasks from a
removed team member is an example of executing multiple actions on the same
event-sourced component. The class TaskBoardReadModelSynchronization is
transformed into the projection component TaskBoardReadModelProjection.
Again, the Query Handlers remain unchanged.
The usage of the event-sourced task board context can be seen in the following
example (run code):
Task board context: Overall usage
const taskBoardCommandHandlers = new TaskBoardCommandHandlers(
{eventStore});
const taskBoardDomainEventHandlers = new TaskBoardDomainEventHandlers(
{eventStore, eventBus, taskAssigneeReadModelStorage});
taskBoardDomainEventHandlers.activate();
Program layouts
Figure 13.1: Program layouts
There are different layout possibilities for dividing a software into executable
programs. One option is to separate architectural layers. For example, the
Domain part can be operated independently of others. Another possibility is to
align with conceptual boundaries. The implementations for individual contexts
can be operated as standalone programs. This produces a one-to-one alignment
of Domain Models and runtime units. When applying CQRS, the write side and
the read side of an implementation should also be separated. With regard to
Event Sourcing, the Event Store must support inter-process access. Furthermore,
a program layout can also mix multiple approaches. Independent of layouts, one
option for scaling is to operate multiple processes of the same part. However,
this requires infrastructural components to support inter-process concurrent
usage.
Terms related to programs and processes
Software: Combination of programs and data
Program: Collection of instructions
Process: Execution of a program
Thread: Component of a process
One process per functionality
The examples and the Sample Application implementation in this chapter operate at most one
Node.js process per individual functionality. While scaling software parts beyond a single process
is an important aspect, it is not the focus of this chapter. Enabling the exemplary filesystem-based
infrastructure components to correctly work across threads would introduce unnecessary
complexity.
Context-level communication
For software that applies DDD and CQRS, there are two types of context-level
communication. The first one is to request a use case execution by issuing a
Command or a Query. This procedure requires a bidirectional one-to-one
information exchange, during which both parties must remain available. The
requester provides inputs and awaits a response, either in the form of data or an
operational status. Most commonly, a remote use case execution is issued by a
user. The second communication type is the notification about an occurrence by
distributing a Domain Event via an Event Bus. This mechanism establishes a
unidirectional information flow with one sender and many receivers. The
producer of a message exclusively awaits acknowledgment from the Event Bus,
independent of the actual consumers.
Relation to IPC
Inter-process Communication (IPC) is the exchange of information across processes. Every
Operating System provides various mechanisms for this purpose. This includes communication
via the filesystem, through sockets or with shared memory. The approaches differ in aspects such
as whether they are unidirectional or bidirectional, the number of recipients and their reliability.
On top of native mechanisms, custom protocols can be implemented.
As preparation for a usage example, the next code provides helper operations to
issue POST and GET requests:
HTTP: Request helper functions
const post = (url, data) => new Promise(resolve => {
http.request(urlWithProtocol(url), {method: 'POST'}, response => {
parseHttpStream(response).then(resolve);
}).end(JSON.stringify(data));
});
The last example shows an exemplary usage of the HTTP interface factory function
(run code):
HTTP: Create HTTP interface factory usage
const handleCommand = async command => console.log(command);
const handleQuery = async query => query;
const commandHandlersHttpInterface = createHttpInterface(handleCommand, ['POST']);
const queryHandlersHttpInterface = createHttpInterface(handleQuery, ['GET']);
http.createServer(commandHandlersHttpInterface).listen(50000);
http.createServer(queryHandlersHttpInterface).listen(50001);
console.log(
await post('http://localhost:50000/', {type: 'ChangeData', data: {foo: 'bar'}}));
console.log(
await get('http://localhost:50001/', {type: 'GetData', data: {id: '123'}}));
The function post() is responsible for issuing a POST request with custom data.
Sending a GET request with custom query parameters is done via the operation
get(). The example usage starts with defining placeholder handler functions for
Commands and Queries. While the operation handleCommand() logs every
received message to the console, the function handleQuery() simply returns
each Query. As a next step, two HTTP interfaces are created with the factory
createHttpInterface(). Then, two HTTP servers are instantiated, the listener
operations are passed in and the servers are started. The actual use case consists
of sending one POST request and one GET request, each to the corresponding server.
Executing the code demonstrates that the HTTP interfaces correctly invoke the
target function and forward its return value.
subscribe(topic, subscriber) {
const newSubscribers = this.#getSubscribers(topic).concat([subscriber]);
this.#subscribersByTopic.set(topic, newSubscribers);
}
unsubscribe(topic, subscriber) {
const subscribers = this.#getSubscribers(topic);
subscribers.splice(subscribers.indexOf(subscriber), 1);
this.#subscribersByTopic.set(topic, subscribers);
}
#processMessage(messageId) {
this.#processingQueue.enqueueOperation(async () => {
const isNewItem = await access(`${this.#processedIdsDirectory}/${messageId}`)
.then(() => false).catch(() => true);
if (!isNewItem) return;
const {topic, message} = JSON.parse(await readFile(
`${this.#inboxDirectory}/${messageId}.json`, 'utf-8'));
await Promise.all(this.#getSubscribers(topic).map(subscriber =>
new Promise(resolve => setTimeout(() => {
Promise.resolve(subscriber(message)).then(resolve);
})),
));
await writeFile(`${this.#processedIdsDirectory}/${messageId}`, '');
});
}
eventBus.subscribe('MessageFromProcessB',
event => console.log('received in process a', event));
setTimeout(() =>
eventBus.publish({id: generateId(), type: 'MessageFromProcessA'}), 125);
eventBus.subscribe('MessageFromProcessA',
event => console.log('received in process b', event));
setTimeout(() =>
eventBus.publish({id: generateId(), type: 'MessageFromProcessB'}), 125);
Program layout
The division of the Sample Application into multiple runtime parts can be done
in different ways. One option is to separate the overall implementation by
architectural layers. While this potentially increases performance and load
capacity, it risks making different contexts interdependent. The more useful
approach is to align with the conceptual boundaries. In fact, the source code for
the contexts is already separated through the existing directory structure.
Furthermore, every context-overarching communication is done via an Event
Bus. Consequently, the implementations can easily be operated as standalone
programs. However, every part contains a write side and a read side that serve
different purposes. These two should also be separated. Therefore, each of the
Sample Application context implementations is split into two separate programs.
Infrastructure code
The separation into multiple executable programs requires introducing
additional generic components. For an inter-process event distribution, the class
FilesystemMessageBus is re-used. The remote execution of use cases is enabled
through the factory createHttpInterface() and its related operations
createQueryString(), parseQueryString() and parseHttpStream(). Also, the
HTTP helper functions get() and post() are added to the Sample Application
codebase. Although there is no context-to-context use case execution, the
introduction of HTTP interfaces allows examples to run as separate processes.
Furthermore, it serves as preparation for the User Interface Layer covered in the
next chapter. In order to avoid communicating with six individual HTTP
endpoints, this section introduces a proxy server factory. The component allows
creating a server that routes requests based on a custom URL resolution.
The next code shows an exemplary usage of the proxy server component (run
code):
Infrastructure code: Proxy server factory usage
http.createServer((_, response) =>
response.end('response from the write side')).listen(50001);
http.createServer(httpProxy).listen(50000);
console.log(await get('http://localhost:50000/write-side'));
console.log(await get('http://localhost:50000/read-side'));
Context configurations
Every Sample Application context implementation is extended with two runtime
configurations. One is for the write side, the other one is for the read side.
Similar to the usage examples of the previous chapter, each configuration pair
performs the necessary setup for a context. This consists of instantiating
infrastructural functionalities as well as Application Layer components. For a
write side, the infrastructure part typically includes an Event Store, an Event Bus
and a Domain Event publisher. For each read side, it further incorporates the
setup of Read Model storage mechanisms. The Application part setup consists of
Application Services and optional projection components. On top of that, every
configuration is complemented with the creation of an HTTP interface and an
HTTP server.
The first two examples show the configuration for the write side and the read
side of the user context:
User context: Write side
eventTypeFactory.setIdGenerator(generateId);
eventTypeFactory.setMetadataProvider(() => ({creationTime: new Date()}));
const userReadModelStorage =
new InMemoryIndexedStorage({indexes: ['emailAddress']});
new UserReadModelProjection({userReadModelStorage, eventStore}).activate();
const queryHandlers = new UserQueryHandlers({userReadModelStorage});
The next examples implement the configuration of the task board context:
Task board context: Write side
const createDefaultMetadata = () => ({creationTime: new Date()});
eventTypeFactory.setIdGenerator(generateId);
eventTypeFactory.setMetadataProvider(createDefaultMetadata);
const taskAssigneeReadModelStorage =
new InMemoryIndexedStorage({indexes: ['assigneeId']});
const commandHandlers = new TaskBoardCommandHandlers({eventStore});
new TaskBoardDomainEventHandlers(
{eventStore, eventBus, taskAssigneeReadModelStorage}).activate();
The final two examples provide the configuration for the project context:
Project context: Write side
eventTypeFactory.setIdGenerator(generateId);
eventTypeFactory.setMetadataProvider(() => ({creationTime: new Date()}));
There are many similarities in the runtime configurations for the different
context implementations. Each write side configures the identifier generator and
the metadata provider of the component eventTypeFactory. This is required for
a correct event creation. Also, every configuration instantiates the EventStore
class. While the write sides save to a store and read from it, the read sides
exclusively subscribe for notifications. The task board part and the project part
further set up an EventBus instance on their write side. This component
internally uses the class FilesystemMessageBus for inter-process distribution.
The project context implementation instantiates the DomainEventPublisher
class for Domain Event distribution. Every read side sets up Read Model
storages and projection components. Finally, each configuration invokes the
factory createHttpInterface(), instantiates a server and starts it.
The following example shows the bootstrapping of all individual parts as well as
the setup of the proxy server:
Sample Application: Proxy server
const programMappings = {
'user/write-side': 50001,
'user/read-side': 50002,
'project/write-side': 50003,
'project/read-side': 50004,
'task-board/write-side': 50005,
'task-board/read-side': 50006,
};
http.createServer(httpProxy).listen(50000);
The configuration code first defines the object programMappings, which contains
key-value pairs for program directories and HTTP ports. For each entry, a
Node.js process is spawned via the function child_process.spawn() with the
respective program directory as argument. Also, the environment variables for a
root storage directory and an HTTP port are passed in. Next, the code creates an
HTTP proxy via the function createHttpProxy(). The URL resolver callback
first determines the target program directory using the request URL path and
method. Then, the target HTTP port is retrieved and the final URL is returned.
Note that the query string is also forwarded to retain any message data for GET
requests. Finally, an HTTP server is instantiated and started with the proxy as
listener function.
The final code shows an exemplary usage of the Sample Application (run code):
Sample Application: Overall usage
messageTypeFactory.setIdGenerator(generateId);
messageTypeFactory.setDefaultMetadataProvider(() => ({creationTime: new Date()}));
await import('../index.js');
await timeout(1000);
The separation of the Sample Application into six individual programs increases
the overall resilience and performance. Furthermore, it ensures that the
individual Domain Model implementations enclosed in their own Bounded
Context do not affect each other’s runtime. Even more, the write side and the
read side of every context are cleanly separated from each other. The
introduction of HTTP interfaces together with a unifying proxy server enables
remote access to the Sample Application. The next and final step is to implement
the User Interface Layer.
Chapter 14: User Interface
The User Interface enables humans and machines to interact with a software. For
this purpose, it must display information, accept and process inputs, request state
changes or behavior execution, and report results. In many cases, it also
incorporates input validation. Typically, a User Interface renders visual elements
that can be interacted with through input devices. For web-based software, this is
commonly achieved with HTML, CSS and client-side JavaScript. In most cases,
an according implementation consists of individual purpose-specific components
that are composed with each other. The User Interface Layer does not
necessarily only incorporate client-side components, but can also include server
functionalities. Generally speaking, it contains everything that is necessary for
clients to communicate with the Application Layer of a software.
The first code shows a factory function for creating a simple HTTP filesystem
interface:
HTTP: HTTP Filesystem interface factory
const createHttpFilesystemInterface = pathResolver =>
async (request, response) => {
const filePath = pathResolver(request);
try {
await stat(filePath);
const fileExtension = path.extname(filePath);
const contentType = contentTypeByExtension[fileExtension] || 'text/plain';
response.writeHead(200, {'Content-Type': contentType});
createReadStream(filePath).pipe(response);
} catch (error) {
response.writeHead(404);
response.end();
}
};
const contentTypeByExtension = {
'.js': 'application/javascript',
'.html': 'text/html',
'.css': 'text/css',
};
The web page is served through an HTTP server with an according filesystem
interface (run code):
HTTP: HTTP Filesystem interface factory usage
const httpFilesystemInterface = createHttpFilesystemInterface(request => {
const {pathname} = url.parse(request.url);
const filePath = pathname === '/' ? '/index.html' : pathname;
return `${rootDirectory}/http-filesystem-interface-factory${filePath}`;
});
http.createServer(httpFilesystemInterface).listen(50000);
console.log('<iframe src="http://localhost:50000"></iframe>');
Task-based UI
Task-based UI is a User Interface design approach that puts emphasis on a
Domain and its use cases. Conceptually, it stands in contrast to the approach of
CRUD-based UI, which focuses on records and data modification. While the
pattern is mandatory for software that is based on DDD, it is often applied
implicitly. Through the use of a Ubiquitous Language, the User Interface
adequately expresses its underlying abstractions. Every meaningful interaction is
linked to the execution of a specific use case. When applying CQRS, this means
to create a Command or a Query and to send it to the Application Layer. The
messages capture both intent and data as specific structures.
Furthermore, Query results and Domain Events also encapsulate information in a
domain-specific format.
Example: Blog
Consider implementing a blog software that can be used for personal websites.
The respective Domain Model defines use cases for writing, publishing, un-
publishing and editing individual posts. Writing a new post requires providing
an identity, a title and content. Editing an existing item similarly incorporates an
identifier as well as a new title and new content. For publishing a post, the only
required argument is the corresponding identity. The action indicates that the
content is made available to a public audience. Un-publishing represents the
opposite action, which is required for different reasons, such as legal challenges
or writing mistakes. This example focuses on the functionalities for writing,
publishing and un-publishing posts. For simplicity, it excludes the use
case of editing existing items.
#eventStore;
constructor({eventStore}) {
this.#eventStore = eventStore;
}
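Based on this excerpt and the description further below, a single Command Handler operation might look like the following self-contained sketch; the event class shape and the stream naming are assumptions, not the book's exact definitions:

```javascript
// Minimal event type sketch; the actual event definitions may differ
class BlogPostWrittenEvent {
  static type = 'BlogPostWritten';
  constructor(data) {
    this.type = BlogPostWrittenEvent.type;
    this.data = data;
  }
}

class BlogPostCommandHandlers {
  #eventStore;
  constructor({eventStore}) {
    this.#eventStore = eventStore;
  }

  // Transforms the Command data into an event and appends it to a stream
  async handleWriteBlogPostCommand({data: {blogPostId, title, content}}) {
    const event = new BlogPostWrittenEvent({blogPostId, title, content});
    await this.#eventStore.save(`blog-post/${blogPostId}`, [event]);
  }
}
```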
The following example provides a Query Handlers class with lazy Read Model
computation:
Blog software: Query Handlers
class BlogPostQueryHandlers {
#eventStore;
constructor({eventStore}) {
this.#eventStore = eventStore;
}
async handleFindBlogPostsQuery() {
const {events} = await this.#eventStore.load('$global');
const blogPostsById = {};
events.forEach(event => {
if (event.type === BlogPostWrittenEvent.type) {
const {blogPostId: id, title, content} = event.data;
blogPostsById[id] = {id, title, content, isPublished: false};
}
if (event.type === BlogPostPublishedEvent.type)
blogPostsById[event.data.blogPostId].isPublished = true;
if (event.type === BlogPostUnpublishedEvent.type)
blogPostsById[event.data.blogPostId].isPublished = false;
});
return blogPostsById;
}
}
All relevant state transitions for a blog post are represented by the event types
“BlogPostWritten”, “BlogPostPublished” and “BlogPostUnpublished”. For
write-related use cases, the class BlogPostCommandHandlers implements the
according Command Handlers. As constructor argument, it expects an Event
Store. Each handler operation transforms given arguments into an event and
appends it to a stream. The class BlogPostQueryHandlers implements the read-
related behavior. For the example, the component simply computes the full Read
Model upon each request. Its constructor expects an Event Store instance as
argument. The service operation handleFindBlogPostsQuery() loads all events
from the global stream, creates a blog posts collection and returns it. Effectively,
this approach is an on-demand Read Model computation, as explained in
Chapter 11. One potential improvement is to introduce a caching behavior.
The next code shows the HTML document for the User Interface:
Blog software: HTML file
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Blog</title>
<link rel="stylesheet" href="/blog/ui/index.css">
</head>
<body>
<h2>Write blog post</h2>
<form method="POST" class="write-post">
<input type="text" name="title" required placeholder="Title" />
<textarea name="content" cols="30" required placeholder="Content"></textarea>
<input type="submit" value="Write blog post">
</form>
<h2>Written blog posts</h2>
<section class="blog-posts"></section>
<script type="module" src="/blog/ui/index.js"></script>
</body>
</html>
The HTML file for the blog software consists of two parts. One is the <form>
element that is used for writing a new blog post. For demonstration purposes,
both the <input> element and the <textarea> element are equipped with a
validation attribute. The other part of the HTML document is a container
element for rendering all existing blog posts. Also, the markup references a basic
stylesheet and the client-side JavaScript as additional assets. The request helper
functions getJSON() and post() are browser-based counterparts to the Node.js
variants introduced in Chapter 12. Instead of using the http module, the
operations utilize the browser Fetch API. In addition to performing a GET
request, the operation getJSON() also converts a response automatically to
JSON.
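The browser-based helpers might be sketched as follows using the Fetch API; the exact encoding and error handling of the book's versions are assumptions:

```javascript
// Sketch of browser-side request helpers; requires an environment with a
// global fetch() (browsers, Node.js 18+)

// Issues a POST request with a JSON body and resolves with the response text
const post = (url, data) => fetch(url, {
  method: 'POST',
  headers: {'Content-Type': 'application/json'},
  body: JSON.stringify(data),
}).then(response => response.text());

// Issues a GET request with the message encoded as query parameters
// and converts the response to JSON
const getJSON = (url, data) => {
  const query = new URLSearchParams(Object.entries(data)
    .map(([key, value]) => [key, JSON.stringify(value)])).toString();
  return fetch(`${url}?${query}`).then(response => response.json());
};
```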
The next example implements the client-side logic for the UI part:
Blog software: Client-side JavaScript
getJSON('/query', {type: 'FindBlogPosts'}).then(postsById => {
document.querySelector('.blog-posts').innerHTML = Object.values(postsById).map(
({id, title, content, isPublished}) => `<article>
<strong>${title}</strong> (${isPublished ? 'published' : 'unpublished'})<br>
<p>${content}</p>
<button
data-blog-post-id="${id}"
data-command-type="${isPublished ? 'Unpublish' : 'Publish'}"
>${isPublished ? 'Unpublish' : 'Publish'}</button>
</article>`).join('');
});
The code starts with sending a “FindBlogPosts” Query and rendering the
received posts. Every rendered <article> element consists of title, publishing
status, content and a button for toggling the publishing status. Each button
receives data-* attributes for the respective post identifier and Command type.
Next, a submit event listener is registered on the form to write a post. The
handler operation retrieves all submitted data and issues a “WriteBlogPost”
Command. Also, it prevents form submission by invoking the function
event.preventDefault(). After the Command execution, the page is reloaded
via the operation window.location.reload(). Finally, the code registers a click
event listener on the post list. For every click on a toggle button, the handler
issues either a “PublishBlogPost” or “UnpublishBlogPost” Command and
reloads the page.
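The listener registrations described above might be sketched as follows. The selectors, the post() and generateId() helpers, and the Command type derivation are assumptions based on the surrounding text; the DOM wiring is guarded so the pure helper can also run outside a browser:

```javascript
// Derives the toggle Command from a button's data-* attributes
const toToggleCommand = ({blogPostId, commandType}) =>
  ({type: `${commandType}BlogPost`, data: {blogPostId}});

// Browser-only wiring; post() and generateId() are assumed helpers
if (typeof document !== 'undefined') {
  document.querySelector('.write-post').addEventListener('submit', async event => {
    event.preventDefault(); // prevent the native form submission
    const formData = new FormData(event.target);
    await post('/command', {type: 'WriteBlogPost', data: {
      blogPostId: generateId(),
      title: formData.get('title'), content: formData.get('content')}});
    window.location.reload();
  });
  document.querySelector('.blog-posts').addEventListener('click', async event => {
    if (!event.target.matches('button')) return;
    await post('/command', toToggleCommand(event.target.dataset));
    window.location.reload();
  });
}
```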
Differences to a CRUD-based UI
The User Interface implementation for the example provides visual structures that guide a user
towards domain-specific use cases. In contrast, a CRUD-based counterpart could provide a single
form element, both for writing new posts and editing existing ones. Changing the publishing
status of a post would be represented as generic record modification, without adequately
capturing the domain-specific meaning.
The next example provides the configuration for the blog software (run code):
Blog software: Configuration and usage
const eventStore = new FilesystemEventStoreWithGlobalStream({storageDirectory});
const commandHandlers = new BlogPostCommandHandlers({eventStore});
const queryHandlers = new BlogPostQueryHandlers({eventStore});
console.log('<iframe src="http://localhost:50000"></iframe>');
The code starts with instantiating the components
FilesystemEventStoreWithGlobalStream, BlogPostCommandHandlers and
BlogPostQueryHandlers. For the Command Handlers and the Query Handlers,
the Event Store is used as constructor argument. Afterwards, the HTTP
interfaces for Command Handlers and Query Handlers are created using the
operation createHttpInterface(). Then, an HTTP filesystem interface is
constructed via the function createHttpFilesystemInterface(). Next, the
class http.Server is instantiated with a custom listener function. Requests with
the path “/command” or “/query” are forwarded to the according Application
Layer interface. All other requests are processed by the filesystem interface.
Lastly, the server is started and a message is output to render an iframe. Overall,
the code exemplifies how to build a User Interface that puts emphasis on a
Domain and its use cases.
Optimistic UI
Optimistic UI is a User Interface pattern to improve perceived performance by
prematurely indicating the success of state-changing actions. On top of that, the
concept is also useful for simulating updates to eventually consistent read data.
There are two main scenarios for applying this pattern. The first one is when
requesting the execution of a behavior whose successful completion should
be reported. The second scenario is when a use case execution implies an update
to related data that is being displayed. In both cases, a success is indicated
regardless of the actual progress. If there is rendered data to update, the
originally submitted information can be re-used. However, the User Interface
Layer must take care not to replicate domain-specific data transformations for this
purpose.
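The optimistic flow described above can be illustrated with a small sketch; the sendCommand() and render() callbacks as well as the identifier generation are hypothetical:

```javascript
// Optimistic submission sketch: the post is rendered immediately from the
// originally submitted data and only marked as failed if the Command is
// rejected afterwards
const submitOptimistically = async (content, {sendCommand, render}) => {
  const postId = `post-${Date.now()}`; // hypothetical identifier generation
  render({id: postId, content, status: 'sent'});
  try {
    await sendCommand({type: 'WritePost', data: {postId, content}});
  } catch {
    render({id: postId, content, status: 'failed'});
  }
};
```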
#successThreshold;
constructor({chanceOfSuccess}) {
this.#successThreshold = chanceOfSuccess / 100;
}
async handleWritePostCommand(command) {
return new Promise((resolve, reject) => {
setTimeout(() => {
if (Math.random() <= this.#successThreshold) resolve();
else reject(new Error('server error'));
}, Math.random() * 2000);
});
}
}
The second example provides the HTML document for the User interface:
Social network: HTML document
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Business social network</title>
<link rel="stylesheet" href="/social-network/ui/index.css">
</head>
<body>
<h2>Write Post</h2>
<form method="POST" class="write-post">
<textarea name="content" cols="30" rows="10"></textarea>
<input type="submit" value="Write post">
</form>
<h2>Written Posts</h2>
<section class="written-posts"></section>
<script type="module" src="/social-network/ui/index.js"></script>
</body>
</html>
The final code provides the configuration for the write post functionality (run
code):
Social network: Configuration and usage
const commandHandlers = new PostCommandHandlers({chanceOfSuccess: 25});
console.log('<iframe src="http://localhost:50000"></iframe>');
Streams
Reactive Read Models require a mechanism to notify consumers of future
updates. There are different patterns for this purpose, such as callbacks,
asynchronous generators and streams. The Node.js module stream provides
components to work with streaming data. Compared to plain callbacks and
generator functions, it offers advanced functionalities, such as data throttling and
chaining. Also, its use enables easier integration with other native Node.js parts.
The module stream exposes four types of streams. The type Readable represents
a consumable source of data. Destinations to send information to are
implemented with the class Writable. The type Duplex is both readable and
writable, which is useful for bidirectional communication. Finally, consuming
information from one source, transforming it and forwarding the result is
achieved with the class Transform.
constructor() {
super({objectMode: true});
}
_transform(input, _, callback) {
callback(null, `${JSON.stringify(input)}\n`);
}
First, the code defines the class ObjectToStringStream. The component extends
the type Transform and invokes its base constructor with the objectMode option
set to true. This makes it possible to process objects instead of only strings and
buffers. Every transform stream implements its behavior in the operation
_transform(), which in this case converts objects to JSON strings. The
component definition is followed by instantiating the class Readable with an
activated objectMode option and an empty read() function. Afterwards, the
recursive operation pushNextEvent() is defined and executed, which
continuously pushes messages into the readable stream. Finally, the stream is
connected to an ObjectToStringStream instance, which itself is linked to the
standard output. This is done via the function pipe(). Executing the code
outputs messages repeatedly.
Data buffering
Every Node.js stream is equipped with an internal in-memory buffer mechanism. This allows
readable streams to read underlying data independent of consumption and writable streams to
accept input independent of processing. While the buffers have advisory size thresholds, there is
no hard limit enforced. For the sake of simplicity, the custom stream implementations in this
chapter ignore buffer thresholds.
Server-sent Events
Server-sent Events (SSE) is a unidirectional communication protocol on top of
HTTP for receiving messages over a persistent connection. One approach to
mimic this behavior is Long Polling, where individual subsequent connections
are established for each message to receive. The advantage of SSE is that it
maintains a single long-lasting connection with an automatic reconnect
mechanism. The standard defines the client-side interface EventSource, which
expects a URL and allows registering named event listeners. Server-sent HTTP
responses must be equipped with the “Content-Type” header value “text/event-
stream”. Every message to send consists of an optional identifier, an optional
type with the default value “message” and custom data. Each field is represented
as an individual line with key and value, whereas messages are separated by an
empty line.
Server-sent Events example
data: {userId: 'user-1', email: 'foo@example.com'}

type: UserLoggedIn
data: {userId: 'user-1', device: 'laptop'}

id: 1234
type: UserLoggedOut
data: {userId: 'user-1'}
Idempotent processing
When consuming Server-sent Events, the processing mechanism should ideally be idempotent.
This is because whenever an HTTP connection is interrupted and re-established, previously
processed messages may appear again. Alternatively, event duplication can be mitigated through
the HTTP header “Last-Event-ID”, which is automatically sent from the EventSource component.
The following code implements a transform stream for the Server-sent Events
format:
HTTP: SSE Transform Stream
class ServerSentEventStream extends Transform {
constructor() {
super({objectMode: true});
}
_transform(data, _, callback) {
callback(null, `data: ${JSON.stringify(data)}\n\n`);
}
}
The next example shows an extended version of the HTTP interface factory with
SSE support:
HTTP: HTTP interface factory with SSE support
const createHttpInterfaceWithSseSupport = (targetFunction, methods = []) =>
async (request, response) => {
try {
if (!methods.includes(request.method)) throw new Error('invalid method');
const message = request.method === 'POST' ?
JSON.parse(await parseHttpStream(request)) : parseQueryString(request.url);
const result = await targetFunction(message);
const resultIsStream = result && result.readable && result._read;
response.writeHead(200, {'Cache-Control': 'no-cache',
'Content-Type': resultIsStream ? 'text/event-stream' : 'application/json'});
if (resultIsStream) {
response.connection.setTimeout(0);
result.pipe(new ServerSentEventStream()).pipe(response);
} else response.end(result != null ? JSON.stringify(result) : result);
} catch (error) {
response.writeHead(500);
response.end(error.toString());
}
};
The third example illustrates the usage of the previously introduced components
(run code):
HTTP: Usage of HTTP interface factory with SSE support
const handleStreamRandomNumberQuery = async () => {
const stream = new Readable({objectMode: true, read() {}});
setInterval(() =>
stream.push({id: generateId(), type: 'RandomNumber', data: Math.random()}),
1000);
return stream;
};
const httpInterface =
createHttpInterfaceWithSseSupport(handleStreamRandomNumberQuery, ['GET']);
console.log('<iframe src="http://localhost:50000"></iframe>');
The code starts with defining the Query Handler function
handleStreamRandomNumberQuery(). This operation returns a stream that
continuously transmits “RandomNumber” events. As a next step, an HTTP
interface with SSE support is created via the reworked factory function. Then, an
HTTP server is instantiated. The passed in listener operation forwards requests
for URLs starting with “/stream” to the Query Handler interface. For all other
requests, an HTML document is returned as response. The therein contained
JavaScript code first creates an EventSource instance with the relative URL
“/stream”. Then, it registers an event listener for the default type “message”. The
according callback operation appends each received message as plain text to a
<pre> element. Executing the code in the Playground shows a page that renders
SSE messages.
Architectural approaches
Independent of infrastructural patterns and technologies, there are different
architectural approaches for operating Reactive Read Models. The following
subsections explain and illustrate three possibilities, each with their own
advantages and disadvantages. All of them share the fact that the Application
Layer responds to a selected Query type with a stream result. The difference lies
in what information the respective streams transmit. While there are more
approaches, the selected ones represent the most common and distinct choices.
The subsections are preceded by the introduction of an example topic, which is
used to illustrate each approach individually. As a side effect, the example
implementations provide real-time collaboration capabilities. For this aspect, the
goal is to demonstrate that Reactive Read Models enable this functionality
automatically to some extent.
Example: Todo list
Todo list: Command Handlers
class TodoListCommandHandlers {
#eventStore;
constructor({eventStore}) {
this.#eventStore = eventStore;
}
/* .. */
}
The event definitions express the relevant state changes for the todo list
software. In addition to a list identity, each of the types also incorporates a todo
identifier. On top of that, the “TodoWritten” event type includes the submitted
content. Note that there is no dedicated message for the creation of a list. This is
because an instance counts as existing as soon as its first event occurs. The class
TodoListCommandHandlers implements all the required Command Handler
operations. Its constructor expects an Event Store instance and invokes the
function createMessageForwarder() for exposing a unified message handler.
Similar to the blog example, each use case operation exclusively creates an event
and appends it to the affected stream.
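The event-appending pattern described here can be sketched with a minimal in-memory stand-in for the Event Store; the class, stream and handler names below are assumptions for illustration and differ from the book's actual components:

```javascript
// Minimal in-memory stand-in for an Event Store (illustrative only; the book
// uses FilesystemEventStoreWithGlobalStream, which is not shown here).
class InMemoryEventStore {
  #streams = new Map();
  append(streamId, event) {
    if (!this.#streams.has(streamId)) this.#streams.set(streamId, []);
    this.#streams.get(streamId).push(event);
  }
  read(streamId) { return this.#streams.get(streamId) || []; }
}

// Exemplary Command Handler: exclusively creates a "TodoWritten" event and
// appends it to the stream of the affected list (stream naming is assumed).
const handleWriteTodoCommand = (eventStore, {data}) => {
  const {todoListId, todoId, content} = data;
  eventStore.append(`todo-list/${todoListId}`,
    {type: 'TodoWritten', data: {todoListId, todoId, content}});
};

const eventStore = new InMemoryEventStore();
handleWriteTodoCommand(eventStore,
  {type: 'WriteTodo', data: {todoListId: 'l1', todoId: 't1', content: 'buy milk'}});
console.log(eventStore.read('todo-list/l1')); // one "TodoWritten" event
```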
The next code provides a basic HTML document:
Todo list: HTML document
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Todo List</title>
</head>
<body>
<h1>Todo List</h1>
<h2>Add new todo</h2>
<form method="POST" class="write-todo">
<input type="text" name="content">
<input type="submit" value="Write todo">
</form>
<h2>Todos</h2>
<section class="todos"></section>
<script type="module" src="/todo-list/shared/ui/index.js"></script>
</body>
</html>
import(`../../${approach}/ui/index.js`);
The HTML document contains a headline, an input form, a todo list container
element and a <script> element. The client-side code starts with retrieving the
values of the URL parameters “todo-list-id” and “approach”. This is done with
the native browser interface URLSearchParams. Next, a form submit event
listener is registered. Its callback operation issues a “WriteTodo” Command with
the respectively entered content. Afterwards, a click event listener is registered
on the list container. Its handler operation first verifies that the event occurred on
a checkbox. Then, it retrieves the todo identity from an ancestor node. Next,
it either issues a “CompleteTodo” or an “UncompleteTodo” Command,
depending on the checkbox status. Finally, the code dynamically imports the
module for the specific implementation approach.
The last code shows a helper operation to create a server for the todo list
software:
Todo list: Server factory
const createServer = ({commandHandlers, queryHandlers}) => {
const commandHttpInterface = createHttpInterface(
message => commandHandlers.handleCommand(message), ['POST']);
const queryHttpInterface = createHttpInterfaceWithSseSupport(
message => queryHandlers.handleQuery(message), ['GET']);
/* .. */
};
One option for a Reactive Read Model is to consume an event stream directly in
the User Interface client part. For that purpose, a Query Handler forwards all
information from an originating event source. With this approach, the client can
independently operate a projection and freely maintain a custom data structure.
One disadvantage is that domain-specific events are exposed to the outside.
Although User Interface parts architecturally belong to the context of their
associated use cases, this circumstance may be undesirable. Also, there can be a
network traffic overhead when the original event source contains extraneous
data. Furthermore, the approach only works for streams that exclusively enclose
information a respective user is allowed to access. Often, this is not the case for
global event streams.
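As an illustration of what the client-side part must do with forwarded events, the following sketch folds the three todo list event types into a lookup object. This is a simplified stand-in, not the Sample Application code:

```javascript
// Sketch of a client-side projection for the event forwarding approach:
// forwarded events are folded into a plain object keyed by todo identifier.
const applyEvent = (todos, {type, data}) => {
  const next = {...todos};
  if (type === 'TodoWritten')
    next[data.todoId] = {id: data.todoId, content: data.content};
  if (type === 'TodoCompleted')
    next[data.todoId] = {...next[data.todoId], isCompleted: true};
  if (type === 'TodoUncompleted')
    next[data.todoId] = {...next[data.todoId], isCompleted: false};
  return next;
};

const events = [
  {type: 'TodoWritten', data: {todoId: 't1', content: 'buy milk'}},
  {type: 'TodoCompleted', data: {todoId: 't1'}},
];
const todos = events.reduce(applyEvent, {});
console.log(todos); // {t1: {id: 't1', content: 'buy milk', isCompleted: true}}
```

Since the client controls the projection, it can freely choose a data structure that suits its rendering needs.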
The first code provides a Query Handlers component to expose a todo list event
stream:
Event forwarding: Query Handlers
class TodoListQueryHandlers {
#eventStore;
constructor({eventStore}) {
this.#eventStore = eventStore;
}
/* .. */
}
The next example shows the client-side behavior for consuming an event stream:
Event forwarding: Client-side behavior
const parameters = new URLSearchParams(location.search);
const todoListId = parameters.get('todo-list-id');
The third example provides the setup code of the previously introduced
components (run code):
Event forwarding: Server configuration
const eventStore = new FilesystemEventStoreWithGlobalStream({storageDirectory});
createServer({commandHandlers, queryHandlers}).listen(50000);
The setup code starts with creating an Event Store instance. This is followed by
instantiating both the shared Command Handlers component and the custom
Query Handlers class. Next, a server is created using the operation
createServer() with the Application Services as arguments. Then, a URL is
defined that includes both the approach name and an exemplary todo list identity.
Finally, a message with an <iframe> element is logged to the console. Executing
the code in the Playground provides a fully functional todo list software that can
be used collaboratively. Independent of which user issues a Command execution,
all clients receive a message and update their User Interface. Inspecting the
network traffic shows that every received message contains the full event from
the Event Store.
Data events
The first code shows the Read Model projection:
Data events: Read Model projection
class TodoListReadModelProjection {
#eventStore; #todoListReadModelStorage; #todoListReadModelMessageBus;
constructor({eventStore, todoListReadModelStorage, todoListReadModelMessageBus}) {
this.#eventStore = eventStore;
this.#todoListReadModelStorage = todoListReadModelStorage;
this.#todoListReadModelMessageBus = todoListReadModelMessageBus;
}
activate() {
this.#eventStore.subscribe('$global', ({type, data}) => {
if (!this.#todoListReadModelStorage.has(data.todoListId))
this.#todoListReadModelStorage.set(data.todoListId, {});
const todoList = this.#todoListReadModelStorage.get(data.todoListId);
if (type === 'TodoWritten')
todoList[data.todoId] = {id: data.todoId, content: data.content};
if (type === 'TodoCompleted') todoList[data.todoId].isCompleted = true;
if (type === 'TodoUncompleted') todoList[data.todoId].isCompleted = false;
this.#todoListReadModelMessageBus.publish(`TodoList/${data.todoListId}`,
this.#todoListReadModelStorage.get(data.todoListId));
});
}
}
The last code provides the server-side setup for the second approach (run code):
Data events: Server setup
const eventStore = new FilesystemEventStoreWithGlobalStream({storageDirectory});
createServer({commandHandlers, queryHandlers}).listen(50000);
The client-side code again starts with retrieving the URL query parameter for the
todo list identity. Next, it constructs and serializes a “StreamTodoListState”
message. Then, an EventSource instance is created using a URL path together
with the serialized Query. Afterwards, a message event listener is added that de-
serializes the transmitted event and invokes the function renderTodos(). The
server-side setup again instantiates an Event Store, the shared Command
Handlers component and the specialized Query Handlers class. On top of that, it
also creates instances for a Read Model storage, a Message Bus and the Read
Model projection. Compared to the first approach, the client-side code is
significantly less complex. At the same time, the network traffic is increased due
to always sending the full Read Model.
Change events
The third approach for Read Model reactivity uses an initial event with the full
data structure and subsequent change messages. As with the second approach,
the Application Layer is responsible for maintaining the Read Model. Also, the
according Query Handler responds with a dedicated readable stream that initially
transmits the current data. However, instead of pushing the full Read Model
continuously, each change triggers a message that exclusively describes the
updated parts. This results in an optimal network traffic usage. Also, no internal
domain-specific events are exposed to the outside. Compared to the previous
approach, the complexity in the User Interface part is slightly increased. This is
because it must contain the logic for translating change descriptions into update
operations of the Read Model data.
The first code shows the adapted Read Model projection:
Change events: Read Model projection
class TodoListReadModelProjection {
#eventStore; #todoListReadModelStorage; #todoListReadModelMessageBus;
constructor({eventStore, todoListReadModelStorage, todoListReadModelMessageBus}) { /* .. */ }
activate() {
this.#eventStore.subscribe('$global', ({type, data}) => {
if (!this.#todoListReadModelStorage.has(data.todoListId))
this.#todoListReadModelStorage.set(data.todoListId, {});
const todoList = this.#todoListReadModelStorage.get(data.todoListId);
if (type === 'TodoWritten')
todoList[data.todoId] = {id: data.todoId, content: data.content};
if (type === 'TodoCompleted') todoList[data.todoId].isCompleted = true;
if (type === 'TodoUncompleted') todoList[data.todoId].isCompleted = false;
this.#todoListReadModelMessageBus.publish(
`TodoList/${data.todoListId}`, {[data.todoId]: todoList[data.todoId]});
});
}
}
The next example shows the client-side behavior for processing the data and the
change events (run code usage):
Change events: Client-side behavior
const parameters = new URLSearchParams(location.search);
const todoListId = parameters.get('todo-list-id');
As with the other approaches, the client-side behavior first retrieves the todo list
identifier from the URL. Next, the variable todos is defined for the Read Model
data. Then, a Query is created and serialized, which is used for instantiating the
EventSource class. Afterwards, a message event listener is registered. The
callback operation first de-serializes the actual message. Then, it assigns all
contained properties to the Read Model using the operation Object.assign().
This works for both the initial full data set and subsequent change descriptions,
as their structure is identical. Finally, the function renderTodos() is executed
with the current Read Model. The server-side setup code is identical to the
previous implementation. Overall, this approach represents a combination of
moderate client-side complexity and optimized network traffic.
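The described merge behavior can be sketched in isolation. Both the initial full data set and each change description are objects keyed by todo identifier, which is why a single Object.assign() call covers both cases (simplified stand-in code):

```javascript
// Sketch of the client-side merge for the change events approach: the initial
// full Read Model and each change description share the {todoId: state} shape,
// so Object.assign() handles both uniformly.
const todos = {};

const applyMessage = message => Object.assign(todos, message);

// initial full data set
applyMessage({t1: {id: 't1', content: 'buy milk'}});
// subsequent change description for a single todo
applyMessage({t1: {id: 't1', content: 'buy milk', isCompleted: true}});

console.log(todos.t1.isCompleted); // true
```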
Custom Elements
Custom Elements is a web standard that enables building specialized HTML
elements on top of existing native ones. Essentially, it exposes the internal
browser component model. One key advantage of the standard is that custom
parts integrate seamlessly with native ones due to the same technology. Creating
a Custom Element requires implementing a class that extends an existing HTML
element type. In most cases, this should be the generic constructor HTMLElement.
More specific ones, such as HTMLOptionElement, are necessary when building
drop-in replacements for native elements. Every custom implementation must be
registered in the Custom Elements registry via the function
customElements.define(). The operation expects a tag name, a constructor and
optional options. The options are required when extending a specific HTML
element.
Every Custom Element has a lifecycle and associated events, for which callback
functions can be defined. Whenever an element is appended to a node that is
connected to a document, the operation connectedCallback() is executed.
Rather than in the constructor, all initialization work, such as fetching resources or
rendering children, should be done inside this callback. Since the callback can
execute multiple times, one-time setup must be protected from repeated
execution. The operation disconnectedCallback() is invoked when an element
is removed from a document-connected node. Custom Elements can configure
change notifications for attributes by defining the static getter function get
observedAttributes(). Whenever one of the attributes changes, the function
attributeChangedCallback() is invoked. Finally, the operation
adoptedCallback() is executed when an element is moved to another
document.
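The guard for one-time setup mentioned above can be sketched as follows. Since HTMLElement only exists in the browser, a stub class stands in for it here so that the snippet is runnable standalone:

```javascript
// Stub for illustration outside the browser; real code extends the native class.
class HTMLElement {}

class ExampleElement extends HTMLElement {
  #initialized = false;
  connectedCallback() {
    // connectedCallback() runs every time the element is re-attached to a
    // connected node, so one-time setup is protected by a flag.
    if (this.#initialized) return;
    this.#initialized = true;
    this.setupCount = (this.setupCount || 0) + 1;
  }
}

const element = new ExampleElement();
element.connectedCallback();
element.connectedCallback(); // simulates detaching and re-attaching
console.log(element.setupCount); // 1
```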
The first code shows an exemplary Custom Element implementation:
Calculator: Custom Element
class CalculatorElement extends HTMLElement {
static get observedAttributes() { return ['a', 'operator', 'b']; }
attributeChangedCallback() {
const attributes = CalculatorElement.observedAttributes;
const [a, operator, b] = attributes.map(name => this.getAttribute(name));
this.innerHTML = new Function(`return ${a} ${operator} ${b}`)();
}
}
customElements.define('simple-calculator', CalculatorElement);
The next example shows an HTML document for an exemplary usage of the
calculator element (run code usage):
Calculator: HTML usage
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Simple calculator</title>
</head>
<body>
<simple-calculator a="3" operator="*" b="9"></simple-calculator>
<script src="./index.js"></script>
</body>
</html>
Consider implementing the User Interface for the home page of an online
newspaper. The goal is to display an overview of all latest articles. For the
example, the relevant part of the Domain Model consists of two components.
One is the article, which is a combination of title, author identity, summary and
content. The second Domain Model component is the author, which consists of a
name and personal information. For the home page, each article is rendered as a
preview item with title, author and summary text. The author part includes the
name and a tooltip with the personal information. This means that the data for a
single article preview originates from two separate Entities. For brevity,
the example uses plain objects as data source.
The first code shows the data for authors and articles:
Online newspaper: Data
const authors = [
{id: '1', name: 'Jack Johnson', info: 'Passionate cook'},
{id: '2', name: 'Jill Jackson', info: 'Loves to surf'},
];
const articles = [
{id: '3', title: 'Changing weather', authorId: '1',
summary: 'How the weather changed over the years', content: '...'},
{id: '4', title: 'Shiny new globalization', authorId: '1',
summary: 'The pros and cons of a globalized economy', content: '...'},
{id: '5', title: 'How sustainable are e-books', authorId: '2',
summary: 'Are e-books better for the environment?', content: '...'},
];
The next example provides a Query Handlers component for retrieving all
required information:
Online newspaper: Query Handlers
class NewspaperQueryHandlers {
async handleGetArticleQuery(message) {
return articles.find(article => article.id === message.data.articleId);
}
async handleGetAuthorQuery(message) {
return authors.find(author => author.id === message.data.authorId);
}
/* .. */
}
The source code for the data defines the two arrays authors and articles. For
the authors list, there are two individual entries defined. The articles array
contains three separate items, of which two reference the same author identity.
These data structures can be understood as Read Model information, as they are
exclusively used for read-related functionalities. The class
NewspaperQueryHandlers implements the required Query Handlers for the
homepage use cases. Its constructor creates a unified Query Handler interface
via the function createMessageForwarder(). Retrieving all latest article
identifiers is done by executing the operation
handleFindLatestArticlesQuery(). The function handleGetArticleQuery()
returns the article entry for a given identifier. Finally, author data can be
retrieved with the operation handleGetAuthorQuery().
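The operation createMessageForwarder() itself is not listed in this section. As a hypothetical minimal shape, it could derive a handler method name from the message type and forward the message accordingly; the actual implementation in the book may differ:

```javascript
// Hypothetical minimal shape of createMessageForwarder(): dispatches a message
// to a handler method derived from the message type and a suffix.
const createMessageForwarder = (target, {messageSuffix}) => message =>
  target[`handle${message.type}${messageSuffix}`](message);

// exemplary usage with a simplified Query Handlers component
class ExampleQueryHandlers {
  async handleGetAuthorQuery({data}) {
    return {id: data.authorId, name: 'Jill Jackson'};
  }
}

const handleQuery =
  createMessageForwarder(new ExampleQueryHandlers(), {messageSuffix: 'Query'});
handleQuery({type: 'GetAuthor', data: {authorId: '2'}})
  .then(author => console.log(author.name)); // "Jill Jackson"
```

This construction yields a unified message handler per component, so HTTP interfaces only need a single entry point.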
The following code shows the implementation for a tooltip Custom Element:
Online newspaper: Info Tooltip
class InfoToolTipElement extends HTMLElement {
connectedCallback() {
const element = this.querySelector('section');
element.setAttribute('hidden', '');
this.addEventListener('mouseover', () => element.removeAttribute('hidden'));
this.addEventListener('mouseout', () => element.setAttribute('hidden', ''));
}
}
customElements.define('info-tooltip', InfoToolTipElement);
The next example provides the Custom Element for the information of an author:
Online newspaper: Author info
const requestCache = {};
class AuthorInfoElement extends HTMLElement {
async connectedCallback() {
const authorId = this.getAttribute('author-id');
const request = requestCache[authorId] ||
getJSON('/query', {id: generateId(), type: 'GetAuthor', data: {authorId}});
requestCache[authorId] = request;
const {name, info} = await request;
this.innerHTML = `${name}
<info-tooltip><a href="#">(i)</a><section>${info}</section></info-tooltip>`;
}
}
customElements.define('author-info', AuthorInfoElement);
The following code shows the Custom Element for an article preview:
Online newspaper: Article preview
class ArticlePreviewElement extends HTMLElement {
async connectedCallback() {
const articleId = this.getAttribute('article-id');
const data = await getJSON('/query',
{id: generateId(), type: 'GetArticle', data: {articleId}});
this.innerHTML = `<h3>${data.title}</h3>
<author-info author-id="${data.authorId}"></author-info>
<p>${data.summary}</p>`;
}
}
customElements.define('article-preview', ArticlePreviewElement);
The next code shows the main file of the client-side JavaScript for the
newspaper homepage:
Online newspaper: Rendering latest articles
const latestArticlesElement = document.createElement('div');
document.querySelector('body').appendChild(latestArticlesElement);
getJSON('/query', {id: generateId(), type: 'FindLatestArticles'})
.then(articleIds => articleIds.forEach(articleId => {
const articleElement = document.createElement('article-preview');
articleElement.setAttribute('article-id', articleId);
latestArticlesElement.appendChild(articleElement);
}));
This is complemented with a basic HTML document that includes the JavaScript
code:
Online newspaper: HTML document
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Online newspaper</title>
<link rel="stylesheet" href="/newspaper/ui/index.css">
</head>
<body>
<script type="module" src="/newspaper/ui/index.js"></script>
</body>
</html>
The final code provides the configuration for the newspaper homepage (run
code):
Online newspaper: Configuration and usage
const queryHandlers = new NewspaperQueryHandlers();
console.log('<iframe src="http://localhost:50000"></iframe>');
The client-side JavaScript code starts with creating and appending a DOM
element as container for article previews. Next, it issues the Query
“FindLatestArticles” and renders an <article-preview> element for each
returned identifier. The server-side code starts with instantiating the class
NewspaperQueryHandlers. Next, an HTTP interface and a filesystem interface
are created with the according factory functions. Afterwards, an HTTP server is
instantiated. Its listener operation forwards requests with the URL path “/query”
to the Query Handlers and all others to the filesystem. The server is started and a
message is logged to render an <iframe>. Running the code renders a page with
all available articles together with their author information. Note that each author
is only requested once, independent of the associated article count.
This example illustrates how to use Custom Elements for creating a component
model that incorporates both presentation and Domain elements.
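The reason each author is requested only once is that the request cache stores the pending promise rather than the resolved value. The following standalone sketch demonstrates this with a counting request stub (all names are illustrative):

```javascript
// Counting stub standing in for a network request such as getJSON().
let requestCount = 0;
const fakeGetJSON = async id => {
  requestCount++;
  return {id, name: `Author ${id}`};
};

// Promise-based request cache: the pending promise itself is stored, so even
// concurrent lookups for the same identifier trigger only one request.
const requestCache = {};
const getAuthor = id => {
  const request = requestCache[id] || fakeGetJSON(id);
  requestCache[id] = request;
  return request;
};

Promise.all([getAuthor('1'), getAuthor('1'), getAuthor('2')]).then(() =>
  console.log(requestCount)); // 2
```

Caching the resolved value instead would still allow duplicate in-flight requests when two elements connect simultaneously.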
Shared code
For the implementation of the User Interface, the Sample Application source
code is extended with multiple infrastructural components. The operation
createHttpFilesystemInterface() is introduced for serving files over HTTP.
For exposing Query streams over the network, the existing factory
createHttpInterface() is extended with SSE support. This also requires the
introduction of the ServerSentEventStream class. The browser-based request
helper functions getJSON() and post() are used for sending Commands and
Queries from the client. Also, two stylesheets are added, of which one applies
generic styling and the other defines specific rules for task boards. Since the files
are purely related to visual aspects, they are considered optional. As a
consequence, the CSS code they contain is not shown or discussed separately in
this section.
User context
The UI implementation for user-related use cases consists of three parts. One is
the user creation, which focuses on the initial action that is required to use the
software. The second part is the user login, which deals with the authentication
of an existing user. The third component is the user profile, which converts a
user identity into detail information. For this functionality, the context
implementation is extended with a Query type to retrieve a user by its identity.
This part is required for the project and the task board context. Both team
member Aggregates and task board Aggregates reference user identifiers. With
the user profile mechanism, their corresponding UI parts can render user detail
information without violating conceptual boundaries.
The first example shows the HTML document for the user creation part:
User context: User creation HTML
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Task Board Application | User creation</title>
<link rel="stylesheet" href="/shared/ui/style.css">
</head>
<body>
<h1>Task Board Application</h1>
<user-creation></user-creation>
<a href="/user/ui/user-login.html">Login</a>
<script type="module" src="/user/ui/user-creation.js"></script>
</body>
</html>
The second code contains the implementation of the user creation Custom
Element:
User context: User creation element
class UserCreationElement extends HTMLElement {
connectedCallback() {
this.innerHTML = `<form>
<input type="text" required name="username" placeholder="Username" />
<input type="email" required name="emailAddress" placeholder="Email" />
<input type="password" required name="password" placeholder="Password" />
<button type="submit">Create user</button>
</form>`;
this.addEventListener('submit', event => this._handleSubmit(event));
}
async _handleSubmit(event) {
event.preventDefault();
const {elements: formElements} = event.target;
const [username, emailAddress, password] =
['username', 'emailAddress', 'password'].map(name => formElements[name].value);
const command = {id: generateId(), type: 'CreateUser', data:
{userId: generateId(), username, emailAddress, password, role: 'user'}};
const response = await post('/user', command);
if (response.status === 200) {
sessionStorage.setItem('userId', command.data.userId);
document.location.href = '/project/ui/project-overview.html';
} else alert(await response.text());
}
}
customElements.define('user-creation', UserCreationElement);
The following example shows the HTML document for the user login:
User context: User login HTML
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Task Board Application | Login</title>
<link rel="stylesheet" href="/shared/ui/style.css">
</head>
<body>
<h1>Task Board Application</h1>
<user-login></user-login>
<a href="/user/ui/user-creation.html">Create user</a>
<script type="module" src="/user/ui/user-login.js"></script>
</body>
</html>
The next code implements the Custom Element for the user login:
User context: User login element
class UserLoginElement extends HTMLElement {
connectedCallback() {
this.innerHTML = `<form>
<input type="email" required name="emailAddress" placeholder="Email" />
<input type="password" required name="password" placeholder="Password" />
<button type="submit">Login</button>
</form>`;
this.addEventListener('submit', event => this._handleSubmit(event));
}
async _handleSubmit(event) {
event.preventDefault();
const {elements: form} = event.target;
const emailAddress = form.emailAddress.value, password = form.password.value;
const [user] = await getJSON('/user',
{id: generateId(), type: 'FindUserByEmailAddress', data: {emailAddress}});
if (!user) {
alert('Login error');
return;
}
const commandResponse = await post('/user',
{id: generateId(), type: 'LoginUser', data: {userId: user.id, password}});
if (commandResponse.status === 200) {
sessionStorage.setItem('userId', user.id);
location.href = '/project/ui/project-overview.html';
} else alert('Login error');
}
}
customElements.define('user-login', UserLoginElement);
The class UserLoginElement is responsible for the user login functionality. Its
function connectedCallback() renders inputs for an e-mail address and a
password, and registers a form submit event listener. The handler operation
_handleSubmit() first retrieves the submitted data. Next, it issues a
“FindUserByEmailAddress” Query to determine the user identifier for a given e-
mail address. In case of an empty result, an error message is displayed and the
process is aborted. In case of success, a “LoginUser” Command with the
according identity is sent. If the command succeeds, the user identifier is again
stored in the Session Storage and the project overview is shown. If an error
occurs, a generic alert is provided to the user. The Custom Element is registered
with the tag name “user-login”.
The final example provides a Custom Element for the user profile mechanism:
User context: User profile element
const requestCache = {};
const UserProfileElement = Base => class extends Base {
async connectedCallback() {
const id = this.getAttribute('id');
const request = requestCache[id] ||
getJSON('/user', {id: generateId(), type: 'FindUser', data: {id}});
requestCache[id] = request;
const user = await request;
this.innerHTML = user.username;
this.setAttribute('title', user.emailAddress);
}
};
customElements.define('user-profile', UserProfileElement(HTMLElement));
customElements.define('user-profile-option',
UserProfileElement(HTMLOptionElement), {extends: 'option'});
The code first defines the empty object requestCache. Then, the function
UserProfileElement() is implemented. This operation expects a constructor
and returns a Custom Element class that extends the given type. The returned
class fetches and renders user details. Its operation connectedCallback() first
retrieves the “id” attribute value. Next, it either issues a “FindUser” Query or re-
uses a cached request. Finally, it renders the username with the e-mail address as
“title” attribute. The factory function UserProfileElement() is executed twice
to create two classes. Both are registered as Custom Elements. While the “user-
profile” entry extends the type HTMLElement, the “user-profile-option” variant
uses the class HTMLOptionElement and “option” as extends parameter. This
enables the user profile to be used both as an autonomous element and as a
select option.
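The factory UserProfileElement() is an instance of the general class factory (mixin) pattern. The following standalone sketch shows the same mechanism with plain classes standing in for the DOM element types:

```javascript
// General form of the class factory pattern: a function receives a base class
// and returns a subclass with shared behavior. Plain classes stand in for
// HTMLElement and HTMLOptionElement here.
const WithGreeting = Base => class extends Base {
  greet() { return 'hello'; }
};

class BaseA {}
class BaseB {}

console.log(new (WithGreeting(BaseA))() instanceof BaseA); // true
console.log(new (WithGreeting(BaseB))().greet()); // "hello"
```

This allows the same behavior to be attached to multiple unrelated base types, which JavaScript's single inheritance would otherwise prevent.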
Project context
The User Interface related to the project context consists of two parts that are
implemented as three components. One is the project overview, which is
responsible for creating new projects and rendering existing ones. This includes
both projects that are owned by a user and the ones a user collaborates in. The
second component is the team member creation, which provides the ability to
add new team members to a team. This component interacts with the user
context for translating an e-mail address into a user identity. Viewing the list of
existing members and removing individual ones is enabled through the team
member list component. For this functionality, the context implementation is
extended with a Query type to retrieve the members of a team.
The first example shows the HTML document for the project overview:
Project context: Project overview HTML
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Task Board Application | Project Overview</title>
<link rel="stylesheet" href="/shared/ui/style.css">
</head>
<body>
<h1>Project overview</h1>
<project-overview></project-overview>
<script type="module" src="/project/ui/project-overview.js"></script>
</body>
</html>
The next code implements the Custom Element for the project overview:
Project context: Project overview element
class ProjectOverviewElement extends HTMLElement {
async connectedCallback() {
this.innerHTML = `<form>
<input type="text" required name="name" placeholder="Project name">
<button type="submit">Create project</button>
</form>
<h3>Projects you own</h3><ul class="owned-projects"></ul>
<h3>Projects you collaborate in</h3><ul class="collaborating-projects"></ul>`;
this.addEventListener('submit', event => this._handleSubmit(event));
const data = {userId: sessionStorage.getItem('userId')};
getJSON('/project', {type: 'FindProjectsByOwner', data})
.then(projects => this._addProjects('owned-projects', projects));
getJSON('/project', {type: 'FindProjectsByCollaboratingUser', data})
.then(projects => this._addProjects('collaborating-projects', projects));
}
_addProjects(projectType, projects) {
const template = document.createElement('template');
template.innerHTML = projects.map(({name, teamId, taskBoardId}) => `<li>
${name}
(<a href="/project/ui/team-management.html?id=${teamId}">Team</a>,
<a href="/task-board/ui/task-board.html?id=${taskBoardId}&team-id=${teamId}">
Task Board
</a>)
</li>`).join('');
this.querySelector(`.${projectType}`).appendChild(template.content);
}
async _handleSubmit(event) {
event.preventDefault();
const name = event.target.elements.name.value;
const ownerId = sessionStorage.getItem('userId');
const id = generateId(), teamId = generateId(), taskBoardId = generateId();
const project = {projectId: id, name, ownerId, teamId, taskBoardId};
const projectCommand = {id: generateId(), type: 'CreateProject', data: project};
const response = await post('/project', projectCommand);
if (response.status === 200) this._addProjects('owned-projects', [project]);
else alert('Project creation failed.');
}
}
customElements.define('project-overview', ProjectOverviewElement);
The class ProjectOverviewElement provides the User Interface for creating new
projects and viewing existing ones. Its operation connectedCallback() first
renders a form and two list containers, one for owned projects and one for
collaborations. Next, a form submit event listener is registered. Then, the user
identity is retrieved from the Session Storage. Afterwards, the Queries
“FindProjectsByOwner” and “FindProjectsByCollaboratingUser” are issued. For
each response, the operation _addProjects() is invoked together with the
according project type. This function creates a <template> element, renders
every project as list item and appends the result to the according container. The
operation _handleSubmit() retrieves the submitted data, issues a “CreateProject”
Command and either updates the project list or shows an error. The Custom
Element is registered with the name “project-overview”.
The following example shows the HTML document for the team management:
Project context: Team management HTML
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Task Board Application | Team Management</title>
<link rel="stylesheet" href="/shared/ui/style.css">
</head>
<body>
<h1>Team Management</h1>
<aside>
<a href="./project-overview.html"><< Back to project overview</a>
</aside>
<team-member-creation></team-member-creation>
<team-member-list></team-member-list>
<script type="module" src="/project/ui/team-member-creation.js"></script>
<script type="module" src="/project/ui/team-member-list.js"></script>
</body>
</html>
The first team management part is the Custom Element for the team member
creation:
Project context: Team member creation element
class TeamMemberCreationElement extends HTMLElement {
connectedCallback() {
this._teamId = new URLSearchParams(location.search).get('id');
this.innerHTML = `<form>
<input type="text" required name="emailAddress" placeholder="E-Mail address">
<input type="text" required name="role" placeholder="Role">
<button type="submit">Add team member</button>
</form>`;
this.addEventListener('submit', event => this._handleSubmit(event));
}
async _handleSubmit(event) {
event.preventDefault();
const {elements} = event.target;
const emailAddress = elements.emailAddress.value, role = elements.role.value;
const [user] = await getJSON('/user',
{id: generateId(), type: 'FindUserByEmailAddress', data: {emailAddress}});
if (!user) {
alert('User not found');
return;
}
const {status} = await post('/project',
{id: generateId(), type: 'AddTeamMemberToTeam', data:
{teamMemberId: generateId(), teamId: this._teamId, userId: user.id, role}});
if (status === 200) window.location.reload();
else alert('Team member creation failed.');
}
}
customElements.define('team-member-creation', TeamMemberCreationElement);
The second team management part is the Custom Element for the team member
list:
Project context: Team member list element
class TeamMemberListElement extends HTMLElement {
  connectedCallback() {
    this._teamId = new URLSearchParams(location.search).get('id');
    this.innerHTML = '<h2>Team Members</h2><ul class="team-members"></ul>';
    this.addEventListener('click', event => this._handleRemoveButtonClick(event));
    getJSON('/project', {type: 'FindTeamMembers', data: {teamId: this._teamId}})
      .then(members => this._addTeamMembers(members));
  }
  _addTeamMembers(members) {
    const template = document.createElement('template');
    template.innerHTML = members.map(({id, userId}) => `<li>
      <user-profile id=${userId}></user-profile>
      <button class="small" id="${id}">X</button>
    </li>`).join('');
    this.querySelector('.team-members').appendChild(template.content);
  }
  async _handleRemoveButtonClick({target}) {
    if (!target.matches('button[id]')) return;
    const data = {teamMemberId: target.getAttribute('id'), teamId: this._teamId};
    const {status} = await post('/project',
      {id: generateId(), type: 'RemoveTeamMemberFromTeam', data});
    if (status === 200) target.closest('li').remove();
    else alert('Team member removal failed');
  }
}
customElements.define('team-member-list', TeamMemberListElement);
The class TeamMemberListElement enables viewing and deleting members of a
team. Its operation connectedCallback() also first retrieves the team identity
from the URL. Then, it renders a list container and registers a click event
listener. Afterwards, it issues a “FindTeamMembers” Query and executes the
function _addTeamMembers() with the response data. This operation creates a
<template> element, renders an entry for every received member and appends
the result to the list. Each entry consists of a <user-profile> element with the
according identifier and a remove button. The handler operation
_handleRemoveButtonClick() first verifies that the respective event occurred
on a remove button. Then, it sends a “RemoveTeamMemberFromTeam”
Command and either updates the list or reports the error. Finally, the Custom
Element is registered as “team-member-list”.
The first example shows an excerpt of the extended Read Model projection:
Task board context: Read Model projection
class TaskBoardReadModelProjection {
  #taskReadModelMessageBus; #taskReadModelStorage;
  async handleTaskCreatedEvent({data}) {
    const {taskId, title, description, status, assigneeId} = data;
    const updates = {id: taskId, title, description, status, assigneeId};
    await this.#taskReadModelStorage.update(taskId, updates);
    const task = await this.#taskReadModelStorage.load(taskId);
    if (task.taskBoardId) this.#publishChangeMessage(task.taskBoardId, [task]);
  }
  /* .. */
  #publishChangeMessage(taskBoardId, tasks) {
    this.#taskReadModelMessageBus.publish(`TaskBoard/${taskBoardId}`, tasks);
  }
}
The next code shows the implementation of the according Query Handler
component:
Task board context: Query Handlers
class TaskBoardQueryHandlers {
  #taskReadModelStorage; #taskReadModelMessageBus;
  constructor({taskReadModelStorage, taskReadModelMessageBus}) {
    this.#taskReadModelStorage = taskReadModelStorage;
    this.#taskReadModelMessageBus = taskReadModelMessageBus;
    this.handleQuery = createMessageForwarder(this, {messageSuffix: 'Query'});
  }
  /* .. */
}
The next example provides the HTML document for the task board:
Task board context: Task board HTML
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Task Board Application | Task Board</title>
<link rel="stylesheet" href="/shared/ui/style.css">
<link rel="stylesheet" href="/task-board/ui/style.css">
</head>
<body>
<h1>Task Board</h1>
<task-creation></task-creation>
<task-board></task-board>
<script type="module" src="/task-board/ui/task-creation.js"></script>
<script type="module" src="/task-board/ui/task-board.js"></script>
<script type="module" src="/task-board/ui/task-item.js"></script>
<script type="module" src="/user/ui/user-profile.js"></script>
</body>
</html>
This is followed by the code for the Custom Element to create a task:
Task board context: Task creation element
class TaskCreationElement extends HTMLElement {
  async connectedCallback() {
    const queryParameters = new URLSearchParams(location.search);
    this._taskBoardId = queryParameters.get('id');
    this.innerHTML = `<form>
      <input required type="text" name="title" placeholder="Title">
      <input type="text" name="description" placeholder="Description">
      <button type="submit">Create task</button>
    </form>`;
    this.addEventListener('submit', event => this._handleSubmit(event));
  }
  async _handleSubmit(event) {
    event.preventDefault();
    const [title, description] =
      ['title', 'description'].map(name => event.target.elements[name].value);
    const id = generateId(), taskId = generateId();
    const {status} = await post('/task-board', {id, type: 'AddNewTaskToTaskBoard',
      data: {taskId, title, description, taskBoardId: this._taskBoardId}});
    if (status !== 200) alert('Creation failed.');
  }
}
customElements.define('task-creation', TaskCreationElement);
The class TaskCreationElement enables adding a new task to a task board. Its
operation connectedCallback() retrieves the task board identifier from the
URL, renders a form and registers a submit event listener. The <form> element
exclusively contains inputs for a title and a task description. Note that the
Application Service for creating a task allows providing more information, such
as status and assignee identifier. Here, the interface design aims for an improved
user experience through fewer form inputs. After a task creation, the fields can be
updated separately. The handler function _handleSubmit() retrieves submitted
data and issues an according “AddNewTaskToTaskBoard” command. If the
request fails, an error message is displayed via the function alert(). The
Custom Element is registered with the name “task-creation”.
The next code provides the implementation for the task item component:
Task board context: Task item element
class TaskElement extends HTMLElement {
  async connectedCallback() {
    if (this.children.length > 0) return;
    const statusOptions = ['todo', 'in progress', 'done'].map(
      status => `<option name="${status}" value="${status}">${status}</option>`);
    this.innerHTML = `<form><strong class="task-id">Task ID: ${this.id}</strong>
      <button class="remove small" type="button">Remove</button>
      <textarea type="text" required name="title" placeholder="Title"></textarea>
      <textarea type="text" name="description" placeholder="Description"></textarea>
      <select name="status">${statusOptions}</select>
      <select name="assigneeId"></select></form>`;
    this.addEventListener('change', event => this._handleFormInteraction(event));
    this.querySelector('.remove').addEventListener('click', () => this._remove());
    this.addEventListener('click', ({target}) => {
      this.classList[target === this ? 'remove' : 'add']('large-view');
    });
  }
  setTeamMembers(teamMembers) {
    this.querySelector('[name=assigneeId]').innerHTML =
      `<option name="none" value="">unassigned</option>\
      ${teamMembers.map(({id, userId}) => `<option is="user-profile-option"
        id="${userId}" name="${id}" value="${id}"></option>`).join('')}`;
  }
  updateData(data) {
    const formElements = this.querySelector('form').elements;
    for (const [key, value] of Object.entries(data)) {
      const element = formElements[key];
      if (!element) continue;
      if (element.options) element.options[value || 'none'].selected = true;
      else element.value = value;
    }
    const isAssigneeDisabled = data.status !== 'todo';
    this.querySelector('[name=assigneeId] option').disabled = isAssigneeDisabled;
    this.querySelector('[name=status]').disabled = !data.assigneeId;
  }
  async _handleFormInteraction(event) {
    const formElement = event.target.closest('select, textarea');
    if (!formElement) return;
    const attribute = formElement.getAttribute('name'), value = formElement.value;
    const commandSuffixByAttribute = {title: 'Title', description: 'Description',
      status: 'Status', assigneeId: 'Assignee'};
    const type = `UpdateTask${commandSuffixByAttribute[attribute]}`;
    const {status} = await post('/task-board', {id: generateId(), type,
      data: {taskId: this.getAttribute('id'), [attribute]: value}});
    if (status !== 200) alert('Task update failed, please reload page.');
  }
  async _remove() {
    const data = {taskId: this.getAttribute('id'),
      taskBoardId: this.getAttribute('task-board-id')};
    await post('/task-board',
      {id: generateId(), type: 'RemoveTaskFromTaskBoard', data});
  }
}
customElements.define('task-item', TaskElement);
The component TaskElement provides the User Interface for an individual task.
Its setup code renders a <form> element and registers multiple event listeners.
The class defines two functions for injecting data from the outside. Setting the
list of team members as possible assignees is achieved via the operation
setTeamMembers(). The function updateData() expects task data for populating
the contained form elements. The handler operation _handleFormInteraction()
is executed whenever the value of a form input element changes. Depending on
the name of the modified input, an according Command is created and sent. If
the request fails, an error message is displayed. Finally, the handler operation
_remove() is responsible for issuing a “RemoveTaskFromTaskBoard”
Command. For the Custom Elements registry entry, the name “task-item” is
used.
View and edit mode for tasks
The task item component supports displaying and editing information. For both modes, the
according stylesheet defines separate rules. When clicking on the content area of a task item, it
enters the editing mode and shows a backdrop element. When clicking on the backdrop, the task
switches back to the view mode.
The last example shows the Custom Element for the task board:
Task board context: Task board element
class TaskBoardElement extends HTMLElement {
  async connectedCallback() {
    this.innerHTML = ['todo', 'in progress', 'done'].map(
      status => `<ul class="task-column" data-status="${status}"></ul>`).join('');
    const parameters = new URLSearchParams(location.search);
    const query =
      {type: 'StreamTasksOnTaskBoard', data: {taskBoardId: parameters.get('id')}};
    const eventSource = new EventSource(`/task-board?${createQueryString(query)}`);
    eventSource.addEventListener(
      'message', ({data}) => this._addOrUpdateTasks(JSON.parse(data)));
    this._teamMemberRequest = getJSON('/project',
      {type: 'FindTeamMembers', data: {teamId: parameters.get('team-id')}});
  }
  async _addOrUpdateTasks(tasks) {
    const teamMembers = await this._teamMemberRequest;
    tasks.forEach(task => {
      let taskItem = this.querySelector(`[id="${task.id}"]`);
      if (!task.taskBoardId) {
        if (taskItem) taskItem.remove(); // guard: the task may never have been rendered
        return;
      }
      if (!taskItem) {
        taskItem = document.createElement('task-item');
        taskItem.setAttribute('id', task.id);
        taskItem.setAttribute('task-board-id', task.taskBoardId);
      }
      const columnElement = this.querySelector(`[data-status='${task.status}']`);
      if (!columnElement.querySelector(`[id='${task.id}']`))
        columnElement.appendChild(taskItem);
      taskItem.setTeamMembers(teamMembers);
      taskItem.updateData(task);
    });
  }
}
customElements.define('task-board', TaskBoardElement);
The last example shows the final setup code to run the Sample Application (run
code):
Sample Application: Setup code
const programMappings = {
  'user/write-side': 50001,
  'user/read-side': 50002,
  'project/write-side': 50003,
  'project/read-side': 50004,
  'task-board/write-side': 50005,
  'task-board/read-side': 50006,
};
http.createServer(httpProxy).listen(50000);
console.log('<iframe src="http://localhost:50000"></iframe>');
The User Interface implementation extends the Sample Application into a fully
functional task board software. Each of the three contexts provides its own UI
parts as Custom Elements. The shared HTTP file server in combination with the
proxy allows unified access to all relevant files. Finally, the use of Reactive
Read Models enables collaborative task board usage.
Conclusion
The concepts covered in this book enable building software in a distinct way.
Domains, Domain Models and Bounded Contexts put emphasis on the
problem space, on knowledge abstractions and on conceptual boundaries. The
Layered Architecture and the Onion Architecture establish a modular
software structure. Value Objects, Entities and Domain Services drive the
design of Domain Model implementations. Domain Events express important
occurrences and facilitate message exchange. Aggregates define transactional
consistency boundaries. Repositories implement persistence in a meaningful
way to a Domain. Application Services manage use case executions. CQRS
separates write concerns and read concerns. Event Sourcing represents state as a
set of transitions and emphasizes behavior. Program separation allows
autonomy and scaling. Task-based UIs with Reactive Read Models empower
User Interfaces with real-time capabilities.
The following code shows a statically typed implementation of the address book
example from Chapter 6 (run code):
Address Book: Static types
class Name {
class Address {
class Contact {
Immutability
The immutability of individual attributes and complete components can be
enforced through runtime behavior or through static types. In TypeScript, the
modifier readonly marks an attribute as immutable. Both approaches have their
own advantages and implications. Defining immutability using a static type
system prevents the execution of code that attempts to perform an invalid
modification. However, in order for this to work correctly, every consumer of the
code must perform static type checking. Enforcing immutability with runtime
behavior through mechanisms such as Object.freeze() makes modifications
impossible. The disadvantage of this approach is that violation attempts are only
discovered during execution and may trigger exceptions. Theoretically, both
approaches can be combined. Though, when using static types, defining
immutability exclusively on a type-level is normally sufficient.
class Address {
class Contact {
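To illustrate the runtime approach, the following sketch enforces immutability of a hypothetical Name Value Object via Object.freeze(); the factory function is an assumption, not code from the book:

```javascript
'use strict';

// Hypothetical Value Object factory; Object.freeze() makes the created
// object immutable at runtime.
const createName = (firstName, lastName) => Object.freeze({firstName, lastName});

const name = createName('Alex', 'Lawrence');
let modificationFailed = false;
try {
  name.firstName = 'John'; // throws a TypeError in strict mode
} catch (error) {
  modificationFailed = error instanceof TypeError;
}
console.log(modificationFailed, name.firstName); // true Alex
```

Note that outside strict mode the assignment would fail silently instead of throwing, which is one reason violation attempts can go unnoticed until execution.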
The final implementation of the address book example defines the Value Object
components exclusively as static types:
Address Book: Value Objects as types
type Name = {
  readonly firstName: string;
  readonly lastName: string;
};
type Address = {
  readonly street: string;
  readonly houseNumber: string;
  readonly zipCode: string;
  readonly country: string;
};
class Contact {
Invariants
Whenever possible, invariants should be expressed on a type-level. The
feasibility of this approach depends on the specific constraints and the
capabilities of the type system. As described in Chapter 6, an invariant is “about
protecting components from entering an invalid state”. This protection should
happen at the earliest possible point in time. While it is acceptable and
sometimes inevitable to perform runtime checks, type-level constraints can be
superior. When all logical invariants are expressed through static types, there is
no possibility for an unexpected violation attempt during runtime. However, the
type systems of many programming languages lack the necessary capabilities to
express complex domain-specific constraints. While TypeScript has a powerful
type system, it too covers only limited scenarios for invariant protection.
Example: Survey engine
The following code shows one possible implementation of the question Value
Object component (run code):
Survey Engine: Question Value Object
type PossibleAnswers = Array<string> & {0: string, 1: string};
class Question {
question: string;
possibleAnswers: PossibleAnswers;
The code implements the question concept together with its associated
invariants. One of them is protected through static typing, the other one via a
runtime check. The type PossibleAnswers expresses the invariant minimum
count of possible answers. This is done by combining a string array type and a
type with strings at index 0 and 1. Simply put, the type represents a string array
with a minimum length of 2. The class Question expects a question text and the
possible answers as constructor arguments. For protecting the invariant
maximum question length, a runtime check is performed. Another feasible
implementation approach is to also enforce the minimum count of answers at
runtime. In contrast, protecting both invariants through static types is not
possible with TypeScript.
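For comparison, both invariants can be enforced purely at runtime in plain JavaScript. The following sketch assumes a maximum length value and error messages, which the book does not specify:

```javascript
// Hypothetical maximum question length; the book does not state the value.
const MAX_QUESTION_LENGTH = 500;

class Question {
  constructor(question, possibleAnswers) {
    // invariant: maximum question length (runtime check, as in the book)
    if (question.length > MAX_QUESTION_LENGTH)
      throw new Error('question exceeds maximum length');
    // invariant: minimum count of possible answers (type-level in the book)
    if (!Array.isArray(possibleAnswers) || possibleAnswers.length < 2)
      throw new Error('at least two possible answers are required');
    Object.assign(this, {question, possibleAnswers});
  }
}

const question = new Question('Tabs or spaces?', ['tabs', 'spaces']);
console.log(question.possibleAnswers.length); // 2
```

The trade-off is the one described above: violations surface only when the constructor actually runs.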
Events
Events are messages that are typically exchanged between software parts. Their
purpose is to convey structured knowledge. In most cases, the data structure
should be homogeneous across instances of the same type. This is equally true for
Domain Events that are published outside their conceptual boundary and for
Event Sourcing records. The event creation mechanism described in Chapter 7
incorporates multiple responsibilities within a single factory. One part is the
enforcement of a data structure upon event instantiation. This aspect can be
completely replaced with the use of static types. The second part is a
configurable process of creating metadata without affecting the Domain Layer.
Unlike the data structure aspect, this functionality is runtime behavior that
must exist independently of static typing.
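Before the typed version, it may help to recall the runtime mechanism in plain JavaScript. The following is a hedged simplification of the Chapter 7 factory; the exact signatures and the field-checking logic are assumptions:

```javascript
// Configurable metadata creation, kept outside the Domain Layer
let createMetadata = () => ({});
const setMetadataProvider = metadataProvider => {
  createMetadata = metadataProvider;
};

// Simplified event type factory: enforces a data structure per event type
const createEventType = (type, dataFields) => {
  class EventType {
    constructor({data}) {
      for (const field of dataFields)
        if (!(field in data)) throw new Error(`missing data field: ${field}`);
      Object.assign(this, {type, data, metadata: createMetadata()});
    }
  }
  EventType.type = type;
  return EventType;
};

setMetadataProvider(() => ({creationTime: new Date()}));
const CommentWrittenEvent =
  createEventType('CommentWritten', ['commentId', 'message']);
const event = new CommentWrittenEvent(
  {data: {commentId: '1', message: 'hello'}});
console.log(event.type, event.metadata.creationTime instanceof Date);
```

The structure check in the constructor is exactly the part that static types can replace, while the metadata provider must remain runtime behavior.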
The following code shows a TypeScript version of the event type factory from
Chapter 7:
Events: Event Type Factory
type IdGenerator = () => string;
type Metadata = Record<string, object>;
type MetadataProvider = () => Metadata;
const setMetadataProvider =
(metadataProvider: MetadataProvider) => createMetadata = metadataProvider;
The following code illustrates a usage of the event type factory and the
differentiation of event types by type name (run code):
Events: Event Type Factory usage
const CommentWrittenEvent = createEventType('CommentWritten')
<{commentId: string, message: string, author?: string}>();
const CommentDeletedEvent = createEventType('CommentDeleted')
<{commentId: string}>();
events.forEach(event => {
if (event.type == CommentWrittenEvent.type)
console.log(`${event.data.author}: ${event.data.message}`);
});
const applySupportTicketEvents = (state: State = {id: ''}, events: Event[] = []) =>
events.reduce(applyEvent, state);
The event types specify their expected data structure as type arguments. Also,
the code defines matching static types using the TypeScript utility InstanceType
and introduces the combined type Event. The support ticket Aggregate
component is mainly enriched with type annotations. For the structure of the
state representation, the type State is defined. Overall, the runtime behavior
remains largely unchanged. The runtime check in the function
commentSupportTicket() is replaced in favor of a static type. Finally, the usage
code again demonstrates the ability to differentiate between event types based on
their type name. The invocation of the function commentSupportTicket() yields
either one or two events of different types. By checking for a specific type name,
TypeScript can infer the correct event type and the associated data structure.
Dependency Inversion
The Dependency Inversion principle is typically applied through the use of
interfaces. As explained in Chapter 5, JavaScript does not provide a native
language construct for this concept. In contrast, TypeScript supports interfaces
through the keywords interface and type. Both allow defining abstract
component signatures, for which other software parts can provide specialized
implementations. The implementations can optionally be marked with the
keyword implements to enforce the contract independently of actual component
use. With regard to the Domain Layer, there are two main use cases for
Dependency Inversion. For one, Domain Services can be defined as interfaces,
which is especially useful when there are infrastructural dependencies. Secondly,
Repository signatures can be expressed as explicit Domain Model components,
for which the Infrastructure part provides implementations.
};
Commenting System: Offensive Terms Detector implementation
class OffensiveTermsDetector implements OffensiveLanguageDetector {
  #offensiveTerms: string[];
  constructor(offensiveTerms: string[]) {
    this.#offensiveTerms = offensiveTerms;
  }
  doesTextContainOffensiveLanguage(text: string) {
    const lowercaseText = text.toLowerCase();
    return this.#offensiveTerms.some(term => lowercaseText.includes(term));
  }
}
class CommentCollection {
  /* .. */
  constructor({offensiveLanguageDetector}: Options) {
    this.#offensiveLanguageDetector = offensiveLanguageDetector;
  }
  /* .. */
}
The second example illustrates the use of Repository interfaces for the
consultant profile directory example from Chapter 9 (run code usage):
Consultant Profile Directory: Generic Repository interface
interface Repository<Entity> {
}
};
const messageTypeFactory =
{createMessageType, setIdGenerator, setDefaultMetadataProvider};
messageTypeFactory.setIdGenerator(generateId);
messageTypeFactory.setDefaultMetadataProvider(() => ({creationTime: new Date()}));
console.log(new CreateUserCommand(
{data: {userId: '1', email: 'alex@example.com'}, metadata: {}}));
console.log(new CreateUserCommand(
{data: {userId: '1', email: 'james@example.com', username: 'james'}}));
console.log(new GetUserQuery({data: {userId: '1'},
metadata: {authentication: {subjectId: '123'}}}));
The discussed concepts and their examples as well as the Sample Application
implementation show some possible advantages of static types. Nevertheless,
this appendix does not represent a universal recommendation to use them. In
fact, there are scenarios, where plain JavaScript works just as well or even better.
Also, TypeScript specifically is different from a statically typed language where
type information is available at runtime. In the end, the ideal choice of a
programming language always depends on the specific scenario.
Appendix B: Going into production
When building software for productive use, most of the infrastructural
functionalities should be powered by existing technologies. Employing self-
made components is in most cases not a feasible approach. The following
sections describe the main parts of an exemplary technology stack that can be
used in production. This incorporates identifier generation, containerization,
orchestration, an Event Bus, an Event Store, a Read Model storage and an HTTP
server. The technologies are illustrated with basic examples and are partially also
integrated into the Sample Application. Whenever possible, they are consumed
through implementations that resemble the same interfaces as their filesystem-
based counterparts. Note that the selected technologies should not be
understood as universal recommendations. The usefulness of tools always
depends on the respective project and its circumstances.
Identifier generation
The UUID standard version 4 is generally recommendable for identifiers,
independent of specific use cases. One advantage of this standard is that there
are numerous implementations across programming languages that can be used
interchangeably. In theory, even the custom code shown in this book can be used
in production. However, one important aspect for UUID generation is the
randomness quality and the consequential likelihood of identifier collisions. The
implementation introduced in Chapter 6 uses Math.random() and has a high
chance of duplicates. Therefore, it is better to use a third-party module. Since
identifiers must also be generated on the client, a library should support this use
case. At the time of writing, the most popular npm package for UUID generation
is uuid.
Containerization and orchestration
Containerization and Orchestration are two important concepts for software that
incorporates multiple runtime programs. Containerization is about packaging
individual programs together with their dependencies into isolated and
lightweight logical runtime units. Multiple such units can be operated on a single
host without affecting each other’s resources. Every container acts as its own
operating system with the possibility to share network and filesystem resources
with the host. One of the most popular container technologies is Docker. There
are multiple ways to create Docker containers. For one, custom images can be
built using Dockerfile manifests. This approach is especially useful for
functionalities that require setup or package installation. Alternatively, pre-built
Docker images can be downloaded in order to operate a container with custom
code mounted into it.
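For the Dockerfile-based approach, a minimal manifest could look like the following sketch; the image tag and the file name are assumptions for a simple Node.js program:

```dockerfile
# Sketch of a custom image for a single Node.js script
FROM node:alpine
WORKDIR /app
COPY counter.js .
CMD ["node", "counter.js"]
```

Building and running such an image is done via `docker build -t counter .` followed by `docker run counter`.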
The following example starts an HTTP server that responds to each request with
an incrementing counter value:
Containerization: HTTP-based counter
let counter = 0;
require('http').createServer(
(request, response) => response.end(`${counter++}`)).listen(50000);
This code can be packaged into a container with port forwarding based on the
Node.js image by executing the following:
Containerization: Dockerized HTTP-based counter
docker run -d -p 50000:50000 -v "$PWD/counter.js":/counter.js node:alpine counter.js
The next example shows a reworked version of the HTTP-based counter using
Redis as in-memory storage:
Containerization: HTTP-based counter with Redis
const http = require('http');
const Redis = require('ioredis');
The last example of this section shows a Docker Compose configuration for the
HTTP counter with Redis:
Containerization: Docker Compose for HTTP-based counter with Redis
version: '3.9'
services:
counter:
image: node:alpine
ports: ['50000:50000']
working_dir: /root
volumes: ['./counter.js:/root/counter.js']
command: /bin/sh -c "npm i ioredis && node counter.js"
depends_on: ['redis']
redis:
image: redis:6.0.10-alpine
The reworked server code additionally imports the package ioredis and
instantiates the class Redis. Instead of mutating a local variable, the Redis entry
“counter” is incremented and returned. The Docker Compose configuration
defines two parts. One is a Node.js container with port forwarding, which
installs the library ioredis and executes the mounted script “counter.js”. The
second container is based on the Redis image. Using the directive depends_on,
the container startup order is customized. For executing the example, the server
code must be saved as “counter.js” and the configuration as “docker-
compose.yml”. The command docker-compose up -d starts the defined parts.
Docker Compose creates a network in which containers are accessible by their
name. This way, the Node.js code can connect to the Redis server via the
hostname “redis”.
Event Store
There are many storage technologies that can be used as an Event Store. The
main requirements are support for transactions and data change notifications of
some kind. These two functionalities exist in a wide range of available tools.
This includes many SQL databases such as PostgreSQL as well as NoSQL stores
such as MongoDB. On top of that, there are also databases that are specifically
built for Event Sourcing. One such technology is EventStoreDB. The stream-
oriented database provides numerous specialized features, such as optimistic
concurrency checks, stream subscriptions and categorized and global event
streams. There are official client packages for multiple runtime environments
including Node.js. Compared to a generic database, EventStoreDB is more
powerful when working with event streams and requires less custom code.
Unused functionalities
The code examples in this appendix only use selected features of EventStoreDB. The two most
notable unused functionalities are persistent subscriptions and projections. Persistent
subscriptions enable competing consumers of a stream, where the processed revision is tracked
within the database. Projections allow creating derived streams that either emit their own events
or link to existing ones from other streams.
class EventStoreDBEventStore {
  #client;
  constructor({connectionString}) {
    this.#client = EventStoreDBClient.connectionString(connectionString);
  }
  /* .. */
}
class RedisIndexedStorage {
  /* .. */
  load(id) {
    return this.#client.hgetall(`${this.#dataType}:${id}`);
  }
  #getRedisIndexKey(index, value) {
    return `${this.#dataType}:index:${index}:${value}`;
  }
}
Message Bus
The main requirement for a Message Bus is a loosely coupled message
exchange, where publisher and subscriber communicate only indirectly. On top
of that, the distribution of Domain Events demands delivery guarantees, such as
an “at least once” delivery. Typically, this requires the use of persistent queues to
cope with failures and temporary unavailability. The majority of Message Bus
technologies support one or more messaging protocols. One such protocol is the
Advanced Message Queuing Protocol (AMQP). A simpler variant is
the Simple Text Oriented Message Protocol (STOMP). RabbitMQ is a Message
Queue that supports both AMQP and STOMP as well as other protocols. The
Node.js third-party library amqplib allows communicating with RabbitMQ
using AMQP. Even more, it supports specialized protocol extensions for so-
called “publisher confirms”.
The first code example shows two JSON serialization operations for objects that
contain BigInt fields:
Serialization: JSON serialization
const serializeJSON = input => JSON.stringify(input,
(key, value) => typeof value === 'bigint' ? `${value}n` : value);
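Only the serializer is shown above. Its counterpart, which revives values of the form "&lt;digits&gt;n" back into BigInt, might look like the following sketch; the reviver heuristic is an assumption (it does not cover negative numbers, for instance):

```javascript
// Serializer repeated for self-containment: BigInt -> "<digits>n" string
const serializeJSON = input => JSON.stringify(input,
  (key, value) => typeof value === 'bigint' ? `${value}n` : value);

// Hypothetical counterpart: strings of the form "<digits>n" -> BigInt
const deserializeJSON = input => JSON.parse(input,
  (key, value) => typeof value === 'string' && /^\d+n$/.test(value)
    ? BigInt(value.slice(0, -1)) : value);

const restored = deserializeJSON(serializeJSON({streamRevision: 42n}));
console.log(restored.streamRevision === 42n); // true
```

Such a conversion is necessary because JSON.stringify() throws on BigInt values unless a replacer transforms them first.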
class RabbitMQMessageBus {
  /* .. */
  unsubscribe(topic, subscriber) {
    const subscribers = this.#getSubscribers(topic);
    subscribers.splice(subscribers.indexOf(subscriber), 1);
    this.#subscribersByTopic.set(topic, subscribers);
  }
}
HTTP server
HTTP servers can have many individual purposes within a software. For one,
Application Layer components such as Command Handlers and Query Handlers
are typically exposed via an HTTP interface. Secondly, all client-side assets for a
web-based User Interface must be served through a static file server.
Furthermore, if a software is operated as multiple separate programs, an HTTP
proxy allows maintaining a unified endpoint. The three functionalities can be
powered by a single technology or a combination of many. nginx is an HTTP
server that can act both as proxy and as static file server. Its behavior is
controlled through a custom configuration format. Exposing Application
Services via HTTP can be done using the native http module or third-party
libraries such as koa.
The following example shows an NGINX configuration for a static file server
and a proxy functionality:
HTTP Server: NGINX configuration for a static file server and a proxy
http {
server {
location /static {
root /data;
include /etc/nginx/mime.types;
}
location /context-a {
proxy_pass http://context-a;
}
location /context-b {
proxy_pass http://context-b;
}
}
}
events { }
NGINX configuration files consist of directives. Every directive has a name and
parameters, which can be strings, numbers or blocks with nested directives. The
two main parts in a configuration are the blocks http and events. While the
http part allows use-case-specific configuration, the events directive defines
general runtime behavior. Inside the http block, there can be multiple server
entries. Similarly, inside a server block, there can be multiple location entries.
Every location directive accepts a location match and a block. Whenever a
request URL matches a location pattern, the associated behavior is applied. For
the example, the location directive with the pattern “/static” defines the static
file server functionality. The other location directives define a proxy behavior
for the hosts “context-a” and “context-b”.
As example, the following code shows the updated configuration for the write
side of the task board context implementation:
Sample Application: Task Board write side configuration
const eventStore = new EventStoreDBEventStore(
{connectionString: 'esdb://task-board.eventstoredb:2113?Tls=false'});
The configuration first defines two templates. Entries that start with “x-“ are
called extension fields and can be combined with YAML anchors to create
reusable templates. Every context consists of an EventStoreDB instance, a write
side, a Redis instance and a read side. The root directory of the code bundle is
mounted into each Node.js container. Both RabbitMQ and NGINX are defined
as single instances. For the NGINX server, the Sample Application directory is
mounted and port 80 is forwarded to the host. All containers are based on pre-
built Docker images. The execution commands for the context parts consist of
two instructions. First, the shell script “await-setup.sh” is executed to wait for
EventStoreDB, Redis and RabbitMQ. Afterwards, the actual Node.js program is
started.
server {
location ~ ^/($|[^/]*/) {
root /usr/share/nginx/ui;
index user/ui/user-login.html;
include /etc/nginx/mime.types;
}
location /project {
proxy_pass http://project.$side;
}
location /task-board {
proxy_set_header Connection '';
proxy_http_version 1.1;
chunked_transfer_encoding off;
proxy_pass http://task-board.$side;
}
location /user {
proxy_pass http://user.$side;
}
}
}
events { }
The directive resolver ensures that NGINX uses the Docker nameserver for
host resolution. Otherwise, containers would not be accessible by their name.
The mapped variable $side maps the HTTP request method to either the value
“write-side” or “read-side”. Within the server directive, there are four nested
location blocks. The first one defines the static file server part. For both the
root path and every path with at least two components, a static asset is served.
Furthermore, the file “user/ui/user-login.html” is used as root document.
Requests with the paths “/project”, “/task-board” or “/user” are forwarded to the
respective context. POST requests are forwarded to the write side of a context,
all others to the read side. The task board part contains additional configuration
for SSE support.
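The excerpt omits the two directives described above. Based on that description, they might look like the following sketch; the nameserver address 127.0.0.11 (Docker's embedded DNS) is an assumption about the deployment:

```nginx
resolver 127.0.0.11;          # use the Docker nameserver for host resolution

map $request_method $side {   # POST requests target the write side
  default read-side;
  POST    write-side;
}
```

Both directives would sit inside the http block, before the server entry that references $side.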