Enterprise Angular
Manfred Steyer
This book is for sale at http://leanpub.com/enterprise-angular
This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing
process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and
many iterations to get reader feedback, pivot until you have the right book and build traction once
you do.
Contents

Introduction
  Structure of This Book
  Trainings and Consultancy
  Help to Improve this Book!
  Thanks
Conclusion
Trying it out
Isolating Micro Frontends
Incremental Builds
Deploying
Conclusion
Smarter, Not Harder: Simplifying your Application With NGRX Signal Store and Custom Features
  Goal
  DataService Custom Feature
  Implementing A Generic Custom Feature
  Providing a Fitting Data Service
  Undo/Redo-Feature
  Using the Store in a Component
  Conclusion and Outlook
NGRX Signal Store Deep Dive: Flexible and Type-Safe Custom Extensions
  A Simple First Extension
  Now it Really Starts: Typing
  Typing and Dynamic Properties – How do They Work Together?
  More Examples: CRUD and Undo/Redo
  Out of the Box Extensions
  Conclusion
Literature
We provide our offerings in various formats: remote or on-site; public or as dedicated company
workshops; in English or in German.
If you have any questions, reach out to us using office@softwarearchitekt.at.
³https://www.angulararchitects.io/en/angular-workshops/
Thanks
I want to thank several people who have helped me write this book:
• The great people at Nrwl.io⁵ who provide the open-source tool Nx⁶ used in the case studies
here and described in the following chapters.
• Thomas Burleson⁷ who did an excellent job describing the concept of facades. Thomas
contributed to the chapter about tactical design which explores facades.
• The masterminds Zack Jackson⁸ and Jack Herrington⁹, who helped me understand the API for
Dynamic Module Federation.
• The awesome Tobias Koppers¹⁰, who gave me valuable insights into this topic, and
• The one and only Dmitriy Shekhovtsov¹¹, who helped me use the Angular CLI/webpack 5
integration for this.
⁴https://github.com/manfredsteyer/ddd-bk
⁵https://nrwl.io/
⁶https://nx.dev/angular
⁷https://twitter.com/thomasburleson?lang=de
⁸https://twitter.com/ScriptedAlchemy
⁹https://twitter.com/jherr
¹⁰https://twitter.com/wSokra
¹¹https://twitter.com/valorkin
Strategic Domain-Driven Design
To make enterprise-scale applications maintainable, they need to be subdivided into small, less
complex, and decoupled parts. While this sounds logical, it also leads to two difficult questions:
how do we identify such parts, and how can they communicate with each other?
In this chapter, I present a technique I use to slice large software systems: Strategic Design – a
discipline of the domain-driven design¹² (DDD) approach.
To use these processes for identifying different domains, we can use several heuristics:
• Organizational Structure: Different roles or different divisions that are responsible for several
steps of the process are an indicator for the existence of several sub-domains.
• Vocabulary: If the same term is used differently or has a significantly different meaning, we
might have different sub-domains.
• Pivotal Events: Pivotal Events are locations in the process where a significant (sub)task is
completed. After such an event, very often, the process goes on at another time and/or place
and/or with other roles. If our process was a movie, we’d have a scene change after such an
event. Such events are likely boundaries between sub-domains.
Each of these heuristics gives you candidates for slicing your process into sub-domains. However,
it's your task to decide which candidates to go with. The general goal is to end up with slices that
don't need to know much about each other.
The good news is: you don't need to make such decisions alone. You should make them together with
other stakeholders, first and foremost business experts, but also other architects, developers, and
product owners.
A modern approach for bringing the knowledge of all these different people together is Event
Storming¹³. It's a workshop format where different groups of stakeholders come together and model
the processes jointly with post-its (sticky notes).
¹³https://www.eventstorming.com
In DDD, we distinguish between these two forms of a product. We create different models that are
as concrete and meaningful as possible.
This approach prevents the creation of a single confusing model that attempts to describe the
whole world. Such models have too many interdependencies that make decoupling and subdividing
impossible.
We can still relate different views of the product entity at a logical level. If we use the same id on
both sides, we know which “catalog product” and which “approval product” are different views of
the same entity.
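The idea of relating two context-specific views via a shared id can be sketched as follows (the model fields are illustrative and not taken from the book's case study):

```typescript
// Hypothetical models: the same real-world product is modeled
// differently in two bounded contexts.
interface CatalogProduct {
  id: number;
  title: string;
  price: number;
}

interface ApprovalProduct {
  id: number;
  approvalLimit: number;
}

// The shared id relates both views at a logical level.
function isSameEntity(a: CatalogProduct, b: ApprovalProduct): boolean {
  return a.id === b.id;
}

const catalogView: CatalogProduct = { id: 7, title: 'Laptop', price: 999 };
const approvalView: ApprovalProduct = { id: 7, approvalLimit: 500 };

console.log(isSameEntity(catalogView, approvalView)); // true
```

Each model stays small and meaningful within its own context, while the id keeps them correlatable across contexts.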
Hence, each model is only valid for a specific area. DDD calls this area the bounded context¹⁴. To
put it another way: the bounded context defines thought borders, and the model only makes sense
within these borders. Beyond these borders, we have a different perspective on the same concepts.
Ideally, each domain has its own bounded context.
Within such a bounded context, we use a ubiquitous language. This is mainly the language
of the domain experts. That means we try to mirror the real world with our model and also
¹⁴https://martinfowler.com/bliki/BoundedContext.html
within our implementation. This makes the system more self-describing and reduces the risk of
misunderstandings.
Context-Mapping
In our case study, we may find the following domains:
Although these domains should be as self-contained as possible, they still have to interact
occasionally. Let's assume the Ordering domain for placing orders needs to interact with the Catalog
domain and a connected ERP system.
To define how these domains interact, we create a context map:
In principle, Ordering could have full access to Catalog. In this case, however, the domains aren’t
decoupled anymore and a change in Catalog could break Ordering.
Strategic design defines several ways for dealing with such situations. For instance, in the context
map shown above, Catalog offers an API (DDD calls it an open/host service) that exposes only
selected aspects for other domains. This API should be stable and backwards-compatible to prevent
breaking other domains. Everything else is hidden behind this API and hence can be changed easily.
Since we cannot control the ERP system, Ordering uses a so-called anti-corruption layer (ACL) to
access it. All calls to the ERP system are tunneled through this ACL. Hence, if something changes in
the ERP system, we only need to update the ACL. Also, the ACL allows us to translate concepts from
the ERP system into entities that make sense within our bounded context.
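A minimal sketch of such an anti-corruption layer; the ERP field names, the domain entity, and the VAT rate are hypothetical:

```typescript
// Hypothetical ERP payload as delivered by the external system.
interface ErpArticle {
  ART_NR: string;
  BEZEICHNUNG: string;
  PREIS_NETTO: number;
}

// Entity that makes sense within the Ordering bounded context.
interface OrderItem {
  productId: string;
  name: string;
  grossPrice: number;
}

// The anti-corruption layer translates ERP concepts into domain entities.
// If the ERP system changes, only this class needs to be updated.
class ErpAntiCorruptionLayer {
  constructor(private vatRate = 0.2) {}

  toOrderItem(article: ErpArticle): OrderItem {
    return {
      productId: article.ART_NR,
      name: article.BEZEICHNUNG,
      grossPrice: article.PREIS_NETTO * (1 + this.vatRate),
    };
  }
}

const acl = new ErpAntiCorruptionLayer();
const item = acl.toOrderItem({
  ART_NR: 'A-100',
  BEZEICHNUNG: 'Laptop',
  PREIS_NETTO: 1000,
});
console.log(item); // { productId: 'A-100', name: 'Laptop', grossPrice: 1200 }
```

The rest of the Ordering domain only ever sees OrderItem and stays isolated from the ERP system's naming and structure.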
An existing system, like the ERP system shown, usually does not follow the idea of the bounded
context. Instead, it contains several logical and intermingled ones.
Another strategy I want to stress here is Separate Ways. Specific tasks, like calculating VAT, could
be separately implemented in several domains:
At first sight, this seems awkward because it leads to code redundancies and hence breaks the
DRY principle (don't repeat yourself). Nevertheless, it can come in handy because it prevents
a dependency on a shared library. Although preventing redundant code is important, limiting
dependencies is vital because each dependency increases the overall complexity. Also, the more
dependencies we have, the more likely breaking changes become when individual parts of our system
evolve. Hence, it's good to first evaluate whether an additional dependency is truly needed.
Further criteria that can indicate the need for separate sub-domains or separate implementations
include:
• Regulatory Compliance
• Change Cadence
• Team Location
• Risk
• Performance Isolation
• Technology
• User Personas
Conclusion
Strategic design is about identifying loosely coupled sub-domains. In each domain, we find a
ubiquitous language and concepts that only make sense within the domain's bounded context. A
context map shows how those domains interact.
In the next chapter, we'll see how we can implement those domains with Angular using an Nx¹⁵-based
monorepo.
¹⁵https://nx.dev/
Architectures with Sheriff and
Standalone Components
In the previous chapter, I’ve shown how to define your Strategic Design. This chapter highlights the
implementation of your Strategic Design based on Standalone Components and Standalone APIs.
The specified architecture is enforced with the open-source project Sheriff.
The examples used here work with a traditional Angular CLI-Project but also with Nx the next
chapter focuses on.
Source Code¹⁶
Architecture Matrix
¹⁶https://github.com/manfredsteyer/modern-arc.git
This matrix is often the starting point of our projects and can be tailored to individual needs. Each
cell results in a module in the source code. Nrwl¹⁷ suggests the following categories (originally for
libraries), among others, which have proven helpful in our daily work:
• feature: A feature module implements a use case with so-called smart components. Due to their
focus on a feature, such components are not very reusable. Smart Components communicate
with the backend. Typically, in Angular, this communication occurs through a store or services.
• ui: UI modules contain so-called dumb or presentational components. These are reusable
components that support the implementation of individual features but do not know them
directly. The implementation of a design system consists of such components. However, UI
modules can also contain general technical components that are used across all use cases.
An example of this would be a ticket component, which ensures that tickets are presented
in the same way in different features. Such components usually only communicate with their
environment via properties and events. They do not get access to the backend or a store outside
of the module.
• data: Data modules contain the respective domain model (actually, the client-side view of
it) and services that operate on it. Such services, for example, validate entities and communicate
with the backend. State management, including the provision of view models, can also be
accommodated in data modules. This is particularly useful when multiple features in the same
domain are based on the same data.
• util: General helper functions etc. can be found in utility modules. Examples of this are logging,
authentication, or working with date values.
Another special aspect of the implementation in the code is the shared area, which offers code for
all domains. This should primarily have technical code – use case-specific code is usually located in
the individual domains.
The structure shown here brings order to the system: There is less discussion about where to find or
place certain sections of code. In addition, two simple but effective rules can be introduced on the
basis of this matrix:
• In terms of strategic design, each domain may only communicate with its own modules. An
exception is the shared area to which each domain has access.
• Each module may only access modules in lower layers of the matrix. Each module category
becomes a layer in this sense.
Both rules support the decoupling of the individual modules or domains and help to avoid cycles.
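The two rules can be illustrated with a small predicate. This is explanatory code only, not part of any tool; the category and domain names are assumptions based on the matrix above:

```typescript
// Illustrative check of the two access rules from the architecture matrix.
type Layer = 'feature' | 'ui' | 'data' | 'util';

interface ModuleRef {
  domain: string; // e.g. 'booking', 'boarding', or 'shared'
  layer: Layer;
}

// Lower rank = higher layer in the matrix.
const layerRank: Record<Layer, number> = {
  feature: 0,
  ui: 1,
  data: 2,
  util: 3,
};

function mayAccess(from: ModuleRef, to: ModuleRef): boolean {
  // Rule 1: only the own domain or the shared area may be accessed.
  const domainOk = from.domain === to.domain || to.domain === 'shared';
  // Rule 2: only modules in lower layers may be accessed.
  const layerOk = layerRank[to.layer] > layerRank[from.layer];
  return domainOk && layerOk;
}

console.log(
  mayAccess(
    { domain: 'booking', layer: 'feature' },
    { domain: 'booking', layer: 'ui' },
  ),
); // true
```

A feature in booking may use booking's ui modules or anything in shared, but never a module of boarding and never a module in the same or a higher layer.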
Being a reference architecture, this matrix is often adapted to project-specific needs. Some teams
simplify it by reducing the number of layers and access rules; some teams add additional ones. In
some projects, the data layer is called domain or state layer and there are projects where the aspects
of these different names end up in layers of their own.
¹⁷https://go.nrwl.io/angular-enterprise-monorepo-patterns-new-book
The module names are prefixed with the name of the respective module category. This means that
you can see at first glance where the respective module is located in the architecture matrix. Within
the modules are typical Angular building blocks such as components, directives, pipes, or services.
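A hypothetical folder structure following this naming convention could look like this (domain and module names are illustrative):

```
src/app/domains/ticketing/
├── feature-booking/
├── feature-cancel/
├── ui-tickets/
├── data/
└── util-dates/
```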
The use of Angular modules is no longer necessary since the introduction of standalone components
(directives and pipes). Instead, the standalone flag is set to true:
@Component({
  selector: 'app-flight-booking',
  standalone: true,
  imports: [CommonModule, RouterLink, RouterOutlet],
  templateUrl: './flight-booking.component.html',
  styleUrls: ['./flight-booking.component.css'],
})
export class FlightBookingComponent {
}
In the case of components, the so-called compilation context must also be imported. These are all
other standalone components, directives and pipes that are used in the template.
An index.ts is used to define the module’s public interface. This is a so-called barrel that determines
which module components may also be used outside of the module:
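The referenced listing is not part of this excerpt; such a barrel typically looks like the following (the exported file names are illustrative):

```typescript
// index.ts: the module's public interface
export * from './flight-booking.component';
export * from './flight-booking.service';
```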
Care should be taken in maintaining the published constructs, as breaking changes tend to affect
other modules. Everything that is not published here, however, is an implementation detail of the
module. Changes to these parts are, therefore, less critical.
• Modules may only communicate with modules of the same domain and shared
• Modules may only communicate with modules on below layers
• Modules may only access the public interface of other modules
The Sheriff¹⁸ open-source project allows these conventions to be enforced via linting. Violation is
warned with an error message in the IDE or on the console:
¹⁸https://github.com/softarc-consulting/sheriff
The error message in the IDE provides instant feedback during development, and the linter output
on the console can be used to automate these checks in the build process. Hence, source code that
violates the defined architecture can be prevented from being committed.
To set up Sheriff, the following two packages must be obtained via npm:
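The original listing is not included in this excerpt. Based on the Sheriff project's documentation, the two packages are likely installed as follows (the package names are an assumption taken from that documentation):

```shell
# Assumed package names, as documented by the Sheriff project:
npm i -D @softarc/sheriff-core @softarc/eslint-plugin-sheriff
```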
The former includes Sheriff, the latter is the bridge to eslint. To use this bridge, it must be registered
in the .eslintrc.json found in the project root:
{
  [...],
  "overrides": [
    [...]
    {
      "files": ["*.ts"],
      "extends": ["plugin:@softarc/sheriff/default"]
    }
  ]
}
Sheriff considers any folder with an index.ts as a module. By default, Sheriff prevents this
index.ts from being bypassed, thus blocking access to implementation details by other modules. The
sheriff.config.ts to be set up in the root of the project defines categories (tags) for the individual
modules and defines dependency rules (depRules) based on them. The following shows a Sheriff
configuration for the architecture matrix discussed above:
depRules: {
  root: ['*'],

  'domain:*': [sameTag, 'domain:shared'],

  'type:feature': ['type:ui', 'type:data', 'type:util'],
  'type:ui': ['type:data', 'type:util'],
  'type:data': ['type:util'],
  'type:util': noDependencies,
},
};
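The listing above shows only the depRules section; the elided upper part assigns the tags discussed below. Based on the Sheriff documentation, it might look roughly like this sketch (the exact property names, such as tagging, can differ between Sheriff versions):

```typescript
import { SheriffConfig } from '@softarc/sheriff-core';

export const sheriffConfig: SheriffConfig = {
  // Placeholders like <domain> are resolved against folder names:
  tagging: {
    'src/app/domains/<domain>': {
      'feature-<feature>': ['domain:<domain>', 'type:feature'],
      'ui-<ui>': ['domain:<domain>', 'type:ui'],
      'data': ['domain:<domain>', 'type:data'],
      'util-<util>': ['domain:<domain>', 'type:util'],
    },
  },
  depRules: {
    // ... as shown in the listing above
  },
};
```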
The tags refer to folder names. Expressions such as <domain> or <feature> are placeholders. Each
module below src/app/domains/<domain> whose folder name begins with feature-* is therefore
assigned the categories domain:<domain> and type:feature. In the case of src/app/domains/booking,
these would be the categories domain:booking and type:feature.
The dependency rules under depRules pick up the individual categories and stipulate, for example,
that a module only has access to modules in the same domain and to domain:shared. Further rules
define that each layer only has access to the layers below it. Thanks to the root: ['*'] rule, all
non-explicitly categorized folders in the root folder and below are allowed access to all modules.
This primarily affects the shell of the application.
Such three-part imports consist of the project name or name of the workspace (e.g. @demo), the
domain name (e.g. ticketing), and a module name (e.g. data) and thus reflect the desired position
within the architecture matrix.
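An import following this convention could look like the following line (the symbol name is hypothetical):

```typescript
// Workspace @demo, domain ticketing, module data:
import { TicketsFacade } from '@demo/ticketing/data';
```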
This notation can be enabled independently of the number of domains and modules with a single
path mapping within tsconfig.json in the project root:
{
  "compileOnSave": false,
  "compilerOptions": {
    "baseUrl": "./",
    [...]
    "paths": {
      "@demo/*": ["src/app/domains/*"],
    }
  },
  [...]
}
IDEs like Visual Studio Code should be restarted after this change. This ensures that they take this
change into account.
The build system Nx, introduced in the next chapter, adds such path mappings automatically to your
project when adding a library.
Conclusion
Strategic design subdivides a system into different sub-domains that are implemented as independently
as possible. This decoupling prevents changes in one area of the application from affecting others. The
architecture approach shown subdivides the individual domains into different modules, and the
open-source project Sheriff ensures that the individual modules only communicate with one another
in ways that respect the established rules.
This approach allows the implementation of large and long-term maintainable frontend monoliths.
Due to their modular structure, such systems are sometimes also called moduliths. A disadvantage of
such architectures is increased build and test times. This problem can be solved with incremental
builds and tests. The next chapter addresses this.
Build Performance with Nx
So far, we laid the foundation for a maintainable Angular architecture. We’ve been thinking about
domain slicing, categorizing modules, and enforcing rules based on them with Sheriff.
This chapter supplements our solution with measures to improve build performance. For this, we
will switch to the well-known build system Nx.
� Source Code¹⁹ (see different branches)
ng g app miles

ng g lib auth
All applications and libraries set up this way are part of the same workspace and repo. It is, therefore,
not necessary to distribute the libraries via npm:
¹⁹https://github.com/manfredsteyer/modern-arc.git
The file public-api.ts, sometimes also called index.ts, has a special task. It defines the library’s
public API:
// public-api.ts

export * from "./lib/auth.service";
All constructs published here are visible to other libraries and applications. The rest is considered
a private implementation detail. In order to grant other libraries and applications in the same
workspace access to a library, a corresponding path mapping must be set up in the central
tsconfig.json:
[…]
"paths": {
  "@demo/auth": [
    "auth/src/public-api.ts"
  ],
  […]
}
[…]
Calling ng g lib takes care of this path mapping. However, the implementation of the Angular CLI
makes it point to the dist folder and, therefore, to the compiled version. This means the author would
have to rebuild the library after every change. To avoid this annoying process, the previous listing
has the mapping point to the library’s source code version. Unlike the CLI, the below-mentioned
tool Nx takes care of this automatically.
Once path mapping is set up, individual applications and libraries can import public API exports:
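The corresponding listing is missing from this excerpt; based on the auth library above, such an import could look like this (the service name is an assumption):

```typescript
// Consuming the library's public API via the mapped path:
import { AuthService } from '@demo/auth';
```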
npm i -g nx
This command causes npm to load a script that sets up an Nx workspace with the current Nx version.
There are also scripts for migrating CLI workspaces to Nx, although they do not always activate the
full range of Nx features. For this reason, we had better experiences creating a new Nx workspace
and – if necessary – copying over the existing source code. As usual with the Angular CLI, the
workspace can then be divided into several applications and libraries:
nx g app apps/appName

nx g lib libs/libName
It’s a usual Nx convention to place Angular apps in the apps folder and Angular libs in the libs
folder. Also here, use the default settings for your first Nx projects. However, I would suggest one
exception to this rule: Start with the new esbuild builder as it provides a better build performance
compared to the traditional webpack-based one.
A call to
nx graph

nx build miles
If the source files that flow into the affected application have not changed, you will immediately
receive the result from the local cache. By default, this is located in a .nx folder excluded in your
project’s .gitignore.
Nx can also be instructed to rebuild certain or all projects:
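The corresponding listing is not reproduced here; with current Nx versions, such calls typically use run-many (the project names are illustrative):

```shell
# Rebuild specific projects, regardless of what changed:
nx run-many --target=build --projects=miles,auth

# Rebuild all projects:
nx run-many --target=build --all
```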
In this case, too, Nx falls back to the cache if the source files have not changed:
Unit tests, E2E tests, and linting can also be carried out incrementally in the same way. Nx even
goes one step further and caches these actions at the library level. This improves performance by
dividing the application across several libraries.
In principle, this would also be possible for nx build, provided individual libraries are created as
buildable (nx g lib myLib --buildable). However, it has been shown that this approach rarely
leads to performance advantages and that incremental application rebuilds are preferable.
npx nx connect-to-nx-cloud
Technically, this command activates the nx-cloud task runner in the nx.json located in the project
root:
²⁰https://nx.app/
"tasksRunnerOptions": {
  "default": {
    "runner": "nx-cloud",
    "options": {
      "cacheableOperations": [
        "build",
        "test",
        "lint"
      ],
      "accessToken": "[…]"
    }
  }
},
A task runner takes care of the execution of individual tasks, such as those behind nx build, nx
lint or nx test. The default implementation caches the results of these tasks in the file system, as
discussed above. The nx-cloud Task Runner, on the other hand, delegates to an account in the Nx
Cloud.
This also shows that the task runner and, thus, the caching strategy can be exchanged relatively
easily. Some open-source projects take advantage of this and offer task runners that leverage their
own data sources like AWS (see here²¹ and here²²), GCP²³, Azure²⁴, or Minio²⁵. Thanks to Lars Gyrup
Brink Nielsen²⁶ for pointing me to these solutions.
However, it should be noted that the task runner’s API is not public and can, therefore, change from
version to version.
The task runner for the Nx Cloud also needs to be configured with an access token (see above).
Commands like nx build output a link to a dynamically created cloud account. When accessing for
the first time, it is advisable to create users to restrict access to them. You can also find a dashboard
under this link that provides information about the builds carried out:
²¹https://www.npmjs.com/package/@magile/nx-distributed-cache
²²https://github.com/bojanbass/nx-aws
²³https://github.com/MansaGroup/nx-gcs-remote-cache
²⁴https://npmjs.com/package/nx-remotecache-azure
²⁵https://npmjs.com/package/nx-remotecache-minio
²⁶https://twitter.com/LayZeeDK
This command supports CircleCI (--ci=circleci) and Azure (--ci=azure) too. If you go with
another environment, you can at least use the generated workflows as a starting point. Essentially,
these scripts specify the desired number of worker nodes and the number of parallel processes per
worker node. The triggered commands are divided into three groups: commands that are executed
sequentially for initialization (init-commands), commands that are executed in parallel on the main
node (parallel-commands), and commands that the workers execute in parallel on the agents
(parallel-commands-on-agents).
The scripts are triggered whenever the main branch of the repo is changed - either by a direct push
or by merging a pull request:
Conclusion
Nx enables build tasks to be dramatically accelerated. This is made possible, among other things,
by incremental builds, in which only the application parts that have actually changed are rebuilt or
tested. The Nx Cloud offers further acceleration options with its distributed cache. It also allows
the individual builds to be parallelized. Because Nx analyzes the program code and recognizes
dependencies between individual applications and libraries, these options often do not require
manual configuration.
Nx & Sheriff - Friends for Life
Nx provides a lot of features (not only) for Angular teams: A fast CI thanks to the build cache and
parallelization, integration into popular tools like Jest, Cypress, Playwright, or Storybook by the
push of a button, and linting rules for enforcing module boundaries are just a few examples. Sheriff,
on the other hand, focuses on enforcing module boundaries.
At first glance, Sheriff seems to be a small subset of Nx. However, we quite often use both tools
together in our customer projects. In this chapter, I explain why and how your architectures can
benefit from this combination.
Module Boundaries in Nx
By default, Nx allows you to enforce module boundaries like those in our architecture matrix:
Here, a technical layer can only access the below layers, and domains like booking and boarding
are not allowed to access each other. However, they can access the shared area (see arrows in the
previous image).
{
  [...]
  "tags": ["domain:tickets", "type:domain-logic"]
}
Tags are just strings. In the shown case, they reflect the lib’s or app’s position in the architecture
matrix. The prefixes domain and type help to distinguish the two dimensions (columns with domains
and rows with types). This is just to improve readability - for Nx they don’t add any meaning.
"rules": {
  "@nx/enforce-module-boundaries": [
    "error",
    {
      "enforceBuildableLibDependency": true,
      "allow": [],
      "depConstraints": [
        {
          "sourceTag": "type:app",
          "onlyDependOnLibsWithTags": [
            "type:api",
            "type:feature",
            "type:ui",
            "type:domain-logic",
            "type:util"
          ]
        },
        {
          "sourceTag": "type:feature",
          "onlyDependOnLibsWithTags": [
            "type:ui",
            "type:domain-logic",
            "type:util"
          ]
        },
        {
          "sourceTag": "type:ui",
          "onlyDependOnLibsWithTags": ["type:domain-logic", "type:util"]
        },
        {
          "sourceTag": "type:domain-logic",
          "onlyDependOnLibsWithTags": ["type:util"]
        },

        {
          "sourceTag": "domain:booking",
          "onlyDependOnLibsWithTags": ["domain:booking", "shared"]
        },
        {
          "sourceTag": "domain:boarding",
          "onlyDependOnLibsWithTags": ["domain:boarding", "shared"]
        },
        {
          "sourceTag": "shared",
          "onlyDependOnLibsWithTags": ["shared"]
        }
      ]
    }
  ]
}
There is a set of restrictions for each dimension found in the matrix. As we don’t add new types of
layers and new domains regularly, these linting rules don’t come with a lot of maintenance effort.
After changing these rules, restart your IDE to ensure it rereads the modified files.
Also, a call to nx lint will unveil the same linting errors. This allows your build tasks to check for
alignment with the architecture defined. Using git hooks and tools like husky²⁷, you can also prevent
people from checking in source code that breaks the rules.
²⁷https://typicode.github.io/husky/
²⁸https://www.npmjs.com/package/@angular-architects/ddd
npm i @angular-architects/ddd
ng g @angular-architects/ddd:init

ng g @angular-architects/ddd:domain booking --addApp --standalone
ng g @angular-architects/ddd:domain boarding --addApp --standalone
ng g @angular-architects/ddd:feature search --domain booking --entity flight --standalone
ng g @angular-architects/ddd:feature cancel --domain booking --standalone
ng g @angular-architects/ddd:feature manage --domain boarding --standalone
If you visualize this architecture with the command nx graph, you get the following graph:
Both boundary types align with each other and are implemented as apps and libs.
However, there are situations where having that many apps and libs feels a bit overwhelming, and
such a fine-grained incremental CI/CD is not needed. In some cases, the build might already be fast
enough or might not benefit much from further apps and libs as the amount of build agents is limited
too.
On the other hand, having module boundaries on this granularization level is essential for our
architecture. Hence, we need to find a way to decouple these two types of boundaries from each
other. For this, we combine Nx with Sheriff²⁹ introduced in the chapter Architectures with Sheriff
and Standalone Components:
• Fewer, more coarse-grained libraries define the boundaries for incremental CI/CD
• The usual fine-grained boundaries for modularization are implemented on a per-folder level
with Sheriff
• As so often, this is a trade-off situation: We trade in the possibility of a more fine-grained
incremental CI/CD for a simplified project structure.
This strategy was already used in the chapter Architectures with Sheriff and Standalone Components.
Shared modules are still implemented as separate libraries. This approach is fitting when we go with
several applications that might be integrated using Hyperlinks or technologies also used for Micro
Frontends, e.g., Federation. More information about Micro Frontends and Federation can be found
in the preceding chapters.
This style gives us a great performance in terms of both incremental builds and incremental testing
and linting. Even though Micro Frontend Technologies might be involved, this does not necessarily
lead to a Micro Frontend architecture, especially if all applications are deployed together.
In this case, putting the domains in different libraries helps to speed up incremental testing and
linting. However, in this case, the potential for speeding up the build performance is limited as each
change leads to a rebuild of the whole application.
Conclusion
Nx is a great build system that uses a build cache and parallelization to speed up your CI
tremendously. It comes with integrations into popular tools like Jest, Cypress, Playwright, and
Storybook. To enforce our architecture, module boundaries can be configured.
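In Nx, such boundaries are commonly enforced via the @nx/enforce-module-boundaries ESLint rule. The following .eslintrc.json excerpt is illustrative; the tags are made up:

```json
{
  "rules": {
    "@nx/enforce-module-boundaries": [
      "error",
      {
        "depConstraints": [
          {
            "sourceTag": "domain:booking",
            "onlyDependOnLibsWithTags": ["domain:booking", "shared"]
          },
          {
            "sourceTag": "shared",
            "onlyDependOnLibsWithTags": ["shared"]
          }
        ]
      }
    ]
  }
}
```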
Apps and libs define the boundaries for incremental CI and the module boundaries. Hence, we need
to split our software system into several apps and libs.
While having fine-grained module boundaries is preferable, having too many small apps and libraries can be overwhelming without being needed to improve CI performance. This is where Sheriff comes in: it allows defining module boundaries on a per-folder basis, while Nx establishes boundaries on a per-app and per-lib basis.
From Domains to Micro Frontends
Let’s assume you’ve identified the sub-domains for your system. The next question is how to
implement them.
One option is to implement them within a large application – aka a deployment monolith. The second
is to provide a separate application for each domain. Such applications are called Micro Frontends.
Deployment Monoliths
A deployment monolith is an integrated solution comprising different domains:
This approach supports a consistent UI and leads to optimized bundles by compiling everything
together. A team responsible for a specific sub-domain must coordinate with other sub-domain
teams. They have to agree on an overall architecture and the leading framework. Also, they need to
define a common policy for updating dependencies.
It is tempting to reuse parts of other domains. However, this may lead to higher coupling and –
eventually – to breaking changes. To prevent this, we’ve used Nx and access restrictions between
libraries in the last chapter.
Micro Frontends
To further decouple your system, you could split it into several smaller applications. If we assume
that use cases do not overlap your sub-domains’ boundaries, this can lead to more autarkic teams
and applications which are separately deployable.
You now have something called Micro Frontends. Micro Frontends allow for autarkic teams: Each
team can choose their architectural style, their technology stack, and they can even decide when
to update to newer framework versions. They can use “the best technology” for the requirements
given within the current sub-domain.
The option for deciding which frameworks to use per Micro Frontend is interesting when developing
applications over the long term. If, for instance, a new framework appears in five years, we can use
it to implement the next domain.
If you seek even more isolation between your sub-domains and the teams responsible for them, you
could put each sub-domain into its individual repository:
However, this has costs. Now you have to deal with shipping your shared libraries via npm. This
comes with some efforts and forces you to version your libraries. You need to make sure that each
Micro Frontend uses the right version. Otherwise, you end up with version conflicts.
This approach fits product suites like Google or Office 365 well:
Each domain is a self-contained application here. This structure works well because we don’t need
many interactions between the domains. If we needed to share data, we could use the backend. Using
this strategy, Word 365 can use an Excel 365 sheet for a series letter.
This approach has several advantages:
• It is simple
• It uses SPA frameworks as intended
• We get optimised bundles per domain
In the screenshot, the shell loads the Micro Frontend with the red border into its working area.
Technically, it simply loads the Micro Frontend bundles on demand. The shell then creates an
element for the Micro Frontend’s root element:
Instead of bootstrapping several SPAs, we could also use iframes. While we all know the enormous
disadvantages of iframes and have strategies to deal with most of them, they do provide two useful
features:
1. Isolation: A Micro Frontend in one iframe cannot influence or hack another Micro Frontend
in another iframe. Hence, they are handy for plugin systems or when integrating applications
from other vendors.
2. They also allow the integration of legacy systems.
You can find a library that compensates for most of the disadvantages of iframes for intranet applications here³⁰. Even SAP has an iframe-based framework they use for integrating their products. It’s called Luigi³¹ and you can find it here³².
The shell approach has the following disadvantages:
• Unless we use specific tricks (outlined in the next chapter), each Micro Frontend comes with its own copy of Angular and the other frameworks, increasing the bundle sizes.
• We have to implement infrastructure code to load Micro Frontends and switch between them.
• We have to do some work to get a consistent look and feel (we need a universal design system).
Finding a Solution
Choosing between a deployment monolith and different approaches for Micro Frontends is tricky because each option has advantages and disadvantages.
I’ve created the following decision tree, which also sums up the ideas outlined in this chapter:
³⁰https://www.npmjs.com/package/@microfrontend/common
³¹https://github.com/SAP/luigi
³²https://github.com/SAP/luigi
As the implementation of a deployment monolith and the hyperlink approach is obvious, the next
chapter discusses how to implement a shell.
• What benefits did practitioners observe, and how do they rate their positive impact?
• What drawbacks did practitioners observe, and how do they rate their negative impact?
• How did practitioners compensate for drawbacks, and how effective have the used countermeasures been?
These questions were broken down into several technical and organisational topics and subdivided into several groups.
If you are interested, you can download the survey results here³³.
³³https://www.angulararchitects.io/wp-content/uploads/2023/12/report.pdf
Conclusion
There are several ways to implement Micro Frontends. All have advantages and disadvantages. Using
a consistent and optimized deployment monolith can be the right choice.
It’s about knowing your architectural goals and about evaluating the consequences of architectural
candidates.
The Micro Frontend Revolution: Using
Module Federation with Angular
In the past, when implementing Micro Frontends, you had to dig a little into the bag of tricks. One
reason is surely that build tools and frameworks did not know this concept. Fortunately, Webpack
5 initiated a change of course here.
Webpack 5 comes with an implementation provided by the webpack contributor Zack Jackson. It’s
called Module Federation and allows referencing parts of other applications not known at compile
time. These can be Micro Frontends that have been compiled separately. In addition, the individual
program parts can share libraries with each other, so that the individual bundles do not contain any
duplicates.
In this chapter, I will show how to use Module Federation using a simple example.
Example
The example used here consists of a shell, which is able to load individual, separately provided Micro
Frontends if required:
Shell
The loaded Micro Frontend is shown within the red dashed border. Also, the Micro Frontend can be used without the shell:
ng add @angular-architects/module-federation --project shell --port 4200 --type host

ng add @angular-architects/module-federation --project mfe1 --port 4201 --type remote
If you use Nx, you should npm install the library separately. After that, you can use the init
schematic:
³⁴https://github.com/manfredsteyer/module-federation-plugin-example/tree/static
³⁵https://www.npmjs.com/package/@angular-architects/module-federation
npm i @angular-architects/module-federation -D

ng g @angular-architects/module-federation:init --project shell --port 4200 --type host

ng g @angular-architects/module-federation:init --project mfe1 --port 4201 --type remote
The command line argument --type was added in version 14.3 and makes sure that only the needed configuration is generated.
While it’s obvious that the project shell contains the code for the shell, mfe1 stands for Micro
Frontend 1.
The command shown does several things. Please note that the webpack.config.js is only a partial webpack configuration. It only contains the parts controlling Module Federation. The rest is generated by the CLI as usual.
However, the path mfe1/Module imported here does not exist within the shell. It’s just a virtual path pointing to another project.
To satisfy the TypeScript compiler, we need a typing for it:
// decl.d.ts
declare module 'mfe1/Module';
Also, we need to tell webpack that all paths starting with mfe1 point to another project. This can be done in the generated webpack.config.js:
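A shell configuration of this kind roughly looks as follows (a sketch based on the plugin's conventions; the URL is the dev-server address of mfe1 used in this example):

```javascript
// projects/shell/webpack.config.js (sketch)
const { shareAll, withModuleFederationPlugin } =
  require('@angular-architects/module-federation/webpack');

module.exports = withModuleFederationPlugin({

  // Maps the virtual path mfe1 to the remote entry of the Micro Frontend
  remotes: {
    "mfe1": "http://localhost:4201/remoteEntry.js",
  },

  shared: {
    ...shareAll({
      singleton: true,
      strictVersion: true,
      requiredVersion: 'auto'
    }),
  },

});
```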
The remotes section maps the path mfe1 to the separately compiled Micro Frontend – or to be
more precise: to its remote entry. This is a tiny file generated by webpack when building the
remote. Webpack loads it at runtime to get all the information needed for interacting with the Micro
Frontend.
While specifying the remote entry’s URL that way is convenient for development, we need a more
dynamic approach for production. The next chapter provides a solution for this.
The property shared defines the npm packages to be shared between the shell and the Micro Frontend(s). For this property, the generated configuration uses the helper method shareAll that basically shares all the dependencies found in your package.json. While this helps to quickly get a working setup, it might lead to too many shared dependencies. A later section here addresses this.
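If shareAll shares too much, it can be replaced by the plugin's share helper, listing only selected packages. The package list below is illustrative:

```javascript
// webpack.config.js (excerpt, sketch): sharing only selected packages
const { share, withModuleFederationPlugin } =
  require('@angular-architects/module-federation/webpack');

module.exports = withModuleFederationPlugin({
  shared: share({
    "@angular/core":   { singleton: true, strictVersion: true, requiredVersion: 'auto' },
    "@angular/common": { singleton: true, strictVersion: true, requiredVersion: 'auto' },
    "@angular/router": { singleton: true, strictVersion: true, requiredVersion: 'auto' },
  }),
});
```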
The combination of singleton: true and strictVersion: true makes webpack emit a runtime error
when the shell and the Micro Frontend(s) need different incompatible versions (e. g. two different
major versions). If we skipped strictVersion or set it to false, webpack would only emit a warning
at runtime. More information³⁶ about dealing with version mismatches can be found in one of the
subsequent chapters.
³⁶https://www.angulararchitects.io/aktuelles/getting-out-of-version-mismatch-hell-with-module-federation/
The helper function share used in this generated configuration replaces the value 'auto'
with the version found in your package.json.
@NgModule({
  imports: [
    CommonModule,
    RouterModule.forChild(FLIGHTS_ROUTES)
  ],
  declarations: [
    FlightsSearchComponent
  ]
})
export class FlightsModule { }
In order to make it possible to load the FlightsModule into the shell, we also need to expose it via
the remote’s webpack configuration:
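Such a remote configuration roughly looks as follows (a sketch based on the plugin's conventions; the paths mirror the mfe1 configuration shown in the next chapter):

```javascript
// projects/mfe1/webpack.config.js (sketch)
const { shareAll, withModuleFederationPlugin } =
  require('@angular-architects/module-federation/webpack');

module.exports = withModuleFederationPlugin({

  name: 'mfe1',

  exposes: {
    // Publish the FlightsModule under the public name Module
    './Module': './projects/mfe1/src/app/flights/flights.module.ts',
  },

  shared: {
    ...shareAll({
      singleton: true,
      strictVersion: true,
      requiredVersion: 'auto'
    }),
  },

});
```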
The configuration shown here exposes the FlightsModule under the public name Module. The section
shared points to the libraries shared with the shell.
Trying it out
To try everything out, we just need to start the shell and the Micro Frontend:
ng serve shell -o
ng serve mfe1 -o
Then, when clicking on Flights in the shell, the Micro Frontend is loaded:
Shell
Hint: You can also use the npm script run:all that the plugin installs with its ng-add and init schematics:
run:all script
To just start a few applications, add their names as command line arguments:
A Further Detail
Ok, that worked quite well. But have you had a look into your main.ts?
It just looks like this:
import('./bootstrap')
  .catch(err => console.error(err));
The code you normally find in the file main.ts was moved to the bootstrap.ts file loaded here. All
of this was done by the @angular-architects/module-federation plugin.
While this doesn’t seem to make a lot of sense at first glance, it’s a typical pattern in Module Federation-based applications. The reason is that Module Federation needs to decide which version of a shared library to load. If the shell, for instance, is using version 12.0 and one of the Micro Frontends is already built with version 12.1, it will decide to load the latter one.
To look up the metadata needed for this decision, Module Federation squeezes itself into dynamic imports like this one here. Unlike traditional static imports, dynamic imports are asynchronous. Hence, Module Federation can decide on the versions to use and actually load them.
More on This
Learn more about this and further architecture topics regarding Angular and huge enterprise as well as industrial solutions in our advanced Online Workshop³⁷:
³⁷https://www.angulararchitects.io/schulungen/advanced-angular-enterprise-anwendungen-und-architektur/
Save your ticket³⁸ for one of our online or on-site workshops now or request a company workshop³⁹
(online or In-House) for you and your team!
If you like our offer, keep in touch with us so that you don’t miss anything.
For this, you can subscribe to our newsletter⁴⁰ and/or follow the book’s author on Twitter⁴¹.
One also has to deal with possible version conflicts. For example, it is likely that components compiled with completely different Angular versions will not work together at runtime. Such cases must be prevented with conventions or at least detected as early as possible with integration tests.
Dynamic Module Federation
In the previous chapter, I’ve shown how to use webpack Module Federation for loading separately
compiled Micro Frontends into a shell. As the shell’s webpack configuration describes the Micro
Frontends, we already needed to know them when compiling it.
In this chapter, I’m assuming a more dynamic situation where the shell does not know the Micro Frontends upfront. Instead, this information is provided at runtime via a configuration file. While this file is a static JSON file in the examples shown here, its content could also come from a Web API.
The following image displays the idea described here:
For each Micro Frontend the shell is informed about at runtime, it displays a menu item. When the user clicks it, the Micro Frontend is loaded and displayed by the shell’s router.
Source Code (simple version, see branch: simple)⁴²
Source Code (full version)⁴³
⁴²https://github.com/manfredsteyer/module-federation-with-angular-dynamic/tree/simple
⁴³https://github.com/manfredsteyer/module-federation-with-angular-dynamic.git
npm i @angular-architects/module-federation -D

ng g @angular-architects/module-federation --project mfe1 --port 4201 --type remote

ng g @angular-architects/module-federation --project mfe2 --port 4202 --type remote
Generating a Manifest
Beginning with the plugin’s version 14.3, we can generate a dynamic host that takes the key data about the Micro Frontends from a JSON file – called the Micro Frontend Manifest – at runtime:
ng g @angular-architects/module-federation --project shell --port 4200 --type dynamic-host
This generates:
• a webpack configuration
• the manifest and
• some code in the main.ts loading the manifest.
{
  "mfe1": "http://localhost:4201/remoteEntry.js",
  "mfe2": "http://localhost:4202/remoteEntry.js"
}
By default, loadManifest not only loads the manifest but also the remote entries it points to. Hence, Module Federation gets all the metadata required for fetching the Micro Frontends on demand.
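The code added to main.ts by the schematic essentially follows this pattern (a sketch):

```typescript
// projects/shell/src/main.ts (sketch)
import { loadManifest } from '@angular-architects/module-federation';

loadManifest('/assets/mf.manifest.json')
  .catch(err => console.error(err))
  .then(_ => import('./bootstrap'))
  .catch(err => console.error(err));
```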
{
  path: 'flights',
  loadChildren: () => loadRemoteModule({
    type: 'manifest',
    remoteName: 'mfe1',
    exposedModule: './Module'
  })
  .then(m => m.FlightsModule)
},
{
  path: 'bookings',
  loadChildren: () => loadRemoteModule({
    type: 'manifest',
    remoteName: 'mfe2',
    exposedModule: './Module'
  })
  .then(m => m.BookingsModule)
},
];
The option type: 'manifest' makes loadRemoteModule look up the needed key data in the loaded manifest. The property remoteName points to the key used in the manifest.
// projects/mfe1/webpack.config.js

const { shareAll, withModuleFederationPlugin } =
  require('@angular-architects/module-federation/webpack');

module.exports = withModuleFederationPlugin({

  name: 'mfe1',

  exposes: {
    // Adjusted line:
    './Module': './projects/mfe1/src/app/flights/flights.module.ts'
  },

  shared: {
    ...shareAll({
      singleton: true,
      strictVersion: true,
      requiredVersion: 'auto'
    }),
  },

});
// projects/mfe2/webpack.config.js

const { shareAll, withModuleFederationPlugin } =
  require('@angular-architects/module-federation/webpack');

module.exports = withModuleFederationPlugin({

  name: 'mfe2',

  exposes: {
    // Adjusted line:
    './Module': './projects/mfe2/src/app/bookings/bookings.module.ts'
  },

  shared: {
    ...shareAll({
      singleton: true,
      strictVersion: true,
      requiredVersion: 'auto'
    }),
  },

});
Trying it Out
For each route loading a Micro Frontend, the shell’s AppComponent contains a routerLink:
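Such a routerLink could look as follows (a sketch, not the book's exact markup):

```html
<!-- app.component.html (sketch) -->
<a routerLink="/flights">Flights</a>
<a routerLink="/bookings">Bookings</a>
```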
That’s it. Just start all three projects (e. g. by using npm run run:all). The main difference from the result in the previous chapter is that now the shell informs itself about the Micro Frontends at runtime. If you want to point the shell to different Micro Frontends, just adjust the manifest.
{
  "mfe1": {
    "remoteEntry": "http://localhost:4201/remoteEntry.js",
    "exposedModule": "./Module",
    "displayName": "Flights",
    "routePath": "flights",
    "ngModuleName": "FlightsModule"
  },
  "mfe2": {
    "remoteEntry": "http://localhost:4202/remoteEntry.js",
    "exposedModule": "./Module",
    "displayName": "Bookings",
    "routePath": "bookings",
    "ngModuleName": "BookingsModule"
  }
}
// projects/shell/src/app/utils/config.ts

import {
  Manifest,
  RemoteConfig
} from "@angular-architects/module-federation";

export type CustomRemoteConfig = RemoteConfig & {
  exposedModule: string;
  displayName: string;
  routePath: string;
  ngModuleName: string;
};

export type CustomManifest = Manifest<CustomRemoteConfig>;
The CustomRemoteConfig type represents the entries in the manifest and the CustomManifest type
the whole manifest.
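To illustrate how such typed manifest entries can drive the shell's menu, here is a small hypothetical helper (not part of the book's source; the types are simplified re-declarations for this sketch):

```typescript
// Hypothetical helper: derive menu entries from a typed manifest.
// CustomRemoteConfig/CustomManifest are simplified re-declarations for this sketch.
type CustomRemoteConfig = {
  remoteEntry: string;
  exposedModule: string;
  displayName: string;
  routePath: string;
  ngModuleName: string;
};

type CustomManifest = Record<string, CustomRemoteConfig>;

function toMenuItems(manifest: CustomManifest): { label: string; path: string }[] {
  // One menu item per Micro Frontend listed in the manifest
  return Object.values(manifest).map(entry => ({
    label: entry.displayName,
    path: entry.routePath,
  }));
}

const manifest: CustomManifest = {
  mfe1: {
    remoteEntry: 'http://localhost:4201/remoteEntry.js',
    exposedModule: './Module',
    displayName: 'Flights',
    routePath: 'flights',
    ngModuleName: 'FlightsModule',
  },
};

console.log(toMenuItems(manifest)); // one entry with label 'Flights' and path 'flights'
```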
// projects/shell/src/app/utils/routes.ts

import { loadRemoteModule } from '@angular-architects/module-federation';
import { Routes } from '@angular/router';
import { APP_ROUTES } from '../app.routes';
import { CustomManifest } from './config';

export function buildRoutes(options: CustomManifest): Routes {

  const lazyRoutes: Routes = Object.keys(options).map(key => {
    const entry = options[key];
    return {
      path: entry.routePath,
      loadChildren: () =>
        loadRemoteModule({
          type: 'manifest',
          remoteName: key,
          exposedModule: entry.exposedModule
        })
        .then(m => m[entry.ngModuleName])
    };
  });

  return [...APP_ROUTES, ...lazyRoutes];
}
@Component({
  selector: 'app-root',
  templateUrl: './app.component.html'
})
export class AppComponent implements OnInit {

  remotes: CustomRemoteConfig[] = [];

  constructor(
    private router: Router) {
  }

  async ngOnInit(): Promise<void> {
    const manifest = getManifest<CustomManifest>();
    const routes = buildRoutes(manifest);
    this.router.resetConfig(routes);
    this.remotes = Object.values(manifest);
  }
}
The ngOnInit method retrieves the loaded manifest (it’s still loaded in the main.ts as shown above) and passes it to buildRoutes. The retrieved dynamic routes are passed to the router. Also, the values of the key/value pairs in the manifest are put into the remotes field. It’s used in the template to dynamically create the menu items:
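The corresponding markup could look like this (a sketch, not the book's exact template):

```html
<!-- app.component.html (sketch): one menu item per remote -->
<ul class="navigation">
  <li *ngFor="let remote of remotes">
    <a [routerLink]="remote.routePath">{{ remote.displayName }}</a>
  </li>
</ul>
```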
Trying it Out
Now, let’s try out this “dynamic dynamic” solution by starting the shell and the Micro Frontends (e.
g. with npm run run:all).
• loadManifest(...): The loadManifest function used above provides a second parameter called skipRemoteEntries. Set it to true to prevent loading the entry points. In this case, only the manifest is loaded:
loadManifest("/assets/mf.manifest.json", true)
  .catch(...)
  .then(...)
  .catch(...)
• setManifest(...): This function allows directly setting the manifest. It comes in handy if you load the data from somewhere else.
• loadRemoteEntry(...): This function allows directly loading the remote entry point. It’s useful if you don’t use the manifest:
Promise.all([
  loadRemoteEntry({
    type: 'module',
    remoteEntry: 'http://localhost:4201/remoteEntry.js'
  }),
  loadRemoteEntry({
    type: 'module',
    remoteEntry: 'http://localhost:4202/remoteEntry.js'
  })
])
.catch(err => console.error(err))
.then(_ => import('./bootstrap'))
.catch(err => console.error(err));
• loadRemoteModule(...): Also, if you don’t want to use the manifest, you can directly load a
Micro Frontend with loadRemoteModule:
{
  path: 'flights',
  loadChildren: () =>
    loadRemoteModule({
      type: 'module',
      remoteEntry: 'http://localhost:4201/remoteEntry.js',
      exposedModule: './Module',
    }).then((m) => m.FlightsModule),
},
In general, I think most people will use the manifest in the future. Even if one doesn’t want to load it from a JSON file with loadManifest, one can define it via setManifest.
The property type: 'module' defines that you want to load a “real” ECMAScript module instead of “just” a JavaScript file. This has been needed since Angular CLI 13. If you load something not built by CLI 13 or higher, you very likely have to set this property to script. This can also happen via the manifest:
{
  "non-cli-13-stuff": {
    "type": "script",
    "remoteEntry": "http://localhost:4201/remoteEntry.js"
  }
}
If an entry in the manifest does not contain a type property, the plugin assumes the value
module.
Conclusion
Dynamic Module Federation provides more flexibility, as it allows loading Micro Frontends we don’t have to know at compile time. We don’t even have to know their number upfront. This is possible because of the runtime API provided by webpack. To make using it a bit easier, the @angular-architects/module-federation plugin wraps it nicely into some convenience functions.
Plugin Systems with Module
Federation: Building An Extensible
Workflow Designer
In the previous chapter, I showed how to use Dynamic Module Federation. This allows us to load
Micro Frontends – or remotes, which is the more general term in Module Federation – not known
at compile time. We don’t even need to know the number of remotes upfront.
While the previous chapter leveraged the router for integrating the available remotes, this chapter shows how to load individual components. The example used for this is a simple plugin-based workflow designer.
The Workflow Designer can load separately compiled and deployed tasks
The workflow designer acts as a so-called host loading tasks from plugins provided as remotes.
Thus, they can be compiled and deployed individually. After starting the workflow designer, it gets
a configuration describing the available plugins:
Please note that these plugins are provided via different origins (http://localhost:4201 and http://localhost:4202),
and the workflow designer is served from an origin of its own (http://localhost:4200).
Source Code⁴⁴
Thanks to Zack Jackson⁴⁵ and Jack Herrington⁴⁶, who helped me to understand the rather new API for Dynamic Module Federation.
⁴⁴https://github.com/manfredsteyer/module-federation-with-angular-dynamic-workflow-designer
⁴⁵https://twitter.com/ScriptedAlchemy
⁴⁶https://twitter.com/jherr
One difference to the configurations shown in the previous chapter is that here we are directly
exposing standalone components. Each component represents a task that can be put into the
workflow.
The combination of singleton: true and strictVersion: true makes webpack emit a runtime error when the shell and the Micro Frontend(s) need different incompatible versions (e. g. two different major versions). If we skipped strictVersion or set it to false, webpack would only emit a warning at runtime.
While the displayName is the name presented to the user, the componentName refers to the TypeScript
class representing the Angular component in question.
For loading this key data, the workflow designer leverages a LookupService:
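A LookupService of this kind could look roughly as follows (a reconstruction for illustration; the entry shown and the type names are assumptions, except LoadRemoteModuleOptions, which comes from the plugin):

```typescript
// lookup.service.ts (sketch): supplies the plugins' key data
import { Injectable } from '@angular/core';
import { LoadRemoteModuleOptions } from '@angular-architects/module-federation';

export type PluginOptions = LoadRemoteModuleOptions & {
  displayName: string;
  componentName: string;
};

@Injectable({ providedIn: 'root' })
export class LookupService {
  lookup(): Promise<PluginOptions[]> {
    // Hardcoded for simplicity; a real implementation would query an HTTP endpoint
    return Promise.resolve([
      {
        type: 'module',
        remoteEntry: 'http://localhost:4201/remoteEntry.js',
        exposedModule: './Download',
        displayName: 'Download',
        componentName: 'DownloadTaskComponent',
      },
    ] as PluginOptions[]);
  }
}
```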
For the sake of simplicity, the LookupService provides some hardcoded entries. In the real world, it
would very likely request this data from a respective HTTP endpoint.
@Component({
  standalone: true,
  selector: 'plugin-proxy',
  template: `
    <ng-container #placeHolder></ng-container>
  `
})
export class PluginProxyComponent implements OnChanges {
  @ViewChild('placeHolder', { read: ViewContainerRef, static: true })
  viewContainer: ViewContainerRef;

  constructor() { }

  @Input() options: PluginOptions;

  async ngOnChanges() {
    this.viewContainer.clear();

    const Component = await loadRemoteModule(this.options)
      .then(m => m[this.options.componentName]);

    this.viewContainer.createComponent(Component);
  }
}
In versions before Angular 13, we needed to use a ComponentFactoryResolver to get the loaded
component’s factory:
Wiring Up Everything
Now, it’s time to wire up the parts mentioned above. For this, the workflow designer’s AppComponent gets a plugins and a workflow array. The first one represents the PluginOptions of the available plugins and thus all available tasks, while the second one describes the PluginOptions of the selected tasks in the configured sequence:
@Component({ [...] })
export class AppComponent implements OnInit {

  plugins: PluginOptions[] = [];
  workflow: PluginOptions[] = [];
  showConfig = false;

  constructor(
    private lookupService: LookupService) {
  }

  async ngOnInit(): Promise<void> {
    this.plugins = await this.lookupService.lookup();
  }

  add(plugin: PluginOptions): void {
    this.workflow.push(plugin);
  }

  toggle(): void {
    this.showConfig = !this.showConfig;
  }
}
The AppComponent uses the injected LookupService for populating its plugins array. When a plugin
is added to the workflow, the add method puts its PluginOptions object into the workflow array.
For displaying the workflow, the designer just iterates all items in the workflow array and creates a
plugin-proxy for them:
As discussed above, the proxy loads the plugin (at least, if it isn’t already loaded) and displays it.
Also, for rendering the toolbox displayed on the left, it goes through all entries in the plugins array. For each of them, it displays a hyperlink bound to the add method:
<div class="vertical-menu">
  <a href="#" class="active">Tasks</a>
  <a *ngFor="let p of plugins" (click)="add(p)">Add {{p.displayName}}</a>
</div>
Conclusion
While Module Federation comes in handy for implementing Micro Frontends, it can also be used for setting up plugin architectures. This allows us to extend an existing solution with 3rd-party code. It also seems to be a good fit for SaaS applications, which need to be adapted to different customers’ needs.
Using Module Federation with Nx
Monorepos and Angular
While it sounds like a contradiction, the combination of Micro Frontends and Monorepos can
actually be quite tempting: No version conflicts by design, easy code sharing and optimized
bundles are some of the benefits you get. Also, you can still deploy Micro Frontends separately
and isolate them from each other.
This chapter compares the consequences of using several repositories (“Micro Frontends by the book”) and one sole monorepo. After that, it uses an example to show how to use Module Federation in an Nx monorepo.
If you want to have a look at the source code⁴⁷ used here, you can check out this repository⁴⁸.
Big thanks to the awesome Tobias Koppers⁴⁹ who gave me valuable insights into this
topic and to the one and only Dmitriy Shekhovtsov⁵⁰ who helped me using the Angular
CLI/webpack 5 integration for this.
This is also quite usual for Micro Services, and it provides the following advantages:
• Micro Frontends – and hence the individual business domains – are isolated from each other.
As there are no dependencies between them, different teams can evolve them separately.
• Each team can concentrate on their Micro Frontend. They only need to focus on their very own
repository.
• Each team has the maximum amount of freedom in their repository. They can go with their very
own architectural decisions, tech stacks, and build processes. Also, they decide by themselves
when to update to newer versions.
• Each Micro Frontend can be separately deployed.
As this best fits the original ideas of Micro Frontends, I call this approach “Micro Frontends by the
book”. However, there are also some disadvantages:
• We need to version and distribute shared dependencies via npm. This can become quite an
overhead, as after every change we need to assign a new version, publish it, and install it into
the respective Micro Frontends.
• As each team can use its own tech stack, we can end up with different frameworks and different
versions of them. This might lead to version conflicts in the browser and to increased bundle
sizes.
Of course, there are approaches to compensate for these drawbacks: For instance, we can automate
the distribution of shared libraries to minimize the overhead. Also, we can avoid version conflicts
by not sharing libraries between different Micro Frontends. Wrapping these Micro Frontends into
web components further abstracts away the differences between frameworks.
While this prevents version conflicts, we still have increased bundle sizes. Also, we might need some
workarounds here or there as Angular is not designed to work with another version of itself in the
same browser window. Needless to say that there is no support by the Angular team for this idea.
If you find that the advantages of this approach outweigh the disadvantages, you will find a solution for mixing and matching different frameworks and versions in one of the next chapters.
However, if you feel that the disadvantages weigh heavier, the next sections show an alternative.
Now, sharing libraries is easy, and there is only one version of everything; hence, we don’t end up with version conflicts in the browser. We can also keep some of the advantages outlined above:
• Micro Frontends can be isolated from each other by using linting rules. They prevent one
Micro Frontend from depending upon others. Hence, teams can separately evolve their Micro
Frontend.
• Micro Frontends can still be separately deployed.
Now, the question is: where’s the catch? Well, the thing is, now we are giving up some of the freedom: teams need to agree on one version of dependencies like Angular and on a common update cycle for them. To put it another way: we trade in some freedom to prevent version conflicts and increased bundle sizes.
One more time, you need to evaluate all these consequences for your specific project. Hence, you need to know your architecture goals and prioritize them. As mentioned, I’ve seen both approaches working in the wild in several projects. It’s all about the different consequences.
Monorepo Example
After discussing the consequences of the approach outlined here, let’s have a look at an implementation. The example used here is an Nx monorepo with a Micro Frontend shell (shell) and a Micro Frontend (mfe1, “micro frontend 1”). Both share a common library for authentication (auth-lib) that is also located in the monorepo. Also, mfe1 uses a library mfe1-domain-logic.
If you haven't used Nx before, just assume a CLI workspace with tons of additional features. You
can find more information on Nx in our tutorial⁵¹.
To visualize the monorepo’s structure, one can use the Nx CLI to request a dependency graph:
1 nx graph
If you haven't installed this CLI yet, you can easily get it via npm (npm i -g nx). The displayed
graph looks like this:
The auth-lib provides two components. One is for logging in users and the other one displays the
current user. They are used by both the shell and mfe1:
⁵¹https://www.angulararchitects.io/aktuelles/tutorial-first-steps-with-nx-and-angular-architecture/
Schema
As usual in an Nx workspace, the auth-lib is made available to the other projects via a TypeScript
path mapping in the workspace's tsconfig:
1 "paths": {
2 "@demo/auth-lib": [
3 "libs/auth-lib/src/index.ts"
4 ]
5 },
The shell and mfe1 (as well as further Micro Frontends we might add in the future) need to be
deployable in separation and loaded at runtime.
However, we don't want to load the auth-lib twice or several times! Achieving this with an npm
package is not that difficult. This is one of the most obvious and easy-to-use features of Module
Federation. The next sections discuss how to do the same with libraries of a monorepo.
1 @Injectable({
2 providedIn: 'root'
3 })
4 export class AuthService {
5
6 // tslint:disable-next-line: variable-name
7 private _userName: string = null;
8
9 public get userName(): string {
10 return this._userName;
11 }
12
13 constructor() { }
14
15 login(userName: string, password: string): void {
16 // Authentication for honest users
17 // (c) Manfred Steyer
18 this._userName = userName;
19 }
20
21 logout(): void {
22 this._userName = null;
23 }
24 }
Besides this service, there is also an AuthComponent with the UI for logging in the user and a
UserComponent displaying the current user's name. Both components are registered with the library's
NgModule:
1 @NgModule({
2 imports: [
3 CommonModule,
4 FormsModule
5 ],
6 declarations: [
7 AuthComponent,
8 UserComponent
9 ],
10 exports: [
11 AuthComponent,
12 UserComponent
13 ],
14 })
15 export class AuthLibModule {}
As with every library, it also has a barrel index.ts (sometimes also called public-api.ts) serving
as the entry point. It exports everything consumers can use:
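The concrete listing is not reproduced here. Based on the AuthService, the two components, and the AuthLibModule discussed in this section, the barrel could look like this sketch (the file paths are assumptions):

```typescript
// libs/auth-lib/src/index.ts (a sketch; the actual file isn't shown in this excerpt)
export * from './lib/auth-lib.module';
export * from './lib/auth.service';
export * from './lib/auth/auth.component';
export * from './lib/user/user.component';
```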
Please note that index.ts is also exporting the two components although they are already registered
with the also exported AuthLibModule. In the scenario discussed here, this is vital in order to make
sure it’s detected and compiled by Ivy.
Let’s assume the shell is using the AuthComponent and mfe1 uses the UserComponent. As our goal is
to only load the auth-lib once, this also allows for sharing information on the logged-in user.
1 npm i @angular-architects/module-federation -D
2
3 ng g @angular-architects/module-federation:init
4 --project shell --port 4200 --type host
5
6 ng g @angular-architects/module-federation:init
7 --project mfe1 --port 4201 --type remote
Meanwhile, Nx also ships with its own support for Module Federation⁵². Under the
covers, it handles Module Federation in a very similar way as the plugin used here.
This generates a webpack config for Module Federation. Since version 14.3, the withModuleFederationPlugin
provides a property sharedMappings. Here, we can register the monorepo internal libs we want to
share at runtime:
1 // apps/shell/webpack.config.js
2
3 const { shareAll, withModuleFederationPlugin } =
4 require('@angular-architects/module-federation/webpack');
5
6 module.exports = withModuleFederationPlugin({
7
8 remotes: {
9 'mfe1': "http://localhost:4201/remoteEntry.js"
10 },
11
12 shared: shareAll({
13 singleton: true,
14 strictVersion: true,
15 requiredVersion: 'auto'
16 }),
17
18 sharedMappings: ['@demo/auth-lib'],
19
20 });
As sharing is always an opt-in in Module Federation, we also need the same setting in the Micro
Frontend’s configuration:
⁵²https://nx.dev/module-federation/micro-frontend-architecture
1 // apps/mfe1/webpack.config.js
2
3 const { shareAll, withModuleFederationPlugin } =
4 require('@angular-architects/module-federation/webpack');
5
6 module.exports = withModuleFederationPlugin({
7
8 name: "mfe1",
9
10 exposes: {
11 './Module': './apps/mfe1/src/app/flights/flights.module.ts',
12 },
13
14 shared: shareAll({
15 singleton: true,
16 strictVersion: true,
17 requiredVersion: 'auto'
18 }),
19
20 sharedMappings: ['@demo/auth-lib'],
21
22 });
Since version 14.3, the Module Federation plugin shares all libraries in the monorepo by
default. To get this default behavior, just skip the sharedMappings property. If you use it,
only the mentioned libs are shared.
Trying it out
To try this out, just start the two applications. As we use Nx, this can be done with the following
command:
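The command itself is missing from this excerpt. With the Nx CLI, it presumably looks like this (a sketch; exact flags may vary with your Nx version):

```shell
# Start all applications in the monorepo
nx run-many --target=serve --all

# Or start just a subset of them (comma-separated, no spaces)
nx run-many --target=serve --projects=shell,mfe1
```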
The switch --all starts all applications in the monorepo. Alternatively, you can go with the switch
--projects to start just a subset of them.
--projects takes a comma-separated list of project names. Spaces are not allowed.
After starting the applications, log in at the shell and make it load mfe1. If you see the logged-in
user name in mfe1, you have proof that auth-lib is only loaded once and shared across the
applications.
Isolating Micro Frontends
To make these error messages appear in your IDE, you need eslint support. For Visual Studio Code,
this can be installed via an extension.
Besides checking against linting rules in your IDE, one can also call the linter on the command line:
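The concrete command is not reproduced in this excerpt. With Nx, linting all projects can presumably be done like this (a sketch):

```shell
nx run-many --target=lint --all
```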
The good news: if it works on the command line, it can be automated. For instance, your build
process could execute this command and prevent merging a feature into the main branch if these
linting rules fail: no more broken windows.
For configuring these linting rules, we need to add tags to each app and lib in our monorepo. For
this, you can adjust the project.json in the app's or lib's folder. For instance, the project.json for
the shell can be found at apps/shell/project.json. At its end, you find a property tags, which I've
set to scope:shell:
1 {
2 [...]
3 "tags": ["scope:shell"]
4 }
The values for the tags are just strings. Hence, you can set any value you like. I've repeated this for
mfe1 (scope:mfe1) and the auth-lib (scope:auth-lib).
Once the tags are in place, you can use them to define constraints in your global eslint configuration
(.eslintrc.json):
1 "@nrwl/nx/enforce-module-boundaries": [
2 "error",
3 {
4 "enforceBuildableLibDependency": true,
5 "allow": [],
6 "depConstraints": [
7 {
8 "sourceTag": "scope:shell",
9 "onlyDependOnLibsWithTags": ["scope:shell", "scope:shared"]
10 },
11 {
12 "sourceTag": "scope:mfe1",
13 "onlyDependOnLibsWithTags": ["scope:mfe1", "scope:shared"]
14 },
15 {
16 "sourceTag": "scope:shared",
17 "onlyDependOnLibsWithTags": ["scope:shared"]
18 }
19 ]
20 }
21 ]
After changing global configuration files like the .eslintrc.json, it’s a good idea to restart your
IDE (or at least affected services of your IDE). This makes sure the changes are respected.
More on these ideas and their implementation with Nx can be found in the chapters on Strategic
Design.
Incremental Builds
To build all apps, you can use Nx’ run-many command:
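The listing with the command is missing from this excerpt. Nx's run-many command presumably looks like this (a sketch):

```shell
nx run-many --target=build --all
```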
However, this does not mean that Nx always rebuilds all the Micro Frontends as well as the shell.
Instead, it only rebuilds the changed apps. For instance, in the following case mfe1 was not changed.
Hence, only the shell is rebuilt:
Using the build cache to only recompile changed apps can dramatically speed up your
build times.
This also works for testing, e2e-tests, and linting out of the box. If an application (or library) hasn’t
been changed, it’s neither retested nor relinted. Instead, the result is taken from the Nx build cache.
By default, the build cache is located in node_modules/.cache/nx. However, there are several options
for configuring how and where to cache.
Deploying
As libraries normally don't have versions in a monorepo, we should always redeploy all the
changed Micro Frontends together. Fortunately, Nx helps with finding out which applications/Micro
Frontends have been changed or affected by a change:
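The concrete command is not shown here. In the Nx versions this chapter is based on, the affected applications can presumably be listed like this (a sketch):

```shell
nx affected:apps
```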
You might also want to detect the changed applications as part of your build process.
Redeploying all applications that have been changed or affected by a (lib) change is vital,
if you share libraries at runtime. If you have a release branch, it’s enough to just redeploy
all apps that have been changed in this branch.
If you want to have a graphical representation of the changed parts of your monorepo, you can
request a dependency graph with the following command:
1 nx affected:graph
Assuming we changed the domain-logic lib used by mfe1, the result would look as follows:
By default, the shown commands compare your current working directory with the main branch.
However, you can use these commands with the switches --base and --head.
These switches take a commit hash or the name of a branch. In the latter case, the last commit of
the mentioned branch is used for the comparison.
Conclusion
By using monorepos for Micro Frontends, you trade in some freedom for preventing issues. You can
still deploy Micro Frontends separately, and thanks to linting rules provided by Nx, Micro Frontends
can be isolated from each other.
However, you need to agree on common versions for the frameworks and libraries used. In exchange,
you don't end up with version conflicts at runtime. This also prevents increased bundle sizes.
Both approaches work; however, they have different consequences. It's up to you to evaluate these
consequences for your project.
Dealing with Version Mismatches in
Module Federation
Webpack Module Federation makes it easy to load separately compiled code like micro frontends. It
even allows us to share libraries among them. This prevents the same library from being loaded
several times.
However, there might be situations where several micro frontends and the shell need different
versions of a shared library. Also, these versions might not be compatible with each other.
For dealing with such cases, Module Federation provides several options. In this chapter, I present
these options by looking at different scenarios. The source code⁵³ for these scenarios can be found
in my GitHub account⁵⁴.
Big thanks to Tobias Koppers⁵⁵, founder of webpack, for answering several questions
about this topic and for proofreading this chapter.
1 new ModuleFederationPlugin({
2 [...],
3 shared: ["rxjs", "useless-lib"]
4 })
If you are new to Module Federation, you can find an explanation about it here⁵⁶.
The package useless-lib⁵⁷ is a dummy package, I’ve published for this example. It’s available in the
versions 1.0.0, 1.0.1, 1.1.0, 2.0.0, 2.0.1, and 2.1.0. In the future, I might add further ones. These
versions allow us to simulate different kinds of version mismatches.
To indicate the installed version, useless-lib exports a version constant. As you can see in the
screenshot above, the shell and the micro frontend display this constant. In the shown constellation,
⁵⁶https://www.angulararchitects.io/aktuelles/the-microfrontend-revolution-module-federation-in-webpack-5/
⁵⁷https://www.npmjs.com/package/useless-lib
both use the same version (1.0.0), and hence they can share it. Therefore, useless-lib is only loaded
once.
However, in the following sections, we will examine what happens if there are version mismatches
between the useless-lib used in the shell and the one used in the microfrontend. This also allows
me to explain different concepts Module Federation implements for dealing with such situations.
• Shell: useless-lib@^1.0.0
• MFE1: useless-lib@^1.0.1
Module Federation decides to go with version 1.0.1 as this is the highest version compatible with
both applications according to semantic versioning (^1.0.0 means we can also go with higher
minor and patch versions).
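Conceptually, this negotiation can be modeled in a few lines of code. The following is just an illustration of the idea, not Module Federation's actual implementation, and it only handles simple caret ranges like ^1.0.0:

```javascript
// Sketch: picking the highest offered version that satisfies all caret ranges.

function parse(v) {
  return v.split('.').map(Number);
}

// Does `version` satisfy the caret range `^base`?
// (same major; minor/patch not lower than base)
function satisfiesCaret(version, base) {
  const [vMaj, vMin, vPat] = parse(version);
  const [bMaj, bMin, bPat] = parse(base);
  if (vMaj !== bMaj) return false;
  if (vMin !== bMin) return vMin > bMin;
  return vPat >= bPat;
}

// Pick the highest offered version satisfying every requested range.
// Returns undefined if none fits (→ fallback module).
function negotiate(offered, requestedBases) {
  const candidates = offered.filter(v =>
    requestedBases.every(b => satisfiesCaret(v, b))
  );
  candidates.sort((a, b) => {
    const [x, y] = [parse(a), parse(b)];
    return (x[0] - y[0]) || (x[1] - y[1]) || (x[2] - y[2]);
  });
  return candidates[candidates.length - 1];
}

// Shell requires ^1.0.0, MFE1 requires ^1.0.1 → both can share 1.0.1
console.log(negotiate(['1.0.0', '1.0.1'], ['1.0.0', '1.0.1'])); // "1.0.1"
```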
• Shell: useless-lib@~1.0.0
• MFE1: useless-lib@1.1.0
Both versions are not compatible with each other (~1.0.0 means that only a higher patch version,
but not a higher minor version, is acceptable).
This leads to the following result:
This shows that Module Federation uses different versions for both applications. In our case, each
application falls back to its own version, which is also called the fallback module.
• Shell: useless-lib@^1.0.0
• MFE1: useless-lib@^1.0.1
While in the case of classic (static) Module Federation both applications would agree upon using
version 1.0.1 during the initialization phase, here, in the case of Dynamic Module Federation, the
shell does not even know of the micro frontend in this phase. Hence, it can only choose its own
version:
If there were other static remotes (e.g. micro frontends), the shell could also choose one of their
versions according to semantic versioning, as shown above.
Unfortunately, when the dynamic micro frontend is loaded, module federation does not find an
already loaded version compatible with 1.0.1. Hence, the micro frontend falls back to its own
version 1.0.1.
Conversely, let's assume the shell has the highest compatible version:
• Shell: useless-lib@^1.1.0
• MFE1: useless-lib@^1.0.1
In this case, the micro frontend would decide to use the already loaded one:
To put it in a nutshell, it's generally a good idea to make sure your shell provides the highest
compatible version when dynamic remotes are loaded as late as possible.
However, as discussed in the chapter about Dynamic Module Federation, it’s possible to dynamically
load just the remote entry point on program start and to load the micro frontend later on demand.
By splitting this into two loading processes, the behavior is exactly the same as with static (“classic”)
Module Federation. The reason is that in this case the remote entry's metadata is available early
enough to be considered during the negotiation of the versions.
Singletons
Falling back to another version is not always the best solution: using more than one version can
lead to unforeseeable effects when we talk about libraries holding state. This is virtually always the
case for your leading application framework/library like Angular, React, or Vue.
For such scenarios, Module Federation allows us to define libraries as singletons. Such a singleton
is only loaded once.
If there are only compatible versions, Module Federation will decide for the highest one as shown in
the examples above. However, if there is a version mismatch, singletons prevent Module Federation
from falling back to a further library version.
For this, let’s consider the following version mismatch:
• Shell: useless-lib@^2.0.0
• MFE1: useless-lib@^1.1.0
1 // Shell
2 shared: {
3 "rxjs": {},
4 "useless-lib": {
5 singleton: true,
6 }
7 },
Here, we use an advanced configuration for defining singletons. Instead of a simple array, we go
with an object where each key represents a package.
If one library is used as a singleton, you will very likely set the singleton property in every
configuration. Hence, I’m also adjusting the microfrontend’s Module Federation configuration
accordingly:
1 // MFE1
2 shared: {
3 "rxjs": {},
4 "useless-lib": {
5 singleton: true
6 }
7 }
To prevent loading several versions of the singleton package, Module Federation decides to load
only the highest version it is aware of during the initialization phase. In our case, this is version
2.0.0:
However, as version 2.0.0 is not compatible with version 1.1.0 according to semantic versioning,
we get a warning. If we are lucky, the federated application works even though we have this
mismatch. However, if version 2.0.0 introduced breaking changes we run into, our application might
fail.
In the latter case, it might be beneficial to fail fast when detecting the mismatch by throwing an
exception. To make Module Federation behave this way, we set strictVersion to true:
1 // MFE1
2 shared: {
3 "rxjs": {},
4 "useless-lib": {
5 singleton: true,
6 strictVersion: true
7 }
8 }
Version mismatches regarding singletons using strictVersion make the application fail
• Shell: useless-lib@^2.0.0
• MFE1: useless-lib@^1.1.0
Now, we can use the requiredVersion option for the useless-lib when configuring the micro
frontend:
1 // MFE1
2 shared: {
3 "rxjs": {},
4 "useless-lib": {
5 singleton: true,
6 strictVersion: true,
7 requiredVersion: ">=1.1.0 <3.0.0"
8 }
9 }
According to this, we also accept everything having 2 as the major version. Hence, we can use the
version 2.0.0 provided by the shell for the micro frontend:
Conclusion
Module Federation brings several options for dealing with different versions and version mismatches.
Most of the time, you don't need to do anything, as it uses semantic versioning to decide on the
highest compatible version. If a remote needs an incompatible version, it falls back to its own version
by default.
In cases where you need to prevent loading several versions of the same package, you can define a
shared package as a singleton. In this case, the highest version known during the initialization phase
is used, even though it’s not compatible with all needed versions. If you want to prevent this, you
can make Module Federation throw an exception using the strictVersion option.
You can also ease the requirements for a specific version by defining a version range using
requiredVersion. You can even define several scopes for advanced scenarios where each of them
can get its own version.
Multi-Framework and -Version Micro
Frontends with Module Federation
Most articles on Module Federation assume you have just one version of your major framework,
e.g. Angular. However, what do you do if you have to mix and match different versions or different
frameworks? No worries, we've got you covered. This chapter uses an example to explain how to
develop such a scenario in 4 steps.
Example
Please find the live demo and the source code here:
• Live Example⁵⁸
• Source Code Shell⁵⁹
• Source Code for Micro Frontend⁶⁰
• Source Code for Micro Frontend with Routing⁶¹
• Source Code for Micro Frontend with Vue⁶²
⁵⁸https://red-ocean-0fe4c4610.azurestaticapps.net
⁵⁹https://github.com/manfredsteyer/multi-framework-version
⁶⁰https://github.com/manfredsteyer/angular-app1
⁶¹https://github.com/manfredsteyer/angular3-app
⁶²https://github.com/manfredsteyer/vue-js
Pattern or Anti-Pattern?
In his recent talk on Micro Frontend Anti Patterns⁶⁵, my friend Luca Mezzalira⁶⁶ mentions using
several frontend frameworks in one application. He calls this anti-pattern the Hydra of Lerna⁶⁷. The
name comes from a water monster in Greek and Roman mythology with several heads.
There’s a good reason for considering this an anti pattern: Current frameworks are not prepared
to be bootstrapped in the same browser tab together with other frameworks or other versions of
themselves. Besides leading to bigger bundles, this also increases the complexity and calls for some
workarounds.
However, Luca also explains that there are some situations where such an approach might be needed.
He brings up the following examples:
This all speaks from my heart and perfectly correlates with the “story” I'm telling a lot at
conferences and in our company workshops: try to avoid mixing frameworks and versions in the
browser. However, if you have a good reason for doing so after ruling out the alternatives, there are
ways of making Multi-Framework/Multi-Version Micro Frontends work.
As always in the area of software architecture – and probably in life in general – it's all about trade-offs.
So if you find out that this approach comes with fewer drawbacks than the alternatives with
respect to your architecture goals, let's go for it.
The first two options correlate with each other. We need to display and hide our Micro Frontends
on demand, e. g. when activating a specific menu item. As each Micro Frontend is a self-contained
frontend, this also means we have to bootstrap it on demand in the middle of our page. For this,
different frameworks provide different methods or functions. When wrapped into Web Components,
all we need to do is to add or remove the respective HTML element registered with the Web
Component.
Isolating CSS styles via Shadow DOM helps make teams more self-sufficient. However, I've seen
quite often that teams trade in a bit of independence for some global CSS rules provided by the shell.
In this case, the Shadow DOM emulation provided by Angular (with and without Web Components)
is a good choice: while it prevents styles from other components bleeding into yours, it also allows
sharing global styles.
Also, at first glance, Custom Events and Properties seem to be a good choice for communication.
However, for the sake of simplicity, I meanwhile prefer a simple object acting as a mediator or
“mini message bus” in the global namespace.
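Such a mediator can be sketched in a few lines. The following is an illustration of the idea, not code from the example projects; the name messageBus is made up:

```typescript
// A minimal "mini message bus": a plain object in the global namespace
// that all Micro Frontends use to publish and subscribe to events.
type Handler = (data: unknown) => void;

const bus = {
  handlers: new Map<string, Handler[]>(),
  publish(topic: string, data: unknown): void {
    (this.handlers.get(topic) ?? []).forEach(h => h(data));
  },
  subscribe(topic: string, handler: Handler): void {
    this.handlers.set(topic, [...(this.handlers.get(topic) ?? []), handler]);
  },
};

// Expose it globally so every Micro Frontend sees the same instance
(globalThis as any).messageBus = bus;

// Usage: one Micro Frontend publishes, another one listens
let received = '';
bus.subscribe('userLoggedIn', data => { received = data as string; });
bus.publish('userLoggedIn', 'Jane');
console.log(received); // "Jane"
```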
In general, we have to keep in mind that such Web Components wrapping whole Micro Frontends
are not typical Web Components. I'm stressing this because sometimes people confuse the idea of a
(Web) Component with the idea of a Micro Frontend or use these terms synonymously. This leads
to far too fine-grained Micro Frontends, causing lots of integration issues.
Implementation in 4 steps
Now, after discussing the implementation strategy, let’s look at the promised 4 steps for building
such a solution.
1 npm i @angular/elements
• By going with an empty bootstrap array, Angular won't directly bootstrap any component on
startup. However, in such cases, Angular demands that we place custom bootstrap logic in
the ngDoBootstrap method described by the DoBootstrap interface.
• ngDoBootstrap uses Angular Elements' createCustomElement to wrap your AppComponent in a
Web Component. To make it work with DI, you also need to pass the current Injector.
• The method customElements.define registers the Web Component under the name angular1-element
with the browser.
The result of this is that the browser will mount the application in every angular1-element tag that
occurs in your application.
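The listing these bullet points refer to is not reproduced in this excerpt. Based on them, the AppModule could look like this sketch (file paths are assumptions):

```typescript
import { DoBootstrap, Injector, NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { createCustomElement } from '@angular/elements';

import { AppComponent } from './app/app.component';

@NgModule({
  imports: [BrowserModule],
  declarations: [AppComponent],
  bootstrap: [] // empty: no component is bootstrapped directly on startup
})
export class AppModule implements DoBootstrap {
  constructor(private injector: Injector) {}

  ngDoBootstrap(): void {
    // Wrap the AppComponent in a Web Component ...
    const element = createCustomElement(AppComponent, {
      injector: this.injector
    });
    // ... and register it with the browser
    customElements.define('angular1-element', element);
  }
}
```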
If your framework doesn’t directly support web components, you can also hand-wrap your
application. For instance, a React component could be wrapped as follows:
1 // app.js
2 import React from 'react'
3 import ReactDOM from 'react-dom'
4
5 class App extends React.Component {
6
7 render() {
8 const reactVersion = require('./package.json').dependencies['react'];
9
10 return ([
11 <h1>
12 React
13 </h1>,
14 <p>
15 React Version: {reactVersion}
16 </p>
17 ])
18 }
19 }
20
21 class Mfe4Element extends HTMLElement {
22 connectedCallback() {
23 ReactDOM.render(<App/>, this);
24 }
25 }
26
27 customElements.define('react-element', Mfe4Element);
1 ng add @angular-architects/module-federation
This installs and initializes the package. If you go with Nx and Angular, it's more common to do
both steps separately:
1 npm i @angular-architects/module-federation -D
2
3 ng g @angular-architects/module-federation:init
In the case of other frameworks like React or Vue, this is all just about adding the ModuleFederationPlugin
to the webpack configuration. Please remember that you need to bootstrap your application
asynchronously in most cases. Hence, your entry file will more or less just contain a dynamic import
loading the rest of the application.
For this reason, the above discussed React-based Micro Frontend uses the following index.js as the
entry point:
1 // index.js
2 import('./app');
1 // main.ts
2 import('./bootstrap');
This common pattern gives Module Federation the necessary time for loading the shared dependen-
cies.
After setting up Module Federation, expose the Web Component-based wrapper via the webpack
configuration:
1 // webpack.config.js
2 [...]
3 module.exports = {
4 [...]
5 plugins: [
6 new ModuleFederationPlugin({
7
8 name: "angular1",
9 filename: "remoteEntry.js",
10
11 exposes: {
12 './web-components': './src/bootstrap.ts',
13 },
14
15 shared: share({
16 "@angular/core": { requiredVersion: "auto" },
As the goal is to show how to mix different versions of Angular, this Micro Frontend uses
Angular 12 while the shell shown below uses a more recent Angular version. Hence, an
older version of @angular-architects/module-federation and the original, more verbose
configuration is used. Please find details on the different versions here⁶⁸.
The settings in the section shared make sure we can mix several versions of a framework but also
reuse an already loaded one if the version numbers fit exactly. For this, requiredVersion should
point to the installed version – the one, you also find in your package.json. The helper method
share that comes with @angular-architects/module-federation takes care of this when setting
requiredVersion to auto.
While, according to semantic versioning, an Angular library with a higher minor or patch
version is backwards compatible, there are no such guarantees for already compiled code.
The reason is that the code emitted by the Angular compiler uses Angular's internal APIs,
for which semantic versioning does not apply. Hence, you should use an exact version
number (without any ^ or ~).
⁶⁸https://github.com/angular-architects/module-federation-plugin/blob/main/migration-guide.md
⁶⁹https://www.angulararchitects.io/aktuelles/multi-framework-and-version-micro-frontends-with-module-federation-the-good-the-bad-the-ugly/
⁷⁰https://www.npmjs.com/package/@angular-architects/module-federation-tools
1 // main.ts
2 import { AppModule } from './app/app.module';
3 import { environment } from './environments/environment';
4 import { bootstrap } from '@angular-architects/module-federation-tools';
5
6 bootstrap(AppModule, {
7 production: environment.production,
8 appType: 'microfrontend' // for micro frontend
9 // appType: 'shell', // for shell
10 });
1 ng add @angular-architects/module-federation
As mentioned above, in the case of Nx and Angular, perform the installation and initialization
separately:
1 npm i @angular-architects/module-federation -D
2 ng g @angular-architects/module-federation:init --type host
The switch --type host generates a typical host configuration. It is available since plugin
version 14.3 and hence since Angular 14.
1 // webpack.config.js
2 const { shareAll, withModuleFederationPlugin } =
3 require('@angular-architects/module-federation/webpack');
4
5 module.exports = withModuleFederationPlugin({
6
7 shared: {
8 ...shareAll({
9 singleton: true,
10 strictVersion: true,
11 requiredVersion: 'auto'
12 }),
13 },
14
15 });
The wrapper component also creates an HTML element with the name react-element, in which the
Web Component is mounted.
If you load a Micro Frontend compiled with Angular 13 or higher, you need to set the property type
to module:
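The corresponding route is not shown in this excerpt. Based on the WebComponentWrapper used below, it presumably looks similar to this sketch (URL, port, and element name are made up):

```typescript
import { WebComponentWrapper, WebComponentWrapperOptions }
  from '@angular-architects/module-federation-tools';

export const APP_ROUTES = [
  {
    path: 'react',
    component: WebComponentWrapper,
    data: {
      type: 'module', // needed for remotes compiled with Angular 13+
      remoteEntry: 'http://localhost:4204/remoteEntry.js',
      exposedModule: './web-components',
      elementName: 'react-element'
    } as WebComponentWrapperOptions
  }
];
```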
Also, in the case of Angular 13+, you don't need the remoteName property. The reason for these two
differences is that the Angular CLI 13+ doesn't emit “old-style JavaScript” files anymore but JavaScript
modules. Their handling in Module Federation is a bit different.
If your Micro Frontend brings its own router, you need to tell your shell that the Micro Frontend
will append further segments to the URL. For this, you can go with the startsWith matcher also
provided by @angular-architects/module-federation-tools:
1 import {
2 startsWith,
3 WebComponentWrapper,
4 WebComponentWrapperOptions
5 }
6 from '@angular-architects/module-federation-tools';
7
8 [...]
9
10 export const APP_ROUTES: Routes = [
11 [...]
12 {
13 matcher: startsWith('angular3'),
14 component: WebComponentWrapper,
15 data: {
16 [...]
17 } as WebComponentWrapperOptions
18 },
19 [...]
20 }
To make this work, the path prefix angular3 used here needs to be used by the Micro Frontend too.
As the routing config is just a data structure, you will find ways to add it dynamically.
Result
The result of this endeavor is an application that consists of different frameworks and framework
versions:
Example
Whenever possible, the framework is shared. Otherwise, a new framework (version) is loaded by
Module Federation. Another advantage of this approach is that it works without any additional
meta framework. We just need some thin helper functions.
The drawbacks are increased complexity and bundle sizes. Also, we are leaving the path of
the supported use cases: None of the frameworks has been officially tested together with other
frameworks or other versions of itself in the same browser tab.
Pitfalls with Module Federation and
Angular
In this chapter, I’m going to destroy my Module Federation example! However, you don’t need to
worry: It’s for a very good reason. The goal is to show typical pitfalls that come up when using
Module Federation together with Angular. Also, I present some strategies for avoiding these pitfalls.
While Module Federation is a really straightforward and thoroughly thought-through solution, using
Micro Frontends in general means turning compile-time dependencies into runtime dependencies.
As a result, the compiler cannot protect you as well as you are used to.
If you want to try out the examples used here, you can fork this example⁷¹.
1 shared: {
2 "@angular/core": { singleton: true, strictVersion: true },
3 "@angular/common": { singleton: true, strictVersion: true },
4 "@angular/router": { singleton: true, strictVersion: true },
5 "@angular/common/http": { singleton: true, strictVersion: true },
6 },
As you see, we don't specify a requiredVersion anymore. Normally, this is not needed because
webpack Module Federation is quite smart at finding out which version you use.
However, now, when compiling the shell (ng build shell), we get the following error:
The reason for this is the secondary entry point @angular/common/http which is a bit like an npm
package within an npm package. Technically, it’s just another file exposed by the npm package
@angular/common.
Unsurprisingly, @angular/common/http uses @angular/common and webpack recognizes this. For this
reason, webpack wants to find out which version of @angular/common is used. For this, it looks into
the npm package’s package.json (@angular/common/package.json) and browses the dependencies
there. However, @angular/common itself is not a dependency of @angular/common and hence, the
version cannot be found.
You will have the same challenge with other packages using secondary entry points, e. g. @angular/material.
To avoid this situation, you can assign versions to all shared libraries by hand:
1 shared: {
2 "@angular/core": {
3 singleton: true,
4 strictVersion: true,
5 requiredVersion: '12.0.0'
6 },
7 "@angular/common": {
8 singleton: true,
9 strictVersion: true,
10 requiredVersion: '12.0.0'
11 },
12 "@angular/router": {
13 singleton: true,
14 strictVersion: true,
15 requiredVersion: '12.0.0'
16 },
17 "@angular/common/http": {
18 singleton: true,
19 strictVersion: true,
20 requiredVersion: '12.0.0'
21 },
22 },
Obviously, this is cumbersome, and so we came up with another solution. Since version 12.3,
@angular-architects/module-federation⁷² comes with an unspectacular-looking helper function
called share. If your webpack.config.js was generated with this or a newer version, it already
uses this helper function.
⁷²https://www.npmjs.com/package/@angular-architects/module-federation
Pitfalls with Module Federation and Angular 111
1 [...]
2
3 const mf = require("@angular-architects/module-federation/webpack");
4 [...]
5 const share = mf.share;
6
7 [...]
8
9 shared: share({
10 "@angular/core": {
11 singleton: true,
12 strictVersion: true,
13 requiredVersion: 'auto'
14 },
15 "@angular/common": {
16 singleton: true,
17 strictVersion: true,
18 requiredVersion: 'auto'
19 },
20 "@angular/router": {
21 singleton: true,
22 strictVersion: true,
23 requiredVersion: 'auto'
24 },
25 "@angular/common/http": {
26 singleton: true,
27 strictVersion: true,
28 requiredVersion: 'auto'
29 },
30 "@angular/material/snack-bar": {
31 singleton: true,
32 strictVersion: true,
33 requiredVersion:'auto'
34 },
35 })
As you see here, the share function wraps the object with the shared libraries. It allows using
requiredVersion: 'auto' and converts the value auto to the version found in your shell's (or your
micro frontend's) package.json.
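Conceptually, resolving 'auto' could look like the following sketch. The real helper in @angular-architects/module-federation is more elaborate (it also reads the package.json from disk and handles further cases); this model only illustrates the mapping:

```typescript
// Hypothetical sketch of resolving requiredVersion: 'auto'.
type ShareConfig = { singleton?: boolean; strictVersion?: boolean; requiredVersion?: string };

function resolveAuto(
  shared: Record<string, ShareConfig>,
  deps: Record<string, string> // dependencies section of the project's package.json
): Record<string, ShareConfig> {
  const result: Record<string, ShareConfig> = {};
  for (const [key, config] of Object.entries(shared)) {
    if (config.requiredVersion !== 'auto') {
      result[key] = config;
      continue;
    }
    // A secondary entry point like '@angular/common/http' takes the version
    // of its containing package '@angular/common'
    const pkg = key.startsWith('@') ? key.split('/').slice(0, 2).join('/') : key.split('/')[0];
    result[key] = { ...config, requiredVersion: deps[pkg] };
  }
  return result;
}

const resolved = resolveAuto(
  { '@angular/common/http': { singleton: true, strictVersion: true, requiredVersion: 'auto' } },
  { '@angular/common': '^12.0.0' }
);
// resolved['@angular/common/http'].requiredVersion is now '^12.0.0'
```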
1 npm i @angular/material@10
2 npm i @angular/cdk@10
Now, let's switch to the Micro Frontend's (mfe1) FlightsModule to import the MatSnackBarModule:
1 [...]
2 import { MatSnackBarModule } from '@angular/material/snack-bar';
3 [...]
4
5 @NgModule({
6 imports: [
7 [...]
8 // Add this line
9 MatSnackBarModule,
10 ],
11 declarations: [
12 [...]
13 ]
14 })
15 export class FlightsModule { }
To make use of the snack bar in the FlightsSearchComponent, inject it into its constructor and call
its open method:
1 [...]
2 import { MatSnackBar } from '@angular/material/snack-bar';
3
4 @Component({
5 selector: 'app-flights-search',
6 templateUrl: './flights-search.component.html'
7 })
8 export class FlightsSearchComponent {
9 constructor(snackBar: MatSnackBar) {
10 snackBar.open('Hello World!');
11 }
12 }
Also, for this experiment, make sure the webpack.config.js in the project mfe1 does not define the
versions of the dependencies shared:
1 shared: {
2 "@angular/core": { singleton: true, strictVersion: true },
3 "@angular/common": { singleton: true, strictVersion: true },
4 "@angular/router": { singleton: true, strictVersion: true },
5 "@angular/common/http": { singleton: true, strictVersion: true },
6 },
Not defining these versions by hand forces Module Federation to detect them automatically.
However, the peer dependency conflict gives Module Federation a hard time, and so it brings
up the following error:
While @angular/material and @angular/cdk officially need @angular/core 10, the rest of the
application already uses @angular/core 12. This shows that webpack looks into the package.json
files of all the shared dependencies to determine the needed versions.
In order to resolve this, you can set the versions by hand or by using the helper function share that
uses the version found in your project’s package.json:
1 [...]
2
3 const mf = require("@angular-architects/module-federation/webpack");
4 [...]
5 const share = mf.share;
6
7 [...]
8
9 shared: share({
10 "@angular/core": {
11 singleton: true,
12 strictVersion: true,
13 requiredVersion: 'auto'
14 },
15 "@angular/common": {
16 singleton: true,
17 strictVersion: true,
18 requiredVersion: 'auto'
19 },
20 "@angular/router": {
21 singleton: true,
22 strictVersion: true,
23 requiredVersion: 'auto'
24 },
25 "@angular/common/http": {
26 singleton: true,
27 strictVersion: true,
28 requiredVersion: 'auto'
29 },
30 "@angular/material/snack-bar": {
31 singleton: true,
32 strictVersion: true,
33 requiredVersion:'auto'
34 },
35 })
If auth-lib was a traditional npm package, we could just register it as a shared library with module
federation. However, in our case, the auth-lib is just a library in our monorepo. And libraries in
that sense are just folders with source code.
To make this folder look like a npm package, there is a path mapping for it in the tsconfig.json:
1 "paths": {
2 "auth-lib": [
3 "projects/auth-lib/src/public-api.ts"
4 ]
5 }
Please note that we are directly pointing to the src folder of the auth-lib. Nx does this by default.
If you go with a traditional CLI project, you need to adjust this by hand.
Fortunately, Module Federation has us covered in such scenarios. To make configuring such cases a
bit easier, as well as to prevent issues with the Angular compiler, @angular-architects/module-federation
provides a configuration property called sharedMappings:
1 module.exports = withModuleFederationPlugin({
2
3 // Shared packages:
4 shared: [...],
5
6 // Explicitly share mono-repo libs:
7 sharedMappings: ['auth-lib'],
8
9 });
Obviously, if you don't opt into sharing the library in one of the projects, this project will get
its own copy of the auth-lib, and hence sharing the user name isn't possible anymore.
However, there is a constellation with the same underlying issue that is anything but obvious. To
construct this situation, let's add another library to our monorepo:
1 ng g lib other-lib
Also, make sure we have a path mapping for it pointing to its source code:
1 "paths": {
2 "other-lib": [
3 "projects/other-lib/src/public-api.ts"
4 ],
5 }
Let’s assume we also want to store the current user name in this library:
11 constructor() { }
12
13 }
However, now the Micro Frontend has three ways of getting the defined user name:
At first sight, all three options should bring up the same value. However, if we only share
auth-lib but not other-lib, we get the following result:
As other-lib is not shared, both the auth-lib and the micro frontend get their very own version
of it. Hence, we have two instances of it in place here. While the first one knows the user name, the
second one doesn't.
What can we learn from this? Well, it would be a good idea to also share the dependencies of our
shared libraries, regardless of whether they are libraries in a monorepo or traditional npm packages.
This also holds true for the secondary entry points our shared libraries belong to.
Hint: @angular-architects/module-federation comes with a helper function shareAll for sharing
all dependencies defined in your project’s package.json:
1 shared: {
2 ...shareAll({
3 singleton: true,
4 strictVersion: true,
5 requiredVersion: 'auto'
6 }),
7 }
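The effect of shareAll can be modeled as follows. This is a hypothetical sketch, assuming the helper simply applies the same defaults to every entry under dependencies:

```typescript
// Hypothetical sketch of shareAll: apply the same share options to every
// package listed under dependencies in package.json.
type ShareConfig = { singleton?: boolean; strictVersion?: boolean; requiredVersion?: string };

function shareAllSketch(
  defaults: ShareConfig,
  deps: Record<string, string>
): Record<string, ShareConfig> {
  const shared: Record<string, ShareConfig> = {};
  for (const pkg of Object.keys(deps)) {
    shared[pkg] = { ...defaults }; // every dependency gets the same defaults
  }
  return shared;
}

const shared = shareAllSketch(
  { singleton: true, strictVersion: true, requiredVersion: 'auto' },
  { '@angular/core': '^12.0.0', '@angular/common': '^12.0.0' }
);
// shared now contains one configured entry per dependency
```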
This can at least lower the pain in such cases for prototyping. Also, you can make share and shareAll
include all secondary entry points by using the property includeSecondaries:
1 shared: share({
2 "@angular/common": {
3 singleton: true,
4 strictVersion: true,
5 requiredVersion: 'auto',
6 includeSecondaries: {
7 skip: ['@angular/http/testing']
8 }
9 },
10 [...]
11 })
It seems like the loaded Micro Frontend mfe1 cannot get hold of the HttpClient. Perhaps it even
works when running mfe1 in standalone mode.
The reason for this is very likely that we are not exposing the whole Micro Frontend via Module
Federation but only selected parts, e.g. some feature modules with child routes:
Or to put it another way: DO NOT expose the Micro Frontend's AppModule. However, if we expect
the Micro Frontend's AppModule to provide some global services like the HttpClient, we need to
provide them in the shell's AppModule instead:
1 // Shell's AppModule
2 @NgModule({
3 imports: [
4 [...]
5 // Provide global services your micro frontends expect:
6 HttpClientModule,
7 ],
8 [...]
9 })
10 export class AppModule { }
As you see here, now the shell's AppModule uses the Micro Frontend's AppModule. If you use the
router, you will get some initial issues, because you need to call RouterModule.forRoot for each
AppModule (root module) on the one hand, while you are only allowed to call it once on the other.
But if you just shared components or services, this might work at first sight. However, the actual
issue here is that Angular creates a root scope for each root module. Hence, we now have two root
scopes, which no one expects.
This also duplicates all shared services registered for the root scope, e.g. with providedIn: 'root'.
Hence, both the shell and the Micro Frontend get their very own copy of these services.
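The duplication can be illustrated with a minimal stand-in for Angular's injector. This is not Angular's real dependency injection, just a sketch of why two root scopes yield two service instances:

```typescript
// Minimal sketch (not Angular's real injector) showing why two root scopes
// mean two instances of a providedIn: 'root' service.
class AuthService {
  userName = '';
}

class RootInjector {
  private cache = new Map<Function, unknown>();
  get<T>(token: new () => T): T {
    if (!this.cache.has(token)) {
      this.cache.set(token, new token()); // each root scope lazily creates its own instance
    }
    return this.cache.get(token) as T;
  }
}

const shellRoot = new RootInjector(); // created for the shell's root module
const mfeRoot = new RootInjector();   // created for the micro frontend's root module

shellRoot.get(AuthService).userName = 'Jane';
console.log(mfeRoot.get(AuthService).userName); // empty: the other copy knows nothing
```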
A simple but non-preferable solution is to put your shared services into the platform scope:
However, normally this scope is intended for Angular-internal stuff. Hence, the only clean
solution here is to not share your AppModule but only lazy feature modules. By using this practice,
you ensure (more or less) that these feature modules work the same when loaded into the shell as
when used in standalone mode.
With the error inject() must be called from an injection context, Angular tells us that several
Angular versions are loaded at once.
To provoke this error, adjust your shell’s webpack.config.js as follows:
1 shared: share({
2 "@angular/core": { requiredVersion: 'auto' },
3 "@angular/common": { requiredVersion: 'auto' },
4 "@angular/router": { requiredVersion: 'auto' },
5 "@angular/common/http": { requiredVersion: 'auto' },
6 })
Please note that these libraries are not configured to be singletons anymore. Hence, Module
Federation allows loading several versions of them if there is no highest compatible version.
Also, you have to know that the shell's package.json points to Angular 12.0.0 without ^ or ~;
hence, we need exactly this very version.
If we load a Micro Frontend that uses a different Angular version, Module Federation falls back to
loading Angular twice, once the version for the shell and once the version for the Micro Frontend.
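The decision Module Federation makes here can be sketched as follows. This is a deliberately simplified model, assuming an exactly pinned shell version and no singleton flag:

```typescript
// Simplified model: with singleton: false and an exactly pinned version,
// a provided version is only reused if it matches the required one.
function satisfiesExact(provided: string, required: string): boolean {
  // The shell pins Angular to 12.0.0 without ^ or ~, so only the very
  // same version counts as compatible in this sketch.
  return provided === required;
}

function bundlesToLoad(shellVersion: string, mfeVersion: string): string[] {
  return satisfiesExact(mfeVersion, shellVersion)
    ? [shellVersion]               // one shared copy is enough
    : [shellVersion, mfeVersion];  // fallback: the framework is loaded twice
}

console.log(bundlesToLoad('12.0.0', '12.0.0')); // one bundle
console.log(bundlesToLoad('12.0.0', '11.2.0')); // two bundles: Angular twice
```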
You can try this out by updating the shell’s app.routes.ts as follows:
1 {
2 path: 'flights',
3 loadChildren: () => loadRemoteModule({
4 remoteEntry:
5 'https://brave-plant-03ca65b10.azurestaticapps.net/remoteEntry.js',
6 remoteName: 'mfe1',
7 exposedModule: './Module'
8 })
9 .then(m => m.AppModule)
10 },
To make exploring this a bit easier, I've provided this Micro Frontend via an Azure Static Web App
found at the URL shown.
If you start your shell and load the Micro Frontend, you will see this error.
What can we learn here? Well, when it comes to your leading, stateful framework – e. g. Angular –
it’s a good idea to define it as a singleton. I’ve written down some details on this in the chapter on
version mismatches.
If you really want to mix and match different versions of Angular, I've got you covered with
another chapter of this book. However, you know what they say: Be careful what you wish for.
The reason for this duplication is that Module Federation generates a bundle per shared library per
consumer. The consumer in this sense is a federated project (shell or Micro Frontend) or a shared
library. This is done to have a fallback bundle for resolving version conflicts. In general, this makes
sense, even though it doesn't bring any advantages in such a very specific case.
However, if everything is configured in the right way, only one of these duplicates should be loaded
at runtime. As long as this is the case, you don’t need to worry about duplicates.
Conclusion
Module Federation is really clever when it comes to auto-detecting details and compensating for
version mismatches. However, it can only be as good as the meta data it gets. To avoid getting off
the rails, you should remember the following:
• requiredVersion: Assign the requiredVersion by hand, especially when working with secondary
entry points and when having peer dependency warnings. The plugin @angular-architects/module-federation
has you covered with its share helper function, allowing the option requiredVersion: 'auto'
that takes the version number from your project's package.json.
• Share dependencies of shared libraries too, especially if they are also used somewhere else. Also
think of secondary entry points.
• Make the shell provide global services the loaded Micro Frontends need, e. g. the HttpClient
via the HttpClientModule.
• Never expose the AppModule via Module Federation. Prefer to expose lazy Feature modules.
• Use singleton: true for Angular and other stateful frameworks and their respective libraries.
• Don’t worry about duplicated bundles as long as only one of them is loaded at runtime.
Module Federation with Angular’s
Standalone Components
Most tutorials on Module Federation and Angular expose Micro Frontends in the form of NgModules.
However, with the introduction of Standalone Components, we will have lightweight Angular
solutions not leveraging NgModules anymore. This leads to the question: how do we use Module
Federation in a world without NgModules?
In this chapter, I give answers. We see both how to expose a bunch of routes pointing to Standalone
Components and how to load an individual Standalone Component. For this, I've updated my
example to fully work without NgModules:
Interestingly, Standalone Components belonging together can be grouped using a router config.
Hence, we can expose and lazy load such router configurations.
1 // projects/mfe1/src/main.ts
2
3 import { environment } from './environments/environment';
4 import { enableProdMode, importProvidersFrom } from '@angular/core';
5 import { bootstrapApplication } from '@angular/platform-browser';
6 import { AppComponent } from './app/app.component';
7 import { RouterModule } from '@angular/router';
8 import { MFE1_ROUTES } from './app/mfe1.routes';
9
10
11 if (environment.production) {
12 enableProdMode();
13 }
14
15 bootstrapApplication(AppComponent, {
16 providers: [
17 importProvidersFrom(RouterModule.forRoot(MFE1_ROUTES))
18 ]
19 });
When bootstrapping, the application registers its router config MFE1_ROUTES via its providers.
This router config points to several Standalone Components:
1 // projects/mfe1/src/app/mfe1.routes.ts
2
3 import { Routes } from '@angular/router';
4 import { FlightSearchComponent }
5 from './booking/flight-search/flight-search.component';
6 import { PassengerSearchComponent }
7 from './booking/passenger-search/passenger-search.component';
8 import { HomeComponent } from './home/home.component';
9
10 export const MFE1_ROUTES: Routes = [
11 {
12 path: '',
13 component: HomeComponent,
14 pathMatch: 'full'
15 },
16 {
17 path: 'flight-search',
18 component: FlightSearchComponent
19 },
20 {
21 path: 'passenger-search',
22 component: PassengerSearchComponent
23 }
24 ];
Here, importProvidersFrom bridges the gap between the existing RouterModule and the world of
Standalone Components. As a replacement for this, future versions of the router will expose a
function for setting up the router’s providers. According to the underlying CFP, this function will
be called configureRouter.
The shell used here is just an ordinary Angular application. Using lazy loading, we are going to
make it reference the Micro Frontend at runtime.
1 npm i @angular-architects/module-federation
2
3 ng g @angular-architects/module-federation:init
4 --project mfe1 --port 4201 --type remote
This command generates a webpack.config.js. For our purpose, we have to modify the exposes
section as follows:
This configuration exposes both the Micro Frontend's router configuration (pointing to Standalone
Components) and a Standalone Component.
Static Shell
Now, let's also activate Module Federation for the shell. In this section, I focus on Static Federation.
This means we are going to map the paths pointing to our Micro Frontends in the webpack.config.js.
The next section shows how to switch to Dynamic Federation, where we can define the
key data for loading a Micro Frontend at runtime.
To enable Module Federation for the shell, let’s execute this command:
1 ng g @angular-architects/module-federation:init
2 --project shell --port 4200 --type host
The webpack.config.js generated for the shell needs to point to the Micro Frontend:
As we are going with static federation, we also need typings for all configured paths (EcmaScript
modules) referencing Micro Frontends:
1 // projects/shell/src/decl.d.ts
2
3 declare module 'mfe1/*';
Now, all it takes is a lazy route in the shell, pointing to the routes and the Standalone Component
exposed by the Micro Frontend:
1 // projects/shell/src/app/app.routes.ts
2
3 import { Routes } from '@angular/router';
4 import { HomeComponent }
5 from './home/home.component';
6 import { NotFoundComponent }
7 from './not-found/not-found.component';
8 import { ProgrammaticLoadingComponent }
9 from './programmatic-loading/programmatic-loading.component';
10
11 export const APP_ROUTES: Routes = [
12 {
13 path: '',
14 component: HomeComponent,
15 pathMatch: 'full'
16 },
17
18 {
19 path: 'booking',
21 loadChildren: () => import('mfe1/routes').then(m => m.MFE1_ROUTES)
21 },
22
23 {
24 path: 'my-tickets',
25 loadComponent: () =>
26 import('mfe1/Component').then(m => m.MyTicketsComponent)
27 },
28
29 [...]
30
31 {
32 path: '**',
33 component: NotFoundComponent
34 }
35 ];
Also, in the shell's router config, we need to replace the dynamic imports used before with calls to
loadRemoteModule:
1 // projects/shell/src/app/app.routes.ts
2
3 import { Routes } from '@angular/router';
4 import { HomeComponent } from './home/home.component';
5 import { NotFoundComponent } from './not-found/not-found.component';
6 import { ProgrammaticLoadingComponent }
7 from './programmatic-loading/programmatic-loading.component';
8 import { loadRemoteModule } from '@angular-architects/module-federation';
9
10 export const APP_ROUTES: Routes = [
11 {
12 path: '',
13 component: HomeComponent,
14 pathMatch: 'full'
15 },
16 {
17 path: 'booking',
18 loadChildren: () =>
19 loadRemoteModule({
20 type: 'module',
21 remoteEntry: 'http://localhost:4201/remoteEntry.js',
22 exposedModule: './routes'
23 })
24 .then(m => m.MFE1_ROUTES)
25 },
26 {
27 path: 'my-tickets',
28 loadComponent: () =>
29 loadRemoteModule({
30 type: 'module',
31 remoteEntry: 'http://localhost:4201/remoteEntry.js',
32 exposedModule: './Component'
33 })
34 .then(m => m.MyTicketsComponent)
35 },
36 [...]
37 {
38 path: '**',
39 component: NotFoundComponent
40 }
41 ];
The loadRemoteModule function takes all the key data Module Federation needs for loading the
remote. This key data is just several strings; hence, you can load it from literally everywhere.
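For instance, the key data could come from a JSON manifest fetched at runtime. The manifest shape and helper below are made up for illustration; the essential point is that only three strings are needed:

```typescript
// Hypothetical manifest-based lookup; the shape and names are illustrative.
type RemoteConfig = {
  type: 'module';
  remoteEntry: string;
  exposedModule: string;
};

// In a real app this would be fetched via HTTP before bootstrapping:
const manifest: Record<string, RemoteConfig> = {
  mfe1: {
    type: 'module',
    remoteEntry: 'http://localhost:4201/remoteEntry.js',
    exposedModule: './routes',
  },
};

// The returned object could be passed straight to loadRemoteModule(...)
function optionsFor(remoteName: string): RemoteConfig {
  const config = manifest[remoteName];
  if (!config) throw new Error(`Unknown remote: ${remoteName}`);
  return config;
}
```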
1 <h1>Programmatic Loading</h1>
2
3 <div>
4 <button (click)="load()">Load!</button>
5 </div>
6
7 <div #placeHolder></div>
1 // projects/shell/src/app/programmatic-loading/programmatic-loading.component.ts
2
3 import {
4 Component,
5 OnInit,
6 ViewChild,
7 ViewContainerRef
8 } from '@angular/core';
9
10 @Component({
11 selector: 'app-programmatic-loading',
12 standalone: true,
13 templateUrl: './programmatic-loading.component.html',
14 styleUrls: ['./programmatic-loading.component.css']
15 })
16 export class ProgrammaticLoadingComponent implements OnInit {
17
18 @ViewChild('placeHolder', { read: ViewContainerRef })
19 viewContainer!: ViewContainerRef;
20
21 constructor() { }
22
23 ngOnInit(): void {
24 }
25
26 async load(): Promise<void> {
27
28 const m = await import('mfe1/Component');
29 const ref = this.viewContainer.createComponent(m.MyTicketsComponent);
30 // const compInstance = ref.instance;
31 // compInstance.ngOnInit()
32 }
33
34 }
This example shows a solution for Static Federation. Hence, a dynamic import is used for getting
hold of the Micro Frontend.
After importing the remote component, we can instantiate it using the ViewContainer's createComponent
method. The returned reference (ref) points to the component instance via its instance property.
The instance allows interacting with the component, e.g. to call methods, set properties, or set up
event handlers.
If we wanted to switch to Dynamic Federation, we would again use loadRemoteModule instead of
Since Native Federation also needs to create a few bundles, it delegates to the bundler of choice. The
individual bundlers are connected via interchangeable adapters.
The following image shows an example built with Angular, esbuild, and Native Federation:
The shell shown here has loaded a separately developed and deployed Micro Frontend into its
workspace using Native Federation.
From Module Federation to esbuild and Native Federation 137
Although both the shell and the micro frontend are based on Angular, Native Federation only loaded
Angular once. To make this possible, Native Federation, following the ideas of Module Federation,
places the remotes and the libraries to be shared in their own bundles. For this, it uses standards-
compliant EcmaScript bundles that could also be created by other tools. Information about these
bundles is placed in metadata files:
These metadata files are the basis for a standard-compliant Import Map that informs the browser
from where which bundles are to be loaded.
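Conceptually, deriving an Import Map from such metadata could look like the sketch below. The metadata shape and file names are illustrative, not Native Federation's actual format:

```typescript
// Hypothetical metadata shape; Native Federation's real format differs.
type RemoteInfo = {
  url: string;                     // base URL of the remote
  exposes: Record<string, string>; // './Component' -> bundle file name
  shared: Record<string, string>;  // '@angular/core' -> bundle file name
};

// Turn the metadata into a standards-compliant Import Map so the browser
// knows from where which bundles are to be loaded.
function buildImportMap(name: string, info: RemoteInfo): { imports: Record<string, string> } {
  const imports: Record<string, string> = {};
  for (const [key, file] of Object.entries(info.exposes)) {
    imports[`${name}/${key.replace('./', '')}`] = info.url + file;
  }
  for (const [pkg, file] of Object.entries(info.shared)) {
    imports[pkg] = info.url + file;
  }
  return { imports };
}

const map = buildImportMap('mfe1', {
  url: 'http://localhost:4201/',
  exposes: { './Component': 'component-abc.js' },
  shared: { '@angular/core': 'core-xyz.js' },
});
// map.imports['mfe1/Component'] points to the remote's exposed bundle
```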
The property name defines a unique name for the remote. The exposes section specifies which files
the remote should expose to the host. Although these files are built and deployed together with the
remote, they can be loaded into the host at runtime. Since the host doesn’t care about the full file
paths, exposes maps them to shorter names.
In the case shown, the remote just publishes its AppComponent for simplicity. However, any part
of the system could be published instead, e.g. lazy routing configurations that reference multiple
components of a feature.
Under shared, the configuration lists all dependencies that the remote wants to share with other
remotes and the host. In order to avoid an exhaustive list of all required npm packages, the shareAll
helper function is used. It includes all packages that are in the package.json under dependencies.
Details about the parameters passed to shareAll can be found in one of the previous chapters about
Module Federation.
Packages that shareAll should not share are entered under skip. This can slightly improve the build
and startup performance of the application. In addition, packages intended for use with Node.js
must be added to skip, since they cannot be compiled for use in the browser.
The type dynamic-host indicates that the remotes to be loaded are defined in a configuration file:
1 {
2 "mfe1" : "http://localhost:4201/remoteEntry.json"
3 }
The exposes entry known from the remote’s config is not generated for hosts because hosts typically
do not publish files for other hosts. However, if you want to set up a host that also acts as a remote
for other hosts, there is nothing wrong with adding this entry.
The main.ts file, also modified by ng add, initializes Native Federation using the manifest:
The initFederation function reads the metadata of each remote and generates an Import Map used
by the browser to load shared packages and exposed modules. The program flow then delegates to the
bootstrap.ts, which starts the Angular solution with the usual instructions (bootstrapApplication
or bootstrapModule).
All files considered so far were set up using ng add. In order to load a program part published by a
remote, the host must be expanded to include lazy loading:
1 […]
2 import { loadRemoteModule } from '@angular-architects/native-federation';
3
4 export const APP_ROUTES: Routes = [
5 […],
6 {
7 path: 'flights',
8 loadComponent: () =>
9 loadRemoteModule('mfe1', './Component').then((m) => m.AppComponent),
10 },
11 […]
12 ];
The lazy route uses the loadRemoteModule helper function to load the AppComponent from the remote.
It takes the name of the remote from the manifest (mfe1) and the name under which the remote
publishes the desired file (./Component).
This routing config needs to be added to the exposes section in the Micro Frontend’s federation.config.js:
21 'rxjs/fetch',
22 'rxjs/testing',
23 'rxjs/webSocket',
24 // Add further packages you don't need at runtime
25 ]
26
27 });
1 [...]
2 import { loadRemoteModule } from '@angular-architects/native-federation';
3
4 export const APP_ROUTES: Routes = [
5 [...]
6
7 {
8 path: 'flights',
9 // loadChildren instead of loadComponent !!!
10 loadChildren: () =>
11 loadRemoteModule('mfe1', './routes').then((m) => m.APP_ROUTES),
12 },
13
14 [...]
15 ];
1 <ul>
2 <li><img src="../assets/angular.png" width="50"></li>
3 <li><a routerLink="/">Home</a></li>
4 <li><a routerLink="/flights/flight-search">Flights</a></li>
5 <li><a routerLink="/flights/holiday-packages">Holidays</a></li>
6 </ul>
7
8 <router-outlet></router-outlet>
to decouple individual frontends from each other. However, if a frontend expects information from
other frontends, exactly the opposite is achieved. Most solutions I’ve seen only share a handful of
contextual information. Examples include the current username, the current client and a few global
filters.
To share information, we first need a shared library. This library can be a separately developed npm
package or a library within the current Angular project. The latter can be generated with
1 ng g lib auth
The name of the library in this case is set as auth. To share data, this library receives a stateful service.
For the sake of brevity, I'm using the simplest stateful service I can think of:
1 @Injectable({
2 providedIn: 'root'
3 })
4 export class AuthService {
5 userName = '';
6 }
In this very simple scenario, the service is used as a blackboard: a Micro Frontend writes information
into the service and another one reads it. However, a somewhat more convenient way to share
information would be a publish/subscribe mechanism through which interested parties can be
informed about value changes. This idea can be realized, for example, by using RxJS subjects.
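A sketch of this pub/sub idea follows. In a real project you would use RxJS's BehaviorSubject; the tiny stand-in below only exists to keep the example self-contained:

```typescript
// Minimal stand-in for RxJS's BehaviorSubject, just for this sketch.
class MiniBehaviorSubject<T> {
  private listeners: Array<(value: T) => void> = [];
  constructor(private current: T) {}
  next(value: T): void {
    this.current = value;
    this.listeners.forEach((l) => l(value));
  }
  subscribe(listener: (value: T) => void): void {
    listener(this.current); // replay the latest value, like BehaviorSubject
    this.listeners.push(listener);
  }
}

// Stateful service shared between micro frontends:
class AuthService {
  readonly userName$ = new MiniBehaviorSubject<string>('');
  login(userName: string): void {
    this.userName$.next(userName);
  }
}

const auth = new AuthService();
const seen: string[] = [];
auth.userName$.subscribe((name) => seen.push(name)); // receives '' immediately
auth.login('Jane');                                  // then receives 'Jane'
```

With this approach, a consumer that subscribes late still gets the latest user name and is informed about all future changes.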
If monorepo-internal libraries are used, they should be made accessible via a path mapping in the
tsconfig.json:
1 "compilerOptions": {
2 "paths": {
3 "@demo/auth": [
4 "projects/auth/src/public-api.ts"
5 ]
6 },
7 […]
8 }
Please note that I’m pointing to public-api.ts in the lib’s source code. This strategy is also used
by Nx, but the CLI points to the dist folder by default. Hence, in the latter case, you need to update
this entry by hand.
It must also be ensured that all communication partners use the same path mapping.
Conclusion
The new esbuild builder provides tremendous improvements in build performance. However, the
popular Module Federation is currently bound to webpack. Native Federation provides the same
mental model and is implemented in a tooling-agnostic way. Hence, it works with all possible
bundlers. Also, it uses web standards like EcmaScript modules and Import Maps. This also allows
for different implementations and makes it a reliable solution in the long run.
The new NGRX Signal Store for
Angular: 3 + n Flavors
Most Angular applications need to preserve some state so that the same data doesn't need to be fetched
time and again from the backend. Examples include information needed when switching back
from a details view to a list view, or information collected while clicking through a wizard.
So far, the default state management solution in the Angular world has been the Redux-based NGRX
Store. Since the advent of Signals in Angular, the NGRX team has been working on a new store that
leverages this new reactive building block. Compared to the traditional NGRX Store, the Signal Store
is lightweight, easy to use, and highly extensible.
This chapter shows how to incorporate it into your application. For this, it shows 3+1 different
flavors of using it.
Source Code⁷⁷
1 npm i @ngrx/signals
⁷⁷https://github.com/manfredsteyer/standalone-example-cli
Each top-level state property gets its own Signal. These properties are retrieved as read-only Signals,
ensuring a separation between reading and writing: Consumers using the Signals can just read their
values. For updating the state, the service encapsulating the state provides methods (see below). This
ensures that the state can only be updated in a well-defined manner.
Also, nested objects like the one provided by the preferences property above result in nested signals.
Hence, one can retrieve the whole preferences object as a Signal but also its properties:
1 const ps = this.state.preferences();
2 const direct = this.state.preferences.directConnection();
Currently, this isn’t implemented for arrays, as Angular’s envisioned Signal Components will solve
this use case by creating a Signal for each iterated item.
Here, computed serves the same purpose as Selectors in the Redux-based NGRX Store: It enables us
to calculate different state representations for different use cases. These so-called View Models are
only recomputed when at least one of the underlying signals changes.
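The memoization behind such View Models can be sketched with a version counter. This is a simplified model, not Angular's actual signal graph:

```typescript
// Simplified sketch of memoized derivation: the view model is only
// recalculated after the state has actually changed.
class Store<S extends object> {
  private version = 0;
  constructor(private state: S) {}
  patch(partial: Partial<S>): void {
    this.state = { ...this.state, ...partial };
    this.version++; // mark dependents as stale
  }
  computed<T>(calc: (s: S) => T): () => T {
    let cachedVersion = -1;
    let cached!: T;
    return () => {
      if (cachedVersion !== this.version) {
        cached = calc(this.state); // recompute only when stale
        cachedVersion = this.version;
      }
      return cached;
    };
  }
}

let recomputations = 0;
const store = new Store({
  flights: ['LH 101', 'OS 202'],
  basket: { 'LH 101': true } as Record<string, boolean>,
});
const selected = store.computed((s) => {
  recomputations++;
  return s.flights.filter((f) => s.basket[f]);
});

selected();
selected();                 // second call hits the cache
store.patch({ basket: {} }); // state changed
selected();                 // triggers one more recomputation
```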
Updating State
For updating the SignalState, Signal Store provides us with a patchState function:
Here, we pass in the state container and a partial state. As an alternative, one can pass a function
taking the current state and transforming it to the new state:
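Ignoring Signals for a moment, the merge semantics of patchState can be sketched like this. The real @ngrx/signals function updates the underlying signals in place instead of returning a new object:

```typescript
// Sketch of patchState's merge semantics, not the real implementation.
type Updater<S> = Partial<S> | ((state: S) => Partial<S>);

function patchStateSketch<S extends object>(state: S, updater: Updater<S>): S {
  const partial =
    typeof updater === 'function'
      ? (updater as (state: S) => Partial<S>)(state)
      : updater;
  return { ...state, ...partial }; // the partial state is patched over the current one
}

const s1 = { from: 'Hamburg', to: 'Graz', flights: [] as string[] };
const s2 = patchStateSketch(s1, { to: 'Vienna' });        // pass a partial state
const s3 = patchStateSketch(s2, (s) => ({ from: s.to })); // or project the current state
// s3 is { from: 'Vienna', to: 'Vienna', flights: [] }
```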
Side Effects
Besides updating the state, methods can also trigger side effects like loading and saving objects:
1 async load() {
2 if (!this.from() || !this.to()) return;
3
4 const flights = await this.flightService.findPromise(
5 this.from(),
6 this.to()
7 );
8
9 patchState(this.state, { flights });
10 }
It’s also fine to just return a partial state. It will be patched over the current state:
If you don’t need to project the current state, just returning a partial state is fine too. In this case,
you can skip the inner function:
Updaters can be defined in the Store's (signalState's) "sovereign territory". For the consumer, they are
just a black box:
1 patchState(updateBasket(id, selected))
In this case, the service is also registered in the root scope. When skipping { providedIn: 'root' },
one needs to register the service by hand, e. g., by providing it when bootstrapping the application,
within a router configuration, or on component level.
1 withComputed((store) => ({
2 selected: computed(() => store.flights().filter((f) => store.basket()[f.id])),
3 criteria: computed(() => ({ from: store.from(), to: store.to() })),
4 })),
The returned computed signals become part of the store. A more compact version might involve
directly destructuring the passed store:
1 withMethods((state) => {
2 const { basket, flights, from, to, initialized } = state;
3 const flightService = inject(FlightService);
4
5 return {
6 updateCriteria: (from: string, to: string) => {
7 patchState(state, { from, to });
8 },
9 updateBasket: (flightId: number, selected: boolean) => {
10 patchState(state, {
11 basket: {
12 ...basket(),
13 [flightId]: selected,
14 },
15 });
16 },
17 delay: () => {
18 const currentFlights = flights();
19 const flight = currentFlights[0];
20
21 const date = addMinutes(flight.date, 15);
22 const updFlight = { ...flight, date };
23 const updFlights = [updFlight, ...currentFlights.slice(1)];
24
25 patchState(state, { flights: updFlights });
26 },
27 load: async () => {
28 if (!from() || !to()) return;
29 const flights = await flightService.findPromise(from(), to());
30 patchState(state, { flights });
31 }
32 };
33 }),
withMethods runs in an injection context and hence can use inject to get hold of services. After
withMethods has been executed, the retrieved methods are added to the store.
@Component([...])
export class FlightSearchComponent {
  private store = inject(FlightBookingStore);

  from = this.store.from;
  to = this.store.to;
  basket = this.store.basket;
  flights = this.store.flights;
  selected = this.store.selected;

  async search() {
    this.store.load();
  }

  delay(): void {
    this.store.delay();
  }

  updateCriteria(from: string, to: string): void {
    this.store.updateCriteria(from, to);
  }

  updateBasket(id: number, selected: boolean): void {
    this.store.updateBasket(id, selected);
  }
}
Hooks
The function withHooks provides another feature: it allows setting up lifecycle hooks that run when the store is initialized or destroyed:
withHooks({
  onInit({ load }) {
    load();
  },
  onDestroy({ flights }) {
    console.log('flights are destroyed now', flights());
  },
}),
Both hooks get the store passed. Once again, by using destructuring, you can focus on a subset of the store’s members.
rxMethod
Branch: arc-signal-store-rx
While Signals are easy to use, they are not a full replacement for RxJS. For leveraging RxJS and its
powerful operators, the Signal Store provides a secondary entry point @ngrx/signals/rxjs-interop,
containing a function rxMethod<T>. It allows working with an Observable representing side-effects
that automatically run when specific values change:
The type parameter T defines the type the rxMethod works on. While the handler passed to rxMethod receives an Observable<T>, the caller can pass an Observable<T>, a Signal<T>, or a value of type T directly. In the latter two cases, the passed value is converted into an Observable.
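The following self-contained sketch mimics only this conversion rule with hand-rolled stand-ins (the real rxMethod from @ngrx/signals/rxjs-interop works with RxJS Observables and Angular Signals; the tagged types below are assumptions for illustration):

```typescript
// Stand-ins: a "signal" is a tagged reader, an "observable" a tagged value list.
type SignalLike<T> = { kind: 'signal'; read: () => T };
type ObservableLike<T> = { kind: 'observable'; values: T[] };
type RxInput<T> = T | SignalLike<T> | ObservableLike<T>;

// Normalize any accepted input shape into the values the effect would see.
function normalize<T>(input: RxInput<T>): T[] {
  if (typeof input === 'object' && input !== null && 'kind' in input) {
    const tagged = input as SignalLike<T> | ObservableLike<T>;
    // A signal contributes its current value (and would re-emit on change)
    return tagged.kind === 'signal' ? [tagged.read()] : tagged.values;
  }
  // A plain value is emitted exactly once
  return [input as T];
}
```

For example, normalize(42), normalize({ kind: 'signal', read: () => 42 }), and normalize({ kind: 'observable', values: [42] }) all yield the same emissions.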
After defining the rxMethod, somewhere else in the application, e.g., in a hook or a regular method, you can call this effect:
withHooks({
  onInit({ connectCriteria, criteria }) {
    connectCriteria(criteria);
  },
})
Here, the criteria Signal – a computed signal – is passed. Every time this Signal changes, the effect
within connectCriteria is re-executed.
One of the examples found in this repository is a CallState feature⁷⁹ defining a state property that informs about the state of the current HTTP call:
In this section, I’m using this example to explain how to provide custom features.
For the state properties added by the feature, one can provide Updaters:
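The Updaters of such a CallState feature might look like this (a sketch based on the linked example; the exact names are assumptions):

```typescript
// The state property managed by the feature
export type CallState = 'init' | 'loading' | 'loaded' | { error: string };

// Updaters returning the partial state to be patched over the store
export function setLoading(): { callState: CallState } {
  return { callState: 'loading' };
}

export function setLoaded(): { callState: CallState } {
  return { callState: 'loaded' };
}

export function setError(error: string): { callState: CallState } {
  return { callState: { error } };
}
```

A method could then call, e.g., patchState(state, setLoading()) before issuing an HTTP request and setLoaded() afterwards.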
⁷⁹https://github.com/markostanimirovic/ngrx-signal-store-playground/blob/main/src/app/shared/call-state.feature.ts
Updaters allow the consumer to modify the feature state without actually knowing how it’s structured.
The provided properties, methods, and Updaters can be used in the Store’s methods:
The consumer of the store sees the properties provided by the feature too:
As each feature is transforming the Store’s properties and methods, make sure to call them
in the right order. If we assume that methods registered with withMethods use the CallState,
withCallState has to be called before withMethods.
The passed collection name prevents naming conflicts. In our case, the collection is called flight,
and hence the feature creates several properties beginning with flight, e.g., flightEntities.
There is quite a number of ready-to-use Updaters:
• addEntity
• addEntities
• removeEntity
• removeEntities
• removeAllEntities
• setEntity
• setEntities
• setAllEntities
• updateEntity
• updateEntities
• updateAllEntities
Similar to @ngrx/entity, the entities are stored internally in a normalized way: they are kept in a dictionary mapping their primary keys to the entity objects. This makes it easier to join them into the view models needed for specific use cases.
As we call our collection flight, withEntities creates a Signal flightEntityMap mapping flight ids to our flight objects. It also creates a Signal flightIds containing all the ids in their order. Both are used by the computed signal flightEntities, which is also added and was used above. It returns all the flights as an array respecting the order of the ids within flightIds. Hence, if you want to rearrange the positions of the flights, just update the flightIds property accordingly.
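A minimal sketch of this normalized shape (simplified; not the real internals of @ngrx/signals/entities):

```typescript
interface Flight { id: number; from: string; to: string; }

// Entities are stored in a dictionary keyed by their primary key...
const flightEntityMap: Record<number, Flight> = {
  1: { id: 1, from: 'Graz', to: 'Hamburg' },
  2: { id: 2, from: 'Vienna', to: 'Berlin' },
};

// ...plus an ordered list of ids defining the entities' order
const flightIds = [2, 1];

// The computed flightEntities signal derives an ordered array from both
const flightEntities = () => flightIds.map((id) => flightEntityMap[id]);
```

Rearranging flightIds would reorder the array returned by flightEntities without touching the entity objects themselves.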
For building structures like the flightEntityMap, the Updaters need to know what the entity’s id property is called. By default, they assume a property named id. If the id property has a different name, you can tell the Updater by using the idKey property:
patchState(
  state,
  setAllEntities(flights, { collection: 'flight', idKey: 'flightId' })
);
The passed property needs to be a string or a number. If it’s of a different data type or if it doesn’t exist at all, you get a compilation error.
Conclusion
The upcoming NGRX Signal Store allows managing state using Signals. The most lightweight option
for using this library is just to go with a SignalState container. This data structure provides a
Signal for each state property. These signals are read-only. For updating the state, you can use the
patchState function. To make sure updates only happen in a well-defined way, the signalState can
be hidden behind a facade.
The SignalStore is more powerful and allows registering optional features. They define the state to manage but also methods operating on it. A SignalStore can be provided as a service and injected into its consumers.
The SignalStore also provides an extension mechanism for implementing custom features to ease
repeating tasks. Out of the box, the Signal Store comes with a pretty handy feature for managing
entities.
Smarter, Not Harder: Simplifying your
Application With NGRX Signal Store
and Custom Features
What would you say if you could implement a Signal Store for a (repeating) CRUD use case
including Undo/Redo in just 7 (!) lines of code?
To make this possible, we need some custom features for the Signal Store. In this chapter, I show
how this all works.
As always, my work is highly inspired by the implementation of the NGRX Signal Store and the examples provided by Marko Stanimirović⁸⁰, the NGRX core team member who envisioned and implemented the Signal Store.
Source Code⁸¹ (Branch: arc-signal-store-custom-examples)
Goal
The goal of this chapter is to show how to implement custom features for the Signal Store that allow
for the following:
This is what the demo application I’ve built on top of these custom features looks like:
⁸⁰https://twitter.com/MarkoStDev
⁸¹https://github.com/manfredsteyer/standalone-example-cli/tree/arc-signal-store-custom-examples
Demo Application
And this is the whole code we need to set up the store, including Undo/Redo and connecting it to a
Data Service fetching the entities from the backend:
As you can see, I’m using the @ngrx/signals/entities package for managing entities. Besides
this, I moved the remaining logic into three reusable custom features: withCallState was already
discussed in a previous chapter. This chapter discusses withDataService and provides the code for
withUndoRedo.
These types describe how our search filter is structured, what we mean when referring to an entity, and what a DataService should look like. The type EntityId comes from @ngrx/signals/entities and accepts a string or a number.
Expecting that an entity is an arbitrary object with an id property is one of the conventions @ngrx/signals/entities provides to shorten your code. If your primary key has a different name, you can tell @ngrx/signals/entities accordingly. However, to keep the presented example small, I’ve decided to stick with this convention.
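Since the described types are not reproduced above, a sketch matching the description might look like this (names inferred from the surrounding text; EntityId really comes from @ngrx/signals/entities, but is declared locally to keep the sketch self-contained):

```typescript
// Stand-in for the EntityId type from '@ngrx/signals/entities'
export type EntityId = string | number;

// A search filter is an arbitrary object
export type Filter = Record<string, unknown>;

// Convention: an entity is an arbitrary object with an id property
export interface Entity {
  id: EntityId;
}

// A DataService loads entities matching a given filter
export interface DataService<E extends Entity, F extends Filter> {
  load(filter: F): Promise<E[]>;
}
```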
Its type parameters describe the Entity to manage, the corresponding search filter, and the DataService. When calling this generic method, we just need to pass in the DataService and an initial filter. TypeScript infers the rest:
As shown in the previous chapter, the signalStoreFeature function basically composes existing
features into a new one. For instance, we can introduce new state properties with withState,
computed Signals with withComputed, or methods with withMethods.
However, one little thing is different this time: our feature has some expectations about the Signal Store it is used with. It expects the callState feature and the entity feature to be in place. The former sets up the callState property we need; the latter sets up an entityMap and an ids property as well as a computed Signal entities.
These expectations are defined by the first parameter passed to signalStoreFeature. It describes the expected state properties (state), computed signals (signals), and methods. As we don’t expect any methods, we can also omit the key methods instead of setting it to type<{}>().
To avoid naming conflicts, the entity feature allows using different property names. To keep things
simple, I’m sticking with the default names here. However, in a following chapter, you learn how to
deal with dynamic property names in a type-safe way.
The remaining parts of this custom feature are just about adding state properties, computed Signals,
and methods on top of the expected features:
Undo/Redo-Feature
The Undo/Redo feature is implemented in a very similar way. Internally, it manages two stacks: an undo stack and a redo stack. The stacks are basically arrays of StackItems:
Each StackItem represents a snapshot of the current search filter and the information the entity
feature uses (entityMap, ids).
For configuring the feature, an UndoRedoOptions type is used:
The options object allows us to limit the stack size. Older items are removed according to the First In, First Out rule if the stack grows too large.
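Sketched as code, the stack item, the options type, and the trimming rule might look like this (property and option names are assumptions inferred from the text):

```typescript
// A stack item snapshots the search filter plus the entity feature's state
type EntityId = string | number;

export interface StackItem {
  filter: Record<string, unknown>;
  entityMap: Record<EntityId, unknown>;
  ids: EntityId[];
}

export interface UndoRedoOptions {
  maxStackSize?: number;
}

// Pushing respects the configured limit: First In, First Out
export function push(
  stack: StackItem[],
  item: StackItem,
  options: UndoRedoOptions
): void {
  stack.push(item);
  const max = options.maxStackSize ?? 100;
  if (stack.length > max) {
    stack.shift(); // drop the oldest item
  }
}
```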
The withUndoRedo function adds the feature. It is structured as follows:
Similar to the withDataService function discussed above, it calls signalStoreFeature and defines
its expectations for the store using the first argument. It introduces an undo and a redo method,
restoring the state from the respective stacks. To observe the state, the onInit hook at the end creates
an effect. After each change, this effect stores the original state on the undo stack.
One thing is a bit special about this implementation of the Undo/Redo feature: The feature itself
holds some internal state – like the undoStack and the redoStack – that is not part of the Signal
Store.
Please find the full implementation of this feature in my GitHub repository⁸² (Branch: arc-signal-store-custom-examples). If you want to see a different implementation that also stores the feature-internal state in the Signal Store, please look at the arc-signal-custom-examples-undoredo-alternative branch.
@Component( [...] )
export class FlightSearchComponent {
  private store = inject(FlightBookingStore);

  // Delegate to signals
  from = this.store.filter.from;
  to = this.store.filter.to;
  flights = this.store.entities;
  selected = this.store.selectedEntities;
  selectedIds = this.store.selectedIds;

  // Delegate to methods
  async search() {
    this.store.load();
  }

  undo(): void {
    this.store.undo();
  }

  redo(): void {
    this.store.redo();
  }

  updateCriteria(from: string, to: string): void {
    this.store.updateFilter({ from, to });
  }

  updateBasket(id: number, selected: boolean): void {
    this.store.updateSelected(id, selected);
  }
}
⁸²https://github.com/manfredsteyer/standalone-example-cli/tree/arc-signal-store-custom-examples
import {
  SignalStoreFeature,
  signalStoreFeature,
  withComputed,
  withState,
} from '@ngrx/signals';

[…]

export type CallState = 'init' | 'loading' | 'loaded' | { error: string };

export function withCallState() {
  return signalStoreFeature(
    withState<{ callState: CallState }>({ callState: 'init' }),
⁸³https://twitter.com/MarkoStDev
⁸⁴https://github.com/manfredsteyer/standalone-example-cli/tree/arc-signal-store-custom-typed
NGRX Signal Store Deep Dive: Flexible and Type-Safe Custom Extensions
This is a function that returns the result of signalStoreFeature. The signalStoreFeature func-
tion, in turn, simply groups existing features: withState introduces the callState property, and
withComputed defines the previously discussed calculated signals based on it.
The Updaters provided by the feature only return a partial state object with the property to be
updated:
Our withCallState function does not currently have an explicit return type. Therefore, TypeScript
infers this type by looking at the return value in the function. The compiler realizes that a callState
property is available.
The type determined here by inference is a SignalStoreFeature<Input, Output>. The type
parameter Input defines which signals and methods the feature expects from the store, and Output
specifies which additional signals and methods the feature provides. Our feature does not place any
expectations on the store, but provides a callState signal as well as several calculated signals such
as loading. Respectively, our Input and Output types look as follows:
It should be noted that state describes the signal to be introduced, and the signals property
represents the signals calculated from it. This representation at least corresponds to the simplified
external view.
The internal view is a little more complex, especially since withState first introduces the callState
signal and only then withComputed adds the calculated signals. That’s why the inside view has two
outputs, which are combined using a helper type.
For the sake of simplicity, the previous image calls the helper type Merged Result. However, the
truth is that the Signal Store has several internal types for this.
On a logical level, the internal view and the external one are equivalent. TypeScript may need a little nudge in the form of a type assertion to recognize this. However, explicitly defining the internal view is a bit annoying and currently not really possible because the required helper types are not part of the Signal Store’s public API. That’s why I’m using a pattern here that can also be found several times in the Signal Store’s code: a combination of a function overload with the external view and a function implementation that uses SignalStoreFeature instead of SignalStoreFeature<Input, Output> for the internal view:
The SignalStoreFeature type without type parameters uses more general types for Input and
Output that do not assume specific names or data types.
This prefix should now be included in the property names defined by the feature. For example, the
first call to withCallState should produce the following properties:
• flightsCallState (state)
• flightsLoading (computed)
• flightsLoaded (computed)
• flightsError (computed)
• passengersCallState (state)
• passengersLoading (computed)
• passengersLoaded (computed)
• passengersError (computed)
Setting up these properties at runtime isn’t a big problem in the world of TypeScript, especially since
the underlying JavaScript is a dynamic language anyway. The challenge, however, is to also inform
the type system about these properties.
For this task, you first need to find a way to express the prefix in a type declaration. At this point,
we benefit from the fact that literals can also be used as data types:
Such string literal union types are often used in TypeScript applications to express enums. This is even closer to ECMAScript than using TypeScript’s enum keyword. Funnily enough, nobody forces us to offer multiple options. That’s why this variant is completely fine:
So here we have a type that can hold exactly a single string value. We use this exact pattern to
inform the type system about our prefix. First, we create a type that defines the name of the signal
to be introduced based on the prefix:
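Such a type might look like this (a sketch following the naming used in the listings below, with CallState declared locally to keep it self-contained):

```typescript
type CallState = 'init' | 'loading' | 'loaded' | { error: string };

// Maps a prefix like 'flights' to a property name like 'flightsCallState'
export type NamedCallStateSlice<Prop extends string> = {
  [K in Prop as `${K}CallState`]: CallState;
};
```

For example, NamedCallStateSlice<'flights'> resolves to { flightsCallState: CallState }.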
This is a so-called mapped type, which maps one type to a new one. The type parameter Prop extends string describes the original type. It can be any string used as a type. Note that string must be written in lowercase because, at this point, we are referring to the primitive string type and not the String object type. The notation K in Prop also reduces to this string. In more complex cases, the keyword in can be used, for instance, to loop through the properties of the original type.
We can proceed analogously for the calculated signals to be introduced:
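A sketch of these types (with a minimal Signal stand-in; in real code, Signal comes from @angular/core):

```typescript
type Signal<T> = () => T; // stand-in for @angular/core's Signal

// One mapped type per computed signal, combined via intersection
export type NamedCallStateComputed<Prop extends string> = {
  [K in Prop as `${K}Loading`]: Signal<boolean>;
} & {
  [K in Prop as `${K}Loaded`]: Signal<boolean>;
} & {
  [K in Prop as `${K}Error`]: Signal<string | null>;
};
```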
Since a mapped type can only have a single mapping, several mapped types are used here. They are combined with the & operator (intersection operator). With these two types, we can now specify the typing of our withCallState function:
[…]
}
Now, the type system knows about our configured properties. In addition, it is important to set up these properties at runtime. An auxiliary function getCallStateKeys is used for this purpose:
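The helper is not shown above; it might look like this (a sketch mirroring the mapped types just discussed):

```typescript
// Returns the concrete property names for a given prefix, e.g. 'flights'
export function getCallStateKeys(config: { prop: string }) {
  return {
    callStateKey: `${config.prop}CallState`,
    loadingKey: `${config.prop}Loading`,
    loadedKey: `${config.prop}Loaded`,
    errorKey: `${config.prop}Error`,
  };
}
```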
This helper function returns the same mappings at runtime as the previously introduced types during
compile time. The updated implementation of withCallState picks up these names and sets up
corresponding properties:
[…]
export function withCallState<Prop extends string>(config: {
  prop: Prop;
}): SignalStoreFeature {
  const { callStateKey, errorKey, loadedKey, loadingKey } =
    getCallStateKeys(config);

  return signalStoreFeature(
    withState({ [callStateKey]: 'init' }),
    withComputed((state: Record<string, Signal<unknown>>) => {

      const callState = state[callStateKey] as Signal<CallState>;

      return {
        [loadingKey]: computed(() => callState() === 'loading'),
        [loadedKey]: computed(() => callState() === 'loaded'),
        [errorKey]: computed(() => {
          const v = callState();
          return typeof v === 'object' ? v.error : null;
        })
      }
    })
  );
}
So that the Updaters can cope with the dynamic properties, they also receive a corresponding parameter:
This idea can also be found in @ngrx/signals/entities. The updater is then used as follows:
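An Updater with such a parameter and its usage might look like this (a sketch; the types are declared locally so the example is self-contained):

```typescript
type CallState = 'init' | 'loading' | 'loaded' | { error: string };

type NamedCallStateSlice<Prop extends string> = {
  [K in Prop as `${K}CallState`]: CallState;
};

// The prop parameter tells the Updater which dynamic property to set
export function setLoaded<Prop extends string>(
  prop: Prop
): NamedCallStateSlice<Prop> {
  return { [`${prop}CallState`]: 'loaded' } as NamedCallStateSlice<Prop>;
}

// Usage: produces { flightsCallState: 'loaded' }, suitable for
// patchState(store, setLoaded('flights')) in a real store
const patch = setLoaded('flights');
```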
Conclusion
The NGRX team is known for being exceptionally skilled at leveraging the possibilities of the
TypeScript type system. The result is an extremely easy-to-use and type-safe API.
In this chapter, we switched perspectives and discussed how you can leverage the patterns used by the NGRX team for your custom Signal Store features. This enables you to configure property names and thus avoid naming conflicts without compromising type safety.
To do this, we have to deal with aspects of TypeScript that application developers usually don’t come into contact with that often. That’s why the patterns used may sometimes seem a bit complicated. The good news is that we only need these patterns when developing highly reusable solutions. As soon as we switch back to the role of application developer, we have a type-safe solution that is comfortable to use.
The NGRX Signal Store and Your
Architecture
The NGRX Signal Store is a modern and lightweight state management solution. However, when
adding it to your application, several architectural questions come up: Where to put it? How large
should it be? Is a store allowed to access other stores? Can it be used for global state? Can it be used
together with or instead of the traditional Redux-based NGRX Store?
This chapter provides answers and shows that lightweight stores change some of the rules known
from the world of Redux-oriented stores.
This architecture, which often acts as the starting point and can be tailored to individual requirements, is described in this book’s first chapters.
When going with the traditional Redux-based NGRX Store, we subdivide the state into feature
slices. While they can be associated with the feature layer, we often push them down to the domain
level, as the same state is often needed in several features of the same domain.
When talking about this reference architecture, we should also keep in mind that there are several flavors. For instance, some teams have a data layer or state layer where they put feature slices needed by several features. These layers can be an alternative to, or an addition to, the domain layer.
When we incorporate a lightweight store like the NGRX Signals Store, we encounter different
rules: In general, lightweight stores can be found in all technical layers:
• Feature Layer: We can use a store on the component level for managing component state or
on the feature level so that several components of the same feature can access it. In the latter
case, an example is a wizard delegating to different components.
• UI: UI components have state, for sure. Some of them have quite extensive state that needs to be shared with child components. An example is a sophisticated scheduler with different views demanding several child components. Such state can be managed by a lightweight store directly connected to the component.
• Domain: State that is needed by several features in the same domain is defined here. A
lightweight store used for this is exposed by this layer so that the feature layer can access
it.
• Util: Quite often, utilities are stateless: Think about functions validating inputs or calculating
dates. However, there are also some stateful utility libs where a store can be helpful. An example
is a generic authentication library managing some data about the current user or a translation
library holding translation texts.
A Store used on the component level is directly provided by the component in question:
@Component({
  [...],
  providers: [MySignalStore]
})
export class MyComp {
  [...]
}
This also makes the Store available to child components. However, it means that the store is destroyed together with the component.
For the other use cases, we can provide the Store via the root injector:
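A sketch of this, assuming the state and features from the previous examples (the { providedIn: 'root' } option is the relevant part):

```
import { signalStore, withState } from '@ngrx/signals';

export const FlightBookingStore = signalStore(
  { providedIn: 'root' },
  withState(initialState),
  [...]
);
```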
The Angular team has told the community several times that this is the way to go in most cases. In general, we could also provide such stores on the level of (lazy) routes. However, this does not add much value, as root-provided services also work with lazy loading: if such a service is only used in a lazy application part, the bundler puts it into the respective chunk. More information about when to use so-called environment providers on the route level can be found here⁸⁷.
However, I also think more and more people will reconsider using “Redux by default”. If you feel
that you don’t benefit from the strength of this approach in your very case, you might want to go
with a more lightweight alternative like the NGRX Signal Store instead. This can also be observed
in other communities where lightweight stores have been popular for years.
To be clear, the Redux pattern should be a part of your toolbox. However, if you find a more
lightweight solution that fits better, go with it.
Such a service is similar to facades often used for state management. But as it’s part of the feature
and doesn’t abstract a sub-system, I prefer the name feature service.
Conclusion
Lightweight stores like the NGRX Signal Store change some of the rules known from Redux-based stores: such stores can be defined on different technical layers, and they can be provided within the root provider, a (lazy) route, or on the component level.
Redux is not going away, and it belongs to our toolbox. However, if you feel a more lightweight
approach is more fitting for your needs, the NGRX Signal Store is quite tempting. Also, you can
have the best of both worlds by combining both stores or by extending the Signal Store with custom
features that provide missing Redux features.
In view of the single responsibility principle, I would not allow lightweight stores to access each
other; instead, you can introduce a feature service orchestrating the needed stores.
Bonus: Automate your Architecture
with Nx Workspace Plugins
Nx is quite popular when it comes to large Angular-based business applications. Thanks to its plugin
concept, Nx can also be extended very flexibly. The Nx plugin registry⁸⁹ lists numerous such plugins
that take care of recurring tasks and integrate proven tools.
In addition to community plugins for the general public, project-internal plugins can also make sense to automate highly project-specific tasks. This includes generating code sections and implementing patterns the target architecture specifies: repositories, facades, entities, or CRUD forms are just a few examples.
Unfortunately, implementing plugins was not trivial in the past: Each plugin had to be published as
a package via npm and installed in your own Nx workspace. This procedure had to be repeated for
each new plugin version.
This back and forth is a thing of the past thanks to workspace plugins. These are plugins that Nx sets
up in the form of a library in the current workspace. This means that changes can be made quickly
and tested immediately. If necessary, proven workspace plugins can also be exported via npm for
other projects.
In this chapter, I show how workspace plugins with generators that automate repetitive tasks can be implemented and used.
Source Code⁹⁰
⁸⁹https://nx.dev/plugin-registry
⁹⁰https://github.com/manfredsteyer/nx-plugin-demo
When asked, select the options Angular and Integrated Monorepo; for the remaining options you
can go with the defaults.
After that, add a generator to your plugin:
If you follow the instructions here step by step, please copy the contents of this listing into the
generated file libs\my-plugin\src\generators\my-generator\files\src\index.ts.template.
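The listing referred to is not reproduced here. Based on the unit test shown later in this chapter, which expects `const constant0 = 'test-lib';`, the template’s content is presumably something along these lines (an assumption; Nx templates use EJS-style placeholders):

```
const constant0 = '<%= projectName %>';
export default constant0;
```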
Wildcards can be found not only in the files, but also in the file names. For example, Nx would
replace __projectName__ in a file name with the value of projectName.
Implementing a Generator
Technically speaking, a generator is just an asynchronous function receiving two parameters: A tree
object representing the file system and an options object with the parameters passed when calling
the generator at the command line:
// libs/my-plugin/src/generators/my-generator/generator.ts

import {
  formatFiles,
  generateFiles,
  getWorkspaceLayout,
  Tree,
} from '@nrwl/devkit';

import {
  libraryGenerator
} from '@nrwl/angular/generators';

import * as path from 'path';
import { MyGeneratorGeneratorSchema } from './schema';

export default async function (tree: Tree, options: MyGeneratorGeneratorSchema) {

  tree.write('readme.txt', 'Manfred was here!');

  await libraryGenerator(tree, options);

  const libsDir = getWorkspaceLayout(tree).libsDir;
  const projectRoot = `${libsDir}/${options.name}`;

  const templateOptions = {
    projectName: options.name,
    template: ''
  };

  generateFiles(
    tree,
    path.join(__dirname, 'files'),
    projectRoot,
    templateOptions
  );

  await formatFiles(tree);
}
What’s particularly useful is the fact that generators are simply functions that can be called in other
generators. This means that existing generators can be combined to create new ones.
To add additional parameters passed via the options object, the interface in the file schema.d.ts as
well as the JSON schema in schema.json need to be extended accordingly. The former one is used in
the TypeScript code and the latter one is used by Nx to validate the parameters passed at command
line.
If it is necessary to change existing TypeScript files, the TypeScript Compiler API⁹¹ can help. This
API is included in TypeScript and represents code files as syntax trees.
The tsquery⁹² package, which is very popular in the community, supports searching these data
structures. It allows you to formulate queries that are based on CSS selectors. Such queries, for
example, can determine functions, classes, or methods that are located in a file.
nx g @plugin-demo/my-plugin:my-generator my-lib
Here, @plugin-demo is the name of the current workspace and my-plugin is the name of the library with our workspace plugin. The name my-generator refers to the generator we’ve added to the plugin. my-lib is the value for the name parameter. Actually, this should be specified with --name my-lib. However, the generator’s schema.json by default specifies that this value can alternatively be taken from the first command line argument.
If everything goes as planned, the generator creates a new library and a file based on the template
shown. It also generates a readme.txt:
Testing Generators
Nx also simplifies the automated testing of generators. It also offers auxiliary constructs, such as
a Tree object, which only simulates a file system in main memory and does not write it to disk.
In addition, Nx also generates the basic structure for a unit test per generator. To make it fit our
implementation shown above, let’s update it as follows:
// libs/my-plugin/src/generators/my-generator/generator.spec.ts

import { createTreeWithEmptyWorkspace } from '@nrwl/devkit/testing';
import { Tree, readProjectConfiguration } from '@nrwl/devkit';

import generator from './generator';
import { MyGeneratorGeneratorSchema } from './schema';

describe('my-plugin generator', () => {
  let appTree: Tree;
  const options: MyGeneratorGeneratorSchema = { name: 'test-lib' };

  beforeEach(() => {
    appTree = createTreeWithEmptyWorkspace();
  });

  it('should export constant0', async () => {
    await generator(appTree, options);
    const config = readProjectConfiguration(appTree, 'test-lib');
    expect(config).toBeDefined();

    const generated = `${config.sourceRoot}/index.ts`;
    const content = appTree.read(generated, 'utf-8');
    expect(content).toContain(`const constant0 = 'test-lib';`);
  });
});
The unit test shown here creates a memory-based Tree object using createTreeWithEmptyWorkspace
and calls our generator. It then checks whether there is a configuration for the generated library and
whether it has the generated file.
To run this unit test, call
nx test my-plugin
nx build my-plugin

npm publish dist\libs\my-plugin --registry http://localhost:4873
Here, we assume that Verdaccio is used as the npm registry and that it’s started locally on port 4873. Without specifying the --registry switch, npm uses the public registry at registry.npmjs.org.
The npm package simply needs to be installed in the consuming workspace. After that, you can then
use your generator as usual:
Conclusion
Workspace plugins significantly simplify the development of plugins that automate recurring project-internal tasks. This is not only due to the numerous helper methods, but above all to the tooling: Nx generates the basic structure of plugins and generators, including unit tests. Changes can be tried out immediately in the current workspace. If necessary, these libraries can also be exported via npm and used in other projects.
Another plus is the straightforward API that Nx provides: Generators are just functions
that can easily call each other. This means that existing functionality can be orchestrated into new
generators.
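This composability can be sketched with plain functions. The Tree type and the generators below are simplified stand-ins for illustration, not the real Nx API:

```typescript
// Simplified stand-in for Nx's Tree: a virtual file system mapping paths to contents
type Tree = Map<string, string>;
type Generator<O> = (tree: Tree, options: O) => Promise<void>;

// A low-level generator creating a library entry point
const libraryGenerator: Generator<{ name: string }> = async (tree, { name }) => {
  tree.set(`libs/${name}/src/index.ts`, `export const constant0 = '${name}';`);
};

// A higher-level generator orchestrating the one above and adding further files
const featureGenerator: Generator<{ name: string }> = async (tree, options) => {
  await libraryGenerator(tree, options); // generators simply call each other
  tree.set(`libs/${options.name}/README.md`, `# ${options.name}`);
};

async function main() {
  const tree: Tree = new Map();
  await featureGenerator(tree, { name: 'test-lib' });
  console.log(tree.size); // 2
  console.log(tree.get('libs/test-lib/src/index.ts'));
}
main();
```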
Bonus: The Core of Domain-Driven Design
It’s been a bit more than 20 years since the publication of Eric Evans’ groundbreaking book Domain-
Driven Design: Tackling Complexity in the Heart of Software⁹³ that established the DDD movement.
This book is still a best-seller, and a vivid community has extended DDD since then: There are
dedicated international conferences, books, courses, and new concepts that practitioners have added.
While there are several perspectives on DDD, I want to reflect on the core of this approach here.
My goal is to

• shed some light on the focus of DDD,
• explain why there are wrong impressions about it,
• discuss its relationship to object-orientation,
• and explore whether it can be adapted to fit other paradigms.
For this, I primarily cite interviews with and presentations by Eric Evans. To provide additional
examples, I also cite further sources. Before that, I start with a quick overview of DDD to get everyone
on board.
DDD in a Nutshell
Domain-driven Design focuses on a deep understanding of the real-world (problem) domain a
software system is written for. Domain experts (e.g., experts for invoicing) work closely together
with software experts to create models of that domain. A model represents the aspects of the real
world (concepts, relationships, processes) that are relevant for the software in question and is
directly expressed in the source code.
Strategic Design
DDD consists of two original disciplines: Strategic Design⁹⁴ is about discovering subdomains that
represent individual parts of the problem domain. For these subdomains, bounded contexts⁹⁵ are defined.
Each bounded context gets its own model that follows an Ubiquitous Language⁹⁶. This Ubiquitous
Language reflects the vocabulary used in the real world and is used by domain experts as well as by
software experts – verbally, in written form, in diagrams, and in code.
Having several individual models instead of one single system-wide model allows for a more
meaningful representation of the different subdomains. This also prevents tight coupling and
reduces complexity.
⁹³https://www.youtube.com/watch?v=7yUONWp-CxM
⁹⁴https://www.thoughtworks.com/en-cl/insights/blog/evolutionary-architecture/domain-driven-design-in-10-minutes-part-one
⁹⁵https://martinfowler.com/bliki/BoundedContext.html
⁹⁶https://martinfowler.com/bliki/UbiquitousLanguage.html
The following example shows two bounded contexts. Each of them has its own view of the concept
of a product and, hence, its own representation:
Sales and Invoicing are two different bounded contexts, each with its own representation of a product
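As a hypothetical TypeScript illustration of this figure: each bounded context defines its own product type with only the properties it needs. In a real codebase, each context would simply call its type Product inside its own module; distinct names are used here only to keep the example in one file.

```typescript
// Sales context: a product is something described and offered to customers
interface SalesProduct {
  id: string;
  description: string;
  price: number;
}

// Invoicing context: the same real-world product, reduced to what invoicing needs
interface InvoicingProduct {
  id: string;
  netPrice: number;
  taxRate: number;
}

// The shared id is the only correlation between the two representations
const sold: SalesProduct = { id: 'p1', description: 'Angular book', price: 30 };
const invoiced: InvoicingProduct = { id: sold.id, netPrice: 25, taxRate: 0.2 };
console.log(invoiced.id === sold.id); // true
```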
Tactical Design
While Strategic Design leads to an overarching architecture, Tactical Design⁹⁷ provides several
building blocks that help implement the model within the individual contexts. Examples are Value
Objects and Entities⁹⁸; Aggregates⁹⁹ defining whole-part relationships (e.g., an Order with its Order
Lines) with consistency rules (invariants) that have implications for transaction management;
and Repositories for persisting and loading Aggregates and Entities.
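A minimal sketch of such an Aggregate, with hypothetical names: the Order aggregate root is the only entry point for changing its Order Lines and guards the invariant that quantities stay positive; the total is derived from the lines, so the two can never diverge.

```typescript
// Entity within the aggregate, identified by its id
interface OrderLine {
  id: number;
  product: string;
  quantity: number;
  unitPrice: number;
}

// Aggregate root: callers go through it instead of mutating lines directly
class Order {
  private lines: OrderLine[] = [];
  constructor(public readonly id: string) {}

  addLine(line: OrderLine): void {
    // Invariant guarded by the aggregate root
    if (line.quantity <= 0) {
      throw new Error('Invariant violated: quantity must be positive');
    }
    this.lines.push(line);
  }

  // Derived from the lines, so total and lines stay consistent
  get total(): number {
    return this.lines.reduce((sum, l) => sum + l.quantity * l.unitPrice, 0);
  }
}

const order = new Order('o1');
order.addLine({ id: 1, product: 'Book', quantity: 2, unitPrice: 10 });
console.log(order.total); // 20
```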
Strategic Design has also been adopted by the Microservice community¹⁰² to identify boundaries
between services. Similarly¹⁰³, the Micro Frontend community leverages Strategic Design.
Besides this, it is also used for monolithic applications¹⁰⁴.
Team Topologies¹⁰⁵ is another relatively young discipline that favors the Bounded Context for
splitting a system into individual parts that different teams can work on.
More on DDD
You can find more details on DDD in the blog articles linked above. If you prefer recordings, there is an
excellent one about Strategic Design here¹⁰⁶ and a discussion about prioritizing bounded contexts,
which leads to the idea of a Core Domain, there¹⁰⁷.
We need some room to move. Different people need to be able to operate in a space and
have different views and innovate.
[…] the most fundamental pattern of Domain-driven Design is probably the ubiquitous
language. […]
[A model] applies within a certain context, and that context has a definitely defined limit,
[it’s] a bounded context.
With those two ingredients, I would say, someone is doing Domain-driven Design, and
there are a lot of other practices that help solve more specific problems.
¹⁰⁹https://www.infoq.com/interviews/domain-driven-design-eric-evans/
[…] all the strategic design stuff is way back at the back. […] it’s so far back that most
people never get to it really.
Another thing I would do is try to change the presentation of the building blocks […]
things like the entities and value objects […] [People] come away thinking that that’s
really the core of DDD, whereas, in fact, it’s really not.
I really think that the way I arranged the book gives people the wrong emphasis, so that’s
the biggest part of what I do is rearrange those things.
However, he adds that Tactical Design is important because it helps to translate the conceptual model
into code.
A similar point of view is expressed in Eric Evans’ keynote at DDD Europe 2016¹¹¹, where he criticizes
the “over-emphasis on building blocks”.
The reason that everything is expressed in terms of objects is because objects were king
in 2003-2004, and what else would I have described it as people […] used objects.
He explains that there need to be some changes to apply Tactical Design to FP:
If you are going at it from a functional point of view, then […] your implementations are
going to look quite different.
Also here¹¹⁴, he mentions the need for “rethinking […] building blocks” when switching to FP.
This needed adaptation is also briefly addressed in Vaughn Vernon’s book Domain-Driven Design
Distilled¹¹⁵, which is considered a standard reference in the DDD community and known for its easy
readability. He mentions that in functional DDD, the data structures are immutable (records), and
pure functions implement the business logic:
Rather than modifying the data that functions receive as arguments, the functions return
new values. These new values may be the new state of an Aggregate or a Domain Event
that represents a transition in an Aggregate’s state.
More insights on functional DDD can be found in Functional and Reactive Domain Modeling¹¹⁶ and
Domain Modeling Made Functional¹¹⁷.
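A minimal sketch of this functional style, with hypothetical names: the aggregate state is an immutable record, and a pure function returns the new state instead of mutating the old one.

```typescript
// Immutable record representing the state of an Order aggregate
interface OrderState {
  readonly id: string;
  readonly total: number;
  readonly status: 'open' | 'paid';
}

// Pure function: derives the next state; the argument is left untouched
function pay(order: OrderState): OrderState {
  if (order.status === 'paid') return order; // already paid: state unchanged
  return { ...order, status: 'paid' };
}

const before: OrderState = { id: 'o1', total: 20, status: 'open' };
const after = pay(before);
console.log(before.status, after.status); // open paid
```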
[An actor] can maintain that state in a consistent […] way […] that respects the invariance
of that particular aggregate. […]
This discussion also fits the recently widely noticed talk “The Aggregate is dead. Long live
the Aggregate!”¹²¹ by Milan Savić and Sara Pellegrini. This talk, presented at several conferences,
¹¹⁴https://www.youtube.com/watch?v=dnUFEg68ESM
¹¹⁵https://www.amazon.de/Domain-Driven-Design-Distilled-Vaughn-Vernon/dp/0134434420/
¹¹⁶https://www.amazon.de/Functional-Reactive-Domain-Modeling-Debasish/dp/1617292249
¹¹⁷https://www.amazon.de/Domain-Modeling-Made-Functional-Domain-Driven/dp/1680502549/
¹¹⁸https://www.youtube.com/watch?v=GogQor9WG-c
¹¹⁹https://www.youtube.com/watch?v=R2IAgnpkBck
¹²⁰https://www.youtube.com/watch?v=GogQor9WG-c
¹²¹https://www.youtube.com/watch?v=Q89patz4lgU
discusses some criticism of the traditional implementation of Aggregates and proposes an alternative
implementation using messaging and event sourcing.
More generally, such approaches correlate with Eric Evans’s above-cited keynote from 2018¹²², where
he emphasizes the need to give people room to innovate DDD.
At DDD Europe 2016¹²³, Eric Evans mentioned two further paradigms that can be used for creating
models in DDD:
• Relational
• Graphs
Relational modeling might come as a surprise. However, he does not refer to a comprehensive
(generalized) normalized schema, which would be the opposite of thinking in bounded contexts. Instead,
having several specialized schemas fits the mindset of DDD. Also, he finds that SQL can be a good
way to express how to compare and manipulate big sets.
With Graphs, Eric Evans means more than just using a Graph Database. He sees graph theory as a
“classic modeling paradigm that goes back long before computer [science].” For him, graphs are a
way to model “a certain kind of problems” using nodes and edges as abstractions.
Conclusion
At its core, DDD emphasizes that Domain Experts and Software Experts should jointly explore a
domain and model individual, prioritized bounded contexts respecting a ubiquitous language.
Tactical Design as described by the original book on DDD helps to implement these models
in an object-oriented way. In addition, there are alternatives and adaptations (e.g., for Functional
Programming).
Some communities just refer to Strategic Design (e.g., Microservices, Micro Frontends, Team
Topologies) and use it to subdivide a system along domain boundaries.
¹²²https://www.youtube.com/watch?v=R2IAgnpkBck
¹²³https://www.youtube.com/watch?v=dnUFEg68ESM
Literature
• Evans, Domain-Driven Design: Tackling Complexity in the Heart of Software¹²⁴
• Wlaschin, Domain Modeling Made Functional¹²⁵
• Ghosh, Functional and Reactive Domain Modeling¹²⁶
• Nrwl, Monorepo-style Angular development¹²⁷
• Jackson, Micro Frontends¹²⁸
• Burleson, Push-based Architectures using RxJS + Facades¹²⁹
• Burleson, NgRx + Facades: Better State Management¹³⁰
• Steyer, Web Components with Angular Elements (article series, 5 parts)¹³¹
¹²⁴https://www.amazon.com/dp/0321125215
¹²⁵https://pragprog.com/book/swdddf/domain-modeling-made-functional
¹²⁶https://www.amazon.com/dp/1617292249
¹²⁷https://go.nrwl.io/angular-enterprise-monorepo-patterns-new-book
¹²⁸https://martinfowler.com/articles/micro-frontends.html
¹²⁹https://medium.com/@thomasburlesonIA/push-based-architectures-with-rxjs-81b327d7c32d
¹³⁰https://medium.com/@thomasburlesonIA/ngrx-facades-better-state-management-82a04b9a1e39
¹³¹https://www.softwarearchitekt.at/aktuelles/angular-elements-part-i/
About the Author
Manfred Steyer
Manfred Steyer is a trainer, consultant, and programming architect with a focus on Angular.
For his community work, Google recognizes him as a Google Developer Expert (GDE). Manfred is also
a Trusted Collaborator in the Angular team. In this role, he implemented differential loading for
the Angular CLI.
Manfred has written several books, e.g. for O’Reilly, as well as several articles, e.g. for the German Java
Magazine, windows.developer, and Heise.
He regularly speaks at conferences and blogs about Angular.
Before that, he was in charge of a project team in the area of web-based business applications for
many years. He also taught several software engineering topics at a university of applied
sciences.
Manfred earned a Diploma in IT and IT Marketing as well as a Master’s degree in Computer
Science through part-time and distance studies parallel to full-time employment.
You can follow him on Twitter¹³² and Facebook¹³³ and find his blog here¹³⁴.
¹³²https://twitter.com/ManfredSteyer
¹³³https://www.facebook.com/manfred.steyer
¹³⁴http://www.softwarearchitekt.at
Trainings and Consulting
Learn more about this and further architecture topics regarding Angular and large enterprise as well
as industrial solutions in our advanced Online Workshop¹³⁵:
Save your ticket¹³⁶ for one of our remote or on-site workshops now, or request a company
workshop¹³⁷ (online or in-house) for you and your team!
Besides this, we provide the following topics as part of our training or consultancy workshops:
¹³⁸https://www.angulararchitects.io/en/angular-workshops/
¹³⁹https://www.angulararchitects.io/subscribe/
¹⁴⁰https://twitter.com/ManfredSteyer