Apstra User Guide
Published 2023-09-26
Table of Contents
Get Started
Devices | 1
Design | 2
Resources | 2
Blueprints | 2
Next Steps | 3
Apstra GUI
Analytics (Blueprints)
Analytics Introduction | 9
Dashboards | 12
Anomalies | 13
Anomalies (Analytics) | 14
Widgets | 14
Widgets Introduction | 14
Edit Widget | 16
Delete Widget | 17
Probes | 17
Probes Introduction | 18
Create Probe | 24
Import Probe | 25
Edit Probe | 25
Export Probe | 25
Delete Probe | 26
Reports | 26
Blueprint-Wide Search | 29
Physical | 32
Build | 33
Manage Configlets | 43
Selection | 44
Topology | 46
Topology (Datacenter) | 46
Nodes | 52
Nodes (Datacenter) | 53
Unassign Device (Datacenter) | 54
Delete Node | 86
Links | 89
Links (Datacenter) | 90
Interfaces | 140
Racks | 154
Pods | 159
Planes | 172
Virtual | 177
Statistics | 289
Policies | 289
Implementation | 319
Catalog | 330
Configlets | 335
Tags | 344
Tasks | 347
Primitives | 350
Pre-defined | 362
Create Connectivity Template for Multiple VNs on Same Interface (Example) | 362
Create Connectivity Template for Layer 2 Connected External Router (Example) | 365
Method 1 | 369
Method 2 | 370
Blueprints | 390
Physical | 396
Selection | 397
Topology | 399
Systems | 401
Links | 441
Catalog | 465
Tags | 474
Tasks | 477
Uncommitted (Blueprints)
Query | 501
Devices
Terminology | 520
Telemetry | 678
Services | 679
Design
Templates | 754
Tags | 777
Resources
Providers | 789
Platform
Users | 810
Roles | 814
Security | 816
Overview | 819
Developers | 847
Guides
Overview | 950
Troubleshooting | 984
Mixed Uplink Speeds between Leaf Devices and Spine Devices | 985
References
Devices | 1009
Analytics | 1031
Hypervisor and Fabric LAG Config Mismatch Probe (Virtual Infra) | 1075
Hypervisor and Fabric VLAN Config Mismatch Probe (Virtual Infra) | 1076
Limitations | 1189
Graph | 1207
Get Started
IN THIS SECTION
Devices | 1
Design | 2
Resources | 2
Blueprints | 2
Next Steps | 3
Welcome! Juniper Apstra (formerly known as AOS) automates all aspects of the data center network
design, build, deploy, and operation phases. It leverages advanced intent-based analytics to continually
validate the network, thereby eliminating complexity, vulnerabilities, and outages, resulting in a secure
and resilient network. To get started, you'll install and configure the Apstra software. Then you'll replace
the SSL certificate and default passwords to increase security. You can then start building the elements
of your physical network. Depending on the complexity of your design, other tasks may be required in
addition to the ones included in this general workflow.
Devices
Access the "Apstra GUI" on page 3 and get your devices ready.
1. "Device profiles" on page 555 (Devices > Device Profiles) represent the physical devices in your
network. Many device profiles are predefined for you. Check the list, and if one that you need is not
included, you can create it.
2. "Add devices" on page 536 to be managed by the Apstra environment.
Design
1. Logical devices (Design > Logical Devices) are abstractions of physical devices. They allow you to
specify device capabilities before selecting specific vendor hardware. Check the logical device design
(global) catalog for ones that meet your requirements; create them if needed.
2. Interface maps (Design > Interface Maps) combine device profiles and logical devices. Check the
interface map design (global) catalog for ones that meet your requirements; create them if needed.
3. Rack types (Design > Rack Types) are logical representations of racks. Check the rack type design
(global) catalog for ones that meet your requirements; create them if needed.
4. Templates (Design > Templates) are used to build rack designs (blueprints). Check the template design
(global) catalog for one that meets your requirements; create it if needed.
Resources
Create resource pools ("ASNs" on page 780, "IPv4 addresses" on page 784, and "IPv6 addresses" on
page 786 if needed) for your network. When you're ready to assign resources to your blueprint, you'll
specify a resource pool, then the resources will automatically be assigned from that pool.
Blueprints
1. Create a "blueprint" on page 5 from one of the templates in the design section.
2. Assign "resources" on page 33, "device profiles" on page 36, and "devices" on page 37 (S/Ns) to
build the network (Blueprints > <your_blueprint_name> > Staged > Physical > Build)
3. Review the calculated cabling map (Blueprints > <blueprint_name> > Staged > Physical > Links), then
cable up the physical devices according to the map. If you have a set of pre-cabled switches, ensure
that you have configured interface maps according to the actual cabling so that calculated cabling
matches actual cabling.
4. When you've finished building your network, commit the blueprint (Blueprints >
<your_blueprint_name> > Uncommitted). Committing a blueprint initiates work on the intent and
pushes configuration changes to assigned devices to realize it on the network.
5. Review the "blueprint dashboard" on page 6 (Blueprints > Dashboard) for "anomalies" on page
502. If you have cabling anomalies, the likely reason is a mismatch between calculated cabling and
actual cabling. Either re-cable the switches, recreate the blueprint with appropriate interface maps, or
use the "Apstra-CLI" on page 929 utility to override the cabling in the blueprint with discovered cabling.
Next Steps
After your deployment is running, you can "build" on page 186 the virtual environment with "virtual
networks" on page 177 and "routing zones" on page 199, as needed.
Apstra GUI
SUMMARY
Access the Apstra GUI to design, build, deploy, operate, and validate your network.
1. From the latest version of the Google Chrome or Mozilla Firefox web browser, enter the URL
https://<apstra_server_ip>, where <apstra_server_ip> is the IP address of the Apstra server (or a DNS
name that resolves to the IP address of the Apstra server).
2. If a security warning appears, click Advanced and Proceed to the site. The warning occurs because
the SSL certificate that was generated during installation is self-signed, and you didn't replace it with
a signed one when you installed the software. We recommend, for security reasons, that you replace
the SSL certificate.
3. From the login page, enter username admin and the secure password that you set when you
configured the Apstra server. (Entering the password incorrectly too many times locks you out for a
few minutes depending on how password requirements have been configured.) The main screen
appears.
Next Steps: See the "Get Started" on page 1 section of this guide for the general workflow for building
your network, with links to more information.
Reset Admin Password
If you reset a (lost) Apstra GUI admin password to the default, we highly recommend that you
immediately change it to a secure one. User admin has full root access. Juniper is not responsible for
security-related incidents that result from not changing default passwords.
1. SSH into the Apstra server as user admin (ssh admin@<apstra-server-ip>, where <apstra-server-ip> is
the IP address of the Apstra server).
2. Run the aos_reset_admin_password command to reset the GUI admin password to the default:
admin@aos-server:~$ aos_reset_admin_password
Resetting UI "admin" user password to default "admin"
Successfully reset admin's password
admin@aos-server:~$
3. Log in to the Apstra GUI (default password: admin), then navigate to Platform > User Management >
Users.
4. Click username admin, then click the Change Password button (top-right).
5. Enter a secure password that meets the complexity requirements, then re-enter the new password.
6. Click Change Password to update the password.
Create Blueprint
Datacenter blueprints are created from templates. Make sure a suitable template exists in the global
catalog (Design > Templates).
1. From the left navigation menu in the Apstra GUI, click Blueprints, then click Create Blueprint.
2. Enter a unique name and select a template from the Template drop-down list. A preview shows
template parameters, topology preview, structure, external connectivity, and policies.
3. Click Create to create the blueprint and return to the blueprint summary view.
IN THIS SECTION
Blueprint Summaries | 6
Blueprint Dashboard | 7
Blueprint Summaries
From the left navigation menu, click Blueprints to go to the blueprint summaries page. This page shows a
summary of each individual blueprint. At the top of the page, indicators show various statuses across all
blueprints (deployment status, anomalies, root causes, build errors and warnings, and uncommitted
changes). This is useful when you have many blueprints in your Apstra instance. To quickly filter to show
only blueprints that meet certain criteria, click one of the indicators. If blueprints don't have any issues,
the indicators are green. If there are any issues, the indicator is red. In the example below, clicking the
red part in Anomalies results in displaying only the blueprints that include anomalies. (This Apstra
instance has only one blueprint anyway, but you get the idea.)
Blueprint Dashboard
From the left navigation menu in the Apstra GUI, click Blueprints, then click the name of a blueprint to
go to its dashboard.
The dashboard shows the overall health and status of a blueprint. Statuses are indicated by color: green
for changes that succeeded, yellow for changes that are in progress, and red for changes that failed. The
deployment status section includes statuses for service configuration, ready configuration, and drain
configuration. The anomalies section includes statuses for all probes, IP fabric, generic system
connectivity, liveness, deployment status, route verification, leaf peering, and more. The nodes status
section includes statuses for deployment, BGP, cabling, config, interface, liveness, route, and hostname.
In the example below, there are some issues with the IP fabric and leaf peering. You can click
the red indicators for details.
You can display analytics dashboards on the blueprint dashboard to have additional network information
on one screen. To add them, navigate to Analytics > Dashboards and turn ON the analytics dashboards'
default toggle.
To delete a blueprint, you must have permission (based on the user roles that you're assigned).
1. From the blueprint, click Dashboard, then click the Delete button (top-right).
2. Enter the blueprint name, then click Delete to delete the blueprint and go to the blueprint summary
view.
Analytics (Blueprints)
IN THIS SECTION
Analytics Introduction | 9
Dashboards | 12
Anomalies | 13
Widgets | 14
Probes | 17
Reports | 26
Analytics Introduction
IN THIS SECTION
Analytics Dashboard | 10
Managed devices generate large amounts of data over time. On their own these data are voluminous
and unhelpful. With Intent-Based Analytics (IBA) you can combine intent from the "graph" on page 1207
with current and historic data from devices to reason about the network at-large.
Data generated by devices is ingested via "agents" on page 610 and sent to the Apstra server. With the
use of "probes" on page 18, data can be aggregated across devices in response to operator
configuration. Combining probes with intent from the blueprint graph generates a reduced set of data
that can be more easily reasoned about. You can directly inspect this advanced data from the Apstra GUI
or via the "REST API" on page 871 to gain real-time insight about the network. The data can also be
streamed out with the existing streaming infrastructure. Also, based on the state of this advanced data,
probes can raise "anomalies" on page 14.
While operating IBA at scale, using many probes, disk usage can grow significantly within the Apstra
server VM. This is expected because the system will persist at least enough samples to maintain data for
the requested duration for all time-series for all existing probes. Additionally, the system will create
checkpoint (backup) files up to a configured limit. Settings in the /etc/aos/aos.conf file indicate how often
to rotate logs and remove old checkpoint files. Using IBA can increase disk usage to tens of gigabytes. If
this is an issue, you can adjust the log rotation settings to reduce disk usage.
Additional space may be used by system snapshots and old images from any in-place Apstra server
upgrades. These can be deleted or moved off the system to increase free disk space.
Analytics Dashboard
Analytics dashboards monitor the network and raise alerts to anomalies. Specific dashboards are
automatically created and enabled based on the state of the "active (operational) blueprint" on page
486. You can also instantiate predefined dashboards and create your own.
• You cannot configure the trigger logic that determines when dashboards are auto-created, but you
can create/instantiate your own dashboards.
• Probes that you've created and not modified are reused instead of creating duplicates of those
probes.
• "Widgets" on page 14 within each dashboard monitor different aspects of the network and raise
alerts to relevant anomalies.
• When you enable a dashboard, the required probes and widgets are instantiated. If you update or
delete associated probes and/or widgets, the dashboard may enter an invalid state. Invalid
dashboards are not automatically repaired.
• You can display analytics dashboards on the "Blueprint Dashboard" on page 7 to have
additional network information on one screen. To add them, turn ON the analytics dashboards'
Default toggles.
• When upgrading the controller, the auto-creation behavior of dashboards occurs on preexisting
active blueprints, in the same way as for newly-created blueprints.
From the blueprint, navigate to Analytics > Dashboards to go to the analytics dashboard. You can create,
clone, edit, and delete analytics dashboards. System-generated dashboards are labeled with System and
user-generated (and user-modified) dashboards are labeled with the user's name. Select a Display mode
(summary, preview, expanded) to view dashboards in various levels of detail.
Dashboards
1. From the blueprint, navigate to Analytics > Dashboards and click Configure Auto-Enabled
Dashboards. Dashboards are listed with their descriptions, widgets used, and toggles for auto-
enablement.
2. Toggle the dashboards ON to auto-enable them or OFF to disable auto-generation.
1. From the blueprint, navigate to Analytics > Dashboards, click Create Dashboard, then select
Instantiate Predefined Dashboard from the drop-down list.
2. Select a predefined dashboard from the drop-down list. For more information about predefined
dashboards, see "Predefined Dashboards" on page 1031 in the References section.
3. Click Create to instantiate the dashboard and return to the list view.
1. From the blueprint, navigate to Analytics > Dashboards, click Create Dashboard, then select New
Dashboard from the drop-down list.
2. Enter a name and (optional) description.
3. Select a layout (one-column, two-column, three-column) and if you want the dashboard to appear on
the blueprint Dashboard tab, toggle on Default.
4. Add and/or create "widgets" on page 14 to include in the dashboard.
5. Click Create Dashboard to create the dashboard and return to the table view.
A large dashboard may take some time to create. You can monitor the status at the bottom of the
screen under Active Tasks.
1. From the blueprint, navigate to Analytics > Dashboards and click the Edit button for the dashboard to
edit.
2. Make your changes by creating, adding, editing and/or deleting widgets.
3. Click Update to change the dashboard and return to the table view.
1. From the blueprint, navigate to Analytics > Dashboards and click the Delete button for the dashboard
to delete.
2. If you want to delete all widgets and probes that are used exclusively by this dashboard, check the
check box. Deleting unnecessary widgets and probes frees up disk space.
3. Click Delete Dashboard to delete the dashboard and return to the table view.
Anomalies
IN THIS SECTION
Anomalies (Analytics) | 14
Anomalies (Analytics)
From the blueprint, navigate to Analytics > Anomalies to go to the list of anomalies that the IBA probes
have detected. You can search for specific anomalies by filtering Probe Label, Stage Name, and Tags in
the Query box.
To display a condensed view of the anomaly count per probe/stage, check the Group by stage check
box. For example, if three stages of one probe are generating anomalies and two stages of a second
probe are generating anomalies, Group by Stage shows five entries in a table, each one representing
one stage with anomalies.
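The count can be sketched as follows (a toy illustration using hypothetical probe and stage names, not Apstra code):

```python
# Each probe contributes one table entry per stage that currently has
# anomalies, so the condensed view lists (probe, stage) pairs.
anomalous_stages = {
    "probe_1": ["stage_a", "stage_b", "stage_c"],  # three stages with anomalies
    "probe_2": ["stage_x", "stage_y"],             # two stages with anomalies
}

entries = [(probe, stage)
           for probe, stages in anomalous_stages.items()
           for stage in stages]

print(len(entries))  # 5 -- one entry per stage with anomalies
```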
NOTE: The blueprint "Blueprint Dashboard" on page 7 shows a summary of all anomalies
including those that IBA probes generated. Clicking the All Probes gauge on the dashboard takes
you to a list of anomalies (Analytics > Anomalies).
Widgets
IN THIS SECTION
Widgets Introduction | 14
Edit Widget | 16
Delete Widget | 17
Widgets Introduction
Widgets present data that is based on Intent-Based Analytics "probes" on page 18. The widget type
determines whether it returns a total count of a particular type of anomaly or displays outputs
generated from stages and processors in an IBA probe. Some widgets are created automatically (but
they are not deleted automatically). You can view widgets by themselves, or you can add them to
analytics dashboards. You can create widgets before you create a dashboard or while you're creating
one.
From the blueprint, navigate to Analytics > Widgets to go to the widgets table view. You can create,
clone, edit and delete widgets.
RELATED DOCUMENTATION
Analytics Introduction | 9
Create Anomaly Heat Map Widget | 15
Create Stage Widget | 16
1. From the blueprint, navigate to Analytics > Widgets and click Create Widget.
2. Select Anomaly Heat Map from the Type drop-down list and enter a name.
3. Enter row tags, column tags, and (optional) description.
4. Click Create to create the widget and return to the table view.
Creating a large widget may take some time. You can monitor the status under the Active Tasks
section at the bottom of the screen.
RELATED DOCUMENTATION
Widgets Introduction | 14
1. From the blueprint, navigate to Analytics > Widgets and click Create Widget.
2. Select Stage from the Type drop-down list and enter a name.
3. Select a probe and a stage, then customize the output as needed.
4. Click Create to create the widget and return to the table view.
Creating a large widget may take some time. You can monitor the status under the Active Tasks
section at the bottom of the screen.
1. From the blueprint, navigate to Analytics > Probes and select a probe.
2. Select a stage within the probe and click the Create dashboard widget button (right-side). The stage
is preselected for you in the dialog that appears.
3. Configure the parameters as needed.
4. Click Create to create the widget and return to the detail view of the probe. The widget appears in
the widgets table view (Analytics > Widgets) and when you create or update an analytics dashboard,
the new widget appears as an option.
SEE ALSO
Widgets Introduction | 14
Edit Widget
You can modify auto-created widgets, although defaults should work in most cases. Modifying widgets
affects any dashboards that they're used in.
1. From the blueprint, navigate to Analytics > Widgets and click the Edit button for the widget to edit.
2. Make your changes.
3. Click Update to stage the changes and return to the table view.
RELATED DOCUMENTATION
Widgets Introduction | 14
Delete Widget
You can't delete a widget if it's being used in a dashboard.
1. From the table view (Analytics > Widgets) or the details view, click the Delete button for the widget
to delete.
2. Click Delete Widget to stage the deletion and return to the table view.
RELATED DOCUMENTATION
Widgets Introduction | 14
Probes
IN THIS SECTION
Probes Introduction | 18
Create Probe | 24
Import Probe | 25
Edit Probe | 25
Export Probe | 25
Delete Probe | 26
Probes Introduction
IN THIS SECTION
Processors | 18
Ingestion Filters | 19
Stages | 23
Data Sources | 23
Probes are the basic unit of abstraction in Intent-Based Analytics. Generally, a given probe consumes
some set of data from the network, does various successive aggregations and calculations on it, and
optionally specifies some conditions of said aggregations and calculations on which anomalies are
raised.
Probes are Directed Acyclic Graphs (DAGs) whose nodes are processors and stages. Stages are data,
associated with context, that the operator can inspect. Processors are sets of operations that produce
and reduce output data from input data. The input to a processor is one or more stages, and its output
is also one or more stages. The directionality of the edges in a probe DAG represents this input-to-output
flow.
Importantly, the initial processors in a probe are special: they have no input stage and are notionally
generators of data. We refer to these as source processors.
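As a minimal sketch (not Apstra's implementation), a probe's DAG can be pictured as processors linked by named stages, where source processors are simply those with no input stage. The processor and stage names below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Processor:
    """One node of a probe DAG: consumes input stages, produces output stages."""
    name: str
    inputs: list = field(default_factory=list)   # names of input stages
    outputs: list = field(default_factory=list)  # names of output stages

# Hypothetical three-processor probe: ingest -> aggregate -> check.
probe = [
    Processor("interface_counters", inputs=[], outputs=["raw"]),       # source
    Processor("average", inputs=["raw"], outputs=["avg"]),
    Processor("range_check", inputs=["avg"], outputs=["out_of_range"]),
]

# Source processors are the ones with no input stage.
sources = [p.name for p in probe if not p.inputs]
print(sources)  # ['interface_counters']
```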
IBA works by ingesting raw telemetry from collectors into probes to extract knowledge (for example,
anomalies and aggregations). A given collector publishes telemetry as a collection of metrics, where each
metric has an identity (a set of key-value pairs) and a value. IBA probes, often with the use of graph
queries, must fully specify the identity of a metric to ingest its value into the probe. With ingestion
filters, probes can ingest metrics with only a partial specification of identity, enabling ingestion of
metrics with unknown identities.
Some probes are created automatically. These probes are not deleted automatically. This keeps
things simple, both operationally and implementation-wise.
Processors
The input processors of a probe handle the configuration required to ingest raw telemetry into the
probe and kickstart the data processing pipeline. For these processors, the number of stage output items
(one or many) is equal to the number of results in the specified graph query (or queries). If multiple
graph queries are specified, for example graph_query: [A, B], and query A matches 5 nodes and query B
matches 10 nodes, the results of query A are accessible using query_result indices 0 through 4, and the
results of query B using indices 5 through 14.
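The index arithmetic can be sketched as follows (query_index_ranges is a hypothetical helper, not an Apstra API):

```python
def query_index_ranges(result_counts):
    """Map per-query result counts to the (first, last) query_result
    indices each query's results occupy after concatenation."""
    ranges, offset = [], 0
    for count in result_counts:
        ranges.append((offset, offset + count - 1))
        offset += count
    return ranges

# graph_query: [A, B], where A matches 5 nodes and B matches 10 nodes:
print(query_index_ranges([5, 10]))  # [(0, 4), (5, 14)]
```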
If a processor's input type and/or output type is not specified, then the processor takes a single input
called in, and produces a single output called out.
Some processor fields are called expressions. In some cases they are graph queries and are noted as
such. In other cases they are Python expressions that yield a value. For example, in the Accumulate
processor, duration may be specified as an integer number of seconds, for example 900, or as an
expression, for example 60 * 15. Expressions are often more useful than static values because there are
multiple ways to parametrize them.
Expressions support string values. Processor configuration parameters that are strings and support
expressions require special quoting when specifying a static value. For example, state: "up" is not valid
because it refers to the variable up, not a static string; it should be written as state: '"up"'.
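Because such parameters are evaluated as Python expressions, the quoting rule can be demonstrated directly with Python's own eval (the variable names here are illustrative only):

```python
# An expression's variables resolve against its execution context.
context = {"up": 1}

# 'up' (no inner quotes) resolves to the variable named up:
print(eval('up', {}, context))    # 1 -- the variable's value, not a string

# '"up"' (inner quotes) is a string literal, which is what was intended:
print(eval('"up"', {}, context))  # up

# Arithmetic expressions work the same way, e.g. a duration of 60 * 15:
print(eval('60 * 15'))            # 900
```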
An expression is always associated with a graph query and is run for every resulting match of that query.
The execution context of the expression is such that every variable specified in the query resolves to a
named node in the associated match result. For more information, see the example in "Service Data
Collector" on page 1156.
Graph-based processors have been extended with query_tag_filter, which lets you filter graph query
results by tags (new in version 4.0). In IBA probes, tags are used only as filter criteria for servers
and external routers, specifically for the ECMP Imbalance (External Interfaces) probe and the Total East/
West Traffic probe. For specific processor information, see "Probe Processors" on page 1114 in the
References section.
Ingestion Filters
With "ingestion filters", one query result can ingest multiple metrics into a probe. Table data types are
used to store multiple metrics as part of a single stage output item. The table data types are table_ns,
table_dss, and table_ts, corresponding to the existing types ns, dss, and ts, respectively.
Collection filters determine the metrics that are collected from the target devices.
A collection filter for a given collector on a given device is simply the collection of ingestion filters
present in different probes. You can also specify it as part of enabling a service outside the context of
IBA or probes, but the existing precedence rules for service enablement apply: only filters at a given
precedence level are aggregated. When multiple probes specify an ingestion filter targeting a specific
service on a specific device, the metrics collected are a union; in other words, a metric is published
when it matches any of the filters. For this reason, the data is also filtered by the controller component
before being ingested into the IBA probes.
Telemetry collectors evaluate this filter, often to control which subset of available metrics is fetched
from the underlying device operating system; for example, fetching only a subset of routes instead of all
routes, which can be a huge number. In any case, only the metrics matching the collection filter are
published as raw telemetry.
As part of enabling a service on a device, you can now specify collection filters for services. This filter
becomes an additional input provided to collectors as part of self.service_config.collection_filters.
The design and usability goals for filters (ingestion and collection) are as follows:
1. Cover the most common cases: match any, match against a given list of possible values, equality
match, and range checks for keys with numeric values.
2. Efficient evaluation - filters are evaluated in the hot paths of collection and ingestion.
3. Aggregatable - multiple filters are aggregated, so the aggregation logic need not become the
responsibility of individual collectors.
4. Programming-language neutral - components operating on filters can be in Python, C++, or some
other language in the future.
5. Programmable - amenable to future programmability around the filters, by the controller itself
and/or collectors, to enhance usability, performance, and so on.
Considering the above goals, the following is a suggested and illustrative schema for filters. Refer to the
ingestion filter sections for specific examples to understand this better.
FILTER_SCHEMA = s.Dict(s.Object({
    'type': s.Enum(['any', 'equals', 'list', 'pattern', 'range', 'prefix']),
    'value': s.OneOf({
        'equals': s.OneOf([s.String(), s.Integer()]),
        'list': s.List(s.String(), validate=s.Length(min=1)),
        'pattern': s.List(s.String(), validate=s.Length(min=1)),
        'range': s.List(s.AnomalyRange(), validate=s.Length(min=1)),
        'prefix': s.Object({
            'prefixsubnet': s.Ipv6orIpv4NetworkAddress(),
            'ge_mask': s.Optional(s.Integer()),
            'le_mask': s.Optional(s.Integer()),
            'eq_mask': s.Optional(s.Integer())
        })
    })
}), key_type=s.String(description=
    'Name of the key in metric identity. Missing metric identity keys are '
    'assumed to match any value'))
One instance of a filter specification is interpreted as the AND of all specified keys (that is, per-key
constraints). Multiple filter specifications coming from multiple probes are combined with OR at the
filter level.
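These combination rules can be sketched as follows (illustrative Python handling only equality constraints; a real implementation would also cover the any, list, pattern, range, and prefix types from the schema above, and the filter contents here are hypothetical):

```python
def matches(identity, ingestion_filter):
    """AND of per-key constraints; keys absent from the filter match any value."""
    return all(identity.get(key) == wanted
               for key, wanted in ingestion_filter.items())

def ingested(identity, filters):
    """OR across filters: a metric is ingested if any filter matches."""
    return any(matches(identity, f) for f in filters)

filters = [
    {"interface": "eth0", "direction": "tx"},  # hypothetical filter from probe 1
    {"direction": "rx"},                       # hypothetical filter from probe 2
]

print(ingested({"interface": "eth0", "direction": "tx"}, filters))  # True
print(ingested({"interface": "eth1", "direction": "rx"}, filters))  # True  (second filter)
print(ingested({"interface": "eth1", "direction": "tx"}, filters))  # False (no filter matches)
```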
NOTE: The schema presented here is only for communicating the requirements; engineering is free to
choose any approach that accomplishes the stated use cases.
NOTE: Items context is available as long as the items set is unchanged from the original set
derived from the collector processor configuration. After data goes through a processor that
changes this set, for example any grouping processor, it's no longer available.
From the blueprint, navigate to Analytics > Probes to go to the probes table view. To go to a probe's
details, click its name. You can instantiate, create, clone, edit, delete, import, and export probes.
You can display stages in some probes in various ways. For example, when you click the probe named
Device Traffic, changing the data source for Average Interface Counters from Real Time to Time Series
gives you the option to view the time series as separate graphs, combined graphs: linear, or combined
graphs: stacked (as of Apstra version 4.0). You can also see the disk space used by each probe, as
applicable.
CAUTION: If the Apstra controller has insufficient disk space, older telemetry data files
are deleted. To retain older telemetry data, you can increase capacity with "Apstra VM
Clusters" on page 836.
The structure and logic of non-linear probes with tens of processors is not easily distinguished in the
standard view. You can click the expand button (top of the left panel) to see an expanded
representation of how the processors are inter-related (new in version 4.0).
Stages
Stages are the data nodes of a probe: each stage holds data, associated with context, that you can
inspect. Processors consume one or more input stages and produce one or more output stages; the
initial (source) processors generate data without any input stage.
Data Sources
On applicable stages, you can specify the source to use for collecting data, either real time or time
series. With time series, you can customize the manner in which the data is collected as follows:
• None
• anyOf - boolean - True if true for at least one of the samples in the period
• How far back in time to collect (the last number of minutes, hours, or days)
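For instance, the anyOf aggregation described above behaves like Python's built-in any over the boolean samples in the period (a toy illustration, not Apstra code):

```python
# anyOf is True if at least one sample in the period is true.
period_samples = [False, False, True, False]
print(any(period_samples))         # True

print(any([False, False, False]))  # False -- no true sample in the period
```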
RELATED DOCUMENTATION
Create Probe | 24
RELATED DOCUMENTATION
Probes Introduction | 18
Create Probe
1. From the blueprint, navigate to Analytics > Probes, click Create Probe, then select New Probe.
2. Enter a name and (optional) description.
3. To be able to filter by your own defined categories, enter tag(s).
4. Probes are enabled by default. This means that data is collected and processed (potentially creating
anomalies) as soon as the probe is created. To disable the probe, toggle off Enabled. When you're
ready to start collecting and processing data, edit the probe to enable it.
5. Click Add Processor, select a processor type, then click Add to add the processor to the probe. For
more information about individual processors, see "Probe Processors" on page 1114 in the References
section.
RELATED DOCUMENTATION
Probes Introduction | 18
Import Probe
1. From the blueprint, navigate to Analytics > Probes, then click Create Probe and select Import Probes
from the drop-down list.
2. Either click Choose Files and navigate to the file(s) on your computer, or drag and drop the file(s) from
your computer into the dialog window.
3. Click Import to import the probe and return to the table view.
RELATED DOCUMENTATION
Probes Introduction | 18
Edit Probe
Editing a probe affects any widgets and dashboards that are associated with it.
1. From the table view (Analytics > Probes) or the details view, click the Edit button for the probe to
edit.
2. Make your changes.
3. Click Update to stage the changes and return to the table view.
RELATED DOCUMENTATION
Probes Introduction | 18
Export Probe
1. From the blueprint, navigate to Analytics > Probes and click the name of the probe to export.
2. Click the Export button (top-right) to see a preview of the file to be exported.
3. To copy the contents, click Copy, then paste it.
4. To download the JSON file to your local computer, click Save as File.
5. When you've copied and/or downloaded the file, click the X to close the dialog.
RELATED DOCUMENTATION
Probes Introduction | 18
Delete Probe
You can't delete a probe if a widget is using it.
1. From the table view (Analytics > Probes) or the details view, click the Delete button for the probe to
delete.
2. Click Delete Probe to stage the deletion and return to the table view.
RELATED DOCUMENTATION
Probes Introduction | 18
Widgets Introduction | 14
Reports
IN THIS SECTION
Traffic Report | 27
NOTE: This feature is classified as a Juniper Apstra Technology Preview feature. These features
are "as is" and are for voluntary use. Juniper Support will attempt to resolve any issues that
customers experience when using these features and create bug reports on behalf of support
cases. However, Juniper may not provide comprehensive support services to Tech Preview
features.
For additional information, refer to the "Juniper Apstra Technology Previews" on page 1223 page
or contact "JuniperSupport" on page 893.
The Device Health report analyzes device health. To generate the report, the probes for device system
health and device telemetry health must be enabled.
The Optical Transceiver report analyzes optical transceiver telemetry patterns and trends. To generate
the report, the probes for optical transceivers and device traffic must be enabled.
Traffic Report
The Traffic report analyzes device traffic patterns and trends. To generate the report, the probes for
device traffic and device system health must be enabled.
1. From the blueprint, navigate to Analytics > Reports and click the Generate Report button in the
Actions panel for the report to generate.
2. Enter the aggregation interval.
3. Click Generate to generate the report and return to the table view.
IN THIS SECTION
Blueprint-Wide Search | 29
Physical | 32
Virtual | 177
Policies | 289
Catalog | 330
Tasks | 347
Blueprint-Wide Search
IN THIS SECTION
Exact Match | 30
Wildcards | 31
Field References | 32
Composite Queries | 32
You can search the entire blueprint from the Staged (and Active) tabs (new in Apstra version 4.2.0).
To search the staged blueprint, enter your search criteria in the Search field at the top of the Staged tab.
For assistance with search, you can click the search field to see a tooltip describing the ways you can
search.
Exact Match
To find an exact match, enter the exact value for the object. For example, you can enter 64513 to find that
ASN. The results in our example show that it's assigned to spine2.
Additional metadata is returned that tells you what else is associated with the object. Click All document
properties to see this information.
From your results you can click an object name (in blue) to go to its details.
Wildcards
You can search using wildcards. Let's say you want to search for ASNs that begin with 64. By adding the
wildcard character *, you get all objects that begin with 64. (Enter 64*.) Five results are loaded by default.
To see additional results, click Load more results. If there are no more results, No more results appears at
the bottom.
Field References
You can include a reference to a field in your search to receive more relevant results. With the ASN
example, if you search 64*, there may be other entities besides ASNs that begin with 64. If you know
you're looking for an ASN, you can enter a search query, such as asn:"64*". As you begin typing, results
auto-fill to help you with the query. You can press the tab key to autocomplete.
Composite Queries
You can combine searches into one query. Returning to the ASN example, say you want to find ASNs
beginning with 64 and that also have the leaf role. You can enter the search query, role:"leaf" asn:"64*".
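Conceptually, the search behaviors described in this section (exact match, wildcards, field references, and composite queries) work like the sketch below. This illustrates the matching logic only; it is not Apstra's implementation, and the object fields are hypothetical:

```python
from fnmatch import fnmatch

# Hypothetical staged-blueprint objects (fields are illustrative)
objects = [
    {"name": "spine2", "role": "spine", "asn": "64513"},
    {"name": "leaf1",  "role": "leaf",  "asn": "64514"},
    {"name": "leaf2",  "role": "leaf",  "asn": "65000"},
]

def matches(obj, query):
    """Every space-separated term must match (composite query).
    'field:"pattern"' scopes a term to one field (field reference);
    otherwise any field may match. '*' is a wildcard."""
    for term in query.split():
        if ":" in term:
            field, pattern = term.split(":", 1)
            ok = fnmatch(str(obj.get(field, "")), pattern.strip('"'))
        else:
            ok = any(fnmatch(str(v), term) for v in obj.values())
        if not ok:
            return False
    return True

print([o["name"] for o in objects if matches(o, '64513')])                  # ['spine2']
print([o["name"] for o in objects if matches(o, 'asn:"64*"')])              # ['spine2', 'leaf1']
print([o["name"] for o in objects if matches(o, 'role:"leaf" asn:"64*"')])  # ['leaf1']
```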
Physical
IN THIS SECTION
Build | 33
Selection | 44
Topology | 46
Nodes | 52
Links | 89
Interfaces | 140
Racks | 154
Pods | 159
Planes | 172
Build
IN THIS SECTION
Manage Configlets | 43
You can assign resources, release previously used resources and go to resource pool management. The
resource assignment section has a convenient shortcut button, Manage resource pools, that takes you to
resource pool management. From there, you can monitor resource usage and create additional resource
pools, as needed.
2. Red status indicators mean that resources need to be assigned. Click a red status indicator, then click
the Update assignments button.
3. Select a pool from which to pull the resources, then click the Save button. The required number of
resources are automatically assigned to the resource group. When the red status indicator turns
green, the resource assignment has been successfully staged.
NOTE: You can also assign resources on a per-device basis (especially useful if you have a
predefined resource mapping). Select the device from the Topology view or Nodes view, then
assign the resource from the Properties section of the Selection panel (right side). Because you're
not assigning from a resource pool, the No pools assigned message remains in the Build panel.
(This is also where you can see the specific resource that was assigned from a resource pool.)
table view to indicate that they have been retained (but the build section shows that no resources are
assigned). Situations like this can (but do not always) result in build errors. Examples of where we want
resources to persist include:
• Revert operations.
If you don't need to re-use the same resources, reset the resource groups by clicking the Reset resource
group overrides button (shown in the overview image above). Then you can unallocate resources, and
allocate new ones, as applicable.
1. From the "blueprint" on page 6, navigate to Staged > Physical > Build > Device Profiles.
2. Click a red status indicator, then click the Change interface maps assignment button (looks like an
edit button). You assign device profiles by assigning interface maps.
3. Select the appropriate interface map from the drop-down list for each node. Or, to assign the same
interface map to multiple nodes, select the ones that use the same interface map (or all of them with
one click), then select the interface map from the drop-down list located above the selections.
4. Click Update Assignments. When the red status indicator turns green, the device profile assignments
have been successfully staged.
statistics for cabling, LLDP, transceivers and more. Any issues, such as miscabling or physical link errors,
cause a telemetry alarm. You can address and correct the anomalies before deploying the device.
It's common to have a committed blueprint without any deployed devices. You can deploy devices as
required, in batches, one by one, or all in one go. If you want to assign devices without deploying them,
set the deploy mode to Ready, which puts devices in the In Service Ready state. This configuration is
called Ready Config (previously known as Discovery 2 Config).
NOTE: When you reset system IDs (serial numbers), Discovery 1 configuration is re-applied. Before
physically uninstalling the device, it is good practice to fully erase the device configuration and
uninstall the device agent.
NOTE: You can also use apstra-cli to bulk-assign system IDs to devices either with a CSV text file
or the blueprint set-serial-numbers command.
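The NOTE above mentions a CSV text file for bulk assignment. As a sketch of building such a mapping, the snippet below writes node-to-serial rows; the column names and file shape are assumptions for illustration, not a documented apstra-cli format:

```python
import csv
import io

# Hypothetical mapping of blueprint node names to device serial numbers (system IDs)
assignments = {
    "leaf1": "ABC1234567",
    "leaf2": "ABC1234568",
    "spine1": "XYZ7654321",
}

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["node", "system_id"])  # assumed header row
for node, serial in sorted(assignments.items()):
    writer.writerow([node, serial])

print(buf.getvalue())
```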
1. From the blueprint, navigate to Staged > Physical > Build > Devices, and click the status indicator for
Assigned System IDs (if the nodes list is not already displayed). Unassigned devices are indicated in
yellow.
2. Click the Change System IDs assignments button (below Assigned System IDs) and, for each node,
select system IDs from the drop-down list. (If you don't see an expected serial number (system ID),
you may still need to acknowledge the device (Devices > Managed Devices).)
3. When you select a system ID, the deploy mode changes to Deploy by default. If you don't want to
deploy the device yet, change the deploy mode here. When you're ready to deploy the device, return
here to set the deploy mode back to Deploy.
4. Click Update Assignments to stage the changes. Before the task is completed you can click Active
Tasks at the bottom of the screen to see its progress.
5. Commit changes to the blueprint to deploy device(s) into the active fabric. Device state changes to In
Service Active and the configuration is called Service Config.
As soon as you deploy a device, anomalies may appear on the dashboard. When telemetry data is
verified against Intent, anomalies resolve themselves. This can take a fair amount of time in some
cases, especially for BGP sessions and advertising routes.
Deploying devices can have different implications depending on the device vendor. Juniper Junos
devices, for example, have the following characteristics with regards to raising anomalies:
• show interface commands don't list interfaces on ports that do not have a transceiver plugged in.
This means Interface Down anomalies can't be raised for these interfaces. Such interfaces can be
recognized using the show virtual-chassis vc-port command; they have a status of 'Absent'.
• If a virtual network endpoint is configured on a leaf interface, Apstra expects an EVPN type 3
route for that interface. If this interface is down, Junos does not advertise the RT-3, resulting in a
"Missing Route" anomaly. If this anomaly is undesirable, we recommend that you remove the
interface from the virtual network until the interface is up.
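As an illustration of the first bullet, transceiver-less ports can be spotted by filtering for an 'Absent' status in the vc-port output. The sample text below is a simplified approximation of show virtual-chassis vc-port output, not verbatim Junos output:

```python
# Simplified, illustrative approximation of 'show virtual-chassis vc-port' output
sample_output = """\
Interface   Type       Status
vcp-255/0/0 Configured Up
vcp-255/0/1 Configured Absent
vcp-255/0/2 Configured Down
"""

def absent_ports(text):
    """Return interface names whose status column reads 'Absent'."""
    ports = []
    for line in text.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if fields and fields[-1] == "Absent":
            ports.append(fields[0])
    return ports

print(absent_ports(sample_output))  # ['vcp-255/0/1']
```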
After you deploy devices, a new running config is collected, called the Golden Config, which serves as
Intent. Running configuration is continuously collected and compared against this Golden Config.
When a deployment fails, the Golden Config is unset. Protocol-related anomalies, such as BGP or
LLDP anomalies, are only raised if the devices at both ends are deployed.
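Conceptually, comparing the continuously collected running configuration against the Golden Config is a diff operation, as in this illustrative sketch (the configuration lines are hypothetical, and this is not Apstra's actual mechanism):

```python
import difflib

golden_config = """\
set interfaces xe-0/0/0 description to-spine1
set protocols bgp group fabric type external
"""

running_config = """\
set interfaces xe-0/0/0 description to-spine1
set protocols bgp group fabric type internal
"""

# Any changed lines mean the device has drifted from Intent
drift = [
    line for line in difflib.unified_diff(
        golden_config.splitlines(), running_config.splitlines(), lineterm=""
    )
    if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
]
print(drift)
```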
2. From the Assigned System IDs list, click the name of the node that you want to assign. Device details
are displayed (deploy mode, serial number, hostname rendered, incremental and pristine config, as
applicable).
NOTE: You can also select a node name in the Selected Nodes drop-down list (left-middle) to
go to these device details.
3. To assign a system ID, click the Edit button for S/N, select the system ID from the drop-down list, and
click the Save button to stage the change. (If you don't see the expected serial number (system ID),
you may still need to acknowledge the device (Devices > Managed Devices).)
4. To remove an existing S/N instead of assigning one, click the Edit button for S/N, then click the red
square to stage the change.
2. Click the Device tab in the right panel (if it's not already selected).
3. Enter a different S/N. (You can also access configuration files from here: rendered, incremental,
pristine).
4. Click the Save button to stage the changes.
Manage Configlets
Configlets are vendor-specific. Apstra software automatically ensures that configlets of a specific vendor
are not assigned to devices from a different vendor.
If the configlets you need are not in the blueprint catalog (Staged > Catalog > Configlets), then you need
to import them.
Selection
While in the Apstra environment, you may need device information that's obtained via CLI commands.
Traditionally, you need to log in to a machine with access to the device management network, open a
terminal, find device IP addresses, SSH to each of them, then run the required CLI commands. As of
Apstra version 4.2.0, you can bypass these steps and run show commands for Juniper devices directly
from the Apstra GUI. You can execute CLI commands from within the staged or active blueprint, or from
the Managed Devices page. The steps below are for Datacenter blueprints.
1. From the blueprint, navigate to Staged > Physical > Topology (or Staged > Physical > Nodes) and
select a Juniper device node.
2. In the Selection section that appears in the right panel, on the Device tab, click Execute CLI
Command.
3. In the dialog that opens, type show, then press the space bar. Available commands appear that you can
scroll through to select, or you can start typing the command and it will auto-fill. In our example
we're looking for BGP neighbors. We typed show, space, then b, which filtered the commands to only
include those with the letter b. We selected bgp, then pressed the space bar to show available
arguments for bgp. We typed n to show commands including the letter n, then selected neighbor to
complete the command.
4. From the drop-down list, select how you want to view the results: text, XML or JSON.
5. Click Execute to return show command results. We used Text Mode for our example.
Topology
IN THIS SECTION
Topology (Datacenter) | 46
Topology (Datacenter)
IN THIS SECTION
2D Topology View | 47
3D Topology View | 48
Before you push your changes to the active blueprint you can view progressive changes in the staged
blueprint. This staging area allows you to validate that the pending changes are compliant with the
intent, and that they work together with available resources and devices before you deploy the network.
Many node and link operations are performed from the Topology view. See "Nodes" on page 53 and
"Links" on page 90 for more information.
You can view topologies in 2D or 3D view, and selections within topologies as neighbors, links, or virtual
network endpoints, as applicable.
2D Topology View
From the blueprint, navigate to Staged > Physical > Topology to go to the 2D topology view.
• To make topology elements larger, click the Expand Nodes check box.
• To display the links between elements, click the Show Links check box.
• To display a different layer, select the layer from the Layer drop-down list. Uncommitted Changes is
an example of one of the layers you could display. The nodes with uncommitted changes are shown
in yellow. The changes that apply to this layer are specific to the nodes themselves, such as ASN,
loopback IP addresses and deploy modes. It doesn't apply to such changes as adding routing zones,
virtual networks or connectivity templates on those nodes.
• To display additional information (node name, hostname, role, link, tags, as applicable), hover over a
node or link.
• To display a different label (name, hostname, S/N), select a different label from the Topology Label
drop-down list.
• To display a specific rack topology, click the rack element or select the rack from the Selected Rack
drop-down list.
• To display a specific node topology, click the node element in the topology or select the node from
the Selected Node drop-down list.
3D Topology View
NOTE: This feature is classified as a Juniper Apstra Technology Preview feature. These features
are "as is" and are for voluntary use. Juniper Support will attempt to resolve any issues that
customers experience when using these features and create bug reports on behalf of support
cases. However, Juniper may not provide comprehensive support services to Tech Preview
features.
For additional information, refer to the "Juniper Apstra Technology Previews" on page 1223 page
or contact "JuniperSupport" on page 893.
From the blueprint, navigate to Staged > Physical > Topology and click 3D.
• You can zoom in and out, move left and right, and reset to the default size and orientation.
• To display additional information (node name, hostname, role, as applicable) hover over a node.
• To display rack topology (in 2D), click the rack element or select the rack from the Selected Rack
drop-down list.
• To display node topology (in 2D), click the node element or select the node from the Selected Node
drop-down list.
• To display aggregate links, click the Show Aggregate Links check box.
• To display unused ports, click the Show Unused Ports check box.
• To display a different label (name, hostname, S/N), select a different label from the Topology Label
drop-down list (right side).
• To display a particular neighbor type (all neighbors, generic, leaf, spine, and so on), select it from the
Show drop-down list.
• To display available operations for a selected node or interface, select the check box(es).
• To see details, hover over a node. Hovering over a generic system shows applied connectivity
templates.
Nodes
IN THIS SECTION
Nodes (Datacenter) | 53
Delete Node | 86
Nodes (Datacenter)
From the blueprint, navigate to Staged > Physical > Nodes to go to the Nodes view.
• In table view, you can select which details to display (from the drop-down list).
• You can click the name of a node in the table to display information in the right panel (such as
telemetry, properties, and tags).
Many node operations are performed from the Topology view, and some can also be performed directly
in the Nodes view. See the following sections for more information.
2. In the Device panel (on the right), click the Edit button for deploy mode, and change it to Undeploy,
then click the Save button.
NOTE: Another way to get to the Device selection panel from the Topology view (or Nodes,
Links, Racks, or Pods view) is to click the Devices tab in the Build panel (on the right), click the
status indicator for Assigned System IDs (to display the nodes and assigned system IDs), then
click the node name that you want to unassign.
4. Click the red square in the S/N section to unassign the system ID.
5. Click Uncommitted and commit changes to the blueprint to remove the device from the fabric.
The device is still under Apstra management. It's ready and available to be assigned to any blueprint.
To remove the device completely from Apstra management, "remove the device from Managed Devices"
on page 552.
2. Click the Change System IDs assignments button (below Assigned System IDs), then in the dialog that
opens click the Remove assignment button for the device to remove. The deploy mode is
automatically unselected.
3. Click Update Assignments (bottom-right in dialog) to stage the change and return to the Topology
view.
4. Click Uncommitted and commit changes to the blueprint to remove the device from the fabric.
The device is still under Apstra management. It's ready and available to be assigned to any blueprint.
To remove the device completely from Apstra management, "remove the device from Managed Devices"
on page 552.
• Deploy - Adds service configuration and puts the device fully in service.
• Ready - Adds Ready configuration (hostnames, interface descriptions, port speeds / breakouts)
(previously called Discovery 2 config). Changing from deploy to ready removes service
configuration.
• Drain - Takes a device out of service for maintenance. For more information, see "Draining Device
Traffic" on page 539.
When you're ready to activate changes, commit them from the Uncommitted tab.
Set Deploy Mode (from Selection Panel)
1. From the blueprint, navigate to Staged > Physical.
2. Either from the Topology view or the Nodes view, select a node.
3. If it's not already selected, click the Device tab in the Selection panel (on the right).
4. Click the Edit button for Deploy Mode and select a deploy mode.
5. Click the Save button to stage the new deploy mode.
When you're ready to activate changes, commit them from the Uncommitted tab.
Set Deploy Mode (from Nodes View)
You can change the deploy mode for one or more nodes at the same time from the Nodes view.
1. From the blueprint, navigate to Staged > Physical > Nodes and check one or more check boxes for
the node(s) to change. (You can narrow your search with the drop-down lists for planes, pods, and
racks as applicable.)
2. Click the Set Deploy Mode button (fourth of five buttons above the nodes list) and select a deploy
mode. (To filter selection before changing deploy mode, you can use the query.)
3. Click Set Deploy Mode to stage the change and return to the Nodes view.
When you're ready to activate changes, commit them from the Uncommitted tab.
When to use a generic system and when to use an external generic system:
Generic System
• For middleware devices, such as firewalls, load balancers, external routers and so on*
* In many cases, a middleware box connects only to a single border leaf pair in a rack, but configuring it
as an external generic system allows it to be visually separated outside of the rack. However, if there is a
requirement such as connecting to an external router (MX) via BGP and you want to provide rack
redundancy, then you would use an external generic system to allow this multi-rack connectivity.
Systems that are not managed by Apstra, such as external routers and firewalls, are called generic
systems. You specify their roles with tags. If the system is part of a rack topology, we call it a generic
system; if it's not, we call it an external generic system.
2. Select the node check box to see the operations available for that node (and that you have
permissions for).
NOTE: You can also get to the selection page from the Nodes view. From the blueprint,
navigate to Staged > Physical > Nodes, click the node name in the table, then click the node
name that appears at the top of the Selection panel (on the right side of the page).
3. Click Add generic system and enter a unique label and (optional) hostname.
4. Select the representation for the new node (none, logical device, or logical device with interface
map), then select the appropriate logical device or interface map from the drop-down list, as
applicable. (Logical devices allow you to define port roles.)
5. Enter the port channel ID min and max. If you leave the values at zero, any available port-channel
may be used. (Prior to Apstra version 4.2.0, all non-default port channel numbers had to be unique
per blueprint. Port channel ranges could not overlap. This requirement has been relaxed, and now
they need only be unique per system).
6. Enter tags (optional) to identify the role(s) of the new generic system, then click Next.
7. Select an available port and transformation. The gray Add Link button turns green.
9. Click Create to stage the change and return to the Topology view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
2. Select the node check box to see the operations available for that node (and that you have
permissions for).
NOTE: You can also get to the selection page from the Nodes view. From the blueprint,
navigate to Staged > Physical > Nodes, click the node name in the table, then click the node
name that appears at the top of the Selection panel (on the right side of the page).
3. Click Copy existing generic and select the generic system from the drop-down list.
The link table appears.
4. Click Select interface to go to ports.
5. Select a port and transformation, then click Confirm to return to the dialog.
6. Click Submit to stage the change and return to the Topology view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
NOTE: You can also create generic systems when you create rack types during the Design phase.
When you want to connect your Apstra-managed fabric to a system that's not managed in the Apstra
environment, you use generic systems and external generic systems. These systems can be external
routers, firewalls, or whatever else you want; you specify their roles with tags. If the system is part of a
rack topology, we call it a generic system. If the system is not part of a rack topology, we call it an
external generic system. This page shows you a couple of ways to add external generic systems.
2. Select the node check box to see the operations available for that node (and that you have
permissions for).
NOTE: You can also get to the selection page from the Nodes view. From the blueprint,
navigate to Staged > Physical > Nodes, click the node name in the table, then click the node
name that appears at the top of the Selection panel (on the right side of the page).
3. Click Add external generic and enter a unique label and (optional) hostname.
4. Select the representation for the new node (none, logical device, or logical device with interface
map), then select it from the drop-down list as applicable. (Selecting a logical device allows you to
define port roles.)
5. Enter the port channel ID min and max (new in 4.2.0). The values in the range are used to allocate PC
IDs for all leafs, spines, and superspines attached to this external generic system. If you leave the
values at zero, any available port-channel may be used. (Prior to Apstra version 4.2.0, all non-default
port channel numbers had to be unique per blueprint. Port channel ranges could not overlap. This
requirement has been relaxed, and now they need only be unique per system.)
6. Enter tags (optional) to identify the role(s) of the new external generic system, then click Next. The
Create Links dialog opens.
7. Select an available port and transformation, then click the Add Link button that turns from gray to
green.
9. Click Create to stage the change and return to the Topology view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
2. Enter a hostname, and if you want to be able to define port roles select a logical device from the
drop-down list.
3. Enter the port channel ID min and max (new in 4.2.0). The values in the range are used to allocate PC
IDs for all leafs, spines, and superspines attached to this external generic system. If you leave the
values at zero, any available port-channel may be used. (Prior to Apstra version 4.2.0, all non-default
port channel numbers had to be unique per blueprint. Port channel ranges could not overlap. This
requirement has been relaxed, and now they need only be unique per system.)
4. Enter tags (optional) to identify the role(s) of the new external generic system.
5. Click Create to stage the changes and return to the Nodes view.
You've created an external generic system that's not yet linked. You can either select the node (leaf,
spine) first then link to the external generic system, or you can select the external generic system first,
then link to a node. See below for links to the procedures.
1. From the blueprint, navigate to Staged > Physical > Topology and select the leaf to connect to the
new access switch.
2. Select the leaf check box to see the operations available for that leaf (and that you have permissions
for).
NOTE: You can also get to the selection page from the Nodes view. From the blueprint,
navigate to Staged > Physical > Nodes, click the leaf name in the table, then click the leaf
name that appears at the top of the Selection panel (on the right side of the page).
3. Click Add access switch and enter a unique label and hostname.
7. Select available ports and transformations, as applicable. The gray Add Link button turns green.
9. Click Create to stage the change and return to the Topology view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
2. Select the node check box to see the operations available for that node (and that you have
permissions for).
When you're ready to activate your changes, commit them from the Uncommitted tab.
2. Click the Add/Remove Tags button and update tags as needed. When you create new tags here they
are added to the blueprint catalog.
3. Click Add/Remove Tags to stage the change and return to the Nodes view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
You can add and update the port channel ID range on generic systems and, as of Apstra version 4.2.0,
external generic systems. You can do this from Topology view or the Nodes view.
CAUTION: Changing port channel range is an invasive operation and may lead to
reassigning existing port channel IDs.
2. To see the current port channel ID range (and other details) hover over the system.
3. Select the check box for the system to see operations available for that system (and that you have
permissions for).
4. Click Update Port Channel ID Range and edit the min and/or max values, as needed. (Prior to Apstra
version 4.2.0, all non-default port channel numbers had to be unique per blueprint. Port channel
ranges could not overlap. This requirement has been relaxed, and now they need only be unique per
system.)
5. Click Update to stage your changes and return to the Topology view.
2. In the table of generic systems and external generic systems, edit the min and/or max port channel
ID values, as needed. (Prior to Apstra version 4.2.0, all non-default port channel numbers had to be
unique per blueprint. Port channel ranges could not overlap. This requirement has been relaxed, and
now they need only be unique per system.)
3. Click Update to stage your changes and return to the Nodes view.
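The relaxed uniqueness rule described above (port channel IDs unique per system rather than per blueprint) can be sketched as a simple validator; the function and system names are illustrative assumptions, not Apstra's implementation:

```python
def validate_port_channels(assignments):
    """assignments maps a system name to its list of port channel IDs.
    IDs must be unique within a system; the same ID may appear on different
    systems (the post-4.2.0 rule), so ranges may overlap across the blueprint.
    """
    errors = []
    for system, ids in assignments.items():
        if len(ids) != len(set(ids)):
            errors.append(f"{system}: duplicate port channel ID")
    return errors

# Overlapping IDs across systems are fine; a duplicate within one system is not
print(validate_port_channels({"generic1": [1, 2], "generic2": [1, 3]}))  # []
print(validate_port_channels({"generic3": [5, 5]}))  # ['generic3: duplicate port channel ID']
```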
When you're ready to activate changes, commit them from the Uncommitted tab.
Edit Hostname (from Selection Panel)
1. From the blueprint, navigate to Staged > Physical > Nodes and select a node name (not the check
box). (You can narrow your search with the drop-down lists for planes, pods, and racks as applicable,
as of Apstra version 4.0.)
2. If it's not already selected, click the Device tab in the Selection panel (on the right). (You can also
access the Selection panel from Staged > Physical > Topology.)
3. Enter a different hostname. (You can also change deploy mode and system ID and access
configuration files from here: rendered, incremental, pristine).
4. Click the Save button to stage the changes.
1. From the blueprint, navigate to Staged > Physical > Nodes and click the Edit server names and
hostnames button (second of three buttons above the nodes view).
2. Make your changes.
3. Click Update to stage the changes and return to the nodes view.
Any associated link names do not automatically update to match the changed server names and/or
hostnames. You can manually "change the link names" on page 131 to match, so that the names align
when you review an updated cabling map.
1. From the blueprint, navigate to Staged > Physical > Nodes and click the Edit generic system names
and hostnames button (second of three buttons above the nodes view).
2. Make your changes.
3. Click Update to stage the changes and return to the nodes view.
Any associated link names do not automatically update to match the changed server names and/or
hostnames. You can manually "change the link names" on page 131 to match, so that the names align
when you review an updated cabling map.
You can change device properties such as name, interface map, ASN, and loopback IP, depending on the
node chosen.
1. From the blueprint, navigate to Staged > Physical > Nodes and select a node name (not the check
box). You can narrow your search with the drop-down lists for planes, pods, racks and access groups,
as applicable.
2. Click the Properties tab in the right panel.
3. You can change device properties such as name (must be changed to a unique name), interface map,
ASN, and loopback IP, depending on the node chosen. The attributes that can be edited have an Edit
button associated with them. Change properties as applicable.
NOTE: If you changed leaf names in a leaf pair, the leaf pair name does not change. You can
manually change the leaf pair name to correspond with the new leaf names. This is especially
useful when assigning leaf pairs when you create virtual networks.
1. From the blueprint, navigate to Staged > Physical > Nodes and select a node name (not the check
box). (You can narrow your search with the drop-down lists for planes, pods, and racks as applicable,
as of Apstra version 4.0.)
3. Click Node's Static Routes to go to Staged > Virtual > Static Routes where you can see that node's
static routes.
Delete Node
1. From the blueprint, navigate to Staged > Physical > Topology and select the node to delete.
2. Select the check box to see the operations available for that node (and that you have permissions
for).
NOTE: You can also get to the selection page from the Nodes view. From the blueprint,
navigate to Staged > Physical > Nodes, click the node name in the table, then click the node
name that appears at the top of the Selection panel (on the right side of the page).
3. Click Delete node to go to its dialog. All links to the system will be deleted, and connectivity templates will be unassigned for you.
4. Click Delete to stage the deletion and return to the Topology view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
Links
IN THIS SECTION
Links (Datacenter) | 90
Links (Datacenter)
IN THIS SECTION
From the blueprint, navigate to Staged > Physical > Links to go to the Links view.
Many link operations are performed from the Topology view, and some can also be performed directly in
the Links view. See the following sections for more information.
Links Example
In this example, each link is assigned a unique /31 subnet from the IP Pool.
That is, the links between spine1 and all leafs (in ascending order) are assigned subnets first.
The links between spine2 and all leafs are then assigned subnets, and so on.
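The assignment order described above can be sketched with Python's `ipaddress` module. The pool, spine, and leaf names below are illustrative, not taken from an actual blueprint:

```python
# Sketch of the fabric-link addressing described above: each spine-leaf
# link gets a unique /31 carved from the IP pool, spine by spine, with
# leafs taken in ascending order. Pool and device names are made up.
import ipaddress

pool = ipaddress.ip_network("203.0.113.0/24")
subnets = pool.subnets(new_prefix=31)  # yields /31s in ascending order

spines = ["spine1", "spine2"]
leafs = ["leaf1", "leaf2", "leaf3"]

assignments = {}
for spine in spines:            # all of spine1's links first, then spine2's
    for leaf in leafs:          # leafs in ascending order
        net = next(subnets)
        a, b = net.hosts()      # a /31 has exactly two usable addresses
        assignments[(spine, leaf)] = (str(a), str(b))

print(assignments[("spine1", "leaf1")])  # first /31 in the pool
```

Running this shows spine1-leaf1 taking the first /31 (203.0.113.0/31) and spine2's links starting only after all of spine1's are assigned.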
1. From the blueprint, navigate to Staged > Physical > Topology and select a node that can connect to a
leaf.
2. Select the node check box to see the operations available for that node (and that you have
permissions for).
NOTE: You can also get to the selection page from the Nodes view. From the blueprint,
navigate to Staged > Physical > Nodes, click the node name in the table, then click the node
name that appears at the top of the Selection panel (on the right side of the page).
4. Select the leaf to link to from the drop-down menu, then select an available port and transformation.
The gray Add Link button turns green.
6. Click Create to stage the change and return to the Topology view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
1. From the blueprint, navigate to Staged > Physical > Topology and select a node that can connect to a
spine.
2. Select the node check box to see the operations available for that node (and that you have
permissions for).
NOTE: You can also get to the selection page from the Nodes view. From the blueprint,
navigate to Staged > Physical > Nodes, click the node name in the table, then click the node
name that appears at the top of the Selection panel (on the right side of the page).
4. Select the spine to link to from the drop-down menu, then select an available port and
transformation. The gray Add Link button turns green.
6. Click Create to stage the change and return to the Topology view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
1. From the blueprint, navigate to Staged > Physical > Topology and select a node that can connect to a
generic system.
2. Select the node check box to see the operations available for that node (and that you have
permissions for).
NOTE: You can also get to the selection page from the Nodes view. From the blueprint,
navigate to Staged > Physical > Nodes, click the node name in the table, then click the node
name that appears at the top of the Selection panel (on the right side of the page).
4. Select an available port, transformation, and the generic system to link to. The gray Add Link button
turns green.
6. Click Create to stage the change and return to the Topology view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
1. From the blueprint, navigate to Staged > Physical > Topology and select a node that can connect to
an external generic system.
2. Select the node check box to see the operations available for that node (and that you have
permissions for).
NOTE: You can also get to the selection page from the Nodes view. From the blueprint,
navigate to Staged > Physical > Nodes, click the node name in the table, then click the node
name that appears at the top of the Selection panel (on the right side of the page).
4. Select an available port, transformation, and the external generic system to link to. The gray Add Link
button turns green.
6. Click Create to stage the change and return to the Topology view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
Do not attempt to create leaf peer links if your platform does not support them. Currently, Junos devices do not support any peer links, and SONiC devices do not support L3 peer links.
1. From the blueprint, navigate to Staged > Physical > Topology and select the MLAG member that
needs a peer link.
2. Select the node check box to see the operations available for that node (and that you have
permissions for).
NOTE: You can also get to the selection page from the Nodes view. From the blueprint,
navigate to Staged > Physical > Nodes, click the node name in the table, then click the node
name that appears at the top of the Selection panel (on the right side of the page).
6. Click Add to stage the change and return to the Topology view. (BGP session is added as applicable.)
When you're ready to activate your changes, commit them from the Uncommitted tab.
Form LAG
1. From the blueprint, navigate to Staged > Physical > Topology and select the node to add as a member
of a LAG.
2. Select the interface check box to see the operations available for that interface (and that you have
permissions for).
• LACP (Active) - actively advertises LACP BPDU even when neighbors do not.
• LACP (Passive) - doesn't generate LACP BPDU until it sees one from a neighbor.
• Static LAG (no LACP) - Static LAGs don't participate in LACP and will conditionally operate in
forwarding mode.
4. Click Update to stage your changes and return to the Topology view.
When you form a LAG, it inherits any connectivity templates assigned on the individual links. The LAG is created, but LACP configuration won't be pushed to the device until connectivity templates are applied.
When you're ready to activate your changes, commit them from the Uncommitted tab.
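A note on how the LAG modes above interact across a link: an LACP bundle negotiates only when at least one side is active, while static LAGs skip LACP entirely. A minimal sketch of that pairing logic (illustrative, not Apstra code):

```python
# Illustrative pairing logic for the LAG modes described above. An LACP
# bundle needs at least one active side to send BPDUs first; static LAGs
# don't speak LACP at all, so both ends must be static.

def lag_forms(mode_a: str, mode_b: str) -> bool:
    modes = {mode_a, mode_b}
    if "static" in modes:          # static never negotiates LACP
        return modes == {"static"}
    return "active" in modes       # passive + passive never initiates

print(lag_forms("active", "passive"))   # True
print(lag_forms("passive", "passive"))  # False
print(lag_forms("static", "static"))    # True
```

This is why LACP (Passive) on both ends of a link leaves the bundle down: neither side ever sends the first BPDU.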
You can add a link between a LAG and a generic system (new in Apstra version 4.2.0).
1. From the blueprint, navigate to Staged > Physical > Topology and select a leaf that is part of a LAG.
(Alternatively, you can click the leaf name from the Nodes table at Staged > Physical > Nodes.)
2. Check the Show Unused Ports check box, select a LAG interface and one of the unused port
interfaces, then click Create link in LAG.
4. Dual-attached links to leaf groups (evpn-esi) must be symmetric. Add the second link, as applicable.
5. Click Create to create the link and return to the Topology view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
Break LAG
It’s common to break a LAG towards a server into individual links, then reform the LAG from individual
links, all while keeping the same VLAN allocation (when re-bootstrapping the server for example). You
can break a LAG while preserving any assigned connectivity templates.
1. From the blueprint, navigate to Staged > Physical > Topology and select the node with the LAG to
break.
2. Select the interface check boxes for the LAG (or click the port-channel representation) to see the
operations available for those interfaces (and that you have permissions for).
3. Click Break LAG to go to its dialog with details on the LAG to break.
4. Click Break to stage your changes and return to the Topology view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
1. From the blueprint, navigate to Staged > Physical > Topology and select the MLAG member that
needs an updated link LAG mode.
2. Select the interface check box to see the operations available for that interface (and that you have
permissions for).
3. Click Update LAG mode and select the new LAG mode:
• LACP (Active) - actively advertises LACP BPDU even when neighbors do not.
• LACP (Passive) - doesn't generate LACP BPDU until it sees one from a neighbor.
• Static LAG (no LACP) - Static LAGs don't participate in LACP and will conditionally operate in
forwarding mode.
4. Click Update to stage your changes and return to the Topology view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
IN THIS SECTION
2. Select the interface check box to see the operations available for that interface (and that you have
permissions for).
4. Click Update to update link tags and return to the Selection view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
2. Click Add/Remove Tags to see tags that are in the blueprint catalog.
3. Select existing tag(s) or create new one(s); new tags are applied to the link and added to the blueprint catalog.
4. Click Update Tags to update the tags and return to the Links view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
2. Click the Add/Remove Tags button and update tags as needed. When you create new tags here they
are added to the blueprint catalog.
3. Click Add/Remove Tags to stage the change and return to the Links view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
IN THIS SECTION
To change link speeds between spine-leaf and superspine-spine links, you must "change the rack" on page 157.
1. From the blueprint, navigate to Staged > Physical > Topology and select the node where you want to
change link speed.
2. Select the interface check box to see the operations available for that interface (and that you have
permissions for).
3. Click Update link speed and select the new link speed from the drop-down list. Only speeds that are
available for that link/interface are listed (as of Apstra version 4.2.0).
4. Click Update to stage your changes and return to the Topology view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
2. Select new link speeds for one or more links from the drop-down lists. Only speeds that are available
for that link/interface are listed.
3. Click Update to stage the changes and return to the Links view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
If you have changed server names and/or hostnames for switches, any associated link names do not automatically update to match. This may cause confusion when reviewing an updated cabling map in the Uncommitted tab. You can change link names to match your other name changes. You can also change the link IP for endpoints from here.
1. From the blueprint, navigate to Staged > Physical > Links and click the name of the link to change.
2. Go to the Properties tab in the right panel.
3. Depending on the link chosen, you can change link properties such as name and Link IP for
endpoints. The attributes that can be edited have an Edit button associated with them. Change
properties as applicable.
When you change the link IP for an endpoint, you must remove the link IP from the other endpoint first. Otherwise, you will get the validation error "User-specified link IPv4 addresses not in the same subnet". When you assign a new link IP to an endpoint, the link IP for the other endpoint is automatically assigned from the same subnet.
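That validation can be illustrated with a short check using Python's `ipaddress` module (an illustrative sketch, not Apstra's actual implementation):

```python
# Sketch of the validation described above: both endpoint addresses of a
# link must fall in the same subnet, or Apstra raises
# "User-specified link IPv4 addresses not in the same subnet".
import ipaddress

def same_subnet(ip_a: str, ip_b: str) -> bool:
    """Return True if both interface addresses share one subnet."""
    a = ipaddress.ip_interface(ip_a)
    b = ipaddress.ip_interface(ip_b)
    return a.network == b.network

print(same_subnet("10.0.0.0/31", "10.0.0.1/31"))  # True: same /31
print(same_subnet("10.0.0.0/31", "10.0.0.2/31"))  # False: different /31s
```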
IN THIS SECTION
You can delete links from the Neighbors view or the Links view of a selection in a blueprint.
2. From the Neighbors view, select the node check box to see the operations available for that node
(and that you have permissions for).
3. Click Delete Link to go to its dialog and review deletion details. Any connectivity templates that are
applied on the link will be unassigned.
4. Click Delete to stage the deletion and return to the Neighbors view of the selected node.
When you're ready to activate your changes, commit them from the Uncommitted tab.
• Select one or more links in the left column and click the Delete button above the table.
• Click the Delete button in the right column for the one link to delete.
4. Review deletion details in the dialog that opens. Any connectivity templates that are applied on the
link(s) will be unassigned.
5. Click Delete to stage the deletion and return to the Links view of the selected node.
When you're ready to activate your changes, commit them from the Uncommitted tab.
Data center technicians may find a printed cabling map useful when wiring in switches, or remote
network operators may find it useful for viewing IP assignments. It's available in CSV and JSON formats.
You can copy the contents or download the file to your local computer.
1. From the blueprint, navigate to Staged > Physical > Links and click the Export cabling map button
(second of five buttons above the links list), then select JSON or CSV.
2. Click Copy to copy the contents or click Save As File to download the file.
3. When you've copied or downloaded the cabling map, close the dialog to return to the Links view.
NOTE: You can also export cabling maps from Active > Physical > Links.
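A downloaded JSON cabling map can be post-processed with a few lines of scripting, for example to list IP assignments for remote operators. A sketch follows; the field names ("links", "endpoints", "system", "interface", "ip") are hypothetical, so inspect your exported file for the actual schema:

```python
# Sketch of post-processing an exported cabling map. The JSON structure
# below is illustrative only, not the real Apstra export schema.
import json

exported = """
{"links": [
  {"endpoints": [
     {"system": "spine1", "interface": "et-0/0/0", "ip": "10.0.0.0/31"},
     {"system": "leaf1", "interface": "et-0/0/48", "ip": "10.0.0.1/31"}]}
]}
"""

cabling = json.loads(exported)
rows = []
for link in cabling["links"]:
    ends = " <-> ".join(
        f'{e["system"]}:{e["interface"]} ({e["ip"]})' for e in link["endpoints"]
    )
    rows.append(ends)
    print(ends)
```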
1. From the blueprint, navigate to Staged > Physical > Links and click the Import cabling map button
(first of five buttons above the links list).
2. Either click Choose File and navigate to the file on your computer, or drag and drop the file onto the
dialog window.
3. Click Import to import the cabling map and return to the links view.
IN THIS SECTION
Situations when you might want to edit the cabling map include specifying a different port from the one that the Apstra cabling algorithm selected. When editing:
• You can use Batch clear override to clear all Interface and IPv4/IPv6 values for a specific device type.
• To drop the override for either an interface name or IPv4/IPv6 address, submit an empty value in the corresponding field.
3. Click Update to stage your changes and return to the Links view.
Next Steps:
When you're ready to activate your changes, commit them from the Uncommitted tab.
4. From the cabling map (Staged > Physical > Links) click the Import cabling map button to see the
dialog for importing a cabling map.
5. Either click Choose File and navigate to the revised file on your computer, or drag and drop the file
onto the dialog window.
6. Click Import.
Next Steps:
When you're ready to activate your changes, commit them from the Uncommitted tab.
If you've already cabled up your devices, you can have Apstra discover your existing cabling instead of
using the cabling map prescribed by Apstra. All system nodes in the blueprint must have system IDs
assigned to them.
1. From the blueprint, navigate to Staged > Physical > Links and click the Fetch discovered LLDP data
button (fifth of five buttons above links list).
2. If staged data is identical to LLDP discovery results, you will see a message with that statement. Your
actual cabling matches the Apstra cabling map. No further action is needed.
3. If staged data is different from LLDP discovery results, the message includes the number of links that
are different.
4. Scroll to see details of the diffs (in red), or check the Show only links with LLDP diff? checkbox to see
only the differences.
5. To accept the changes and update the map to match LLDP data, click Update Staged Cabling Map from LLDP. You might also need to reset resource group overrides.
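The comparison described above, staged cabling versus what LLDP actually discovered on the wire, can be sketched as follows. The tables are illustrative; Apstra's internal data model differs:

```python
# Sketch of diffing the staged cabling map against LLDP discovery results.
# Keys are (device, interface); values are the expected neighbor.

staged = {
    ("leaf1", "et-0/0/1"): ("spine1", "et-0/0/0"),
    ("leaf1", "et-0/0/2"): ("spine2", "et-0/0/0"),
}
discovered = {
    ("leaf1", "et-0/0/1"): ("spine1", "et-0/0/0"),
    ("leaf1", "et-0/0/2"): ("spine2", "et-0/0/1"),  # cabled to a different port
}

# Links whose staged neighbor differs from the discovered neighbor
diffs = {
    port: (staged[port], discovered.get(port))
    for port in staged
    if staged[port] != discovered.get(port)
}
print(f"{len(diffs)} link(s) differ from the staged cabling map")
```

If `diffs` is empty, the actual cabling matches the staged map and no further action is needed; otherwise each entry shows the staged versus discovered neighbor.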
Interfaces
IN THIS SECTION
Interfaces Introduction
SUMMARY
The interfaces table in the blueprint lists interface details. You can add tags to interfaces and port channels, and you can administratively disable and enable interfaces from the Apstra GUI.
Interface details are listed in an interfaces table in the blueprint. To go to the table, navigate to Staged >
Physical > Interfaces. From the table, you can access additional details about the associated system
node, links and the interface itself. You can tag physical interfaces and aggregated logical interfaces (port
channels). Tags become part of the graph, which means you can use them for configuration.
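Because tags live in the graph, automation can select interfaces by tag when rendering configuration. A minimal sketch of that idea (the data shape below is illustrative, not the Apstra graph schema):

```python
# Sketch of why graph-resident tags matter: configuration logic can filter
# interfaces by tag instead of hard-coding interface names. Names and tags
# below are made up.
interfaces = [
    {"name": "xe-0/0/1", "tags": {"storage"}},
    {"name": "xe-0/0/2", "tags": {"storage", "backup"}},
    {"name": "xe-0/0/3", "tags": set()},
]
storage_ifaces = [i["name"] for i in interfaces if "storage" in i["tags"]]
print(storage_ifaces)  # ['xe-0/0/1', 'xe-0/0/2']
```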
You can see interface tags from various locations in the GUI. The interfaces table includes a column for
tags, as shown above. You can also see interface tags when you hover over an interface in the topology
view (along with the admin state and other information).
The device context also includes interface tag information. (Select a node, then from the Device tab on
the right, at the bottom, click Device Context.)
You can administratively disable and enable interfaces from the Apstra GUI. To show only the interfaces
in the table that you've disabled or enabled, select the state from the Interface Admin State drop-down
list in the Links filter.
RELATED DOCUMENTATION
IN THIS SECTION
Update from Topology Neighbors View | 144
Update from Topology Interfaces View | 145
SUMMARY
You can add tags to interfaces and use them for configuration.
You can manage interface tags from various locations in the Apstra GUI.
Update from Topology Neighbors View
4. Select or deselect existing tags from the drop-down lists and/or add new tags, as needed.
5. Click Add/Remove Tags to stage the tag changes and return to the interfaces table.
To deploy the change to the active blueprint, commit from the Uncommitted tab.
Update from Topology Interfaces View
1. From the blueprint, navigate to Staged > Physical > Topology and select the leaf with the interface to
tag.
2. Click Interfaces view, select one or more check boxes for the interfaces to tag, then click the Add/
Remove Tags button that appears above the table.
3. Select or deselect existing tags from the drop-down lists and/or add new tags, as needed.
4. Click Add/Remove Tags to stage the tag changes and return to the interfaces table.
To deploy the change to the active blueprint, commit from the Uncommitted tab.
Update from Interfaces Table
1. From the blueprint, navigate to Staged > Physical > Interfaces and select one or more check boxes for
the interfaces to tag.
2. Click the Add/Remove Tags button that appears above the table.
3. Select or deselect existing tags from the drop-down lists and/or add new tags, as needed.
4. Click Add/Remove Tags to stage the tag changes and return to the interfaces table.
To deploy the change to the active blueprint, commit from the Uncommitted tab.
SEE ALSO
SUMMARY
You can add tags to aggregated logical interfaces (port channels) and use them for configuration.
1. From the blueprint, navigate to Staged > Physical > Topology and select the leaf with the port
channel to tag.
4. Select or deselect existing tags from the drop-down lists and/or add new tags, as needed.
5. Click Add/Remove Tags to stage the tag changes and return to the interfaces table.
To deploy the change to the active blueprint, commit from the Uncommitted tab.
RELATED DOCUMENTATION
SUMMARY
1. From the blueprint, navigate to Staged > Physical > Topology and select the leaf that's connected to
the applicable generic system or external generic system.
2. Select the check box for the applicable interface, then click Disable interface to stage the change.
To deploy the change to the active blueprint, commit from the Uncommitted tab.
RELATED DOCUMENTATION
SUMMARY
1. From the blueprint, navigate to Staged > Physical > Topology and select the leaf that's connected to
the applicable generic system or external generic system.
2. Select the check box for the applicable interface, then click Enable interface to stage the change.
To deploy the change to the active blueprint, commit from the Uncommitted tab.
RELATED DOCUMENTATION
Racks
IN THIS SECTION
Racks (Datacenter)
From the blueprint, navigate to Staged > Physical > Racks to go to the Racks view.
• You can filter racks to show all, selected only, or unselected only.
You can control the growth of your network by adding, editing and deleting complete racks in a running blueprint. This flexible fabric expansion (FFE) feature is supported on both 3-stage and 5-stage Clos networks. (In 5-stage topologies, you can also "add and remove pods" on page 160, and (as of version 4.0.1) "increase the number of superspines per plane" on page 173; you cannot, however, add or remove planes themselves.) You can also "change rack names" on page 156.
Rack types are embedded into blueprints from the global catalog. The rack type in the global catalog and
the blueprint are initially the same. When you use FFE operations (for example to change link speeds,
add generic systems or add/remove links) the rack type is modified and its timestamp is updated. The
rack type name in the global catalog and the blueprint are still the same, but their contents are now
different from each other.
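The drift described above can be pictured with a minimal sketch: the blueprint embeds its own copy of the rack type, so FFE edits change the embedded copy while the name still matches the global catalog entry. The rack type name and fields below are made up:

```python
# Sketch of rack type drift between the global catalog and a blueprint.
# After an FFE operation (here, adding generic systems), the same-named
# rack types no longer have the same contents.

global_catalog = {"rack-10g": {"generic_systems": 4, "link_speed": "10G"}}
blueprint = {"rack-10g": {"generic_systems": 6, "link_speed": "10G"}}  # FFE edit

name = "rack-10g"
consistent = global_catalog[name] == blueprint[name]
print(f"'{name}' consistent with global catalog: {consistent}")
```

An inconsistency like this is why the export-to-global-catalog operation can be blocked, as described below.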
You may want to use your own rack naming schema (for example, your rack names could be based on
their physical locations). In these cases you can modify the existing rack names.
1. From the blueprint, navigate to Staged > Physical > Racks and select the rack that you want to
change.
2. In Rack Properties (right panel) click the Edit button for the rack name.
3. Change the name to a unique one and click the Save button to stage the change.
NOTE: You can also change rack names from the active blueprint.
Add Rack
The easiest and fastest way to expand your network is to add a rack.
1. From the blueprint, navigate to Staged > Physical > Racks and click the Add Racks button (+).
2. If your blueprint is for a 5-stage topology, select the pod that needs a rack.
3. From the Rack Type drop-down list, select a rack type to preview and validate. (To go to a different
preview, select a different rack type.)
4. Enter the number of racks to add.
5. If you uncheck Keep existing cabling in the fabric after change, port assignments are re-calculated
and you may need to re-cable. When in doubt, leave this box checked.
6. Click Add to stage the rack addition and return to the table view.
7. "Assign device profiles" on page 36 and "system IDs" on page 37 (serial numbers) to the new rack(s).
8. Commit the changes to your blueprint to configure the rack(s) and complete the fabric expansion.
Next Steps:
To assign virtual networks to your new rack, see "Assign / Unassign Virtual Networks" on page 187. You
can assign many VNs at the same time to one or more nodes.
If you can't make certain changes directly in the blueprint rack, you can export the rack type to the
global catalog and update it there.
1. From the blueprint, navigate to Staged > Physical > Racks and click the Export rack to global catalog
button (first of three buttons).
NOTE: If the rack type is inconsistent with the same-named one in the global (design) catalog,
you won't be able to export the rack type. Rack types are embedded in blueprints from the
global catalog. When you use Flexible Fabric Expansion (FFE) operations (for example to
change link speeds, add generic systems or add/remove links) the blueprint rack type is
modified. The rack type name in the global catalog and the blueprint are still the same, but
their contents are now different from each other. When rack types are inconsistent, you can
create a rack type in the global catalog that meets your new requirements.
Next Steps: From the left navigation menu, navigate to Design > Rack Types and edit the rack type in
the global catalog. (Or, if you couldn't export the rack type, create one that meets your new
requirements.) Then from the blueprint, "Update the rack" on page 157 to use the revised (or new) rack
type from the global catalog.
Edit Rack
You can change running racks while preserving many rack characteristics (such as leaf/server/link names
and virtual network (VN) endpoints if labels have not changed). To edit a rack, you export its rack type to
the global catalog with a unique name, update that rack type in the global catalog, then, in the blueprint,
select the updated rack type to replace the one in the blueprint.
VN endpoints remain as long as the server and link labels between the old and new rack type are the
same.
CAUTION: If it's not possible to retain VN endpoints, you must re-assign them. Review
pending changes on the Uncommitted tab before committing. If you don't want to
commit the changes, you can revert them.
NOTE: If you don't need to retain rack details, we recommend that you "delete the rack" on page
158 and "add a replacement rack" on page 156, instead of editing the rack.
1. Ensure that the global catalog or the blueprint includes a suitable rack type for replacement.
2. From the blueprint, navigate to Staged > Physical > Racks and click the Edit button for the rack to
edit (second of three buttons).
3. From the New Rack Type drop-down list, select the required rack type.
4. If you added new devices, "assign device profiles" on page 36 and "system IDs" on page 37 (serial
numbers) to them.
5. You have the option of reviewing the Incremental Config to see the changes that will be pushed to
the device(s). If devices were assigned, a full config push is performed.
6. Commit the changes to the blueprint to push all required configuration changes to the devices in the
modified rack.
RELATED DOCUMENTATION
Delete Rack
Before deleting a rack that has live traffic on it, you may want to take its devices out-of-service by
draining them. For information, see "Drain Device Traffic" on page 539.
1. To delete a rack from the blueprint, navigate to Staged > Physical > Racks and click the Delete button
for the rack to delete (third of three buttons).
• If you will be adding a rack back into your system, leave the Keep existing cabling in the fabric
after change box checked.
• If you will not be replacing the rack in your system, uncheck the Keep existing cabling in the fabric
after change box. Otherwise, the intent will not match the actual topology anymore, and you will
encounter anomalies, such as for cabling and BGP.
2. Click Delete Rack to stage the deletion and return to the table view.
3. Commit the changes to the blueprint. Configuration on any running devices will be erased and the
devices will be ready to be decommissioned.
Pods
IN THIS SECTION
Pods (Datacenter)
From the blueprint, navigate to Staged > Physical > Pods to go to the Pods view.
From the Pods view, you can view pod capacity and change pod names. 3-stage topologies can have
only one pod. If your topology is for 5-stage, you can add and remove entire pods. The ability to add
pods to your running blueprint allows for organic growth of large networks without having to pre-design
every pod. For more information about building 5-stage topologies, see "5-stage Clos Architecture" on
page 946.
See the following sections for more information about adding, editing and deleting pods.
You can add pods to 5-stage topologies, but not to 3-stage topologies.
1. From the blueprint, navigate to Staged > Physical > Pods, and click the Add Pods button (+) (center-
left). (This button is disabled on 3-stage topologies.)
2. From the Pod Type drop-down list, select a pod type to preview and validate. To go to a different
preview, select a different pod type.
RELATED DOCUMENTATION
1. From the blueprint, navigate to Staged > Physical > Pods and click the pod name to change.
2. In Pod Properties (right panel) click the Edit button for the name.
3. Change the name and click the Save button to stage the change.
4. Commit the changes to your blueprint to activate the name change.
RELATED DOCUMENTATION
As a Day 2 operation, you can add spines per pod on both 3-stage and 5-stage blueprints.
CAUTION: Plan carefully. After you've added spines, you won't be able to remove them.
Make sure you have enough ports with specific roles and speeds for additional spine(s).
3. In the Count field, enter the total number of spines you want:
• On 5-stage blueprints, the number of spines must be a multiple of the number of superspine planes.
4. Click Update to stage your changes and return to the Pods view.
When you're ready to activate changes, commit them from the Uncommitted tab.
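The count rule above can be expressed as a small check. This is an illustrative sketch with made-up numbers, not Apstra's validation logic:

```python
# Illustrative check of the rule above: spines can only be added (never
# removed), and on 5-stage blueprints the total spine count must be a
# multiple of the superspine plane count.

def valid_spine_count(current: int, requested: int, planes: int) -> bool:
    if requested < current:           # spines cannot be removed later
        return False
    return requested % planes == 0    # multiple of the plane count

print(valid_spine_count(current=4, requested=6, planes=2))  # True
print(valid_spine_count(current=4, requested=5, planes=2))  # False: not a multiple
print(valid_spine_count(current=4, requested=2, planes=2))  # False: removal
```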
RELATED DOCUMENTATION
As a Day 2 operation, you can add links per superspine on 5-stage blueprints.
1. From the blueprint, navigate to Staged > Physical > Pods.
2. Click the Update spine config button on the bottom-right of the card for the pod to change.
3. In the Link per superspine field, enter the total number of links you want between spines and
superspines. You can only add links. Plan carefully. After you add links, you won't be able to remove
them later.
4. Click Update to stage your changes and return to the Pods view.
When you're ready to activate changes, commit them from the Uncommitted tab.
RELATED DOCUMENTATION
As a Day 2 operation, you can change the link speed between spines and superspines on 5-stage
blueprints.
Make sure link speed is supported on the links / ports (speeds must be part of the port transformations).
1. From the blueprint, navigate to Staged > Physical > Pods.
2. Click the Update spine config button on the bottom-right of the card for the pod to change.
3. In the Link per superspine speed drop-down list, select the new link speed.
4. Click Update to stage your changes and return to the Pods view.
When you're ready to activate changes, commit them from the Uncommitted tab.
RELATED DOCUMENTATION
As a Day 2 operation, you can increase capabilities with a different spine logical device on both 3-stage and 5-stage blueprints. (On 5-stage topologies, you can also "change the superspine logical device" on page 176.) Changes affect the entire pod, not just a node. Based on the change, this could be disruptive.
3. From the Spine Logical Device drop-down list, select a different logical device.
4. Click Update to stage your changes and return to the Pods view.
Build errors appear because interface maps need to be assigned.
5. Click the Device Profiles tab in the right panel and assign interface maps, as needed.
RELATED DOCUMENTATION
Delete Pod
When you delete a pod, all of its devices are removed from the blueprint; this could be highly impactful.
Before deleting a pod that has live traffic on it, you may want to take its devices out-of-service by
draining them. For more information, see the "Drain Device Traffic" on page 539 page.
3. Click the Delete button (trash can) for the pod(s) to delete.
4. Click Delete Pod to stage the deletion and return to the table view.
5. Commit the changes to your blueprint. Configuration on any running devices is erased and the
devices are ready to be decommissioned.
RELATED DOCUMENTATION
Planes
IN THIS SECTION
Planes (Datacenter)
Planes are groups of superspines in 5-stage blueprints. Every 5-stage topology has at least one plane.
As a Day 2 operation, you can add superspines to planes in 5-stage Clos networks. The maximum
number of superspines is limited by the number of available spine ports of type superspine. When you
add superspines, additional superspine nodes are created with the same logical devices that are used in
the existing blueprint template. You must manually "assign the interface maps for the device profiles" on
page 36 of each new node. When the devices are physically ready, you can "assign" on page 37 each node its corresponding system ID (serial number). When you commit pending changes, the superspines are configured and become part of the control and data plane, forwarding traffic between pods.
You can also change the superspine logical device on planes to add or update superspine port capacity on 5-stage blueprints. This change applies to all planes (not per plane) and, depending on the change, could be disruptive. Changing the logical device requires that you specify a different interface map, and possibly a new device profile.
From the blueprint, navigate to Staged > Physical > Planes to go to the Planes view.
As a Day 2 operation, you can add superspines per plane on 5-stage blueprints.
1. From the 5-stage blueprint, navigate to Staged > Physical > Planes and click the Change number of
superspines per plane button.
2. In the Superspines per plane field, enter the total number of superspines you want. You can only add
superspines per plane. Plan carefully. After you add superspines, you won't be able to remove them
later.
3. Click Update to stage your changes and return to the Planes view.
When you're ready to activate changes, commit them from the Uncommitted tab.
As a Day 2 operation, you can change the superspine logical device on planes to add or update superspine port capacity on 5-stage blueprints. This change applies to all planes (not per plane) and, depending on the change, could be disruptive. Changing the logical device requires that you specify a different interface map, and possibly a new device profile.
1. From the 5-stage blueprint, navigate to Staged > Physical > Planes and click the Change number of
superspines per plane button.
2. Select a different logical device from the Superspine Logical Device drop-down list.
3. Click Update to stage your changes and return to the Planes view.
Build errors appear because interface maps need to be assigned.
4. Click the Device Profiles tab in the right panel and assign interface maps, as needed.
Virtual
IN THIS SECTION
Statistics | 289
Virtual Networks
You can create an overlay network in an Apstra blueprint by creating virtual networks (VNs) to group physically separate endpoints into logical groups. These collections of Layer 2 forwarding domains are either VLANs or VXLANs.
• Can deploy in Layer 2-only mode (for example, isolated cluster networks for database replication)
• Can deploy with Layer 3 gateway (SVI) IP address on rack leaf, hosted with or without first-hop
redundancy
• The control plane selected (Static VXLAN Routing or MP-EBGP EVPN) when configuring the
template for your blueprint determines what is configured in the VN. (MP-EBGP EVPN provides a
control plane for VXLAN routing.)
• VXLAN-EVPN capabilities for VXLAN VNs depend on network device makes and models. For more information, see the Apstra EVPN Support Addendum.
For complete VN feature compatibility for supported Network Operating Systems (NOS), see the Apstra
Feature Matrix for the applicable release (in the Reference section). For detailed capability information
for a device, contact your network device vendor or "Juniper Support" on page 893.
Name Description
Routing Zone • VLAN - default routing zone only (used for the underlay network)
Name Description
Default VLAN ID (VLAN only)
• Layer 2 VLAN ID on the switch that the VN is assigned to.
• If left blank, it's auto-assigned from a static pool (2-4094).
• If you assign it, we don't recommend assigning VLAN ID 1 for active VNs.
• Arista reserves 1006-4094 for internal VLANs for routed ports. You can modify the reserved VLAN ID range with the EOS vlan internal allocation policy configuration command. You can apply it to all EOS devices using a SYSTEM configlet before configuring and deploying VNs.
• Using reserved VLAN IDs may cause deployment errors, but not build errors.
VNI(s) (VXLAN only)
Layer 2 VXLAN ID on the switch that the VN is assigned to. If left blank, it's auto-assigned from resource pools. Create up to 40 VNs at once by entering ranges or individual VNI IDs separated by commas (for example: 5555-5560, 7777). Commit the first 40 VNs before creating additional ones.
DHCP server
Enabled/Disabled - DHCP relay forwarder configuration on the SVI. Implies L3 routing on the SVI.
Name Description
IPv4 subnet (if connectivity is enabled)
• IPv4 subnet (for example: 192.168.100.0/24) (can't be used when batch-creating VNs)
• IPv4 CIDR length - automatically assigns a subnet with the specified length (for example: /26)
• If left blank, a /24 subnet is auto-assigned from resource pools
IPv6 Connectivity
Enabled/Disabled - IPv6 connectivity for SVI routing. You must enable IPv6 in the blueprint. If the template uses IPv4 spine-to-leaf link types, you can't use IPv6 in the default routing zone or for VLAN-type VNs.
• If left blank, a /64 subnet is auto-assigned from resource pools.
• If assigned automatically, the IP is derived from the assigned VN's SVI pools.
L3 MTU Default value is from Virtual Network Policy. You can update the value here for these
specific virtual networks.
Assigned to The racks that the VN is assigned to. For more information, see table below.
Assigned To Description
Details
Pod Name (5-stage)
5-stage Clos networks include pods, and you can select leaf devices within each pod to extend VNs to those devices.
Bound to The racks assigned. For MLAG racks, the leaf pair is shown. For VLANs, if more than one rack
is selected, multiple rack-local VLAN-based VNs are created.
Link Labels Label assigned to rack (for example, ext-link-1, single-link, single-link, ext-link-0)
• VN of type VXLAN:
• Some NOS types automatically allocate unicast IPv4 addresses when an anycast IPv4 gateway is present (Junos when in an ESI pair).
• If a NOS type forbids co-existence of an anycast IPv4 address with a unicast IPv4 address, a blueprint error is raised (SONiC).
• VN of type VLAN - All NOS types require unicast IPv4 addresses when the IPv4 anycast address is enabled.
• If a NOS type forbids co-existence of an anycast IPv4 address with a unicast IPv4 address, a blueprint error is raised.
• Permits you to manually create an optional unicast IPv4 address for purposes such as BGP peering or static routing.
IPv4 Address / IPv6 Address
You can set the first-hop-redundancy IP address for the SVI (VRRP, VARP and so on). If left blank, the SVI IP address is assigned from the selected pool. When you bind an EVPN connectivity template to a Layer 2 application point, the SVI IP address is used as the source / destination for the BGP session, static routes and so on.
From the blueprint, navigate to Staged > Virtual > Virtual Networks to go to the virtual network table
view. You can create, edit, import, export, and delete virtual networks.
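The VNI field described above accepts comma-separated values and inclusive ranges (for example, 5555-5560, 7777), with a limit of 40 VNs per batch. As a rough sketch of how such an entry expands into individual VNIs (the parser and the batch check below are illustrative, not Apstra's actual implementation):

```python
def expand_vnis(entry: str, batch_limit: int = 40) -> list[int]:
    """Expand a VNI entry like "5555-5560, 7777" into individual IDs."""
    vnis = []
    for part in entry.split(","):
        part = part.strip()
        if "-" in part:
            start, end = (int(p) for p in part.split("-"))
            vnis.extend(range(start, end + 1))  # ranges are inclusive
        elif part:
            vnis.append(int(part))
    if len(vnis) > batch_limit:
        raise ValueError(f"{len(vnis)} VNs requested; commit the first "
                         f"{batch_limit} before creating additional ones")
    return vnis

print(expand_vnis("5555-5560, 7777"))
# [5555, 5556, 5557, 5558, 5559, 5560, 7777]
```

An entry that expands to more than 40 VNIs raises an error, mirroring the guide's instruction to commit the first 40 VNs before creating additional ones.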
3. Select the "routing zone" on page 199 to associate with the VN(s). (VLANs must use the default
routing zone.)
4. If you're creating VLANs, you can specify the default VLAN ID(s) or leave it blank to automatically
assign it from a resource pool.
5. If you're creating VXLANs, you can specify VNIs or leave it blank to automatically assign it from a
resource pool.
6. If you're creating VXLANs and you enter a VLAN ID (on leaf devices), you can select the check box
to Reserve across blueprint. This enforces the same rule across the fabric and helps you to honor
the same VLAN policy across racks when adding new racks.
7. If you enable DHCP Service, enter a subnet. A DHCP relay forwarder is configured on the SVI. This
option also implies Layer 3 routing on this SVI. (You assign the DHCP server in the routing zone.)
8. If you enable IPv4 Connectivity, enter a subnet, unless you're batch creating VNs. Then enter an
IPv4 CIDR length, or leave subnet blank to allow auto-assignment.
9. If you enable Virtual Gateway IPv4, enter an IPv4 address.
10. If IPv6 is enabled in the blueprint (Policies > Fabric Addressing Policy), and you enable IPv6
Connectivity, enter a subnet, unless you're batch creating VNs. Then enter an IPv6 CIDR length, or
leave subnet blank to allow auto-assignment.
11. If you enable Virtual Gateway IPv6, enter an IPv6 address.
12. To create connectivity templates for the VN(s), check the box for Tagged and/or Untagged, as
applicable.
13. To override the default MTU value, enter a value for L3 MTU.
14. Select and configure racks to assign to the VN. See Virtual Networks on page 181 overview for
details.
15. Click Create to stage the VN and return to the table view.
16. Assign IPv4 (IPv6) resources for SVI subnets. Navigate to Staged > Virtual > Virtual Networks and
"assign resources" on page 33 in the Build panel (right-side).
17. For VXLAN only: Assign VTEP IPs. Navigate to Staged > Virtual > Virtual Networks and assign
resources in the Build panel (right-side). (You can display the VTEPs list in the nodes table (Staged >
Physical > Nodes). Select the type of VTEP to display from the Columns drop-down list (above the
table).)
• Single Leaf Nodes require one VTEP IP and an anycast VTEP IP for all switches in the VN.
• MLAG Leaf-pair Nodes require a common VTEP IP for the leaf-pair and an anycast VTEP IP for
all switches in the VN.
18. To deploy changes to the active blueprint, click the Uncommitted tab to review and commit (or
discard) changes.
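In step 8, specifying an IPv4 CIDR length auto-assigns a subnet of that length from a resource pool. The carving itself can be pictured with Python's ipaddress module; the pool network and in-use set here are made-up examples, not Apstra internals:

```python
import ipaddress

def next_free_subnet(pool: str, prefix_len: int, in_use: set[str]) -> str:
    """Return the first subnet of the given length in the pool not already assigned."""
    for net in ipaddress.ip_network(pool).subnets(new_prefix=prefix_len):
        if str(net) not in in_use:
            return str(net)
    raise RuntimeError("pool exhausted")

# The first /26 in 192.168.100.0/24 is taken, so the next one is returned.
print(next_free_subnet("192.168.100.0/24", 26, {"192.168.100.0/26"}))
# 192.168.100.64/26
```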
7. Click Import to import the virtual networks, stage the changes, and return to the table view.
Next Steps:
You can assign resources, release previously used resources and go to resource pool management from
the virtual build panel. The resource assignment section has a convenient shortcut button, Manage
resource pools, that takes you to resource pool management. From there, you can monitor resource
usage and create additional resource pools, as needed.
2. Red status indicators mean that resources need to be assigned. Click a red status indicator, then click
the Update assignments button.
3. Select a pool from which to pull the resources, then click the Save button. The required number of
resources are automatically assigned to the resource group. When the red status indicator turns
green, the resource assignment has been successfully staged.
You can assign (and unassign) multiple VXLAN virtual networks at the same time from the Apstra GUI.
• Unassign the VN from one or more nodes by deselecting the applicable node check box(es).
3. Click Update to stage the changes and return to the table view.
3. In the dialog that opens, you can see the associated routing zone, VN type and VN ID by hovering
over the VNs that are already assigned.
4. Your selected VXLANs appear above the table on the left. The table shows the VNs that are already
assigned to nodes in the network. Select the check boxes for one or more nodes. The Bulk assign
VXLANs and Bulk unassign VXLANs buttons become available.
• To assign your selected VXLANs to the nodes you just selected, click the Bulk assign VXLANs
button. The VNs to be assigned turn green.
• To unassign your selected VXLANs that are already assigned to the nodes you just selected, click
the Bulk unassign VXLANs button. The VNs to be unassigned turn red (as shown in the
screenshot example above).
6. Click Assign to stage your changes and return to the table view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
3. Or to export specific virtual networks instead of all of them, check their check boxes, then click the
same button as in the previous step (now called Export selected virtual networks) (new in Apstra
version 4.2.0).
4. Click Copy to copy the contents, or click Save As File to download the file.
5. When you've copied or downloaded the virtual networks, close the dialog to return to the table
view.
6. Paste the contents, or open the CSV file, in a spreadsheet program (such as Google Sheets or
Microsoft Excel).
7. Update virtual networks as needed, then save the file.
8. In the Apstra GUI, navigate to Staged > Virtual > Virtual Networks and click the Import virtual
networks button.
9. Either click Choose File and navigate to the file on your computer, or drag and drop the file onto
the dialog window, or as shown in the screenshot below, directly paste CSV file contents. Virtual
network details are displayed for your review.
10. Click Import to import the virtual networks, stage the changes, and return to the table view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
SUMMARY
You can update many virtual networks quickly by exporting them as a CSV file, updating the file, then
importing the file back into the blueprint.
1. From the blueprint, navigate to Staged > Virtual > Virtual Networks.
2. To export all virtual networks, click the Export all virtual networks button as shown in the screenshot
above.
3. Or to export specific virtual networks instead of all of them, check their check boxes, then click the
same button as in the previous step (now called Export selected virtual networks) (new in Apstra
version 4.2.0).
4. Click Copy to copy the contents or click Save As File to download the file.
5. When you've copied or downloaded the virtual networks, close the dialog to return to the table view.
Next Steps: Update the CSV file (with a spreadsheet program), then import it back into your blueprint.
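The update step can also be scripted instead of done in a spreadsheet. This sketch uses Python's csv module to apply a bulk change; the column names (label, vlan_id) are hypothetical stand-ins — use the headers from your actual exported file:

```python
import csv
import io

# Stand-in for an exported virtual networks CSV; a real export has more columns.
exported = "label,vlan_id\nvn-blue,100\nvn-green,200\n"

rows = list(csv.DictReader(io.StringIO(exported)))
for row in rows:
    # Example bulk edit: prefix every VN label before re-import.
    row["label"] = "prod-" + row["label"]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["label", "vlan_id"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

The resulting text can be saved as a file or pasted directly into the import dialog, as described in the import procedure.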
You can import multiple virtual networks (as a CSV file) into your blueprint. (Tip: First export virtual
networks so you'll have the schema set up for you in the CSV file.)
1. From the blueprint, navigate to Staged > Virtual > Virtual Networks and click the Import virtual
networks button.
2. Either click Choose File and navigate to the file on your computer, drag and drop the file onto the
dialog window, or as shown in the screenshot below, directly paste CSV file contents. Virtual network
details are displayed for your review.
3. Click Import to import the virtual networks, stage the changes, and return to the table view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
When you delete virtual networks, any connectivity templates that are assigned to those virtual
networks are automatically unassigned (as of Apstra version 4.2.0). Those unassigned connectivity
templates become available to be assigned elsewhere, or to be deleted. (In previous versions, you had to
manually find and unassign connectivity templates before you could delete virtual networks.)
2. The Delete Virtual Network dialog that opens shows the virtual network to be deleted. Click the
drop-down triangle to show (or hide) the connectivity templates that will be unassigned.
3. Click Delete to stage the deletion and return to the table view.
NOTE: If you get an error, it's probably because there's a dependency that you need to
remove manually. If a connectivity template refers to an object (like a virtual network
endpoint) that is created by another connectivity template, you need to unassign that
dependent object. Then you can return to step 1.
When you're ready to activate your changes, commit them from the Uncommitted tab.
2. In the Delete Virtual Networks dialog that opens, click the drop-down triangles to show (or hide) the
virtual networks to be deleted and the connectivity templates to be unassigned.
3. Click Delete to stage the deletion and return to the table view.
NOTE: If you get an error, it's probably because there's a dependency that you need to
remove manually. If a connectivity template refers to an object (like a virtual network
endpoint) that is created by another connectivity template, you need to unassign that
dependent object. Then you can return to step 1.
When you're ready to activate your changes, commit them from the Uncommitted tab.
Routing Zones
A routing zone is an L3 domain, the unit of tenancy in multi-tenant networks. You create routing zones
for tenants to isolate their IP traffic from one another, thus enabling tenants to re-use IP subnets. In
addition to being in its own VRF, each routing zone can be assigned its own DHCP relay server and
external system connections. You can create one or more virtual networks within a routing zone, which
means a tenant can stretch its L2 applications across multiple racks within its routing zone. For virtual
networks with Layer 3 SVI, the SVI is associated with a Virtual Routing and Forwarding (VRF) instance
for each routing zone isolating the virtual network SVI from other virtual network SVIs in other routing
zones. If you're using multiple routing zones, external system connections must be from leaf switches in the fabric. Routing between routing zones must be accomplished with external systems. In the default routing zone, all SVIs configured for virtual networks are in the default VRF; this is the same VRF used for the underlay (fabric) routing between network devices. All blueprints include a default routing policy. The number of routing zones is limited only by the network devices being used.
Parameter Description
Route Target Only EVPN routing zones use route targets. The
rendered EVPN L3-VNI route target represents the
built-in, automatic route target that is associated with
the EVPN routing zone VRF. When using EVPN remote
gateway features for Data Center Interconnect, this
route target must be imported by the EVPN fabric
external to this fabric. This route target is composed of
"<VNI_ID>:1" where "1" is hard-coded. If a route target is not assigned, then a VNI must be assigned.
DHCP Servers
Resources
Virtual Networks
(Continued)
Parameter Description
Interfaces
From the blueprint, navigate to Staged > Virtual > Routing Zones to go to the routing zones table view.
You can create, edit, import, export and delete routing zones and assign DHCP servers to them.
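The automatic EVPN route target described in the table above is composed of the L3 VNI followed by a hard-coded ":1". That composition is trivial to render or pre-compute when planning a Data Center Interconnect import on the remote fabric:

```python
def evpn_route_target(l3_vni: int) -> str:
    """Render the built-in route target "<VNI_ID>:1" for an EVPN routing zone VRF."""
    return f"{l3_vni}:1"

# The external EVPN fabric would import this value for DCI, per the table above.
print(evpn_route_target(10001))  # 10001:1
```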
1. From the blueprint, navigate to Staged > Virtual > Routing Zones and click Create Routing Zone.
2. Enter a unique VRF name (15 characters or fewer).
3. You can leave the remaining fields as is to use default values and have resources assigned from pools,
or you can configure them manually. See the "routing zone" on page 199 introduction for details.
4. Click Create to create the routing zone and return to the table view.
Assign resources (leaf loopback IPs, leaf L3 peer links) to the new routing zone.
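Step 2's constraint (a unique VRF name of 15 characters or fewer) is easy to pre-check when preparing routing zones in bulk, for example before a CSV import. This validator is illustrative only, not an Apstra API:

```python
def validate_vrf_name(name: str, existing: set[str]) -> None:
    """Reject VRF names that exceed 15 characters or collide with existing names."""
    if len(name) > 15:
        raise ValueError(f"VRF name {name!r} exceeds 15 characters")
    if name in existing:
        raise ValueError(f"VRF name {name!r} is already in use")

validate_vrf_name("tenant-blue", {"default"})  # passes silently
```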
1. From the blueprint, navigate to Staged > Virtual > Routing Zones and click Export routing zones.
[image]
2. Click Copy to copy the contents or click Save As File to download the file.
3. Paste the contents, or open the CSV file, in a spreadsheet program (such as Google Sheets or
Microsoft Excel).
4. Enter routing zones details into the spreadsheet, then save the file.
5. In the Apstra GUI, navigate to Staged > Virtual > Routing Zones and click Import routing zones.
[image]
6. Either click Choose File and navigate to the file on your computer, drag and drop the file onto the
dialog window, or directly paste CSV file contents into the dialog window. Routing zone details are
displayed for your review.
7. Click Import to import the routing zones, stage the changes, and return to the table view.
Next Steps:
Assign resources. Each leaf network device in each routing zone requires a loopback IP. If IPv6 is
enabled on the blueprint, you must also assign IPv6 addresses to the routing zone. After you've assigned
connectivity templates to your external generic systems, you'll also need to assign IP addresses.
1. From the blueprint, navigate to Staged > Virtual > Routing Zones and click the name of the routing
zone that needs a DHCP server assigned to it.
3. Enter the IPv4 address (or IPv6 address) for the DHCP server and click Add DHCP Server. To add an
additional server, enter the IP address and click Add DHCP Server again.
4. Click Update to stage the assignment and return to the routing zone detail view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
Each leaf network device in each routing zone requires a loopback IP. If IPv6 is enabled on the blueprint,
you must also assign IPv6 addresses to the routing zone. After you've assigned connectivity templates
to your external generic systems, you'll also need to assign IP addresses.
1. From the blueprint, navigate to Staged > Virtual > Routing Zones.
2. Red status indicators in the Build panel (on the right) indicate that resources need to be assigned.
Click a red indicator and click the Update assignments button.
3. Select a pool from which to pull the resources, then click the Save button. (For information about IP
address pools, see "IP Pools" on page 784.) When the red status indicator turns green, the required
resources are successfully assigned.
4. Repeat the steps to assign resources from pools until all required resources have been assigned.
NOTE: You can also assign individual IP addresses to links by clicking the name of the routing zone in the table view, scrolling down to the Interfaces section, and clicking the Edit IP addresses button.
2. Click the Edit button (upper-right) for the selected routing zone.
When you're ready to activate your changes, commit them from the Uncommitted tab.
3. Or to export specific routing zones instead of all of them, select their check boxes, then click the
same button as in the previous step (now called Export selected routing zones) (new in Apstra
version 4.2.0).
4. Click Copy to copy the contents, or click Save As File to download the file.
5. When you've copied or downloaded the routing zones, close the dialog to return to the table view.
6. Paste the contents, or open the CSV file, in a spreadsheet program (such as Google Sheets or
Microsoft Excel).
7. Update routing zones as needed, then save the file.
8. In the Apstra GUI, navigate to Staged > Virtual >Routing Zones and click the Import routing zones
button.
9. Either click Choose File and navigate to the file on your computer, or drag and drop the file onto
the dialog window. Routing zone details are displayed for your review.
10. Click Import to import the routing zones, stage the changes, and return to the table view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
SUMMARY
You can update many routing zones quickly by exporting them as a CSV file, updating the file, then
importing the file back into the blueprint.
1. From the blueprint, navigate to Staged > Virtual > Routing Zones.
2. To export all routing zones, click the Export all routing zones button as shown in the screenshot
above.
3. Or to export specific routing zones instead of all of them, check their check boxes, then click the
same button as in the previous step (now called Export selected routing zones) (new in Apstra
version 4.2.0).
4. Click Copy to copy the contents or click Save As File to download the file.
5. When you've copied or downloaded the routing zones, close the dialog to return to the table view.
Next Steps: Update the CSV file with a spreadsheet program, then import it back into your blueprint.
You can import multiple routing zones (as a CSV file) into your blueprint. (Tip: First export routing zones
so you'll have the schema set up for you in the CSV file.)
1. From the blueprint, navigate to Staged > Virtual > Routing Zones and click the Import routing zones
button.
2. Either click Choose File and navigate to the file on your computer, drag and drop the file onto the
dialog window, or directly paste CSV file contents into the dialog window. Routing zone details are
displayed for your review.
3. Click Import to import the routing zones, stage the changes, and return to the table view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
When you delete routing zones, all virtual networks created under the routing zone are also deleted, and
the connectivity templates that are assigned are automatically unassigned (as of Apstra version 4.2.0).
Those unassigned connectivity templates become available to be assigned elsewhere, or to be deleted.
(In previous versions, you had to manually find and unassign connectivity templates before you could
delete routing zones.)
2. The Delete Routing Zone dialog that opens shows the routing zone and virtual networks to be
deleted. Click the drop-down triangle to show (or hide) the connectivity templates that will be
unassigned.
3. Click Delete to stage the deletion and return to the table view.
NOTE: If you get an error, it's probably because there's a dependency that you need to
remove manually. If a connectivity template refers to an object (like a virtual network
endpoint) that is created by another connectivity template, you need to unassign that
dependent object. Then you can return to step 1.
When you're ready to activate your changes, commit them from the Uncommitted tab.
2. In the Delete Routing Zones dialog that opens, click the drop-down triangles to show (or hide) the routing zones to be deleted and the connectivity templates to be unassigned.
3. Click Delete to stage the deletion and return to the table view.
NOTE: If you get an error, it's probably because there's a dependency that you need to
remove manually. If a connectivity template refers to an object (like a virtual network
endpoint) that is created by another connectivity template, you need to unassign that
dependent object. Then you can return to step 1.
To see details including peer configuration, click the Protocol Session ID.
Virtual Infrastructure
IN THIS SECTION
VM Visibility | 220
IN THIS SECTION
Limitations | 218
With Apstra vCenter integration, you have VM visibility of your virtualized environments. This feature
helps to troubleshoot various VM connectivity issues. Inconsistencies between virtual network settings
(VMware Port Groups) and physical networks (Apstra Virtual Networks) that might affect VM
connectivity are flagged.
To accomplish this, the Apstra software identifies the ESX/ESXi hosts and thereby the VMs connected
to Apstra-managed leaf switches. LLDP information transmitted by the ESX/ESXi hosts is used to
associate host interfaces with leaf interfaces. For this feature to work, LLDP transmit must be enabled
on the VMware distributed virtual switch.
The Apstra software also connects to vCenter to collect information about VMs, ESX/ESXi hosts, port
groups and VDS. Apstra extensible telemetry collectors collect this information. The collector runs in an
offbox agent and uses pyVmomi to connect to vCenter. On first connect, it downloads all of the
necessary information and thereafter polls vCenter every 60 seconds for new updates. The collector
updates the discovered data into the Apstra Graph Datastore allowing VM queries and alerts to be
raised on physical/virtual network mismatch.
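The real collector uses pyVmomi against vCenter; the poll-and-diff pattern it follows can be sketched generically. The fetch function and the data shapes below are stand-ins for the actual vCenter query, and the 60-second interval matches the polling cadence described above:

```python
import time
from typing import Callable

def poll_inventory(fetch: Callable[[], dict], cycles: int, interval: float = 60.0):
    """Poll a fetch function, yielding only VMs whose attributes changed.

    fetch() returns a mapping of VM name -> attributes, standing in for the
    pyVmomi query the Apstra collector runs against vCenter. In Apstra, each
    yielded update lands in the graph datastore.
    """
    known: dict = {}
    for _ in range(cycles):
        current = fetch()
        changed = {vm: attrs for vm, attrs in current.items()
                   if known.get(vm) != attrs}
        if changed:
            yield changed
        known = current
        if interval:
            time.sleep(interval)

# Two simulated polls: the VM's port group VLAN changes between them.
snapshots = iter([{"vm1": {"pg": "web:100"}}, {"vm1": {"pg": "web:200"}}])
updates = list(poll_inventory(lambda: next(snapshots), cycles=2, interval=0))
print(updates)  # [{'vm1': {'pg': 'web:100'}}, {'vm1': {'pg': 'web:200'}}]
```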
Supported Versions
The specific test and qualification for version 7.0 is three vCenter servers on three different routing
zones: zone 1 supports 3000 VMs, zone 2 supports 1000 VMs, and zone 3 supports 1000 VMs. We
support vCenter managed data center stretched clusters. vCenter segregation is based on workload, not
location.
Limitations
vCenter integration does not support DVS port group with VLAN type Trunking.
1. From the left navigation menu, navigate to External Systems > Virtual Infra Managers and click
Create Virtual Infra Manager.
2. Enter the vCenter IP address (or DNS name), select VMware vCenter Server, then enter a username
and password.
3. Click Create to launch an offbox container that connects to vCenter. While the container is connecting, the state is DISCONNECTED. When the container successfully connects, the state changes to CONNECTED.
4. When vCenter is connected, from the blueprint, navigate to Staged > Virtual > Virtual Infra and click
Add Virtual Infra.
5. Select the vCenter Server from the Virtual Infra Manager drop-down list, then click Create to stage
the change.
When you are ready to deploy, commit the changes from the Uncommitted tab.
VM Visibility
When Apstra software manages virtual infra, you can query VMs by name. From the blueprint, navigate
to Active > Query > VMs and enter search criteria. VMs include the following details:
Parameter Description
Port Group Name:VLAN ID The VNIC’s portgroup and the VLAN ID associated
with the portgroup
Two predefined analytics dashboards (as listed below) are available that instantiate predefined virtual
infra probes.
SEE ALSO
Analytics Introduction | 9
Auto-Remediation Overview
Automatic remediation of virtual network anomalies happens without user intervention. This reduces operational cost: network operators don't need to investigate each anomaly, check its details, and intervene to mitigate it. VXLAN auto-remediation is a policy configured while adding vCenter/NSX-T to a blueprint. Anomalies are remediated in accordance with this policy.
Some of the constraints and validations that take place before the remediation happens are listed below:
• When the remediation policy is set to VLAN (that is, rack-local), the routing zone can only be the default one.
• If the VLAN ID for a virtual network spanning multiple hypervisors is the same, a single Layer 2 broadcast domain is assumed. In this scenario, the remediation policy must be set to VXLAN, because missing-VLAN anomalies are checked on all the ToR leaf devices connected to the different hypervisors that have a virtual network with the same VLAN ID. If VLAN type is mistakenly chosen instead, validation errors are generated.
• Errors are flagged when different types of remediation policies (for example, one VXLAN type and one VLAN type) are found attached to different virtual infras (such as two different vCenter servers) that have the same VLAN ID in anomalies.
• If two different virtual infra servers are mapped in a blueprint and they have the same VLAN IDs, they are treated as two separate virtual networks by the VXLAN auto-remediation policy.
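These constraints can be pictured as a pre-flight check over (virtual infra, VN type, routing zone, VLAN ID) tuples. The validator below is a hedged illustration of the first and third rules above, not Apstra's implementation; the field names are assumptions:

```python
def validate_remediation(policies: list[dict]) -> list[str]:
    """Flag the remediation-policy constraint violations described above.

    Each entry: {"infra": str, "vn_type": "VLAN" or "VXLAN",
                 "routing_zone": str, "vlan_id": int}
    """
    errors = []
    by_vlan: dict[int, list[dict]] = {}
    for p in policies:
        # Rack-local (VLAN) remediation may only use the default routing zone.
        if p["vn_type"] == "VLAN" and p["routing_zone"] != "default":
            errors.append(f"{p['infra']}: VLAN policy requires default routing zone")
        by_vlan.setdefault(p["vlan_id"], []).append(p)
    # Mixed VLAN/VXLAN policies on the same VLAN ID across infras are flagged.
    for vlan_id, group in by_vlan.items():
        if len({p["vn_type"] for p in group}) > 1:
            errors.append(f"VLAN {vlan_id}: mixed remediation policy types")
    return errors

errs = validate_remediation([
    {"infra": "vc1", "vn_type": "VXLAN", "routing_zone": "blue", "vlan_id": 100},
    {"infra": "vc2", "vn_type": "VLAN", "routing_zone": "default", "vlan_id": 100},
])
print(errs)  # ['VLAN 100: mixed remediation policy types']
```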
Enable Auto-Remediation
1. From the blueprint, navigate to Staged > Virtual > Virtual Infra and click Add Virtual Infra.
2. Select the Virtual Infra Manager from the drop-down list.
3. Click VLAN Remediation Policy to see the attributes to configure.
4. Select the VN Type from the drop-down list.
• VXLAN (inter-rack) (default) Assumes VXLAN virtual network and looks for VN mismatch in all of
the related ToRs in the Apstra fabric.
• VLAN (rack-local) Select VLAN if the VLAN footprint on local vSphere does not extend to other
ToR leaf devices in a fabric.
5. Select the Routing zone. (If VN type is rack-local only the default routing zone is allowed.)
6. Click Create.
After you enable the VLAN remediation policy as inter-rack, Apstra software searches for a matching local VLAN in all ToRs connecting any member host (a hypervisor, for example) participating in the virtual infra virtual network. If such a VN is found, Apstra extends that VN to also be bound to the ToR in question with the same local VLAN. If it's not found, a new inter-rack VN is created in the specified routing zone.
• VLAN mismatch anomalies create one virtual network for one vCenter Distributed Virtual Switch
(vDS) port group that is attached to hypervisors connected to leaf ports of ToRs in Apstra fabric.
• You cannot delete a routing zone that is being referenced in remediation policy.
NOTE: For an EVPN-enabled fabric, we recommend that you set the VN type to inter-rack (VXLAN) in a specific routing zone.
Remediate Anomalies
1. From the blueprint, navigate to Analytics > Probes and click one of the instantiated predefined probe
names.
2. Click Remediate Anomalies on a given stage. The Apstra software automatically updates the staged
blueprint by adding/removing/updating VN endpoints and VNs to resolve the anomalies.
3. Review the staged configuration in terms of virtual network parameters, then commit the
configuration. The Apstra software indicates if there are no detected changes. This could happen if
you invoke remediation more than once.
4. Review and commit the changes on the Uncommitted tab.
5. Return to the predefined probe to view any remaining anomalies.
1. From the blueprint, navigate to Staged > Virtual > Virtual Infra and click the Delete button for the
virtual infra to disable.
2. Click Uncommitted (top menu) and commit the deletion.
3. From the left navigation menu, navigate to External Systems > Virtual Infra Managers and click the
Delete button for the virtual infra to disable.
NSX-T Integration
IN THIS SECTION
Limitations | 225
You can integrate NSX-T with Apstra software to help deploy fabric VLANs that are needed for
deploying NSX-T in the data center or for providing connectivity between NSX-T overlay networks and
fabric underlay networks. You can accelerate NSX-T deployments by making sure the fabric is ready in
terms of LAG, MTU and VLAN configuration as per NSX-T transport node requirements. This feature
also helps network operators with fabric visibility in terms of seeing all the NSX-T VMs, VM ports, and
physical gateway ports. NSX-T integration helps identify issues on the fabric and on the virtual
infrastructure. It eliminates manual config validation tasks between the NSX-T nodes side and the ToR
switches.
When an NSX-T VM is attached to a VLAN transport zone, the VM query shows the ToR switch/interface
information. When an NSX-T VM is attached to an overlay transport zone, the VM query doesn't show the
ToR switch/interface information. Be sure to add the ESXi host as a generic system, not an external
generic system.
As of Apstra version 4.1.2, you can create Virtual Infra Managers for NSX-T Manager version 3.2.x using
DVS mode. You can also add multiple Virtual Infra Managers per blueprint. This is useful when you have
multiple NSX-T Managers or multiple vCenter Servers hosted in the same fabric blueprint. You'll need to
provide the vCenter compute managers information (address and credentials) when you add the NSX-T
Virtual Infra.
Limitations
• NSX-T Edge VM migration is supported only within a rack. Attempting to migrate between racks
results in BGP disruption. You can migrate the NSX-T Edge VM from the ESXi host connected to a leaf
pair (that is, ToR-Left and ToR-Right) to the other ESXi host which is connected to a single leaf within
the rack.
• (Apstra versions 4.1.1 and 4.1.0 only) Having more than one NSX-T virtual infra in a blueprint is not
supported. We recommend only one virtual infra per blueprint.
• (Apstra versions 4.1.1 and 4.1.0 only) NSX-T integration does not support DVS port group with
VLAN-type trunking.
1. From the left navigation menu, navigate to External Systems > Virtual Infra Managers > Create
Virtual Infra Manager.
2. Enter the NSX-T manager IP address (or DNS name), select VMware NSX-T Manager and enter a
username and password.
3. Click Create to create the virtual infra manager and return to the table view. When the connection
is successful, the connection state changes from DISCONNECTED to CONNECTED.
4. When NSX-T is connected, from the blueprint, navigate to Staged > Virtual > Virtual Infra > Add
Virtual Infra.
5. Select the NSX-T manager from the Virtual Infra Manager drop-down list, then click VLAN
Remediation Policy to expose additional fields. The information entered here is used in Intent-
based analytics (IBA) probes that can remediate anomalies.
6. Select the VN type and routing zone.
• If VLAN (rack-local) is selected, you must use the default routing zone.
• If VXLAN (inter-rack - when VN extends to different ToRs in the fabric) is selected you can
select a different routing zone.
7. Click Create to stage the virtual infra manager and return to the table view. The new virtual infra
manager appears in the table.
8. Click Uncommitted (top menu) to review changes, then click Commit (top-right) to add the NSX-T
manager to the active blueprint.
9. Create a Routing Zone in the blueprint and specify the VLAN ID, VNI and Routing Policies. The
Routing Zone maps to a VRF on which BGP peering towards the NSX-T Edge node is established.
10. For the GENEVE tunnels to come up between the Transport Nodes in NSX-T, connectivity must be
established via the Juniper Apstra fabric. Ensure this by creating a VXLAN VN in Apstra and
assigning the correct port mapping on the ToR leaf devices towards the Transport Node. The VLAN
ID for the overlay VXLAN VN defined in Apstra must match the one mapped in the Overlay Profile
in NSX-T for Transport Nodes. Also, the same IP subnet as that of the TEP Pool in NSX is used.
11. Because the Create Connectivity Template option was selected during VXLAN VN creation in
Apstra, a Connectivity Template of type Virtual Network is automatically created under
Blueprints > Staged > Connectivity Templates as shown below:
12. Assign the interfaces towards the Transport Nodes on the NSX-T side to the Connectivity
Template created above.
13. Once the configuration is rendered to the devices, you can observe in NSX-T Manager that the
GENEVE tunnels between the Transport and Edge nodes are UP.
NOTE: When you install the NSX Edge as a virtual appliance or host Transport Node, use
the default uplink profile. If the Failover teaming policy is configured for an uplink profile,
then you can only configure a single active uplink in the teaming policy. Standby uplinks are
not supported and must not be configured in the failover teaming policy.
To see a list of the VMs connected to the hypervisor, navigate to the dashboard and scroll to the Fabric
Health for VMware option.
You can also query VMs that are hosted on hypervisors connected to ToR leaf devices. From the
blueprint, navigate to Active > Query > VMs.
Parameter Description
Port Group Name:VLAN ID The VLAN ID that the NSX-T port groups are using.
Overlay VM-to-VM traffic in an NSX-T enabled data center
tunnels between transport nodes over this virtual
network.
To search for nodes in the physical topology that have VMs, navigate to Active > Physical and select Has
VMs? from the Nodes drop-down list.
If a VM is moved from one Transport Node to another in NSX-T, it can be visualized in Apstra under
Active > Physical > Nodes > Generic System (Node_name). Select the VMs tab as shown below:
Two predefined analytics dashboards (as listed below) are available that instantiate predefined virtual
infra probes.
SEE ALSO
Analytics Introduction | 9
1. From the blueprint, navigate to Staged > Virtual > Virtual Infra and click the Delete button for the
virtual infra to disable.
2. Click Uncommitted (top menu) and commit the deletion.
3. From the left navigation menu, navigate to External Systems > Virtual Infra Managers and click the
Delete button for the virtual infra to disable.
IN THIS SECTION
Overview | 236
Overview
Juniper Apstra supports NSX-T Edge connectivity requirements using connectivity templates.
Connectivity templates can be used both when NSX-T Edge is hosted on bare metal and when it runs as
a virtual machine.
We support a VRF Lite-enabled Tier-0 Edge Gateway using connectivity templates.
The use cases below relate to connectivity templates for NSX-T 3.0 Edge:
• Create a Transport Zone as per the below screenshot, which will help in tunnelling traffic between
Transport Nodes. This is called the Overlay Transport Zone.
• Create three Distributed Port Groups for the respective vmnics, with VLAN trunking enabled on all
the nodes, as per the networking depicted in the previous screenshot.
• Create the respective uplink profiles for the Overlay and VLAN Transport Zones in NSX Manager (UI).
• After NSX-T is configured on the Transport Nodes, a Tunnel Endpoint (TEP) IP pool is created in the
NSX UI as below:
• Now create the NSX-T Edge VM in the NSX Manager UI as below. It is used as the device for north-south
communication and BGP peering with the Juniper Apstra fabric. Also configure VDS on the Edge Nodes
under NSX Manager (UI) for the respective overlay and uplink interfaces.
• The Tier-0 Gateway in the NSX-T Edge cluster provides a gateway service between the logical and
physical network. In NSX Manager, create a T0 Gateway which connects to the ToR leaf via BGP.
• Configure BGP peering on NSX T0 GW towards Juniper Apstra Fabric in NSX Manager.
• For NSX-T integration with Juniper Apstra, see "NSX-T Integration" on page 224
First create a Routing Zone in the Juniper Apstra UI, which maps to a VRF. Then set up an IP Link
primitive-based connectivity template to establish BGP peering from the NSX-T Edge node to the fabric
as below:
Specify the routing zone on which the IP link will be added and the respective VLAN ID.
BGP peering can be built over these VLANs in VRF gateways for route exchange with the upstream
Juniper Apstra fabric. Inter-VRF traffic is routed through the physical Juniper Apstra fabric.
• In NSX-T Manager create the VLAN Segments for the Uplink networks for the tenants.
• In NSX-T Manager create the VRF-enabled Tier-0 Gateway for the tenants and add the uplink
interfaces on the VRF enabled Gateways. Thereafter add the BGP neighbors.
• From the Apstra GUI, set up the Routing Zone and the respective VNs on which the BGP session will
be established towards the ToR leaf devices as below:
• Create a connectivity template under the Staged option for the VNs created before and assign the
respective interfaces towards the NSX-T Edge VM.
Navigate to Staged > Connectivity Templates > Add Template > Primitives > Custom Static Route to
inject default route:
See the "Set up NSX-T VRF Lite" section for details on creating uplink VLAN interfaces on the T0
Gateway. This VLAN should be IPv6-enabled.
Create a connectivity template for each of the VXLAN VNs and enable BGP towards the IPv6 neighbor
on the NSX-T Edge as below:
IN THIS SECTION
Overview | 247
Overview
Apstra software can connect to the NSX-T API to gather information about the inventory in terms of
hosts, clusters, VMs, port groups, vDS/N-vDS, and NICs within the NSX-T environment. Apstra can
integrate with NSX-T to provide Apstra admins visibility into the application workloads (that is, VMs)
running and alert them about any inconsistencies that would affect workload connectivity. Apstra
virtual infrastructure visibility helps provide underlay/overlay correlation and enables IBA analytics
for the overlay/underlay.
You cannot view the NSX Inventory in Apstra until the NSX-T manager is associated to a blueprint.
As per the above screenshot, inventory collection for NSX-T is done via the Apstra extensible telemetry
collector.
IN THIS SECTION
N-VDS | 251
NSX-T uses the following terminology for its control plane and data plane components, along with the
respective correlations to Apstra.
Transport Zones
Transport Zones (TZ) define a group of ESXi hosts that can communicate with one another on a physical
network.
1. Overlay Transport Zone: This transport zone can be used by both transport nodes and NSX Edges.
When an ESXi host or NSX-T Edge transport node is added to an Overlay transport zone, an N-VDS
is installed on the ESXi host or NSX Edge Node.
2. VLAN Transport Zone: It can be used by NSX Edge and host transport nodes for their VLAN uplinks.
Each hypervisor host can belong to only one Transport Zone at a given point in time.
A newly created VLAN VN tagged towards an interface in the Apstra fabric corresponds to a VLAN-based
transport zone as per the screenshots below:
Here the tagged VLAN VN is mapped to the respective Transport Zone in NSX-T with traffic type as VLAN.
N-VDS
An NSX-managed virtual distributed switch provides the underlying forwarding and is the data plane of
the transport nodes.
Here TEPs are Tunnel Endpoints used for NSX overlay networking (Geneve encapsulation/
decapsulation). P1/P2 are pNICs mapped to the uplink profiles (U1/U2).
N-VDS switches are instantiated at the hypervisor level and can be thought of as virtual switches
connected to the physical ToR leaf devices as below:
Transport Node
VMs hosted on different Transport nodes communicate seamlessly across the overlay network. A
transport node can belong to:
This can be compared to setting end hosts(servers) in an Apstra blueprint to be part of VLAN (leaf-local)
or VXLAN (inter-leaf) Virtual Network.
NSX Edge
The NSX Edge provides routing services and connectivity to networks that are external to the NSX-T
deployment. It is required for establishing external connectivity from the NSX-T domain, through a
Tier-0 router via BGP or static routing.
NSX Edge VMs have uplinks towards the ToR leaf devices, which need a separate VLAN transport zone.
The Apstra fabric must be configured with the corresponding VLAN virtual network.
NOTE: NSX-T Edge Bare Metal or VM form factors are Transport nodes and discovered as
hypervisors in Apstra. However, VM edge Transport nodes can't be correlated to the connected
ToR Leaf.
NSX Controller
It provides control plane functions for NSX-T Data Center logical switching and routing components.
NSX Manager
It is a node that hosts the API services, the management plane, and the agent services.
• In NSX-T, Transport Nodes are hypervisor hosts, and they can be correlated to server nodes in a
blueprint connected to the ToR leaf devices. In NSX-T Data Center, ESXi hosts are prepared as
Transport Nodes, which allows the nodes to exchange traffic for virtual networks on the Apstra fabric
or amongst networks on the nodes. You must ensure the hypervisor (ESXi) networking stack is sending
LLDP packets to aid the correlation of ESXi hosts with server nodes in the blueprint.
• A PNIC is the actual physical network adapter on an ESXi or hypervisor host. Hypervisor PNICs can be
correlated to the server interfaces in the blueprint. LAG or teaming configuration is done on the links
mapped to these physical NICs. This can be correlated to the bond configuration done on the ToR leaf
devices towards the end servers.
• In the NSX-T integration with Apstra, VM virtual networks are discovered. These can be correlated to
blueprint virtual networks. When VMs need to communicate with each other over tunnels between
hypervisors, the VMs are connected to the same logical switch in NSX-T (called N-VDS). Each logical
switch has a virtual network identifier (VNI), like a VLAN ID. This corresponds to VXLAN VNIs in the
Apstra fabric physical infrastructure.
• The NSX-T Uplink Profile defines the network interface configuration facing the fabric in terms of
LAG and LACP config on PNIC interfaces. The uplink profile is mapped in Transport node for the links
from the hypervisor/ESXi towards top-of-rack switches in Apstra Fabric.
• A VNIC defines a virtual interface of a transport node or VM. The N-VDS switch maps physical
NICs to such uplink virtual interfaces. These virtual interfaces can be correlated to server interface
ports of the Apstra fabric.
IN THIS SECTION
Hypervisor | 257
VNIC | 267
Vnet | 278
Hypervisor
To obtain the NSX-T API response for the respective hypervisor hosts and understand the correlation,
you can use a graph query. To open the GraphQL Explorer, click the “>_” button.
Then, in the graph explorer, you can type a graph query on the left, as per the screenshot below, using
GraphQL.
To check the respective label for the transport nodes, the below query can be used:
Request:
{
hypervisor_nodes{
label
}
}
Response:
{
"data": {
"hypervisor_nodes": [
{
"label": "zz-karun-nsxt.cvx.2485377892354-357746820-TN-2"
},
{
"label": "zz-AndyF-nsxt.cvx.2485377892354-4240714876-TN-2"
}
]
}
}
Hypervisors which act as Transport Nodes can be visualized in Apstra under the Active tab with the Has
Hypervisor = Yes option as below:
To obtain the respective hostname for the transport nodes, the below query can be used:
Request:
{
hypervisor_nodes {
hostname
}
}
Response:
{
"data": {
"hypervisor_nodes": [
{
"hostname": "localhost"
},
{
"hostname": "ubuntu-bionic-nsxt"
}
]
}
}
Hypervisor PNIC
Physical NICs are selected for the uplink profile dedicated to the overlay network. The NSX-T Uplink
Profile defines the network interface configuration for the PNIC interfaces facing the Apstra fabric in
terms of LAG and LACP configuration. So the uplink profile is mapped in the Transport Node for the
links from the NSX-T logical switch of the hypervisor/ESXi hosts. It points towards the top-of-rack
switches in the Apstra fabric.
NSX-API Request/Response to check MAC address for the Transport node interfaces.
Request:
{
pnic_nodes {
id mac_address
}
}
Response:
{
"data": {
"pnic_nodes": [
{
"id": "1e2162c3-9ce6-4f35-afc2-217bb48ced49",
"mac_address": "52:54:00:88:41:28"
},
{
"id": "9752a438-1939-4648-bc8e-0494addf7c7e",
"mac_address": "52:54:00:04:d5:4f"
}
]
}
}
The MAC address shown in the above example is learned on a LAG interface in the Apstra fabric towards
the NSX-T Transport Node. It is the MAC address of the ESXi host pNICs that have a LAG bond towards
the ToR leaf devices in the Apstra fabric.
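As a sketch of this correlation, the sample response above can be parsed and its pNIC MACs compared against MACs learned on the fabric LAG. The `learned_on_lag` set here is illustrative, not queried from a live system:

```python
import json

# Parse the graph-query response shown above and collect the pNIC MAC
# addresses reported by NSX-T for the transport node.
response = json.loads("""
{
  "data": {
    "pnic_nodes": [
      {"id": "1e2162c3-9ce6-4f35-afc2-217bb48ced49",
       "mac_address": "52:54:00:88:41:28"},
      {"id": "9752a438-1939-4648-bc8e-0494addf7c7e",
       "mac_address": "52:54:00:04:d5:4f"}
    ]
  }
}
""")

pnic_macs = {n["mac_address"].lower() for n in response["data"]["pnic_nodes"]}

# MACs learned on a fabric LAG interface (illustrative values); any pNIC MAC
# seen here confirms the ESXi host's bond towards the ToR leaf devices.
learned_on_lag = {"52:54:00:88:41:28"}
print(pnic_macs & learned_on_lag)  # the correlated pNIC MAC(s)
```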
The NSX-API request/response below checks the switch name attribute of the transport node's transport
zone.
Request:
{
pnic_nodes {
id switch_id
}
}
Response:
{
"data": {
"pnic_nodes": [
{
"id": "82586be7-2998-401f-82ba-11afa5bb9730",
"switch_id": "zz-cvx-nsxt.cvx.2485377892354-2902673742"
},
{
"id": "0043d742-405a-454f-9e9b-695d5dd14608",
"switch_id": "zz-cvx-nsxt.cvx.2485377892354-2902673742"
}
]
}
}
The switch ID attribute of the respective transport zone is read by the NSX-T API from NSX Manager as below:
Request:
{
pnic_nodes {
id label
}
}
Response:
{
"data": {
"pnic_nodes": [
{
"id": "82586be7-2998-401f-82ba-11afa5bb9730",
"label": "eth2"
},
{
"id": "0043d742-405a-454f-9e9b-695d5dd14608",
"label": "eth1"
},
{
"id": "b91a5725-7500-489b-a454-e05d7c311525",
"label": "eth0"
}
]
}
}
Transport nodes have the mapping of physical NICs, which can be seen returned as labels in the
above NSX-T API response.
The NSX-API request/response below checks the Transport Node's LLDP neighbor system name
attribute.
Request:
{
pnic_nodes {
id neighbor_name
}
}
Response:
{
"data": {
"pnic_nodes": [
{
"id": "82586be7-2998-401f-82ba-11afa5bb9730",
"neighbor_name": "leaf-2-525400C6DD2B"
},
{
"id": "0043d742-405a-454f-9e9b-695d5dd14608",
"neighbor_name": "leaf-2-525400C6DD2B"
},
{
"id": "b91a5725-7500-489b-a454-e05d7c311525",
"neighbor_name": "spine-1"
},
{
"id": "f77575fb-44ea-4ec7-9913-1c75b7af87bc",
"neighbor_name": "leaf-1-5254004D5560"
},
{
"id": "628d0f86-4bc1-4faf-8f3f-f1deb92ceee2",
"neighbor_name": "leaf-2-525400C6DD2B"
},
{
"id": "1e2162c3-9ce6-4f35-afc2-217bb48ced49",
"neighbor_name": "leaf-1-5254004D5560"
}
]
}
}
To obtain the respective transport node's LLDP neighbor interface name attribute, the below query can be used:
Request:
{
pnic_nodes {
id neighbor_intf
}
}
Response:
{
"data": {
"pnic_nodes": [
{
"id": "82586be7-2998-401f-82ba-11afa5bb9730",
"neighbor_intf": "swp4"
},
{
"id": "0043d742-405a-454f-9e9b-695d5dd14608",
"neighbor_intf": "swp3"
},
{
"id": "b91a5725-7500-489b-a454-e05d7c311525",
"neighbor_intf": "eth0"
}
]
}
}
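Joining the LLDP neighbor system name and neighbor interface responses by pNIC id gives, for each hypervisor uplink, the fabric switch and port it connects to. A minimal sketch using the sample values above (not an official Apstra client):

```python
# Per-pNIC LLDP neighbor system names (from the neighbor_name response).
neighbor_names = {
    "82586be7-2998-401f-82ba-11afa5bb9730": "leaf-2-525400C6DD2B",
    "0043d742-405a-454f-9e9b-695d5dd14608": "leaf-2-525400C6DD2B",
    "b91a5725-7500-489b-a454-e05d7c311525": "spine-1",
}
# Per-pNIC LLDP neighbor interface names (from the neighbor_intf response).
neighbor_intfs = {
    "82586be7-2998-401f-82ba-11afa5bb9730": "swp4",
    "0043d742-405a-454f-9e9b-695d5dd14608": "swp3",
    "b91a5725-7500-489b-a454-e05d7c311525": "eth0",
}

# Join on pNIC id: pNIC id -> (fabric switch, fabric port).
uplinks = {pid: (neighbor_names[pid], neighbor_intfs[pid])
           for pid in neighbor_names if pid in neighbor_intfs}

for pid, (switch, port) in uplinks.items():
    print(f"{pid[:8]} -> {switch} {port}")
```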
An MTU size of 1600 or greater is needed on any network that carries Geneve overlay traffic. Hence, in
the NSX-T reply we can see an MTU value of 1600 on the network interfaces towards the Transport Nodes.
VNIC
• MAC address: Physical address attribute of a transport node's or VM's virtual interface
You can check the VNIC MAC address attribute with the below NSX-API request/response. This can be
for a transport node's virtual interface or for the virtual interface of a VM. For transport nodes,
under Host Switches, select the virtual NIC that matches the MAC address of the VM NIC attached to
the uplink port group.
Request:
{
vnic_nodes{
id mac_address
}
}
Response:
{
"data": {
"vnic_nodes": [
{
"id": "c84d8636-c28b-4db3-8747-37fadca4c7aa",
"mac_address": "1e:5c:3b:a2:ea:c3"
},
{
"id": "7d5826d8-0622-4a45-88d7-6b1e88bac62f",
"mac_address": "ca:0f:93:24:24:43"
}
]
}
}
The NSX-API request/response below checks the VNIC label, which signifies the interface ID attribute of
a transport node's virtual interface or the device name attribute of a virtual machine's virtual interface.
Request:
{
vnic_nodes{
id label
}
}
Response:
{
"data": {
"vnic_nodes": [
{
"id": "c84d8636-c28b-4db3-8747-37fadca4c7aa",
"label": "hyperbus"
},
{
"id": "7d5826d8-0622-4a45-88d7-6b1e88bac62f",
"label": "nsx-switch.0"
},
{
"id": "473c2b7d-ab2f-41cd-9a4b-fcf2eb248fd6",
"label": "nsx-switch.0"
},
{
"id": "9553390b-754e-45ef-8976-e63396d554ee",
"label": "nsx-vtep0.0"
},
{
"id": "a00bb649-5032-462f-97e7-b6c4f5f1ac86",
"label": "nsx-vtep0.0"
}
]
}
}
Below is the NSX-API request/response to check the VNIC IPv4 address, which signifies the IP address
attribute of a transport node's virtual interface or of the virtual interface of a logical port.
Request:
{
vnic_nodes{
id ipv4_addr
}
}
Response:
{
"data": {
"vnic_nodes": [
{
"id": "9553390b-754e-45ef-8976-e63396d554ee",
"ipv4_addr": "192.168.1.13"
},
{
"id": "a00bb649-5032-462f-97e7-b6c4f5f1ac86",
"ipv4_addr": "192.168.1.12"
}
]
}
}
Here “192.168.1.13” and “192.168.1.12” are the IPv4 addresses for the bridge interface of the host
transport nodes, that is, "nsx-vtep0.0", which acts as the virtual tunnel endpoint (VTEP) of the
transport node. Each hypervisor has a VTEP responsible for encapsulating the VM traffic inside a
Geneve header and routing the packet to a destination VTEP for further processing. This can be
compared to the VXLAN virtual network anycast gateway VTEP IP.
The NSX-API request/response below checks the traffic types for the transport node's virtual interface.
The traffic type for the transport node can be overlay type, as per the example below, or it can be
VLAN type. You can add both the VLAN and overlay NSX Transport Zones to the Transport Nodes.
A VLAN-based transport zone is mainly for uplink-based traffic. If VMs on different hypervisor hosts
need to communicate with each other, then the overlay network should be used. It can be compared to a
VXLAN virtual network in the Apstra fabric.
Request:
{
vnic_nodes{
id traffic_types
}
}
Response:
{
"data": {
"vnic_nodes": [
{
"id": "9553390b-754e-45ef-8976-e63396d554ee",
"traffic_types": [
"overlay"
]
},
{
"id": "a00bb649-5032-462f-97e7-b6c4f5f1ac86",
"traffic_types": [
"overlay"
]
}
]
}
}
The NSX-API request/response below obtains the MTU size for the transport node. The MTU size for
networks that carry overlay traffic must be 1600 or greater, as they carry Geneve overlay traffic. The
N-VDS and TEP kernel interfaces should all have the same jumbo frame MTU size (that is, 1600 or greater).
Request:
{
vnic_nodes{
id mtu
}
}
Response:
{
"data": {
"vnic_nodes": [
{
"id": "9553390b-754e-45ef-8976-e63396d554ee",
"mtu": 1600
},
{
"id": "a00bb649-5032-462f-97e7-b6c4f5f1ac86",
"mtu": 1600
}
]
}
}
So the virtual interface (that is, the NSX VTEP) and the vswitch should have an MTU of 1600 as per the screenshot above.
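This MTU requirement can be checked mechanically against the query result. A minimal sketch; the node list simply mirrors the sample response above:

```python
# Minimum MTU needed on any interface that carries Geneve overlay traffic.
GENEVE_MIN_MTU = 1600

# Transport-node virtual interfaces, mirroring the vnic_nodes response above.
vnic_nodes = [
    {"id": "9553390b-754e-45ef-8976-e63396d554ee", "mtu": 1600},
    {"id": "a00bb649-5032-462f-97e7-b6c4f5f1ac86", "mtu": 1600},
]

# Flag any TEP interface whose MTU is too small for Geneve encapsulation.
undersized = [n["id"] for n in vnic_nodes if n["mtu"] < GENEVE_MIN_MTU]
if undersized:
    print("MTU too small for Geneve on:", undersized)
else:
    print("All TEP interfaces meet the Geneve MTU requirement")
```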
• Hashing_algorithm: Load balancing algorithm attribute of the host switch uplink LAG profile
An uplink profile is mapped in a Transport Node on the NSX-T side with policies for the links from the
hypervisor hosts to the NSX-T logical switches.
The links from the hypervisor hosts to the NSX-T logical switches can comprise LAG or teaming
configuration, which must be tied to physical NICs.
The NSX-API request/response below checks the logical switch uplink LAG profile attribute.
Request:
{
port_channel_nodes {
id label
} id port_channel_policy_nodes {
id label
}
}
Response:
{
"data": {
"port_channel_nodes": [
{
"id": "bd86666b-239d-4baa-8715-d73ca40d7100",
"label": null
},
{
"id": "ff5a5b6b-a103-471a-bbfd-ee3dc8c6e1c7",
"label": null
}
],
"id": "rack-based-blueprint-9dfa0044",
"port_channel_policy_nodes": [
{
"id": "59f60d47-ca48-441d-a4a4-e570af7bdb72",
"label": "PTEST-LAG"
}
]
}
}
The uplink profile label can also be matched with the one retrieved from the GUI in NSX-T Manager as below:
Below is the NSX-API request/response to check the LACP mode attribute for the uplink LAG profile.
Request:
{
port_channel_nodes {
id
} id port_channel_policy_nodes {
id mode
}
}
Response:
{
"data": {
"port_channel_nodes": [
{
"id": "bd86666b-239d-4baa-8715-d73ca40d7100"
},
{
"id": "ff5a5b6b-a103-471a-bbfd-ee3dc8c6e1c7"
}
],
"id": "rack-based-blueprint-9dfa0044",
"port_channel_policy_nodes": [
{
"id": "59f60d47-ca48-441d-a4a4-e570af7bdb72",
"mode": "active"
}
]
}
}
The NSX-API request/response below checks the load balancing algorithm attribute of the host switch uplink profile.
Request:
{
port_channel_nodes {
id
} id port_channel_policy_nodes {
id hashing_algorithm
}
}
Response:
{
"data": {
"port_channel_nodes": [
{
"id": "bd86666b-239d-4baa-8715-d73ca40d7100"
},
{
"id": "ff5a5b6b-a103-471a-bbfd-ee3dc8c6e1c7"
}
],
"id": "rack-based-blueprint-9dfa0044",
"port_channel_policy_nodes": [
{
"id": "59f60d47-ca48-441d-a4a4-e570af7bdb72",
"hashing_algorithm": "srcMac"
}
]
}
}
From the LAG profile screenshot above, it can be validated that it is using a source MAC address-based
load balancing algorithm.
Vnet
To obtain the respective transport type attribute of the transport zone, the below query can be used.
This mainly signifies the type of traffic for a transport zone, which can be overlay or VLAN type.
Request:
{
vnet_nodes {
id vn_type
} id
}
Response:
{
"data": {
"vnet_nodes": [
{
"id": "a3320cc6-601e-4a81-abe9-8464ae054f18",
"vn_type": "overlay"
},
{
"id": "6bdd7cd9-82eb-433d-8360-076d9daddd1b",
"vn_type": "vlan"
}
],
"id": "rack-based-blueprint-9dfa0044"
}
}
NSX-API Request/Response to check the display name of the N-VDS logical switch.
Request:
{
vnet_nodes {
id label
} id
}
Response:
{
"data": {
"vnet_nodes": [
{
"id": "241ce8e1-b31d-4093-a1a3-2f99a29ac2f9",
"label": "mahi-nsxt-kvm-ls"
},
{
"id": "fef41435-ac20-4c4d-81c0-b7f3059d977b",
"label": "zz-cvx-nsxt.cvx.2485377892354-2902673742_1000"
},
{
"id": "6bdd7cd9-82eb-433d-8360-076d9daddd1b",
"label": "zz-cvx-nsxt.cvx.2485377892354-2902673742_VLAN-100-UPLINK-PROFILE-LAG"
}
],
"id": "rack-based-blueprint-9dfa0044"
}
}
Below is the NSX-API request/response to check the VLAN ID attribute of a VLAN-based logical switch
for the transport zone.
Request:
{
vnet_nodes {
id vlan
} id
}
Response:
{
"data": {
"vnet_nodes": [
{
"id": "e0b29951-7739-4ecb-8c87-5725a61f669a",
"vlan": 123
},
{
"id": "cdd0c6d5-fecb-44d8-84c4-06c685e8ef14",
"vlan": 2000
},
{
"id": "fef41435-ac20-4c4d-81c0-b7f3059d977b",
"vlan": 1000
},
{
"id": "6bdd7cd9-82eb-433d-8360-076d9daddd1b",
"vlan": 200
}
],
"id": "rack-based-blueprint-9dfa0044"
}
}
Here, in the Apstra fabric, VLAN IDs 1000 and 2000 represent such VXLAN virtual networks for east-west
L2 stretched traffic. The bridge-backed logical switch on NSX-T should have the same VLAN IDs defined.
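This consistency check can be sketched with sets. The values mirror the sample response above; this is illustrative only, not an Apstra API client:

```python
# VLAN IDs of the stretched fabric VXLAN VNs (illustrative, per the text).
fabric_vn_vlans = {1000, 2000}
# VLAN IDs seen on the NSX-T logical switches (from the vnet_nodes response).
nsxt_switch_vlans = {123, 2000, 1000, 200}

# Any fabric VN VLAN missing on the NSX-T side indicates a mismatch.
missing = fabric_vn_vlans - nsxt_switch_vlans
print("consistent" if not missing else f"missing on NSX-T: {missing}")
```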
Request:
{
vnet_nodes {
id vni
} id
}
Response:
{
"data": {
"vnet_nodes": [
{
"id": "a3320cc6-601e-4a81-abe9-8464ae054f18",
"vni": 67595
},
{
"id": "b7923224-659b-4075-b69b-3edeb5726a32",
"vni": 67589
},
{
"id": "18b81c81-8ae1-46b1-83ca-05cd5b364a1c",
"vni": 67584
}
],
"id": "rack-based-blueprint-9dfa0044"
}
}
Endpoints (Virtual)
IN THIS SECTION
When you want more granularity in your security policies than virtual networks and routing zones can
provide, you'll use endpoints. Endpoints can be internal or external to the fabric. You can also combine
endpoints into groups.
Endpoints and security policies can be applied to Layer 2 IPv4 blueprints. (Blueprints with IPv6
applications enabled are not supported.) For more information about working with security policies, see
"Security Policies" on page 289.
From the blueprint, navigate to Staged > Virtual > Endpoints to go to endpoints. Click the name of a
section to go to its table view. You can create, clone, edit and delete endpoints. Then, when you create a
security policy you'll select the endpoints that you've created.
IN THIS SECTION
Parameter Description
Tags (optional) You can add tags for filtering or grouping beyond
membership in custom groups or virtual networks (for
example, “web server”, “db”, and so on).
3. Click Create to stage the endpoint addition and return to the table view. Validation is performed to
ensure that the IP address is within the L2 subnet of the virtual network and that no endpoint with
the same IP address is within the same routing zone.
IN THIS SECTION
Parameter Description
Tags (optional) You can add tags for filtering or grouping beyond
membership in custom groups or virtual networks (for
example, “web server”, “db”, and so on).
3. Click Create to stage the endpoint addition and return to the table view.
Enforcement points are supported on external-facing interfaces on border leaf devices only. They are
automatically created when you add external generic systems or external connectivity points to a
blueprint.
From the blueprint, navigate to Staged > Virtual > Endpoints > Enforcement Points to go to enforcement
points.
IN THIS SECTION
Parameter Description
(Continued)
Parameter Description
3. Click Create to stage the endpoint group addition and return to the table view.
Statistics
Policies
IN THIS SECTION
Security Policies
IN THIS SECTION
Endpoint connectivity is determined by reachability (the correct forwarding state in the network) and
security (connectivity must be permitted). Policies must be specified between L2 and L3 domains and
between more granular L2/L3 IP endpoints. Security policies allow you to permit or deny traffic
between the more granular endpoints. They control inter-virtual network traffic (ACLs on SVIs) and
external-to-internal traffic (ACLs in border leaf devices, external endpoints only). ACLs are rendered in
the appropriate device syntax and applied on enforcement points. Adding a new VXLAN Endpoint (for
example, adding a rack or adding a leaf to a virtual network) automatically places the ACL on the virtual
network interface. Adding a new generic system External Connectivity Point (ECP) (enforcement point)
automatically places ACL for external endpoint groups. You can apply security policies to Layer 2 IPv4-
enabled blueprints (IPv6 is not supported). For supported devices, refer to the Connectivity (from Leaf
Layer) table in the Feature Matrix in the Reference section.
Security policies consist of a source point (subnet or IP address), a destination point (subnet or IP
address), and rules to allow or deny traffic between those points based on protocol. Rules are stateless,
meaning responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice
versa).
Rules can include traffic logging. The ACL is configured to log matches using whatever mechanism is
supported on the device. Log configuration is local to the network device; it's not on the Apstra server.
Parsing these logs is outside the scope of this document.
For a bi-directional security policy, you would create two instances of the policy, one for each direction.
You can apply more than one policy to each subnet/endpoint, which means the ordering of rules has an
impact on behavior. An implicit hierarchy exists between routing zones, virtual networks, and IP
endpoints, so you must consider how policies are applied at different levels of hierarchy. When one
rule's match set contains the other's match set (full containment), the rules can conflict. You can set the
rules to execute more specific rules first (“exception” focus/mode) or less specific first (“override” focus/
mode).
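The full-containment test described above can be sketched as follows. This is an illustrative model only, not Apstra's implementation; the rule dictionaries and the contains() helper are hypothetical:

```python
import ipaddress

def contains(outer, inner):
    """True if `outer` rule's match set fully contains `inner`'s:
    its prefixes cover the inner prefixes and its port range covers
    the inner port range."""
    def net(p):
        return ipaddress.ip_network(p)
    return (net(inner["src"]).subnet_of(net(outer["src"]))
            and net(inner["dst"]).subnet_of(net(outer["dst"]))
            and outer["ports"][0] <= inner["ports"][0]
            and inner["ports"][1] <= outer["ports"][1])

# Less specific rule between two virtual-network subnets...
vn_rule = {"src": "10.0.1.0/24", "dst": "10.0.2.0/24", "ports": (1, 65535)}
# ...and a more specific rule between two IP endpoints inside them.
ep_rule = {"src": "10.0.1.10/32", "dst": "10.0.2.20/32", "ports": (443, 443)}

print(contains(vn_rule, ep_rule))  # full containment: potential conflict
```

When full containment is detected like this, the exception/override setting decides whether the more specific or the less specific rule is applied first.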
Rules can also overlap when there is a full containment situation between the rules but the action is the
same. In this case, there is potential for compression by using the less specific rule, and the more specific
rule becomes a “shadow” rule. When conflicting rules are detected, you are alerted and shown the
resolution.
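To make the containment check and the two ordering modes concrete, here is a small hypothetical sketch (not Apstra's actual implementation) that models a rule's match set as source/destination prefixes plus an inclusive destination-port range:

```python
import ipaddress

def is_contained(rule_a, rule_b):
    """Return True if rule_a's match set is fully contained in rule_b's.

    A rule is a (src_prefix, dst_prefix, (port_lo, port_hi)) tuple.
    Full containment means every packet matching rule_a also matches rule_b.
    """
    a_src, a_dst, (a_lo, a_hi) = rule_a
    b_src, b_dst, (b_lo, b_hi) = rule_b
    return (ipaddress.ip_network(a_src).subnet_of(ipaddress.ip_network(b_src))
            and ipaddress.ip_network(a_dst).subnet_of(ipaddress.ip_network(b_dst))
            and b_lo <= a_lo and a_hi <= b_hi)

def order_rules(rules, mode="exception"):
    """Order rules for evaluation.

    'exception' mode puts more specific rules first; 'override' mode
    puts less specific rules first. Specificity is approximated by the
    size of the match set (addresses covered times ports covered).
    """
    specific_first = sorted(
        rules,
        key=lambda r: (ipaddress.ip_network(r[0]).num_addresses
                       * ipaddress.ip_network(r[1]).num_addresses
                       * (r[2][1] - r[2][0] + 1)))
    return specific_first if mode == "exception" else list(reversed(specific_first))
```

With a broad deny and a narrow permit, "exception" mode evaluates the narrow rule first, while "override" mode lets the broad rule win.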
A few cases where conflicting rules are identified are described below:
• Rules in policies between different pairs of IP endpoints (even if one is common to both pairs) are
non-overlapping given that the pairs of IP addresses are different. This causes a disjoint match set
from a source IP / destination IP perspective (different “IP signature”).
• Rules in policies between the same IP endpoints can overlap fields (such as destination port); Apstra
software checks for this.
• Rules in policies between different pairs of virtual networks (even if one virtual network is common
to both pairs) are non-overlapping given that the pairs of subnets are different. This causes a disjoint
match set from the source IP / destination IP perspective (different “IP signature”).
• Rules in policies between the same virtual networks can overlap fields (such as destination port);
Apstra software checks for this.
• When IP endpoint groups are used, they result in a set of IP endpoint pairs so the above discussion
related to IP endpoint pairs applies.
• Rules in policies between a pair of IP endpoints and a pair of parent virtual networks have
containment from an IP signature perspective. Apstra software analyzes destination port / protocol
overlap and classifies it as full-containment or non-full-containment conflict.
• Rules in policies between a pair of IP endpoints and a pair of virtual networks where at least one
virtual network is not parent are non-conflicting (different "IP signature").
• Rules in policies between a pair of IP endpoints and an IP endpoint - virtual network pair where the
virtual network is a parent have full containment from an IP signature perspective; Apstra software
analyzes the remaining fields.
• Rules in policies that contain external IP endpoints or endpoint groups must be analyzed from an IP
signature perspective as external points are not bound by any hierarchical assumptions.
• A routing zone is a set of virtual networks and IP endpoints so the above discussions apply.
• Destination point is internal (internal endpoint, internal endpoint group, virtual network, routing
zone)
To keep composition tractable, both for analysis and for comprehending the resulting composition, it
may be useful to limit the number of security policies that can apply to any given endpoint/group.
Parameter Description
Description Optional description for the security policy.
Tags Optional tags.
Source / Destination Point Type • Internal Endpoint / Endpoint Group
• External Endpoint
• Virtual Network
• Routing Zone
Rule Action • Permit
• Permit & Log
• Deny
• Deny & Log
Rule Protocol • TCP
• UDP
• IP
• ICMP
From the blueprint, navigate to Staged > Policies > Security Policies > Policies to go to security policies.
You can create, clone, edit, and delete security policies.
Before creating security policies, create "routing zones" on page 201, "virtual networks" on page 177,
"endpoints and endpoint groups" on page 284, in that order. They are the basis for creating security
policies.
1. From the blueprint, navigate to Staged > Policies > Security Policies > Policies and click Create
Security Policy.
2. Enter a name. If you want the policy to be enabled, leave the default; otherwise, click the Enabled
toggle to disable it.
3. Select a source point type, and enter the source point.
4. Select a destination point type, and enter the destination point.
5. Click Add Rule, then enter a name and (optional) description.
6. Select an action from the drop-down list (Deny, Deny & Log, Permit, Permit & Log).
7. Select a protocol from the drop-down list (TCP, UDP, IP, ICMP).
8. If you selected TCP or UDP, enter a port (or port range) for source and destination. (If you created
"TCP/UDP port aliases" on page 775, they appear in the drop-down list).
9. To add another rule, click Add Rule and configure as above.
NOTE: To the right of the Add Rule button you can automatically create a blocklist-type
policy by clicking Deny All or an allowlist-type policy by clicking Permit All.
10. You can adjust the rule order by clicking the Move up or Move Down buttons in each rule.
11. Click Create to stage the policy and return to the table view.
Policy Errors
1. Check the security policy in the table view for errors, which are highlighted in red.
2. When you resolve errors, the policy is no longer highlighted red and the Errors field is blank.
1. From the left navigation menu, navigate to Staged > Policies > Security Policies > Policies and click
the Edit button for the policy to edit.
2. Make your changes.
3. Click Edit to stage the changes and return to the table view.
1. From the left navigation menu, navigate to Staged > Policies > Security Policies > Policies and click
the Delete button for the policy to delete.
2. Click Delete to stage the deletion and return to the table view.
You can find security policies that are applied to specific subnets or points.
1. From the blueprint, navigate to Staged > Policies > Security Policies > Policy Search.
2. Select a source point type and enter a subnet or source point, as applicable.
3. Select a destination point type and enter a subnet or destination point, as applicable.
From the blueprint, navigate to Staged > Policies > Security Policies > Conflicts to see any conflicts that
have been detected (Rule Conflicts column). Conflicts are resolved automatically whenever possible. By
default, more specific policies are applied before less specific ones, but you can change these security
policy settings. To see conflict details, click the icon in the Rule Conflicts column.
If the conflict was resolved automatically, Resolved by AOS appears in the Status column.
You can configure how you want to resolve conflicts and whether to permit or deny traffic.
1. From the blueprint, navigate to Staged > Policies > Security Policies > Settings.
2. Select options as appropriate.
• Conflict resolution
• Default action
SEE ALSO
Interface Policies
IN THIS SECTION
IEEE 802.1X is an IEEE Standard for network port-based Network Access Control. It is part of the IEEE
802.1 group of networking protocols. It provides an authentication mechanism to devices wishing to
attach to a LAN.
IEEE 802.1X defines the encapsulation of the Extensible Authentication Protocol (EAP) over IEEE 802,
which is known as "EAP over LAN" or EAPOL.
The authenticator acts as a security guard to a protected network. The supplicant (i.e., client device) is
not allowed access through the authenticator to the protected side of the network until the supplicant’s
identity has been validated and authorized. With 802.1X port-based authentication, the supplicant must
initially provide the required credentials to the authenticator - these will have been specified in advance
by the network administrator and could include a user name/password or a permitted digital certificate.
The authenticator forwards these credentials to the authentication server to decide whether access is to
be granted. If the authentication server determines the credentials are valid, it informs the authenticator,
which in turn allows the supplicant (client device) to access resources located on the protected side of
the network.
Extensions to 802.1X can also allow the authentication server to pass port-configuration options to the
authenticator. An example is using RADIUS value-pair attributes to pass a VLAN ID, allowing the
supplicant access to one of several VLANs.
You can manage 802.1X configuration on network devices with 802.1X server port authentication, a
collection of interface policy settings.
802.1X interface policy is supported only on Junos (as a Tech Preview) and Arista EOS physical network
devices. Junos Evolved does not support this feature at this time.
NOTE: 802.1X interface policy on Junos has been classified as a Juniper Apstra Technology
Preview feature. These features are "as is" and voluntary use. Juniper Support will attempt to
resolve any issues that customers experience when using these features and create bug reports
on behalf of support cases. However, Juniper may not provide comprehensive support services
to Tech Preview features.
For additional information, refer to the "Juniper Apstra Technology Previews" on page 1223 page
or contact "Juniper Support" on page 893.
This policy setting enables the network to require L2 servers in a blueprint to authenticate to a RADIUS
server before being provided access to the network.
The network operator may require clients to authenticate using EAP-TLS, Certificates, simple username
& password, or MAC Authentication bypass.
NOTE: Support for encryption protocols, certificates, and EAP is negotiated between the RADIUS
supplicant and the RADIUS server, and is not controlled by the switch.
After authentication occurs, a RADIUS server may optionally set a VLAN ID attribute at authentication
time to move the supplicant into a defined VLAN, known by a leaf-specific VLAN ID.
This section describes the necessary tasks to create Interface Policies to be used with 802.1X server
port authentication and dynamic VLAN allocation.
Common Scenarios
The following are some common scenarios for 802.1X port authentication.
2. Switch (Authenticator) mediates EAP negotiation between supplicant and Radius (Authentication
Server)
3. Upon authentication, Radius sends an Access-Accept message to the switch which includes the
VLAN number for the device
2. Switch (Authenticator) mediates EAP negotiation between supplicant and Radius (Authentication
Server)
3. Finding no credential for the supplicant, Radius sends an Access-Reject message to the switch
4. The switch adds the device port to a designated Fallback (aka AuthFail/Parking) VLAN
Device does not support 802.1X, but the device MAC address is configured in Radius
2. Switch (Authenticator) does not receive a reply to its EAP-Request Identity message, indicating no
802.1X support
4. Radius sends an Access-Accept message to the switch which includes the VLAN number for the
device
Device does not support 802.1X, and device MAC address is not configured in Radius
2. Switch (Authenticator) does not receive a reply to its EAP-Request Identity message, indicating no
802.1X support
6. The switch adds the device port to a designated Fallback (aka AuthFail/Parking) VLAN
1. Create virtual networks (e.g. Data VLAN, Fallback VLAN, Dynamic VLAN)
Create virtual networks for the interface policy per the table below. We suggest creating these virtual
networks with a consistent VLAN ID among all leaf devices (instead of using a resource pool). For more
information about creating VLANs, see "Virtual Networks" on page 182.
Parameter Description
Data VLAN (assigned to ports) Interfaces will have 802.1X configuration if at least one
VLAN is assigned to the port. If a port does not have any VLANs assigned, 802.1X configuration will
not be rendered on the interface; the interface will be configured as a routed port.
Dynamic VLAN (optional, assigned to leaf devices, not ports) The RADIUS server itself optionally
chooses the VLAN ID dynamically when the user (supplicant) is authenticated and authorized. Apstra
software does not have control over Dynamic VLAN assignment. This decision is made by RADIUS
configuration, not by the switch configuration.
Fallback VLAN (optional, assigned to leaf devices, not ports) Fallback VLAN can be assigned to the
user (supplicant) in case of authentication failure. For fallback, the VLAN is controlled by the switch
configuration.
Create the AAA server. For more information, see "AAA Servers (Blueprint)" on page 340.
You must create the policy before you can assign interfaces or fallback VLANs to it.
1. From the blueprint, navigate to Staged > Policies > Interface Policies and click Create Interface Policy.
• dot1x enabled - Requires ports to authenticate EAPOL before being given access to the network.
• Deny access - Completely blocks the port; no network access is permitted. No other parameters
are needed. For example, use this as a quarantine configuration to quickly deactivate ports that
may be infected.
• Multi-host (default) - Allows all MAC addresses on the port to authenticate after the first
successful authorization. After the first host deauthorizes, all MACs on the port are
deauthenticated.
• Single-host - Permits a single host to authenticate; all other MACs are not permitted.
5. If you want to enable MAC Auth Bypass on Arista EOS, check the Enabled? box. Enabling MAC auth
bypass allows a switch to send the MAC address to the RADIUS server if the port does not
authenticate within the authentication timeout period. MAC Auth bypass (MAB) requests are only
sent if the client does not respond to RADIUS requests, or if the client fails authentication.
NOTE: MAC Auth bypass must be configured along with 802.1X port control.
CAUTION: MAC auth bypass failure behavior may be different between switch
vendors and major switch models.
6. Enter Re-auth Timeout (optional) to configure a time period (seconds). Re-authentication timeout
causes the switch to request any clients to re-authenticate to the network after the timeout expires.
This also re-triggers MAC Auth bypass.
7. Click Create to create the interface policy and return to the table view.
1. From the blueprint, navigate to Staged > Policies > Interface Policies, select the interface policy name
and scroll down to the Assigned To section.
2. Assign ports and interfaces: Click leaf names to expand interfaces, then click ports and interfaces to
assign them. Note that you cannot assign ports that are assigned to conflicting policies.
3. Assign fallback VN: Assigning the fallback virtual network is leaf-specific. To re-use the fallback on
multiple leaf devices, you have to assign it to each leaf. Any VN that is assigned to the leaf may be
used as a fallback virtual network - there are no restrictions.
4. After the policy is configured, the settings are visible, including the interfaces those settings apply
to.
NOTE: AAA, Dot1x, and Dot1x interface configurations are now pushed to the leaf devices.
The following is part of a sample config rendered for an Arista EOS switch.
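The rendered sample itself is not reproduced here; as a hedged illustration only, 802.1X configuration on Arista EOS generally takes the following shape (the server address, VLAN ID, interface name, and timer value are assumptions, not actual Apstra-rendered output):

```
! hypothetical AAA and 802.1X configuration sketch
radius-server host 10.0.0.10 key <radius-secret>
aaa authentication dot1x default group radius
dot1x system-auth-control
!
interface Ethernet1
   switchport access vlan 100
   dot1x pae authenticator
   dot1x port-control auto
   dot1x host-mode multi-host
   dot1x reauthentication
   dot1x timeout reauth-period 3600
```

The interface-level lines correspond to the interface policy settings described above (port control, host mode, and re-auth timeout); the AAA lines correspond to the AAA server configured for the blueprint.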
Routing Policies
IN THIS SECTION
Parameter Description
Extra Import Routes (user-defined) • Prefix - IPv4 or IPv6 network address (format:
network/prefixlen) or IP address (interpreted as /32
network address).
Export Policy • Spine Leaf Links - Exports all spine-leaf (fabric) links
within a VRF. EVPN routing zones do not have
spine-leaf addressing, so this generated list may be
empty. For routing zones of type Virtual L3 Fabric,
subinterfaces between spine-leaf are included.
Extra Export Routes (user-defined) User-defined export routes. These policies are additive.
To advertise extra routes only, unselect all export
policies.
Aggregate Prefixes If you have routing zones associated with your routing
policy, and aggregate prefixes are supported on the
platform (see the "4.2.0 feature matrix" on page 989)
you can specify aggregate prefixes. These are the BGP
aggregate routes to be imported into the routing zone
(VRF) on all border switches. The aggregated routes
are sent to all generic system peers in a routing zone
(VRF).
Expect Default IPv4 Route To add the expectation that the default route is used in
the default routing zone, check the box when you
create the policy. (This field applies to the default route
in the default routing zone only.) Checking this box
does not change any configuration; it generates the
expectation and raises an anomaly when the default
route is not present.
Expect Default IPv6 Route To add the expectation that the default route is used in
the default routing zone, check the box when you
create the policy. (This field applies to the default route
in the default routing zone only.) Checking this box
does not change any configuration; it generates the
expectation and raises an anomaly when the default
route is not present.
Associated Routing Zones Lists any routing zones that are associated with the
routing policy.
Associated Protocol Endpoints Lists any protocol endpoints that are associated with
the routing policy.
From the blueprint, navigate to Staged > Policies > Routing Policies to go to routing policies in the
blueprint. A default routing policy is associated with the default routing zone. You cannot change the
default routing policy, but you can create, clone, edit, and delete other routing policies as described
below.
1. From the blueprint, navigate to Staged > Policies > Routing Policies and click Create Routing Policy.
2. Configure the policy. For parameter details, see the Routing Policy Overview.
3. Click Create to stage the policy addition and return to the table view.
1. From the blueprint, navigate to Staged > Policies > Routing Policies and click the Edit button for the
policy to edit.
2. Make your changes.
3. Click Update (bottom-right) to stage the policy change and return to the table view.
1. From the blueprint, navigate to Staged > Policies > Routing Policies and click the Delete button for
the policy to delete.
2. Click Delete to stage the policy removal and return to the table view.
IN THIS SECTION
Routing zone constraints allow you to constrain server-facing interfaces to specific routing zones. This
prevents Day-2 operators from connecting a server to the wrong network and assures that a given
server is never added to the wrong network. The constraint can be defined in various ways, such as a
list of allowed VRFs, a list of excluded VRFs, a maximum number of VRFs allowed, and so on. Once the
constraint is defined, you can enforce it on server-facing interfaces using connectivity templates of the
type Routing Zone Constraint.
If you want to constrain more than one routing zone to a single port, you can group them, then specify
the group as a constraint when you create the routing zone constraint policy.
1. From the blueprint, navigate to Staged > Virtual > Routing Zone Groups and click Create Routing
Zone Group.
2. Enter a group name and (optional) tags.
3. In the Routing Zone drop-down list, select a routing zone to add to the group and click Add. The
routing zone is added to the Members list.
4. Repeat the previous step until you’ve added all the routing zones that you want in the group.
5. Click Create to create the group and return to the table view.
You can create a routing zone constraint policy, then later when you create a connectivity template you
can apply the policy to an application point. Some examples of how you could constrain VRFs include:
• One VRF maximum
1. From the blueprint, navigate to Staged > Policies > Routing Zone Constraints and click Create
Routing Zone Constraints.
2. Enter a name and (optional) maximum number of routing zones that the application point can be part
of.
3. Set the (optional) Routing Zones List Constraint.
a. Allow - only allow the specified routing zones (add specific routing zones to allow)
b. Deny - denies allocation of specified routing zones (add specific routing zones to deny)
If you need to, you can change or delete the policy after you've created it.
• If you edit the policy to increase the number of routing zones, you don't need to unassign
participating ports from the restriction.
• If you edit the policy to reduce the number of routing zones, ensure that all participating ports are in
compliance with the new restrictions before you save. Otherwise, you will receive an error.
• You can delete a constraint policy to free up any restrictions on the participating ports. These ports
should behave as if the constraint was never applied.
When you want to apply the constraint to an application point, add the Routing Zone Constraint
primitive to the connectivity template and specify the routing zone or routing zone group. For more
information about connectivity templates, see "Connectivity Templates" on page 348.
IN THIS SECTION
In Apstra version 4.2.0 and later, resources used for routing zones are already optimized (enabled) by
default. This means VRF configuration is rendered only on leafs where at least one server-endpoint is a
member of a virtual network in that routing zone.
In Apstra versions earlier than 4.2.0, all routing zones required resources. When you upgrade an Apstra
server from a pre-4.2.0 version to version 4.2.1 or later, optimization is disabled by default. Since
enabling optimization is disruptive, you must manually enable it yourself in this case. (Remember, you
can't upgrade to major releases, such as 4.2.0.)
1. From the blueprint, navigate to Staged > Policies > Routing Zone Policy and click Modify Settings.
2. In the Modify footprint optimization for routing zones dialog, select Disable or Enable, as
appropriate, then click Save Changes.
• Disabled - Resources are required for all routing zones (active and inactive).
• Enabled - Resources are required only on active routing zones (at least one server-endpoint is a
member of a virtual network in that routing zone).
RELATED DOCUMENTATION
IN THIS SECTION
IN THIS SECTION
By default, ESI MAC msb (most significant byte) is set to 2 on all blueprints. Every connected Apstra
blueprint must have a unique msb to prevent service-impacting issues. Before creating remote EVPN
gateways, change the ESI MAC msb accordingly. (You can leave one of them at the default value.)
Apstra is programmed to assign a unique ESI MAC address starting with the value 00.00.00.00.00.01.
This feature allows you to manually configure the most significant byte (MSB) of the MAC address.
Updating this value results in the regeneration of all ESI MACs in the blueprint. This is necessary to
address the data center interconnect (DCI) use case requirement where ESI values must be unique
across multiple fabrics (blueprints). For example, if you have data centers DC1, DC2, and DC3 all
managed by Apstra and connected via Apstra DCI, by default, each of them will have the same internally
generated ESI MAC. You would use this feature to provide a unique value to DC2 and DC3.
RELATED DOCUMENTATION
IN THIS SECTION
Implementation | 319
Historically, enterprises have leveraged Data Center Interconnect (DCI) technology as a building block
for business continuity, disaster recovery (DR), or Continuity of Operations (COOP). These service
availability use cases primarily relied on the need to connect geographically separated data centers with
Layer 2 connectivity for application availability and performance.
With the rise of highly virtualized Software-Defined Data Centers (SDDC), cloud computing, and more
recently, edge computing, additional use cases have emerged:
• Colocation Expansion: Extend compute and storage resources to colocation data center facilities.
• Resource Pooling: Share and shift applications between data centers to increase efficiency or
improve end-user experience.
• Rapid Scalability: Expand capacity from a resource-limited location to another facility or data center.
• Legacy Migration: Move applications and data off older and inefficient equipment and architecture to
more efficient, higher-performing, and cost-effective architecture.
With Apstra software, you can deploy and manage a vendor inclusive DCI solution that is simple,
flexible, and Intent-Based. Apstra utilizes the standards-based MP-BGP EVPN with VXLAN, which has
achieved broad software and hardware adoption in the networking industry. You can choose from a vast
selection of cost-effective commodity hardware from traditional vendors to white-box ODMs and
software options ranging from conventional vendor integrated Network Operating Systems (NOS) to
disaggregated open source options.
EVPN VXLAN is a standards-based (RFC-7432) approach for building modern data centers. It
incorporates both data plane encapsulation (VXLAN) and a routing control plane (MP-BGP EVPN
Address Family) for extending Layer 2 broadcast domains between hosts as well as Layer 3 routed
domains in spine-leaf networks. Relying on a pure Layer 3 underlay for routing of VXLAN tunneled
traffic between VXLAN Tunnel Endpoints (VTEPs), EVPN introduces a new address family to the MP-
BGP protocol family and supports the exchange of MAC/IP addresses between VTEPs. The
advertisement of endpoint MACs and IPs, as well as "ARP/ND-suppression", eliminates the need for a
great majority of Broadcast/Unknown/Multicast (BUM) traffic and relies upon ECMP unicast routing of
VXLAN, from Source VTEP to Destination VTEP. This ensures optimal route selection and efficient load-
sharing of forwarding paths for overlay network traffic.
Just as EVPN VXLAN works within a single site for extending Layer 2 between hosts, the DCI feature
enables Layer 2 connectivity between sites. The Apstra DCI feature enables the extension of Layer 2 or
Layer 3 services between data centers for disaster recovery, load balancing of active-active sites, or
even for facilitating the migration of services from one data center to another.
Limitations:
• IPv6 is not supported on Remote EVPN Gateways. (Actual EVPN routes can contain IPv6 Type 2 and
Type 5.)
IN THIS SECTION
You can implement Data Center Interconnect using the following methods:
• Over the Top (OTT)
• Gateways (GW)
• ASBR WAN Edge
For assistance with selecting the best option for your organization, consult your Apstra Solutions
Architect (SA) or Systems Engineer (SE).
• You can extend Apstra DCI to other Apstra-managed data centers, non-Apstra managed data centers,
or even to legacy non-spine-leaf devices.
DCI "Over the Top" is a transparent solution, meaning EVPN routes are encapsulated into standard IP
and hidden from the underlying transport. This makes the extension of services simple and flexible and
is often chosen because data center teams can implement it with little to no coordination with WAN or
Service Provider groups. This reduces the implementation times and internal company friction. However,
the tradeoff is scalability and resilience.
Gateway (GW)
Building upon the Apstra Remote EVPN Gateway capability, you can optionally specify that the Remote
EVPN Gateway is an external generic system (tagged as an external router) in the same site, thus
extending the EVPN attributes to said gateway. This solution creates a fault domain per site, preventing
failures from affecting convergence in remote sites and creating multiple fault domains. IP/MAC
endpoint tables for remote sites are processed and held in state on a generic system (tagged as external
router) gateway. You can also implement WAN QoS and security, along with optimizations that the
transport technology makes available (MPLS TE for example). However, this solution is more
operationally complex, requiring additional hardware and cost.
Using the Apstra Remote EVPN Gateway capability, you can optionally specify that the Remote EVPN
Gateway is an ASBR WAN Edge Device. This end-to-end EVPN enables uniform encapsulation and
removes the dedicated GW requirement. It is operationally complex but has greater scalability as
compared to both "DCI Using Gateway" and "Over the Top".
Implementation
IN THIS SECTION
You can extend routing zones and virtual networks (VN) to span across Apstra-managed blueprints
(across pods) or to remote networks (across data centers) that Apstra doesn't manage. This feature
introduces the EVPN Gateway (GW) role, which could be a switch that participates in the fabric or
RouteServer(s) on a generic system (tagged as a server) that is connected to the fabric.
• Span Layer 3 isolation domains (VRFs / routing zones) to multiple Apstra-managed pods (blueprints)
or extend to remote EVPN domains.
• Help extend EVPN domain from Apstra to Apstra-managed and Apstra to unmanaged pods.
• No VXLAN traffic termination on the spine devices - connect external generic systems (tagged as
external routers) on spine devices. This supports IPv4 (underlay) external connectivity. Here, spine
devices don't need to terminate VXLAN traffic, unlike border leaf devices, when connected to
external generic systems (tagged as external routers). In a nutshell, you can use this to exchange
IPv4 routes to remote VTEPs (in the default routing zone/VRF); only Layer 3 connectivity is required.
When BGP EVPN peering is done "over the top", the Data Center Gateway (DC-GW) is a pure IP
transport function and BGP EVPN peering is established between gateways in different data centers.
The next sections describe the procedures for interconnecting two or more BGP-based Ethernet VPN
(EVPN) sites in a scalable fashion over an IP network. The motivation is to support extension of EVPN
sites without having to rely on typical Data Center Interconnect (DCI) technologies like MPLS/VPLS,
which are often difficult to configure, sometimes proprietary, and likely legacy in nature.
"Over the Top" is a simple solution that only requires IP routing between data centers and an adjusted
MTU to support VXLAN encapsulation between gateway endpoints. In such an implementation, EVPN
routes are extended end-to-end via MP-BGP between sites. Multi-hop BGP is enabled with the
assumption that there will be multiple Layer 3 hops between sites over a WAN. Otherwise, the default
TTL decrements to 0 and packets are discarded before reaching the remote router. Apstra
automatically renders the needed configuration to address these limitations.
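As a hedged sketch of the kind of adjustment involved (the group name, TTL value, and peer addresses are assumptions, not the exact configuration Apstra renders), multihop EVPN peering on a Junos gateway might look like:

```
set protocols bgp group EVPN-DCI type external
set protocols bgp group EVPN-DCI multihop ttl 10
set protocols bgp group EVPN-DCI local-address 192.0.2.11
set protocols bgp group EVPN-DCI family evpn signaling
set protocols bgp group EVPN-DCI neighbor 198.51.100.21 peer-as 65002
```

The multihop statement raises the TTL so EVPN routes survive the multiple Layer 3 hops across the WAN; the EVPN address family carries the Type 2 and Type 5 routes between sites.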
This design merges the separate EVPN-VXLAN domains and VXLAN tunnels between sites. Merging of
previously separate EVPN domains in different sites realizes the benefit of extending Layer 2 and Layer
3 (VRF) services between sites, but also renders the sites as a single fault domain. So a failure in one site
is necessarily propagated. Also, anytime you stretch Layer 2 across the WAN between sites, you are also
extending the flood domain and along with it, all broadcast traffic over your costly WAN links. At this
time, this solution does not offer any filtering or QoS.
NOTE: When separate Apstra blueprints manage individual sites (or when only one site is Apstra-
managed) you must create and manage extended routing zones (VRFs) and virtual networks
(Layer 2 and/or Layer 3 defined VLANs/subnets) independently in each site. You must manually
map VRFs and VNs between sites (creating administrative overhead).
NOTE: If you’re setting up P2P connections between two data centers (blueprints) in the same
Apstra controller, each blueprint must pull resources from different IP pools to avoid build errors.
To do this, create two IP pools with the same IP subnet, but with different names.
This "Over the Top" solution is the easiest to deploy, requires no additional hardware and introduces no
additional WAN config other than increasing the MTU. It is the most flexible and has the lowest barrier
to entry. However, the downside is that there is a single EVPN control plane and a routing anomaly in
one site will affect convergence and reachability in the other site(s). The extension of Layer 2 flood
domains also implies that a broadcast storm in one site extends to the other site(s).
With any DCI implementation, careful resource planning and coordination is required. Adding more sites
requires an exponential increase in such planning and coordination. VTEP loopbacks in the underlay
need to be leaked. VNIDs must match between sites and in some cases, additional Route Targets (RTs)
must be imported. This is covered in detail later in this document.
VXLAN Network IDs (VNIDs) are a part of the VXLAN header that identify unique VXLAN tunnels, each
of which are isolated from the other VXLAN tunnels in an IP network. Layer 3 packets can be
encapsulated into a VXLAN packet or Layer 2 MAC frames can be encapsulated directly into a VXLAN
packet. In both cases, a unique VNID is associated with either the Layer 3 subnet, or the Layer 2
domain. When extending either Layer 3 or Layer 2 services between sites, you are essentially stitching
VXLAN tunnels between sites. VNIDs therefore need to match between sites.
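The "VNIDs must match" rule above lends itself to a simple pre-flight check. The sketch below is illustrative only (hypothetical VNID inventories, not an Apstra API):

```python
def find_vnid_mismatches(site_a_vnids, site_b_vnids):
    """Return VNIDs present in only one site.

    Extended Layer 2/Layer 3 services stitch VXLAN tunnels between sites,
    so the VNIDs on both ends must match."""
    only_a = sorted(set(site_a_vnids) - set(site_b_vnids))
    only_b = sorted(set(site_b_vnids) - set(site_a_vnids))
    return only_a, only_b

# Hypothetical inventories of Layer 3 VNIDs exported by each data center.
dc1 = {90000, 90001, 90002}
dc2 = {90000, 90002, 90003}
print(find_vnid_mismatches(dc1, dc2))  # ([90001], [90003])
```

Any VNID reported on either side would need to be renumbered, or its route targets imported manually, before the extended service can work.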
It is important to understand that a particular VNID is associated with only one VRF (a routing zone in Apstra terminology); VNIDs exist within, and are tied to, a VRF. For Layer 3 services, the
stitching, or extending, of each VNID is done with the export and import of RTs within a routing zone
(VRF). Layer 3 subnets (routes) are identified via RTs. All VNIDs are exported automatically at the EVPN
gateway (edge) towards the WAN. Conversely, RTs of the same value are automatically imported at the
EVPN gateway (edge) coming into the fabric. So if you coordinate the Layer 3 VNIDs at one site to match those at the other site, the RTs match as well.
In the image above, no additional export or import is required. Everything is automatically exported (Export All) and, because the RTs match, everything is automatically imported.
However, if a VNID in DC1 is different from a VNID in DC2, then you must import the RTs respectively.
Each respective gateway still automatically imports RTs of the same value. In the example below, an
additional step of manually adding the RTs from the other site is required.
A virtual network can be a pure Layer 2 service (Layer 3 anycast gateway is not instantiated). It can be rack-local (a VLAN on server-facing ports contained within a rack) or VXLAN (select the racks between which to extend the Layer 2 flood and broadcast domain). This Layer 2 domain has its own VNID, and the MAC frames (as opposed to IP packets) are encapsulated into the VXLAN header with the VNID of the Layer 2 domain.
The same principles apply in that all VNIDs are exported at the EVPN gateway (in this case Type-2
routes/MAC addresses), and matching RTs are automatically imported. However, the location of
importing and exporting RTs is not at the routing zone level, but instead at the virtual network itself.
Apstra Workflow
IN THIS SECTION
Apstra uses the concept of an "EVPN Gateway". This device can theoretically be a leaf, spine, or superspine fabric node, as well as the DCI device. EVPN gateways separate the fabric side from the network that interconnects the sites and mask the site-internal VTEPs.
In Apstra, an EVPN Gateway is a device that belongs to and resides at the edge of an EVPN fabric which
is also attached to an external IP network. In an Apstra EVPN blueprint, this is always a border-leaf
device. The EVPN Gateway of one data center establishes BGP EVPN peering with a reciprocal EVPN
gateway, or gateways, in another data center. The "other" EVPN gateway is the "Remote EVPN
Gateway" in Apstra terminology. The Local EVPN Gateway is assumed to be one of the Apstra-managed
devices in the blueprint, and is selected when creating the "Remote EVPN Gateway". The Local EVPN
Gateway will be the border-leaf switch with one or more external routing connections for traffic in and
out of the EVPN Clos fabric.
Due to this capability, you can configure a Local EVPN Gateway (always an Apstra-managed switch) to
peer with a non Apstra-managed, or even a non Spine-Leaf device, in another DC. The EVPN Gateway
BGP peering is used to carry all EVPN attributes from inside a pod, to outside the pod. In the Apstra
environment, each blueprint represents a data center. If two or more sites are under Apstra
management, you still must configure each site to point to the "Remote EVPN Gateway(s)" in other sites.
We recommend that you create multiple, redundant EVPN Gateways for each data center. There is also
currently a full mesh requirement between EVPN gateways, although in future releases this requirement
will be removed.
The underlay reachability to VTEP IP addresses, or an equivalent summary route, must be established
reciprocally. Each site must advertise these VTEP loopbacks from within the default routing zone into
the exported BGP (IPv4) underlay advertisements. Loopbacks in the routing policy are enabled by
default.
CAUTION: By default, ESI MAC msb (most significant byte) is set to 2 on all blueprints.
Every Apstra blueprint that's connected must have a unique msb to prevent service-
impacting issues. Before creating gateways, "change ESI MAC msb" on page 328
accordingly. (You can leave one of them at the default value.)
A Remote EVPN Gateway is a logical function that you can instantiate anywhere, on any device that supports BGP in general and the L2VPN/EVPN AFI/SAFI specifically. To establish a BGP session with an EVPN gateway, IP connectivity, including reachability to TCP port 179 (the IANA-assigned BGP port), must be available.
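The TCP port 179 reachability requirement can be sanity-checked with a short sketch using only the Python standard library (the peer address below is a documentation placeholder, not a real gateway):

```python
import socket

def bgp_port_reachable(peer_ip, timeout=3.0):
    """Return True if a TCP connection to port 179 (the IANA-assigned BGP
    port) on the would-be EVPN gateway peer succeeds within the timeout."""
    try:
        with socket.create_connection((peer_ip, 179), timeout=timeout):
            return True
    except OSError:
        return False

# 192.0.2.1 is a TEST-NET-1 documentation address, so this normally fails.
print(bgp_port_reachable("192.0.2.1", timeout=0.5))
```

A successful TCP connection does not prove the BGP session will establish (ASNs, AFI/SAFI, and policy must also agree), but a failed one guarantees it cannot.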
NOTE: For resilience, we recommend having at least two remote gateways for the same remote
EVPN domain.
1. From the blueprint, navigate to Staged > Virtual > Remote EVPN Gateways and click Create Remote
EVPN Gateway.
2. In the dialog that opens, fill in the following information for the remote EVPN gateway.
When extending L2 networks between data center fabrics, you have the option to exchange only EVPN Type 5 (RT-5) prefixes (the interface-less model). This is useful when there is no need to exchange all host routes between data center locations. It reduces the requirements for the routing information base (RIB), also known as the routing table, and the forwarding information base (FIB), also known as the forwarding table, on DCI equipment.
3. Select the Local Gateway Nodes. These are the devices in the blueprint that will be configured with a
Local EVPN Gateway. You can select one or more devices to peer with the configured remote EVPN
gateway. You can use the query function to help you locate the appropriate nodes. We recommend
using multiple border-leaf devices which have direct connections to external generic systems (tagged
as external routers).
4. Click Create to stage the gateway and return to the table view.
5. When you are ready to deploy the devices in the blueprint, commit your changes.
We recommend using multiple remote EVPN gateways. To configure additional remote EVPN gateways,
repeat the steps above.
If you are configuring the Remote EVPN Gateway(s) to another Apstra blueprint, you must configure and
deploy the remote EVPN gateway(s) separately in that blueprint.
Once the change is deployed, Apstra monitors the BGP session for the remote EVPN gateways. To see
any anomalies from the blueprint, navigate to Active > Anomalies.
RT (route-target) import/export policies on devices that are part of an extended service govern EVPN route installation. Specify route-target policies to add the import and export route targets that Apstra uses for routing zones/VRFs. You do this when you create routing zones. Navigate to Staged > Virtual > Routing
Zones and click Create Routing Zone. For more information, see "Routing Zones" on page 199.
NOTE: The generated default route-target for routing zones is <L3 VNI>:1. You can't change this
default value.
To confirm that the correct routes are received at the VTEP, make sure L3 VNIs and route targets are identical between the blueprint and the remote EVPN domains.
You can add additional import and export route-targets that Apstra uses for virtual networks.
NOTE: The default route target that Apstra generates for virtual networks is <L2 VNI>:1. You
can’t alter this.
For intra-VNI communication, the L2 VNI-specific RT is used. The import RT determines which received routes are applicable to a particular VNI. To establish connectivity, Layer 2 VNIs must match between the blueprint and the remote domains, and SVI subnets must be identical across domains.
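Because the generated default route target is always `<VNI>:1` (per the notes above), matching VNIs imply matching RTs. A quick sketch (hypothetical helper, not an Apstra API) that derives and compares the default RTs two domains would use:

```python
def default_route_target(vni):
    """Apstra's generated default route target is '<VNI>:1' (see the notes
    above); it applies to routing zones (L3 VNI) and virtual networks
    (L2 VNI) alike and cannot be changed."""
    return f"{vni}:1"

def rts_match(local_vnis, remote_vnis):
    """RTs of the same value are imported automatically at the gateway,
    so matching VNIs imply matching route targets."""
    local = {default_route_target(v) for v in local_vnis}
    remote = {default_route_target(v) for v in remote_vnis}
    return local == remote

print(rts_match({10001, 10002}, {10001, 10002}))  # True
print(rts_match({10001}, {10005}))  # False
```

When the comparison fails, the mismatched RTs must be added manually as extra import route targets on each side, as described earlier.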
Remote EVPN gateways are represented on the topology view as cloud elements with dotted line
connections to the blueprint elements with which BGP sessions are established as shown in the image
below. (Image below is slightly different from more recent versions.)
SEE ALSO
CAUTION: Updating the Most Significant Byte (msb) value regenerates all existing ESI
MACs in the blueprint.
To enable ESI (EVPN) LAG multihoming, an Ethernet segment identifier (ESI) is mandatory. ESIs identify
ESI LAGs. Apstra automatically generates ESI MAC addresses using most significant byte (msb) values.
Configuration of the ESI value is rendered as 10 octets. The first octet is 0. The second octet is the most
significant byte value. To ensure that multicast MACs are not generated, the second octet must be an
even number between 0 and 254. The second through sixth octets are used as the LACP system ID.
Apstra is programmed to assign a unique ESI MAC address starting with the value 00.00.00.00.00.01.
The msb value in each Apstra blueprint defaults to the value 2. If you aren't connecting blueprints (IP
fabrics) you can leave the value as is. You can manually configure the most significant byte (msb) of the
MAC address. Updating this value results in the regeneration of all ESI MACs in the blueprint. This is
necessary to address the data center interconnect (DCI) use case requirement where ESI values must be
unique across multiple fabrics (blueprints). For example, if you have data centers DC1, DC2, and DC3 all
managed by Apstra and connected via Apstra DCI, by default, each of them will have the same internally
generated ESI MAC. You would use this feature to provide a unique value to DC2 and DC3.
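The msb rules above can be sketched as follows. The validation mirrors the stated constraint (an even number between 0 and 254); the MAC layout is an illustrative assumption, since Apstra's internal allocator may lay out the ESI differently:

```python
def validate_esi_msb(msb):
    """Enforce the documented constraint: the msb must be an even number
    between 0 and 254 so that multicast MACs are never generated."""
    if msb % 2 != 0 or not 0 <= msb <= 254:
        raise ValueError("ESI MAC msb must be an even number between 0 and 254")
    return msb

def esi_mac(msb, counter):
    """Illustrative layout only: place the msb in the first octet of a
    6-octet MAC and a sequence number in the remaining octets."""
    validate_esi_msb(msb)
    octets = bytes([msb]) + counter.to_bytes(5, "big")
    return ":".join(f"{o:02x}" for o in octets)

print(esi_mac(2, 1))  # 02:00:00:00:00:01
print(esi_mac(4, 1))  # 04:00:00:00:00:01
```

Giving each connected blueprint a distinct msb (as in the DC1/DC2/DC3 example above) guarantees the generated ESI MACs never collide across fabrics.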
1. From the blueprint, navigate to Staged > DCI > Settings and click Modify Settings.
2. Change the ESI MAC most significant byte to an even number between 0 and 254 that is different
from the msbs for all connected blueprints.
3. Click Save Changes to save your changes and return to the DCI Settings view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
RELATED DOCUMENTATION
Catalog
IN THIS SECTION
Configlets | 335
Tags | 344
Logical Devices
IN THIS SECTION
The logical devices in the blueprint catalog come from the template that was used to create your blueprint.
1. From the blueprint, navigate to Staged > Catalog > Logical Devices and click the Export to global
catalog button for the logical device to export (in the Actions column on the right side).
• Export existing - to create interface maps for this logical device in the global catalog that you can
re-import into the blueprint. If you already have a logical device with the same name in the global
catalog, you can’t use this option. When you export a logical device with this option, the logical
device ID and logical device name are the same.
3. Click Export to export the logical device and return to the table view.
RELATED DOCUMENTATION
Interface Maps
IN THIS SECTION
1. Make sure the "interface map" on page 729 that you want to import is in the global catalog.
2. From the blueprint, navigate to Staged > Catalog > Interface Maps and click Import Interface Map.
3. Select a logical device and an interface map from the drop-down lists. A preview of your selection
appears.
4. Click Import Selected Interface Map to stage the import and return to the table view.
RELATED DOCUMENTATION
1. From the blueprint, navigate to Staged > Catalog > Interface Maps and click the Delete button for the
interface map to delete (in the Actions column on the right side).
2. Click Delete to stage the deletion and return to the table view.
RELATED DOCUMENTATION
Property Sets
IN THIS SECTION
1. Make sure the "property set" on page 773 that you want to import is in the design catalog.
2. From the blueprint, navigate to Staged > Catalog > Property Sets and click Import Property Set.
3. From the drop-down list, select a property set from the design catalog, then click Import Property Set
to stage the import and return to the table view.
RELATED DOCUMENTATION
If a property set that's used in a blueprint is updated in the design (global) catalog, a message appears in
the blueprint catalog stating that the property set in the blueprint catalog is Different from global
catalog. If you want the blueprint to use the updated property set, re-import it.
1. From the blueprint, navigate to Staged > Catalog > Property Sets.
2. Click the Re-import button for the "stale" property set, then click Re-import Property Set to stage the
update and return to the table view.
RELATED DOCUMENTATION
As long as a property set is not used in a configlet, you can unassign it from a device at any time. If it is
used in a configlet, a build error occurs and you won't be able to commit the change until you remove
the property set from the configlet which resolves that build error.
1. From the blueprint, navigate to Staged > Catalog > Property Sets and click the Delete button for the
property set to delete.
2. Click Delete to stage the deletion and return to the summary table view.
Configlets
IN THIS SECTION
From the blueprint, navigate to Staged > Catalog > Configlets to go to blueprint configlets. Configlets are
vendor-specific. Apstra software automatically ensures that configlets of a specific vendor aren't applied
to devices from a different vendor. You can import, edit, and delete configlets from the blueprint catalog.
NOTE: For information about service config deployment, see "Anomalies (Service)" on page 502.
Import Configlet
SUMMARY
You've created a configlet in the design (global) catalog. Now you'll import it into your blueprint catalog,
set conditions and specify where to apply it in the blueprint.
1. Go to the configlets catalog in the blueprint. You can get to it in a couple of ways:
• From the blueprint, navigate to Staged > Catalog > Configlets to go to the configlets catalog.
• Or, navigate to Staged > Physical > Build > Configlets, then click Manage Configlets to go to the
configlets catalog.
5. For all configlet types, specify the application scope. For example, you may want to apply the
configlet to all generic systems that you've tagged as firewalls. Instead of listing all applicable generic
systems, you can just add one tag to the scope. You can define the scope in a couple of different
ways:
• Select Role, Name, Hostname or Tags from the drop-down list, then select the check boxes that
apply. (You can "apply tags to interfaces" on page 143 as of Apstra version 4.2.0.) To add an
additional definition, click +Add.
6. Click Import Configlet to stage the configlet and return to the configlet catalog.
RELATED DOCUMENTATION
IN THIS SECTION
1. From the blueprint, navigate to Staged > Catalog > Configlets and click the Edit button for the
configlet to edit.
2. Make your changes to the configlet scope. The options are the same as for "importing a configlet" on
page 336.
3. Click Update to stage the update and return to the table view.
1. "Edit" on page 772 or "create" on page 771 a configlet in the design (global) catalog.
2. "Delete" on page 339 the configlet from the blueprint catalog.
3. "Import" on page 336 the configlet into the blueprint catalog from the design (global) catalog.
4. Commit the changes.
SEE ALSO
When you delete a configlet, it's removed from all devices within its scope.
1. From the blueprint, navigate to Staged > Catalog > Configlets and click the Delete button for the
configlet to delete.
2. Click Delete to stage the deletion and return to the table view.
RELATED DOCUMENTATION
AAA Servers
IN THIS SECTION
IN THIS SECTION
Parameter Description
(Continued)
Parameter Description
Hostname
Auth Ports
From the blueprint, navigate to Staged > Catalog > AAA Servers to go to the AAA servers catalog. You
can create, clone, edit, and delete AAA servers.
client Arista-7280SR-48C6-1 {
    shortname = Arista-7280SR-48C6-1
    ipaddr = 172.20.191.10
    secret = testing123
    nastype = other
}
This example shows a simple credential; in your configuration you can use any EAP method that both the client and the RADIUS server support.
/etc/wpa_supplicant/aos_wpa_supplicant.conf

# Ansible managed
ctrl_interface=/var/run/wpa_supplicant
# Default version is 0 - ensure we're using modern protocols.
eapol_version=2
# Don't scan for wifi.
ap_scan=0
# Hosts will be configured to authenticate with usernames that match their
# Slicer DUT name, configured in radius_server playbook.
network={
    key_mgmt=IEEE8021X
    eap=TTLS MD5
    identity="leaf1-server1"
    anonymous_identity="leaf1-server1"
    password="password"
    phase1="auth=MD5"
    phase2="auth=PAP password=password"
    eapol_flags=0
}
Tags
IN THIS SECTION
IN THIS SECTION
Tags Overview
You can apply tags to nodes, links and connectivity templates in your blueprint. When you create a
blueprint, if you added tags to the design elements used to create that blueprint (rack types and
templates), those tags are added to the blueprint Tags catalog. From the blueprint, navigate to Staged >
Catalog > Tags to go to the tags blueprint catalog. You can add, clone, edit and delete blueprint tags. You
can also import global catalog tags to the blueprint catalog and export blueprint tags to the global
catalog.
Search Tags
You can filter tagged elements based on tag names and/or element types.
1. From the blueprint, navigate to Staged > Catalog > Tags and click Query to open the dialog.
2. Enter search criteria:
• To see elements associated with tags, enter tag name(s) in the Name field.
• To see tags that elements are associated with, select element type(s) from the drop-down list in
the Applied To field.
• To filter both by tag name and element type, enter details in both fields.
3. Click Apply to see filtered results in the table.
4. To go to the table view for a filtered element type, click the element type in the Applied To column.
From there you can drill down for more details on a specific element.
Find by Tags
With Find by Tags, you can search the entire blueprint for nodes, links, and connectivity templates that
have associated tags.
1. From any page in the staged (or active) blueprint click Find by Tags (right side).
2. Either start typing to filter tags for selection, or select one or more check boxes.
3. Click Find tagged objects to display all objects with those tags.
1. From the blueprint, navigate to Staged > Catalog > Tags and click Create Tag.
2. Select New and enter a name and (optional) description. Names are case-insensitive.
3. Click Create to stage the tag addition and return to the table view. The newly created tag appears in
the table.
RELATED DOCUMENTATION
1. From the blueprint, navigate to Staged > Catalog > Tags and click the Export button for the tag to
export. If a tag exists in the global catalog with the same name you won't be able to export it. (The
export button will be nonfunctional.)
2. Click Export to export the tag to the global catalog and return to the table view.
RELATED DOCUMENTATION
1. From the blueprint, navigate to Staged > Catalog > Tags and click Create Tag.
2. Select Import from Global Catalog, select a tag from the drop-down list and enter an (optional)
description.
3. Click Create to stage the tag import and return to the table view. The newly created tag appears in
the table.
RELATED DOCUMENTATION
1. From the blueprint, navigate to Staged > Catalog > Tags and click the Edit button for the tag to edit.
2. Change the description.
3. Click Update to stage the change and return to the table view.
RELATED DOCUMENTATION
1. From the blueprint, navigate to Staged > Catalog > Tags and click the Delete button for the tag to
delete.
2. Click Delete to stage the deletion and return to the table view.
RELATED DOCUMENTATION
Tasks
IN THIS SECTION
Connectivity Templates
IN THIS SECTION
Primitives | 350
Create Connectivity Template for Multiple VNs on Same Interface (Example) | 362
Create Connectivity Template for Layer 2 Connected External Router (Example) | 365
• Assigning Apstra virtual network endpoints (tagging and untagging VLAN ports) to connect Layer 2
servers.
• Creating Layer 3 interfaces and VLAN-tagged sub-interfaces with BGP routing between Apstra fabric
border-leaf devices and external routers.
Use connectivity templates to configure the required external routing connections to routing zones. To
see static routes and protocol sessions, navigate to Staged > Virtual in the blueprint.
From the blueprint, navigate to Staged > Connectivity Templates to go to the connectivity template
table view. You can create, assign, edit, and delete connectivity templates.
With advanced search, you can filter based on primitive types and, based on those types, show parameters and filter on those parameters. You can take this search to multiple levels. For example, you can search for all the logical links in routing zone green, or all the static routes with the same next hop.
Primitives
IN THIS SECTION
User-defined | 361
Pre-defined | 362
The Primitives tab includes the supported configuration functions that can be added to connectivity
templates.
The virtual network (single) primitive ends with a vn_endpoint connection point that can optionally connect to another compatible primitive, such as BGP peering (generic system).
Unlike the virtual network (single) primitive, the virtual network (multiple) primitive cannot connect to another primitive.
IP Link Primitive
IP link uses the Apstra resource pool Link IPs - To Generics (by default) to dynamically allocate an IP endpoint (/31) on each side of the link. You can create an IP link for any routing zone, including the default routing zone. You can use an untagged link even if it is for a non-default routing zone. If you select a tagged interface, the VLAN ID is required.
The IP link primitive ends with an ip_link connection point that can optionally connect to another compatible primitive, such as BGP peering (generic system).
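The /31-per-link allocation described above can be illustrated with Python's standard ipaddress module (the pool and its size are hypothetical; Apstra's own allocator is internal):

```python
import ipaddress

def allocate_p2p_links(pool_cidr, count):
    """Carve one /31 per IP link from a link-IP pool, mirroring how a
    'Link IPs - To Generics' pool hands out a /31 endpoint pair per link."""
    subnets = ipaddress.ip_network(pool_cidr).subnets(new_prefix=31)
    return [next(subnets) for _ in range(count)]

# Hypothetical pool; each /31 yields exactly two usable addresses,
# one for each side of the point-to-point link.
for net in allocate_p2p_links("203.0.113.0/29", 3):
    local, remote = net.hosts()
    print(net, local, remote)
```

Using /31s (rather than /30s) doubles the number of point-to-point links a pool of a given size can serve.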
The L3 MTU field was added in Apstra version 4.2.0 to enable you to update the MTU on subinterfaces.
Next-hop is derived from either the IP link or the virtual network endpoint. If the remote peer IP is shared across generic systems, use the shared IP endpoint.
The Static Route primitive uses the next available IP address as the next-hop. To use a specific next-hop
IP address, use the Custom Static Route instead.
If the next-hop IP address is not accessible, the static route will not be installed. Apstra software cannot
monitor the next-hop IP and will not alert you if it is not accessible. It is your responsibility to configure
the custom static route primitive correctly.
Connectivity templates using this primitive can only be assigned to leaf systems and cannot be
combined with interface primitives.
The BGP peering (IP endpoint) primitive creates a BGP peering session with a user-specified BGP
neighbor addressed peer. You can use this to create a BGP peering session to a Layer 3 server running
BGP connected to an Apstra virtual network.
• IPv4 AFI
• IPv6 AFI
• When you set TTL to 0, nothing is configured and the device defaults are used.
• When you set TTL to 1, Cisco NX-OS and FRR-based BGP (SONiC) render disable-connected-check. Otherwise, TTL values render ebgp-multihop on specific BGP neighbors.
• Single-hop BFD
• This enables BFD for the BGP peering. Multihop BFD is supported only on Junos, where it is activated by default.
• BGP Password
You can connect a routing policy primitive to a BGP peering (IP endpoint).
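The TTL bullets above can be summarized in a small sketch; the mapping is assumed from the description, not taken from Apstra source:

```python
def render_ttl(ttl, nos):
    """Assumed mapping from the description above: 0 renders nothing
    (device defaults), 1 renders disable-connected-check on NX-OS and
    FRR-based BGP (SONiC), and any other value renders ebgp-multihop."""
    if ttl == 0:
        return None
    if ttl == 1 and nos in ("nxos", "sonic"):
        return "disable-connected-check"
    return f"ebgp-multihop {ttl}"

print(render_ttl(0, "junos"))  # None
print(render_ttl(1, "nxos"))   # disable-connected-check
print(render_ttl(3, "junos"))  # ebgp-multihop 3
```

Note that TTL 1 on platforms other than NX-OS and SONiC still falls through to ebgp-multihop, per the "Otherwise" clause above.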
The BGP peering (generic system) primitive creates a BGP peering session with a generic system. The
generic system is inherited from Apstra generic system properties, such as loopback and ASN
(addressed, link-local peer). This primitive connects to a virtual network (single) or IP link connectivity
point primitive.
• IPv4 AFI
• IPv6 AFI
• When you set TTL to 0, nothing is configured and the device defaults are used.
• When you set TTL to 1, Cisco NX-OS and FRR-based BGP (SONiC) render disable-connected-check. Otherwise, TTL values render ebgp-multihop on specific BGP neighbors.
• Single-hop BFD
• This enables BFD for the BGP peering. Multihop BFD is supported only on Junos, where it is activated by default.
• BGP Password
• IPv6 Addressing Type - none, addressed (if IPv6 applications are enabled), or link-local
• Local ASN - Configured on a per-peer basis. It allows a router to appear to be a member of a second autonomous system (AS) by prepending a local-as AS number, in addition to its real AS number, to routes announced to its eBGP peer, resulting in an AS path length of two.
• Loopback: use this option to peer with the loopback address of a single remote system.
• Interface/IP endpoint: use this option to peer with the IP address of a single remote system link or routed VLAN interface.
• Interface/Shared IP endpoint: use this option for any scenario where the remote peer IP address
is shared across multiple remote systems.
You can connect a routing policy primitive to a BGP peering (generic system).
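The local-as behavior described in the Local ASN bullet above can be illustrated with a toy helper (hypothetical; real BGP implementations build the AS path inside the announced UPDATE):

```python
def as_path_as_seen_by_peer(real_asn, local_asn):
    """Toy illustration of local-as: the router announces the configured
    local-as in addition to its real AS number, so the eBGP peer sees an
    AS path of length two."""
    return (local_asn, real_asn)

path = as_path_as_seen_by_peer(65001, 65100)
print(path, "length:", len(path))  # (65100, 65001) length: 2
```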
The dynamic BGP peering primitive enables dynamic peering on selected devices and virtual networks.
• IPv4 AFI
• IPv6 AFI
• When you set TTL to 0, nothing is configured and the device defaults are used.
• When you set TTL to 1, Cisco NX-OS and FRR-based BGP (SONiC) render disable-connected-check. Otherwise, TTL values render ebgp-multihop on specific BGP neighbors.
• Single-hop BFD
• This enables BFD for the BGP peering. Multihop BFD is supported only on Junos, where it is activated by default.
• BGP Password
• IPv4
• IPv6
• IPv4 subnet for BGP prefix dynamic neighbors. If you leave this field blank, Apstra derives the subnet
from the application point.
• IPv6 subnet for BGP prefix dynamic neighbors. If you leave this field blank, Apstra derives the subnet
from the application point.
The routing policy primitive applies a routing policy to an application endpoint. This overrides the
routing policy configured for the routing zone. You must select the routing policy that was defined in the
When you want to apply the routing zone constraint to an application point, add the Routing Zone
Constraint primitive to the connectivity template and specify the routing zone or routing zone group.
User-defined
From the User-defined tab, you can add grouped primitives that you previously created as connectivity
templates.
Pre-defined
From the Pre-defined tab, you can add grouped primitives that ship with the Apstra software.
1. From the blueprint, navigate to Staged > Connectivity Templates and click Add Template. The staging
area on the right contains the application point.
2. In the Parameters tab, enter a connectivity name in the Title field. You can optionally enter a
description, and tags that you can use during subsequent searches.
3. The tabs Primitives, User-defined, and Pre-defined all contain primitives either singly or in groups.
They are described in more detail in the overview. For this example, we'll add primitives one at a time
from the Primitives tab. Click the Primitives tab, then click Virtual Network (Single). It's added to the
4. Click the Parameters tab to see what you need to configure for that primitive. In this example, you
need to select a virtual network and specify whether it is VLAN tagged or untagged.
5. When it's successfully configured, the color of the selected primitive changes from red to gray. Click
the Primitives tab.
7. In the staging area, click Virtual Network (Multiple) (to make sure it's selected), click the Parameters
tab and configure the primitive.
8. Click Create to create the connectivity template and return to the table view where you'll see your
newly created connectivity template.
1. From the Create Connectivity Template dialog, click Primitives, click Virtual Network (Single), and
configure it on the Parameters tab (similar to the first example).
2. Click Primitives. When a primitive is selected, the other primitives that you can add to it are
highlighted (new in Apstra version 4.0).
3. With Virtual Network (Single) selected in the staging area, click BGP Peering (Generic System) to add
it to the staging area and connect it to the virtual network.
4. Proceed with configuring the parameters and click Create to create the template.
IN THIS SECTION
Method 1 | 369
Method 2 | 370
You can assign connectivity templates that have an active Assign button. These include connectivity templates in the Ready or Assigned status. (Incomplete status means that more configuration is required.) You can use one of two methods to assign connectivity points:
• Method 1 - Select connectivity templates from the table view, and add application endpoints.
• Method 2 - From the Application Endpoints view, select combinations of application endpoints and connectivity templates.
Method 1
1. From the blueprint, navigate to Staged > Connectivity Templates and click the Assign button (in the
Actions section on the right) for the connectivity template to assign. (You can select multiple
connectivity templates, to the left of the CT name, then click the Assign button that appears above
the list.)
You can use "bulk actions" to select multiple "children" application endpoints.
3. Click Assign to complete the connectivity template assignments.
4. You can view application endpoints in Table view. From the table view, you can filter application
endpoints by pod, rack, node, applied connectivity templates, or tags. You can also copy/paste
connectivity template assignments from the table view.
Method 2
1. From the blueprint, navigate to Staged > Connectivity Templates and click Application Endpoints.
2. You can click the + button to add a column for multiple connectivity templates.
3. You can then query and select the desired assignment combination of connectivity templates and
application endpoints.
4. After a connectivity template is applied, its configuration may require additional resources in the
blueprint. For example, if you're adding Layer 3 links to connect a generic system (such as an external
router), you must assign Generic Link IPs.
5. You can view the assignments in Table view. From the table view, you can filter application endpoints by pod, rack, node, applied connectivity templates, or tags. You can also copy/paste connectivity template assignments from the table view.
When a virtual network (single) or virtual network (multiple) template is already assigned to a port and
you want to assign a new VN template, you’ll receive a validation error indicating that the port already
has a VN template assigned to it. As of Apstra version 4.0.1 you can force assign the new VN template,
which automatically unassigns the existing VN template(s) and assigns the new one(s) on the selected
port(s). You don’t need to manually unassign the existing VN template.
To force assign VN templates, from the CT assignment screen, click Remove all conflicts, then click
Assign.
1. Either from the table view (Staged > Connectivity Templates) or the details view, click the Delete
button for the connectivity template to delete.
2. Click Delete to delete the connectivity template and return to the table view.
Fabric Settings
IN THIS SECTION
Fabric Policy
IN THIS SECTION
CAUTION: After IPv6 has been enabled in a blueprint, it cannot be disabled, although you can use Time Voyager to roll back to a revision from before IPv6 was enabled.
Enabling support for IPv6 virtual networks on EVPN L2 deployments or L3 deployments adds resource
requirements and device configurations. This includes IPv6 loopback addresses on leaf devices and spine
devices, IPv6 addresses for MLAG SVI subnets and IPv6 addresses for leaf L3 peer links. The following
caveats apply:
• When IPv6 is enabled on EVPN L2 deployments, security policy functionality is not available.
1. From the blueprint, navigate to Staged > Fabric Settings > Fabric Policy and click Modify Settings.
"Assign the required IPv6 IP addresses" on page 33. For more information about IPv6 configuration, see
"Virtual Networks" on page 177.
When you're ready to activate your changes, commit them from the Uncommitted tab.
NOTE:
• To update the global default settings for SVIs and IP link MTUs, update the Default IP Links to
Generic Systems MTU and/or the Default SVI L3 MTU fields in the Virtual Network Policy.
• To update the MTU on a specific SVI, update the L3 MTU field in the corresponding virtual
network.
• To update the MTU on subinterfaces, update the L3 MTU field in the IP Link connectivity
template primitive.
RELATED DOCUMENTATION
IN THIS SECTION
Parameter Description
Default IP Links to Generic Systems MTU Specifies the default MTU for all L3 IP links facing generic systems. A null or empty (default) value implies that Apstra won't render explicit MTU values and that the system default MTU will be used. A custom, larger MTU may be required to provide EVPN DCI functionality or to support fabric-wide jumbo frame functionality. For EVPN-DCI, we recommend an MTU of 9050.
Max External Routes Count
Maximum number of routes to accept from external routers. The default (None) doesn't
render any maximum-route commands on BGP sessions, implying that vendor defaults are
used. An integer in the range 1 to 2**32-1 sets a maximum route limit in the BGP
config. The value 0 (zero) tells the device to never apply a limit to the number of
EVPN routes (effectively unlimited). We suggest leaving this value effectively
unlimited on EVPN blueprints to permit the high number of /32 and /128 routes to be
advertised and received between VRFs when an external router provides a form of
route-leaking functionality.
Max MLAG Routes Count
Maximum number of routes to accept across MLAG peer switches. The default (None)
doesn't render any maximum-route commands on BGP sessions, implying that vendor
defaults are used. An integer in the range 1 to 2**32-1 sets a maximum route limit in
the BGP config. The value 0 (zero) tells the device to never apply a limit to the
number of MLAG routes (effectively unlimited). For EVPN blueprints, this should be
combined with max_evpn_routes to permit routes across the L3 peer link, which may
contain many /32 and /128 routes from EVPN type-2 routes that convert into BGP route
advertisements.
Max Fabric Routes Count
Maximum number of routes to accept between spine and leaf in the fabric, and between
spine and superspine. This includes the default VRF. You may need to set this option
when leaking EVPN routes from a routing zone into the default routing zone (VRF),
which could generate a large number of /32 and /128 routes. We suggest leaving this
value effectively unlimited on all blueprints to ensure the stability of spine-leaf
BGP sessions and the EVPN underlay. We also suggest unlimited for non-EVPN blueprints,
considering the impact to traffic if spine-leaf sessions go offline. An integer in the
range 1 to 2**32-1 sets a maximum route limit in the BGP config. The value 0 (zero)
tells the device to never apply a limit to the number of fabric routes (effectively
unlimited).
Generate EVPN host routes from ARP/IPv6 ND
Disabled by default. When enabled, all EVPN VTEPs in the fabric will redistribute
ARP/IPv6 ND entries (when possible on the NOS type) as EVPN type-5 /32 routes in the
routing table.
Junos EVPN routing instance mode
Selects the non-EVO Junos EVPN MAC-VRF rendering mode. Default indicates that EVPN
configuration will be added to the default switch instance on Junos. vlan_aware
transitions Junos to a single EVPN MAC-VRF VLAN-aware instance named evpn-1, similar
to Junos EVO config rendering in Apstra. This option is ignored for Junos EVO devices.
Existing deployed blueprints must opt in from default to mac-vrf. Switching designs is
service-impacting. New blueprints will be mac-vrf by default.
Junos EVPN Next-hop and Interface count maximums
Enables configuring the maximum number of next hops and interfaces reserved for use in
the EVPN-VXLAN overlay network on Junos leaf devices. Disabled by default. Modifying
this option may be disruptive as a Day 2 operation.
Junos Graceful Restart
Enables the Graceful Restart feature on Junos devices.
Junos EX-series Overlay ECMP
Enables VXLAN overlay ECMP on Junos EX-series devices.
RELATED DOCUMENTATION
1. From the blueprint, navigate to Staged > Fabric Settings > Virtual Network Policy and click Modify
Settings.
3. Click Save Changes to save your changes and return to the Virtual Network Policy page.
When you're ready to activate changes, commit them from the Uncommitted tab.
RELATED DOCUMENTATION
Anti-Affinity Policy
IN THIS SECTION
Anti-Affinity Policy
IN THIS SECTION
• Max Links Count per Slot - maximum total number of links connected to ports/interfaces of the
specified slot regardless of the system they are targeted to. It controls how many links can be
connected to one slot of one system. Example: A line card slot in a chassis.
• Max Links Count per System per Slot - restricts the number of links to a certain system connected to
the ports/interfaces in a specific slot. It controls how many links can be connected from one system
to one slot of another system.
• Max Links Count per Port - maximum total number of links connected to the interfaces of the
specific port regardless of the system they are targeted to. It controls how many links can be
connected to one port in one system. Example: Several transformations of one port. In this case, it
controls how many transformations can be used in links.
• Max Link Count per System per Port - restricts the number of interfaces on a port used to connect to
a certain system. It controls how many links can be connected from one system to one port of
another system. This is the one that you will most likely use, for port breakouts.
• Disabled (default) - port selection is based on assigned interface maps and interface names
(provided or auto-assigned). Port breakouts could terminate on the same physical ports.
• Enabled (loose) - controls interface names that were not defined by the user. Does not control or
override user-defined cabling. (If you haven't explicitly assigned any interface names, loose and strict
are effectively the same policy.)
• Enabled (strict) - completely controls port distribution and could override user-defined assignments.
When you enable the strict policy, a statement appears at the top of the cabling map (Staged/Active
> Physical > Links and Staged/Active > Physical > Topology Selection) stating that the anti-affinity
policy is enabled ("forced" for strict).
An example of when you'd want to apply the anti-affinity policy is when you have a QSFP 40G breakout
port that you want to break out into four 10G ports. You can ensure that links going to the same
device use different QSFP ports, rather than placing two 10G spine links on the same QSFP port. This
gives you an added layer of redundancy if a QSFP port fails.
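The per-system-per-port restriction described above amounts to a counting check: tally the links for each (remote system, local port) pair and flag any pair that exceeds the policy maximum. The following sketch only illustrates the idea; the data shapes, names, and port labels are hypothetical, not Apstra's actual schema.

```python
from collections import Counter

# Policy: at most one link from a given remote system on a given local port
# (the "Max Link Count per System per Port" idea, for port breakouts).
max_per_system_per_port = 1

# Invented cabling data: two spine1 links land on the same local port.
links = [
    {"remote": "spine1", "local_port": "et-0/0/48"},
    {"remote": "spine1", "local_port": "et-0/0/48"},  # exceeds the maximum
    {"remote": "spine2", "local_port": "et-0/0/49"},
]

counts = Counter((link["remote"], link["local_port"]) for link in links)
violations = [pair for pair, n in counts.items() if n > max_per_system_per_port]
print(violations)
```

A strict policy would redistribute the offending link to a different port; a loose policy would only do so for links whose interface names you didn't assign yourself.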
1. From the blueprint, navigate to Staged > Fabric Settings > Anti-Affinity Policy and click Modify
Settings.
2. Change the policy mode, and if you're enabling the policy, enter a maximum number of links, as
applicable.
3. Click Save Changes to stage the change and return to the policies view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
Validation Policy
IN THIS SECTION
NOTE: This feature is classified as a Juniper Apstra Technology Preview feature. These features
are "as is" and are for voluntary use. Juniper Support will attempt to resolve any issues that
customers experience when using these features and create bug reports on behalf of support
cases. However, Juniper may not provide comprehensive support services to Tech Preview
features.
For additional information, refer to the "Juniper Apstra Technology Previews" on page 1223 page
or contact "Juniper Support" on page 893.
1. From the blueprint, navigate to Staged > Fabric Settings > Validation Policy and click Modify Settings.
• IP Overlaps Base level - The severity level raised when detecting an overlap of IP addresses. To
set a more granular severity level based on the type of IP Overlaps, use the settings below.
• IP Overlaps SVI IP overlapping error level - The severity level raised when detecting duplicate SVI
IP addresses within a single virtual network. When set to “Default“, the severity level from the IP
Overlaps “Base Level“ setting is used. Note that duplicate SVI IPs can cause unexpected traffic
flow for routed traffic. We recommend leaving the severity level at “Error“ or “Default“ (when the
“Error“ level is configured at the “Base Level“ setting).
• IP Overlaps Generic System Loopback IP overlapping error levels - The severity level raised when
a generic system loopback IP overlaps with another external or fabric node IP used in default or
EVPN routing zones. This can be an IP address used for a loopback, physical link, logical link,
virtual network subnet, or VTEP interface. When set to “Default“, the severity level from the IP
Overlaps “Base Level“ setting is used. Use this setting to relax validation errors and allow these
types of overlaps.
• ASN Overlaps Base level - The severity level raised when detecting an overlap of ASNs.
• Route Target Overlaps Allow internal route-target policies - Severity of errors raised when a
user-defined route target overlaps with an internal virtual network or routing zone route target.
This can be used for a form of full-table inter-VRF route leaking.
2. Change settings, as applicable:
• Warning - If validation fails, warnings are raised; you can commit changes.
• Error - If validation fails, errors are raised that must be resolved before you can commit changes.
3. Click Save Changes to stage the changes and return to the Validation Policy page.
When you're ready to activate your changes, commit them from the Uncommitted tab.
IN THIS SECTION
Apstra version 4.1.2 introduces a new feature where the following are tagged with BGP communities
(RFC1997 - BGP Communities Attribute):
• All routes (IPv4 and IPv6) generated within the data center fabric
These communities allow you to identify any BGP route within the data center fabric quickly. They'll be
used for running more sophisticated route telemetry in future releases.
Introducing this new feature results in new lines of configuration on deployed network devices. These
configuration changes won't impact the control or forwarding plane and thus won't be service-
impacting.
Each route is tagged with two communities (32-bits each) in the following format:
[<system_index>:<function_id>] [<vrf_id>:<peer_id>]
system_index
Identifies the device where the route is learned (sourced) in Apstra. A unique
blueprint-wide value is generated for every leaf, spine, and superspine in the data
center fabric.
Value range: 0 - 19999 (0 = don’t care; 1 - 19999 usable values; block of 20,000)
function_id
Identifies the route source or a function associated with it. The base for function_id
is 20000; the community value will be 20000 + function_id. Function_id MUST be set in
every tagged BGP update. The following function IDs are not supported in Apstra
version 4.1.2.
Value range: 20000 - 20999 (20000 = don’t care; 20001 - 20999 usable values; block of 1,000)
vrf_id
Identifies the VRF associated with the route. A unique value is generated for every
configured VRF in the blueprint. The vrf_id value in the BGP community tag will be
21000 + vrf_id.
Value range: 21000 - 25999 (21000 = don’t care; 21001 - 25999 usable values; block of 5,000)
peer_id
Optional field, possibly identifying the peer via which the route is learned. This
field is not used in Apstra version 4.1.2 and is set to the don’t-care value (26000).
Value range: 26000 - 28999 (26000 = don’t care; 26001 - 28999 usable values; block of 3,000)
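To make the community format concrete: a standard RFC 1997 community is a 32-bit value displayed as two 16-bit halves, so each tag packs its two fields into the high and low halves. The sketch below is not Apstra code; the index values are invented examples purely to show the arithmetic.

```python
def encode_community(high, low):
    """Pack two 16-bit halves into one 32-bit BGP community (RFC 1997)."""
    assert 0 <= high < 2**16 and 0 <= low < 2**16
    return (high << 16) | low

def decode_community(value):
    """Split a 32-bit community back into its high:low display form."""
    return value >> 16, value & 0xFFFF

FUNCTION_BASE, VRF_BASE, PEER_DONT_CARE = 20000, 21000, 26000

# Hypothetical route sourced on the device with system_index 7, tagged with
# function id 3, in the VRF with vrf_id 2 (example values only):
first = encode_community(7, FUNCTION_BASE + 3)           # displays as 7:20003
second = encode_community(VRF_BASE + 2, PEER_DONT_CARE)  # displays as 21002:26000

print(decode_community(first), decode_community(second))
```

Decoding the low half against the block bases (20000, 21000, 26000) tells you which field a community carries, which is what makes these tags useful for route telemetry.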
IN THIS SECTION
Blueprints | 390
Physical | 396
Catalog | 465
Tasks | 477
Freeform Introduction
IN THIS SECTION
Reference Designs
If your network architecture comprises a 3-stage Clos, 5-stage Clos, or collapsed fabric, you’ll want
to take advantage of the abstraction and automation that’s included with the Datacenter reference
design. For all other topologies, you can use the Freeform reference design to leverage any feature,
protocol, or architecture.
Blueprints created in the Datacenter reference design use a set of design elements to abstract and
automate many network activities. Blueprints created in the Freeform reference design consist of
systems and links that you add and configure yourself, giving you complete control over your
architecture. In Freeform we use the term system to represent all the types of devices that can be linked
in the Apstra environment: switches, routers, Linux hosts and so on.
Device Management
Device management for Freeform blueprints is the same as for Datacenter blueprints. The process of
installing agents and acknowledging them to bring them under Apstra management is the same in both
reference designs. Only Juniper devices are supported in Freeform blueprints.
You can build your Freeform blueprint manually from an empty blueprint, or if you've exported an
existing Freeform blueprint, you can use it as a template for a new one (as of Apstra version 4.2.0). You’ll
start building your empty blueprint by importing device profiles from the design (global) catalog. A
device profile represents a device’s capabilities without specifying its system ID (serial number). This is
what enables you to build your entire network ‘offline’ before deploying it.
You’ll create internal systems and assign device profiles to them. Internal systems are devices that are
managed in the Apstra environment. You can bring your devices under Apstra management at any time.
If you have them ready, you can assign them as you're creating your internal systems. If they're not
ready, that's OK. You can assign them any time before deploying your network.
External systems are the other type of system used in Freeform blueprints. These are systems that are
linked to internal systems, and are not under Apstra management.
When you link your systems, you’ll select ports and transformations, as applicable. You can also add IP
addresses and tags as you're creating those links.
Config templates are text files used to configure internal systems in Freeform. You'll assign a config
template to every internal system. You could paste configuration directly from your devices into a config
template to create a static config template, but then you wouldn’t be using the potential of config
templates. With some Jinja2 knowledge (and maybe some Python), you can parametrize config
templates to do powerful things.
Property sets provide a valuable capability to fully parameterize config templates. Consisting of key-
value pairs, they enable you to separate static portions of config templates from variables. You create
property sets in the blueprint catalog. (Property sets used in Freeform blueprints are not related to
property sets in the design (global) catalog.) You'll include property set names in your config template
and then the values in those property sets will be used when configuration is rendered.
You can also create a property set and assign it directly to one system.
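The separation of static config text from property-set variables can be sketched with simple string substitution. Real Freeform config templates use Jinja2 syntax ({{ variable }}); this standard-library stand-in, with made-up keys and config lines, only illustrates the render step where property-set values replace variables.

```python
from string import Template

# Hypothetical property set: key-value pairs kept apart from the template text.
property_set = {"ntp_server": "192.0.2.10", "domain_name": "example.net"}

# Static portion of a config template with variables. $name here stands in
# for Jinja2's {{ name }} syntax used in actual config templates.
config_template = Template(
    "set system ntp server $ntp_server\n"
    "set system domain-name $domain_name"
)

# Rendering merges the two: the same template can serve many systems,
# each paired with its own property set.
rendered = config_template.substitute(property_set)
print(rendered)
```

This is why property sets "fully parameterize" config templates: changing a value means editing one key-value pair, not every config template that uses it.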
Tags are a way for you to assign metadata to Apstra-managed resources. They can help you identify,
organize, search for, and filter Apstra systems and links. With tags, you can categorize resources by
purpose, owner, environment, or other criteria. Because tags are metadata, they aren't just used for
visual labeling; they are also applied as properties of nodes in the Apstra graph database. This node
property (or device property) is then available for you to reference in Jinja config templates for dynamic
variables in config generation and the Apstra real-time analytics via Apstra's Live Query technology and
Apstra Intent-Based Analytics.
An example of when you might want to use tags is if you have bare metal servers with SRIOV interfaces,
and you need to produce specific configuration for those interfaces. You would add the tag sriov to the
links, then specify in the config template that links with that tag are to be configured a certain way.
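The sriov example above amounts to tag-conditional rendering: the template emits extra configuration only for links carrying the tag. A minimal sketch of that logic, with invented link records and illustrative config lines (in practice the tags are node properties in the Apstra graph database and the conditional lives in a Jinja2 config template):

```python
# Invented link records; only the first carries the "sriov" tag.
links = [
    {"interface": "xe-0/0/1", "tags": ["sriov"]},
    {"interface": "xe-0/0/2", "tags": []},
]

lines = []
for link in links:
    lines.append(f"set interfaces {link['interface']} unit 0")
    if "sriov" in link["tags"]:
        # Only sriov-tagged links get the extra (illustrative) setting.
        lines.append(f"set interfaces {link['interface']} mtu 9216")

print("\n".join(lines))
```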
Freeform Workflow
2. "Bring your devices under Apstra management" on page 536 (same procedure as for Datacenter
blueprints). If you don't have your system IDs (serial numbers) yet, that's OK. You can build your
entire network 'offline' in the Apstra environment and bring your devices under Apstra
management any time before deploying your network.
4. "Import device profiles" on page 471 for the internal systems you'll create.
5. "Add internal systems" on page 402 for the systems that Apstra will manage.
8. "Create config templates" on page 466, and "property sets" on page 473 as needed.
9. "Assign config templates" on page 411 to internal systems with deploy mode set to Deploy.
10. If you haven't brought your "devices under Apstra management" on page 536 yet, it's time to do
that now.
11. "Assign system IDs" on page 423 (if you haven't already) and set the deploy mode on your systems
to Deploy.
12. Before deploying your network, you can use the apstra-cli utility to validate config template syntax.
For more information, see Juniper Support Knowledge Base article KB69779.
RELATED DOCUMENTATION
Blueprints
IN THIS SECTION
IN THIS SECTION
Dashboard | 391
Blueprints Summary
The blueprints summary page shows a summary of all your blueprints. At the top of the page, different
status indicators show various statuses across all blueprints (deployment status, anomalies, root causes,
build errors and warnings, and uncommitted changes). This is useful for seeing any issues at a glance
when you have many blueprints in your Apstra instance.
From the left navigation menu of the Apstra GUI, click Blueprints to go to the blueprints summary page.
Dashboard
From the left navigation menu of the Apstra GUI, click Blueprints, then click the name of the blueprint
that you want to see. The blueprint dashboard is the default view. It shows the blueprint's overall health
and status. You can delete blueprints from here and also export them to be used as templates for other
blueprints (by importing them).
RELATED DOCUMENTATION
SUMMARY
Create a Freeform blueprint to build and manage any topology with Apstra.
1. From the left navigation menu of the Apstra GUI, click Blueprints, then click Create Blueprint.
3. If you've previously exported a Freeform blueprint, you can use it as a template for a new one (new in
Apstra version 4.2.0). Click Import existing blueprint from JSON. Then either click Choose File and
navigate to the downloaded file, or drag and drop the file into the dialog window. Otherwise,
continue to the next step.
4. Click Create to create the blueprint and return to the blueprint summary view. The newly created
blueprint appears in the summary.
Next Steps:
• You can "bring your devices under Apstra management" on page 536 anytime before deploying your
network.
RELATED DOCUMENTATION
SUMMARY
You can export a Freeform blueprint to use it as a template to create another Freeform blueprint (new
in Apstra version 4.2.0).
1. From the left navigation menu, click Blueprints, then click the name of the blueprint to export.
2. From the blueprint dashboard, click Export (top-right) to open the export dialog.
3. The exported blueprint includes all content that describes the physical environment (systems, links,
device profiles, tags). Additional details are included by default. To exclude any of them from the
export file, toggle them off in the dialog.
4. Click Export to download the JSON file of the staged blueprint contents and return to the blueprint
dashboard.
When you create a Freeform blueprint, you'll be able to import this exported Freeform blueprint and use
it as a template (new in Apstra version 4.2.0). You can import the blueprint into the same Apstra instance
or into a different one.
RELATED DOCUMENTATION
SUMMARY
You must have permission to delete blueprints. (Permissions are based on the roles you've been assigned
as a user).
2. Enter the blueprint name, then click Delete to delete the blueprint and go to the blueprint summary
view.
RELATED DOCUMENTATION
Physical
IN THIS SECTION
Selection | 397
Topology | 399
Systems | 401
Links | 441
Selection
IN THIS SECTION
While in the Apstra environment, you may need device information that's obtained via CLI commands.
Traditionally, you need to log in to a machine with access to the device management network, open a
terminal, find device IP addresses, SSH to each of them, then run the required CLI commands. As of
Apstra version 4.2.0, you can bypass these steps and run show commands for Juniper devices directly
from the Apstra GUI. You can execute CLI commands from within the staged or active blueprint, or from
the Managed Devices page. The steps below are for Freeform blueprints.
1. From the blueprint, navigate to Staged > Physical > Topology (or Staged > Physical > Systems) and
select a Juniper device node.
2. In the Device tab on the right that appears, click Execute CLI Command.
3. In the dialog that opens, type show, then press the space bar. Available commands appear that you can
scroll through to select, or you can start typing the command and it will auto-fill. In our example
we're looking for interfaces. We typed show, space, then i, which filtered the commands to only
include those with the letter i. We'll select interfaces to complete the command.
4. From the drop-down list, select how you want to view the results: text, XML or JSON.
5. Click Execute to return show command results. We used Text Mode for our example.
RELATED DOCUMENTATION
Topology
IN THIS SECTION
Topology (Freeform)
The Topology view graphically shows the collection of devices/objects that make up the network and
the links that connect them. It self-documents your intended network state, which is then modeled in
the Apstra GraphDB for intent-based modeling. You can perform various tasks from the Topology view,
via the topology editor, as described in later sections.
To go to the topology view from the blueprint, navigate to Staged > Physical > Topology.
• To focus on just the topology without showing all the other tabs, view it in full screen mode.
• To select which label to display on system nodes, select it from the System Label drop-down list (new
in Apstra version 4.2.0):
• Name
• Hostname
• IP address
• Select how the elements are arranged by selecting an arrangement from the Arrangement drop-
down list:
• User-defined
• Layered
• Stress
• Force
• Compaction
• Select a node from the Selected Node drop-down list to go to the Systems detail page.
Systems
IN THIS SECTION
The Systems view (Staged > Physical > Systems) shows in a table format the collection of devices/
objects that make up the network (similar to the Nodes view in Datacenter reference designs). The table
includes information about internal and external systems in the blueprint, tags, deploy mode, assigned
device profile, assigned system ID, hostname, operation mode (full control), assigned config template,
and assigned property set. You can see details at a glance and tell if there are any issues with missing
requirements. You can customize what appears in the table by selecting/deselecting elements in the
columns drop-down list. You can perform various tasks from the Systems view as described in later
sections.
IN THIS SECTION
Systems represent switches, routers, Linux hosts and so on. Managed devices that you add to a blueprint
are called internal systems. You can create systems from scratch, or you can clone systems and
customize them to create new ones. You can create (and clone) from the Topology view or from the
Systems view.
Internal systems must be mapped to device profiles. Before creating systems, make sure you've
imported the relevant device profiles into the blueprint catalog.
CAUTION: Be careful. If you click away from the topology editor without clicking Save,
your changes are discarded.
2. In the topology editor that opens, click the Create internal system button (bottom-left). The system
appears as a gray rectangle with a system-generated name. The red triangle indicates that
information is needed for required fields. In this case, it's the device profile.
You can move systems around on the canvas and when you save your changes in the editor and then
reopen it, your systems will still be where you moved them.
4. You can change the system color that displays in the topology. This is useful for designating different
roles or anything else you'd like to visually differentiate.
5. You can change the system name and hostname to customize them for your environment.
6. Select a device profile from the drop-down list. (Device profiles come from the blueprint catalog. If
you don't see the one you need, import it.)
7. You can assign the system ID now or later. To assign it now, select it from the System drop-down list.
(The list includes managed devices that haven't been assigned yet. If you have your devices ready and
they're not appearing in this list, you still need to bring them under Apstra management by adding
them to Managed Devices.)
8. You can add tags, then later when you want to find systems you can use the Find by Tags feature
(upper-right) to find them. You can also include tags in config templates, then systems with those tags
will be rendered as specified in the config template.
9. Click Save to stage your new system and return to the Topology view. (If you leave the page without
saving, your changes are discarded.)
Next Steps:
• Continue to create internal and "external systems" on page 407 until you've added your devices to
the topology.
• "Assign config templates" on page 411 to your internal systems with deploy mode set to Deploy.
• From Scratch - select a device profile (that was imported into the blueprint catalog). (You'll assign
the system ID later.)
• From Managed Devices - select a managed device to assign its system ID to the system.
4. Enter a hostname (optional).
5. You can add tags, then later when you want to find systems you can use the Find by Tags feature
(upper-right) to find them. You can also include tags in config templates, then systems with those tags
will be rendered as specified in the config template.
6. Click Create to stage your new system and return to the Systems view. The newly created system
appears in the list.
Next Steps:
• Continue to create internal and "external systems" on page 407 until you've added your devices to
the topology.
• "Assign config templates" on page 411 to your internal systems with deploy mode set to Deploy.
2. In the topology editor, select one or more existing internal systems, then click the Clone selected
nodes button.
SEE ALSO
IN THIS SECTION
Systems represent switches, routers, Linux hosts and so on. Unmanaged devices that you add to a
blueprint are called external systems. They link to managed (internal) systems. You can create systems
from scratch, or you can clone systems and customize them to create new ones. You can create (and
clone) from the Topology view or from the Systems view.
CAUTION: Be careful. If you click away from the topology editor without clicking Save,
your changes are discarded.
2. In the topology editor click the Create external system button. The system appears as a rectangle
with a system-generated name. You can move systems around on the canvas and when you save
your changes in the editor and then reopen it, your systems will still be where you moved them to.
You can save the system as is since there are no other required fields, or you can open the
parameters dialog and configure optional fields.
Next Steps:
Continue to create external systems and "internal systems" on page 402 until you've added your devices
to the topology. Then you can "create links" on page 441 for them.
Next Steps:
Continue to create external systems and "internal systems" on page 402 until you've added your
devices. Then you can "create links" on page 441 for them.
3. The new system(s) appear as gray rectangles with system-generated names. You can move systems
around on the canvas and when you save your changes in the editor and then reopen it, your systems
will still be where you moved them to.
4. Click the gear to open the parameters dialog, and change details to customize your new system.
5. Click Save to stage your new system(s) and return to the Topology view. (If you leave the page
without saving, your changes are discarded.)
You can update Freeform config template assignments on one or more systems.
Update Config Template Assignment on One System (from Systems) | 412
1. From the blueprint, navigate to Staged > Physical > Systems and click the system name to go to
details for that system.
NOTE: You can also get to the Systems details page from the Topology view. From the
blueprint, navigate to Staged > Physical > Topology and select the system to update.
2. In the panel on the right, in the Devices tab, click the Edit button for the Config Template field.
3. Select the config template from the drop-down list, then click the Save button. (Or, to cancel the
change, click the gray discard button. Or, to remove the config template, click the red remove button.)
The panel shows the value for the active blueprint and the staged value.
When you're ready to activate your changes, commit them from the Uncommitted tab.
Update Config Template Assignment (Multiple Systems)
Internal systems with deploy mode set to Deploy require an assigned config template.
If you haven't created your "config templates" on page 466 yet, do that now.
1. From the blueprint, navigate to Staged > Physical > Systems to go to the Systems view.
2. Select one or more check boxes for the system(s) to update. (The same action will be applied to all
selected systems. That is, all selected systems will be assigned or unassigned the same config
template.)
3. Click the Update Config Template Assignments button that appears above the table.
4. To add or replace a config template assignment, leave Override Assignment selected and select a
config template from the drop-down list. The template text appears for your review. (Each internal
system is assigned only one config template, but that config template could nest other config
templates within it.)
6. Click Assign Config Template (or Remove Config Template Assignments, as applicable) to stage the
changes and return to the Systems view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
You can change Freeform system names from the Topology or Systems view.
Update System Name (from Topology) | 414
Update System Name (from Systems) | 415
CAUTION: Be careful. If you click away from the topology editor after making changes
without clicking Save, your changes are discarded.
2. In the topology editor, click the system to change, then click the Manage selected nodes properties
button that becomes available. (You can also open the same dialog by clicking the settings button for
the selected system. It's the gear at the top-right of the system.)
When you're ready to activate your changes, commit them from the Uncommitted tab.
Update System Name (from Systems)
1. From the blueprint, navigate to Staged > Physical > Systems and click the system name to go to
details for that system.
NOTE: You can also get to the Systems details page from the Topology view. From the
blueprint, navigate to Staged > Physical > Topology and select the system to update.
2. In the panel on the right, click the Properties tab, then click the Edit button for the Name field.
3. Enter the new name and click the Save button. (To cancel the change, click the gray discard button.)
The panel shows the values for the active blueprint and the staged blueprint.
When you're ready to activate your changes, commit them from the Uncommitted tab.
You can change Freeform system hostnames from the Topology or Systems view.
Update System Hostname (from Topology) | 417
CAUTION: Be careful. If you click away from the topology editor after making changes
without clicking Save, your changes are discarded.
2. In the topology editor, click the system to change, then click the Manage selected nodes properties
button that becomes available. (You can also open the same dialog by clicking the settings button for
the selected system. It's the gear at the top-right of the system.)
When you're ready to activate your changes, commit them from the Uncommitted tab.
Update System Hostname (from Systems)
1. From the blueprint, navigate to Staged > Physical > Systems and click the system name to go to
details for that system.
NOTE: You can also get to the Systems details page from the Topology view. From the
blueprint, navigate to Staged > Physical > Topology and select the system to update.
2. In the panel on the right, in the Devices tab, click the Edit button for the Hostname field.
3. Enter the new hostname and click the Save button. (To cancel the change, click the gray discard
button.)
The panel shows the values for the active blueprint and the staged blueprint.
When you're ready to activate your changes, commit them from the Uncommitted tab.
You can change Freeform device profile assignments from the Topology or Systems view.
The device profile may need to change for various reasons, such as the following:
• If Juniper Support provides you with a new device profile to resolve an issue
Update Device Profile Assignment (from Topology)
1. From the blueprint, navigate to Staged > Physical > Topology and click Edit to open the topology editor.
CAUTION: Be careful. If you click away from the topology editor after making changes
without clicking Save, your changes are discarded.
2. In the topology editor, select one or more systems to change to the same device profile, then click
the Manage selected nodes properties button that becomes available. (You can also open the same
dialog by clicking the settings button for a selected system. It's the gear at the top-right of the
system.)
3. Select the new device profile from the Device Profile drop-down list.
Device profiles come from the blueprint catalog. If you don't see the one you need, import it.
4. To close the dialog, click anywhere on the canvas outside of the dialog.
5. Click Save to stage your changes, exit the topology editor and return to the Topology view. (If you
leave the page without saving, your changes are discarded.)
When you're ready to activate your changes, commit them from the Uncommitted tab.
Update Device Profile Assignment (from Systems)
1. From the blueprint, navigate to Staged > Physical > Systems and click the system name to go to details for that system.
NOTE: You can also get to the Systems details page from the Topology view. From the
blueprint, navigate to Staged > Physical > Topology and select the system to update.
2. In the panel on the right, click the Properties tab, then click the Edit button for the Device Profile
field.
3. Select the new device profile from the Device Profile drop-down list, then click the Save button. (Or,
to cancel the change, click the gray discard button.)
The panel shows the values for the active blueprint and the staged blueprint.
When you're ready to activate your changes, commit them from the Uncommitted tab.
Update One or More Device Profile Assignments (from Systems)
1. From the blueprint, navigate to Staged > Physical > Systems and select the check boxes for one or
more systems, then click the Update Device Profile button that becomes available above the table.
2. In the dialog that opens, select the new device profile from the Device Profile drop-down list.
3. Click Update to stage the changes and return to the Systems view.
You can change Freeform system ID (serial number) assignments from the Topology or Systems view.
Update One System ID Assignment (from Topology)
1. From the blueprint, navigate to Staged > Physical > Topology and click Edit to open the topology editor.
CAUTION: Be careful. If you click away from the topology editor after making changes
without clicking Save, your changes are discarded.
2. In the topology editor, click the system to change, then click the Manage selected nodes properties
button that becomes available. (You can also open the same dialog by clicking the settings button for
the selected system. It's the gear at the top-right of the system.)
3. Select the new system ID from the System drop-down list, or click x to remove an existing
assignment.
The list includes managed devices that haven't been assigned yet. If you have your devices ready and
they're not appearing in this list, you still need to bring them under Apstra management by adding
them to Managed Devices.
4. To close the dialog, click anywhere on the canvas outside of the dialog.
5. Click Save to stage your changes, exit the topology editor and return to the Topology view. (If you
leave the page without saving, your changes are discarded.)
If you've removed an assignment, the device is still under Apstra management. It's ready and
available to be assigned to any blueprint. To remove the device completely from Apstra management,
"remove the device from Managed Devices" on page 552.
When you're ready to activate your changes, commit them from the Uncommitted tab.
Update One or More System ID Assignments (from Topology)
1. From the blueprint, navigate to Staged > Physical > Topology and click Edit to open the topology
editor.
CAUTION: Be careful. If you click away from the topology editor after making changes
without clicking Save, your changes are discarded.
2. Select the systems to update, then click the Batch assign systems to selected nodes button that
becomes available.
3. Select system IDs from the System drop-down lists, or click the x to remove an existing assignment.
The list includes managed devices that haven't been assigned yet. If you have your devices ready and
they're not appearing in this list, you still need to bring them under Apstra management by adding
them to Managed Devices.
4. Click Apply Changes to apply the changes or, if you decide not to keep the changes, click Discard.
If you've removed an assignment, the device is still under Apstra management. It's ready and
available to be assigned to any blueprint. To remove the device completely from Apstra management,
"remove the device from Managed Devices" on page 552.
When you're ready to activate your changes, commit them from the Uncommitted tab.
Update One System ID Assignment (from Systems)
1. From the blueprint, navigate to Staged > Physical > Systems and click the system name to go to
details for that system.
NOTE: You can also get to the Systems details page from the Topology view. From the
blueprint, navigate to Staged > Physical > Topology and select the system to update.
2. In the panel on the right, in the Devices tab, click the Edit button for the S/N field.
3. Select the new system ID from the System drop-down list, or click the red box to remove an existing
assignment.
The list includes managed devices that haven't been assigned yet. If you have your devices ready and
they're not appearing in this list, you still need to bring them under Apstra management by adding
them to Managed Devices.
4. If you're going to deploy the device, make sure the deploy mode is set to Deploy. If you're removing an assignment, update the deploy mode to Undeploy.
5. Click the Save button to stage your changes.
If you've removed an assignment, the device is still under Apstra management. It's ready and
available to be assigned to any blueprint. To remove the device completely from Apstra management,
"remove the device from Managed Devices" on page 552.
When you're ready to activate your changes, commit them from the Uncommitted tab.
Update One or More System ID Assignments (from Systems)
2. In the dialog that opens, select new system IDs from the System drop-down lists, or click the red
trash can button to remove an existing assignment. If you're removing an assignment, go ahead and
update the deploy mode to Undeploy as well.
The list includes managed devices that haven't been assigned yet. If you have your devices ready and
they're not appearing in the lists, you still need to bring them under Apstra management by adding
them to Managed Devices.
3. Click Update Assignments to stage your changes and return to the Systems view.
If you've removed an assignment, the device is still under Apstra management. It's ready and
available to be assigned to any blueprint. To remove the device completely from Apstra management,
"remove the device from Managed Devices" on page 552.
When you're ready to activate your changes, commit them from the Uncommitted tab.
You can update system deploy modes from the Topology or Systems view.
NOTE: When you set the deploy mode on a system, it appears in its Device Context. But if you
haven’t added deploy_mode (as a Jinja variable) to the config template that’s assigned to that
system, it has no effect on the rendered configuration.
Update Deploy Mode on One or More Systems (from Topology)
1. From the blueprint, navigate to Staged > Physical > Topology and click Edit to open the topology editor.
CAUTION: Be careful. If you click away from the topology editor after making changes
without clicking Save, your changes are discarded.
2. In the topology editor, click one or more systems to change, then click the Manage selected nodes
properties button that becomes available. (You can also open the same dialog by clicking the settings
button for the selected system. It's the gear at the top-right of the system.)
3. Select the new deploy mode from the Deploy Mode drop-down list. The changes apply to all selected
systems.
4. To close the dialog, click anywhere on the canvas outside of the dialog.
5. Click Save to stage your changes, exit the topology editor and return to the Topology view. (If you
leave the page without saving, your changes are discarded.)
When you're ready to activate your changes, commit them from the Uncommitted tab.
Update Deploy Mode on One System (from Systems)
1. From the blueprint, navigate to Staged > Physical > Systems and click the system name to go to
details for that system.
NOTE: You can also get to the Systems details page from the Topology view. From the
blueprint, navigate to Staged > Physical > Topology and select the system to update.
2. In the Devices panel (on the right side), click the Edit button for the Deploy Mode field.
3. Select the deploy mode (deploy, ready, drain, undeploy), then click the Save button to stage your
changes.
Internal systems with deploy mode set to Deploy require an assigned config template. Make sure the
config template assigned to the device includes deploy_mode or your changes will have no effect on
configuration.
When you're ready to activate your changes, commit them from the Uncommitted tab.
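For example, a config template can branch on the deploy_mode variable. The fragment below is a hypothetical sketch, not Apstra-supplied content: the deploy-mode values match the modes listed above, but the configuration statements and structure are illustrative assumptions.

```jinja
{# Hypothetical Jinja fragment: render configuration only for systems
   whose deploy mode is "deploy"; drain mode gets its own branch. #}
{% if deploy_mode == 'deploy' %}
set system host-name {{ hostname }}
{% elif deploy_mode == 'drain' %}
{# drain-specific configuration would go here #}
{% endif %}
```

If a template contains no reference to deploy_mode at all, changing the mode changes the device context but leaves the rendered configuration untouched, which is the behavior the note above warns about.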
2. In the dialog, select the deploy mode (deploy, ready, drain, undeploy) for the selected systems.
3. Click Set Deploy Mode to stage the changes and return to the Systems view.
Internal systems with deploy mode set to Deploy require an assigned config template. Make sure the
config template assigned to the device includes deploy_mode or your changes will have no effect on
configuration.
When you're ready to activate your changes, commit them from the Uncommitted tab.
You can update Freeform system tag assignments from the Topology or Systems view.
Update Tags on One or More Systems (from Topology)
1. From the blueprint, navigate to Staged > Physical > Topology and click Edit to open the topology editor.
CAUTION: Be careful. If you click away from the topology editor after making changes
without clicking Save, your changes are discarded.
2. In the topology editor, click one or more systems to change, then click the Manage selected nodes
properties button that becomes available. (You can also open the same dialog by clicking the settings
button for one selected system. It's the gear at the top-right of the system.)
3. Add and remove tags in the Tag field, as needed. The changes apply to all selected systems.
4. To close the dialog, click anywhere on the canvas outside of the dialog.
5. Click Save to stage your changes, exit the topology editor and return to the Topology view. (If you
leave the page without saving, your changes are discarded.)
When you're ready to activate your changes, commit them from the Uncommitted tab.
Update Tags on One System (from Systems)
1. From the blueprint, navigate to Staged > Physical > Systems and click the system name to go to
details for that system.
NOTE: You can also get to the Systems details page from the Topology view. From the
blueprint, navigate to Staged > Physical > Topology and select the system to update.
2. Click the Tags tab in the right panel, then in the dialog that opens add and/or remove tags, as needed.
3. Click Update Tags to update tags for that system and return to the selection view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
Update Tags on One or More Systems (from Systems)
1. From the blueprint, navigate to Staged > Physical > Systems and select the check boxes for one or
more systems, then click the Tag button that becomes available above the table.
2. In the dialog, add and/or remove tags, as needed.
3. Click Add/Remove Tags to stage the changes and return to the Systems view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
You can delete Freeform systems from the Topology or Systems view.
Delete One or More Systems (from Topology)
1. From the blueprint, navigate to Staged > Physical > Topology and click Edit to open the topology editor.
CAUTION: Be careful. If you click away from the topology editor after making changes
without clicking Save, your changes are discarded.
2. In the topology editor, select one or more systems to delete and click the Delete selected nodes
button that becomes available.
3. Click Save to stage your changes, exit the topology editor and return to the Topology view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
2. Click Delete to stage the deletion and return to the Systems view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
Delete One or More Systems (from Systems)
1. From the blueprint, navigate to Staged > Physical > Systems and select the check boxes for one or
more systems, then click the Delete button that becomes available above the table.
2. Click Delete to delete the systems (and any links that are connected to the systems) and return to the
Systems view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
The device context includes all the contextual data that you can use when creating dynamic Jinja config templates. It includes such data as interfaces, IP addresses, prefix lengths, name, and state. It also shows you the neighbor interfaces of other devices. You can search in the query box to pinpoint the information you're looking for.
1. From the blueprint, either from the Topology view or the Systems view, click the name of the system
to view. Its details appear in the Systems view.
2. At the bottom of the device panel on the right, click Device Context to go to device context for the
device.
Links
Links (Freeform)
The Links view (Staged > Physical > Links) shows all the links that connect your devices together. The
table includes information about endpoint names, link type, tags, speed, role, interface names and IP
addresses. You can customize what appears in the table by selecting/deselecting elements in the
columns drop-down list. You can perform various tasks from the Links view as described in later
sections.
After you've created systems you can link them to each other from the Topology view.
1. From the blueprint, navigate to Staged > Physical > Topology and click Edit.
CAUTION: Be careful. If you click away from the topology editor without clicking Save,
your changes are discarded.
2. In the topology editor select the two systems that you want to link. You can select them in a couple
of different ways:
• Click and drag across the two systems.
• Hold down the alt key (command key on a Mac) while clicking the two systems.
When you select two systems additional tasks become available in the context-aware menu at the
bottom.
3. Click the Manage links between selected nodes button. The Links Management dialog opens showing
the two node names (and device profiles, as applicable).
4. Click Create Link. The port representations appear.
Next Steps:
If you haven't "created config templates" on page 466 yet, create them now. If you have config
templates ready for your devices and haven't assigned them yet, "assign" on page 411 them now. When
you've assigned all required config templates and all other requirements are met, you can deploy your
blueprint from the Uncommitted tab.
You can change one or more interfaces and IP addresses in the cabling map editor.
1. From the blueprint, navigate to Staged > Physical > Links and click the Edit cabling map button.
2. In the cabling map editor, change interface names and/or IP addresses, as applicable.
• You can use Batch clear override to clear all interfaces and IPv4/IPv6 values for selected links.
• To drop the override for either an interface name or IPv4/IPv6 address, submit an empty value in
the corresponding field.
3. Click Update to stage your changes and return to the Links view.
Next Steps:
When you're ready to activate your changes, commit them from the Uncommitted tab.
If you've already cabled up your devices, you can have Apstra discover your existing cabling instead of
using the cabling map prescribed by Apstra. All system nodes in the blueprint must have system IDs
assigned to them.
1. From the blueprint, navigate to Staged > Physical > Links and click the Fetch discovered LLDP data
button (second of two buttons above links list).
2. If the staged data is identical to the LLDP discovery results, a message says so. Your actual cabling matches the Apstra cabling map and no further action is needed.
3. If staged data is different from LLDP discovery results, the message includes the number of links that
are different.
4. Scroll to see details of the diffs (in red), or check the Show only links with LLDP diff? check box to see
only the differences.
5. To accept the changes and update the map to match LLDP data, click Update Stated Cabling Map
from LLDP.
1. From the blueprint, navigate to Staged > Physical > Links and select one or more check boxes for the
links to manage.
2. Click the Tag button that appears above the list after selecting link(s).
3. In the dialog, add and/or remove tags, as needed.
4. Click Add/Remove Tags to stage the changes and return to the Links view.
1. From the blueprint, navigate to Staged > Physical > Topology and click Edit.
2. In the topology editor, select the two systems connected by the link that you want to delete. You can select them in a couple of different ways:
• Click and drag across the two systems.
• Hold down the alt key (command key on a Mac) while clicking the two systems.
When you select two systems additional tasks become available in the context-aware menu at the
bottom.
3. Click the Manage links between selected nodes button. The Links Management dialog opens showing
the two node names (and device profiles, as applicable).
4. Click the Delete button for the link to delete, then click Save. You're still in the topology editor and if
you click away without saving, your changes are discarded.
5. Click Save (right-side) in the topology editor to stage your changes and return to the Topology view.
When you're ready to activate your changes, commit them from the Uncommitted tab.
Resource Management
Resource management in Freeform blueprints is similar to that in Datacenter blueprints. The difference
is that with Datacenter the mechanism is set up for you, and with Freeform you’re responsible for
setting it up yourself. You can set it up so resources are assigned and unassigned automatically as
needed, just like in the Datacenter reference design.
Resource Types
In Apstra, resources are values that are assigned to various elements of the network. Resources include the following types: IPv4 addresses, IPv6 addresses, ASNs, VNIs, VLANs, and integers.
Resource Groupings
Resources for Freeform blueprints are grouped and organized in the following ways:
Resource Pools
Allocation Groups
• provide the mechanism for pulling resources from pools and assigning them.
In the Datacenter reference design, templates determine the initial resource requirements. When you create a Datacenter blueprint (from a template), allocation groups are created automatically. The Freeform reference design doesn't use templates, so resource requirements can't be determined when you create a Freeform blueprint. You'll create allocation groups yourself in Freeform blueprints.
Groups (Folders)
• can be created and deleted automatically as needed, using group generators (described below).
• All resources must reside within a group (or group generator) that you create (not directly in the built-
in Root group).
Local Pools
• can be created and deleted automatically as needed, with local pool generators (described below).
Generators
Generators automatically create and delete groups, resources, or local pools, as applicable. The graph
database returns a set of objects based on a set of conditions that you specify. These conditions define
the scope of what is added and/or removed.
Group Generator
You can put all of your resources in one group (folder), but if your design is complex, it's easier to
manage resources in multiple groups. You can organize resources in any group combination that makes
sense for you. You probably want to have nested groups, and you might want to have a group for every
system in your network. Creating groups manually is simple enough; just click the group that you want
to put your new group in and give it a name. Then you'd populate the group with your resources, either
manually, or automatically with resource generators (described later). But, if you have many systems and
you want a group for every system, creating each group manually is a lot of unnecessary work. You can
automate this process with group generators.
To create a group generator, give it a name, then specify a scope based on how you want your groups to
be created and managed. Our example of creating one group for every internal system uses the
following scope:
This scope tells the graph database to find all internal systems, create a group for each one, and assign the applicable system name to each group. The state of the groups stays in sync with the graph database as the fabric changes. If you subsequently delete a system, the group created for that system is also deleted, and all resources in that group are released back to the pool they came from, ready to be re-used. Conversely, if you create a system after this group generator is created, a group for that system is automatically created (and if you created resource generators inside the group generator, resources are also allocated accordingly).
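The create-and-delete behavior described above can be illustrated with a short sketch. The following Python is not Apstra's implementation, and the system names and properties are made-up assumptions; it only shows the synchronization idea: one group per in-scope system, created and deleted as systems come and go.

```python
# Sketch of group-generator behavior (illustrative only, not Apstra's
# implementation; system names and properties are made up): keep exactly
# one group per internal system, creating and deleting groups as the
# set of systems changes.
def sync_groups(systems, groups):
    """systems: name -> properties; groups: set of existing group names."""
    in_scope = {name for name, props in systems.items()
                if props.get("system_type") == "internal"}
    created = in_scope - groups        # new systems get a new group
    deleted = groups - in_scope        # removed systems lose their group
    return (groups | created) - deleted

systems = {"leaf1": {"system_type": "internal"},
           "leaf2": {"system_type": "internal"},
           "server1": {"system_type": "external"}}
groups = sync_groups(systems, set())   # one group per internal system
del systems["leaf2"]                   # deleting a system...
groups = sync_groups(systems, groups)  # ...also deletes its group
```

In Apstra itself, the equivalent of `in_scope` is produced by the graph query you enter as the generator's scope; the sketch just replaces that query with a dictionary filter.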
Resource Generator
When it matters what the value is, you can allocate a resource manually, but in most cases you'll want to
automate the process with resource generators. Resource generators don't actually generate resources;
they pull existing resources from resource pools via allocation groups, based on a specified scope.
Before creating a resource generator create any resource pools and allocation groups that you'll need.
Creating an allocation group is straightforward; give it a name and select one or more resource pools to
include in the group.
Resources must be inside a group (or group generator as described below) that you create. To put all
resources generated from a resource generator in one group, select the group and create your resource
generator from there.
To create a resource generator, give it a name, then specify a resource type, an allocation group, a
subnet prefix length (for IPv4 only), and a scope. For example, you might want a group to contain link IPs (/31 addresses) for the links between all internal systems (switches). First, create any resource pools and allocation groups that you'll need. In the resource generator, specify resource type IPv4, an applicable allocation group, the subnet prefix length, and the following scope:
This scope tells the graph database to find all fabric-facing links. The generator specifies to create link
IPs for them, and add them to the group. Resources are automatically generated or released as links are
added or removed.
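As a sketch of that allocate-and-release behavior, the following Python is illustrative only; the pool range and link names are assumptions, and Apstra's actual mechanism works through the graph database and allocation groups. It pulls a /31 from a pool for each link and returns the /31 to the pool when the link is removed.

```python
import ipaddress

# Sketch of resource-generator behavior for link IPs (illustrative only,
# not Apstra's implementation): pull a /31 from an allocation group's
# pool for every fabric-facing link, and release it when a link is removed.
pool = list(ipaddress.ip_network("10.0.0.0/24").subnets(new_prefix=31))
allocated = {}  # link name -> /31 subnet

def sync_link_ips(links):
    for link in links:                 # allocate for links new to the scope
        if link not in allocated:
            allocated[link] = pool.pop(0)
    for link in list(allocated):       # release for links that disappeared
        if link not in links:
            pool.insert(0, allocated.pop(link))

sync_link_ips(["leaf1<->leaf2", "leaf1<->leaf3"])
first = allocated["leaf1<->leaf2"]     # a /31 from the pool
sync_link_ips(["leaf1<->leaf3"])       # removed link: its /31 returns to the pool
```

The key point the sketch makes is the last line of the resource generator description: nothing is ever "generated" from thin air; values only move between the pool and the allocation table.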
To put every generated resource in its own group automatically, you can put your resource generator
inside a group generator. The resource generator inherits the scope of the group generator.
For example, to create a group for every system and put an ASN in each group, you'd select the group
generator already created and create the resource generator from there. The resource generator inherits
the scope from the group generator. In our example, the scope is:
The graph database finds every internal system, allocates an ASN to each one, then puts each ASN in
the applicable group based on internal systems.
You can put multiple resource generators inside a group generator (or group). Let's continue our example
that already has a group for every internal system and an ASN in every group. You might also want your
internal system groups to include loopback IP addresses. You can create a resource generator for
loopback IP addresses in the same group generator as for the ASNs; you'd just select resource type IPv4.
The process is the same as when you added the ASNs: from the same group generator as before, create the resource generator.
Select a group to put the resource in, give it a name, specify the resource type, and select an allocation group to pull the resource from. The resource then appears in the specified folder. The table shows the resource and the allocation group it was pulled from, and whether it has been assigned yet. Initially, it won't be.
You can create and assign a specific VLAN ID to a specific system (node) in your blueprint. If it doesn't matter what the specific value is, you can create a generator that dynamically creates and deletes VLAN IDs based on the conditions you set. Values are pulled from these pools as needed. These pools are specific to each blueprint.
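A minimal sketch of that behavior, assuming a made-up VLAN range and owner names (this is not Apstra's implementation; it only illustrates explicit versus dynamic allocation from a blueprint-local pool):

```python
# Sketch of a blueprint-local VLAN pool (illustrative only, not Apstra's
# implementation; the range and owner names are made-up assumptions).
# Shows explicit assignment of a specific VLAN ID versus dynamic
# allocation of the next free value.
class LocalVlanPool:
    def __init__(self, first=100, last=199):
        self.free = list(range(first, last + 1))
        self.assigned = {}                 # owner -> VLAN ID

    def allocate(self, owner, vlan_id=None):
        if vlan_id is None:                # dynamic: next free value
            vlan_id = self.free.pop(0)
        else:                              # explicit: a specific value
            self.free.remove(vlan_id)
        self.assigned[owner] = vlan_id
        return vlan_id

    def release(self, owner):
        self.free.append(self.assigned.pop(owner))

pool = LocalVlanPool()
v1 = pool.allocate("leaf1-vn-red")         # dynamic
v2 = pool.allocate("leaf1-vn-blue", 150)   # explicit VLAN 150
pool.release("leaf1-vn-red")               # value returns to the pool
```

Explicit allocation corresponds to assigning a specific VLAN ID to a specific system; dynamic allocation corresponds to letting a generator pull whatever value is free.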
1. Create resource pools ("ASNs" on page 780, "VNIs" on page 782, Integers, "IPv4 addresses" on page
784, "IPv6 addresses" on page 786) in the global Resources catalog. This is where you specify ranges
of resource values.
2. Create allocation groups in the blueprint. This is where you specify one or more resource pools to be included in an allocation group. When you're ready to assign resources, you'll select resource pools from one of these allocation groups.
3. Plan how you'd like to organize your resources, then create "groups" on page 453 and "group
generators" on page 454 in the blueprint, as applicable.
4. Create "resources" on page 453 and "resource generators" on page 458 in the blueprint, as
applicable.
5. Create "local pools" on page 462 and "local pool generators" on page 463 in the blueprint, as
applicable.
6. Assign resources. (Assigned resources and assigned groups appear on the detailed system page.) To render the correct configuration using these resources, apply the resources to the individual Jinja2 config templates.
Blueprint Resources
SUMMARY
Groups are folders used to organize resources in Freeform blueprints. You can nest groups inside other groups, in as many levels as needed, to organize your resources. You can add new groups to any existing
group. If you haven't created any groups yet, you'll put your new group in the built-in Root group.
(Instead of, or in addition to, creating groups manually as described here, you can "create group
generators" on page 454 that create groups automatically and dynamically based on conditions that you
set.)
1. From the blueprint, navigate to Staged > Resource Management > Blueprint Resources.
2. Click the group where you want to put the new group, then click Create (right-side) and select Group.
The group you selected appears in the immutable Parent field.
3. Enter a group name.
4. If you'd like to add context to the group, enter applicable key-value pairs in the Data field. The Data field holds metadata that you can associate with the group, such as a description or an indication of what the object represents.
Example: {"group_type":"vn"}
5. Click Create to create the group and return to the Blueprint Resources view.
When you've created one or more groups you can start putting resources and resource generators into
them.
SUMMARY
Groups (folders) in Freeform blueprints organize resources. Group generators (folders with properties)
automatically create and delete groups based on specified conditions.
For more explanation, see the "Freeform Resource Management Introduction" on page 447.
1. From the blueprint, navigate to Staged > Resource Management > Blueprint Resources.
The group directory appears on the left. You can nest group generators inside any group.
2. Click the group where you want to put the new group generator, then click Create (right-side) and
select Group Generator.
The name of the group you selected appears in the immutable Parent field.
3. Enter a group generator name, then specify a scope based on how you want your groups to be
created and managed.
For example, to create one group for every internal system, use the following scope:
This scope tells the graph database to find all internal systems and create a group for each one of
them; then assign the name of the system to each group. If you subsequently delete a system, the
group created for that system is also deleted. Conversely, if you create a system after this group
generator is created, a group for that system is automatically created.
You can click the Open in Graph Explorer button to open a new tab that shows the groups that will
be created based on the current topology. In our example, the topology includes 3 internal systems,
and 3 groups will be created, as expected.
4. Back in the Create Group Generator dialog, click Create to create the group generator and return to
the Blueprint Resources view.
Groups will be created and deleted dynamically based on your specified conditions.
In our example, the group generator named system was created inside the Root folder, and it
automatically created 3 groups, one for each of the systems in the topology. To see the resources in a
group, click the name of the group. We haven't put any resources into the group we just created, so
the resource table is empty.
Next Steps: "Set up resource generators" on page 458 to automatically add and delete resources in your
groups, as needed.
SUMMARY
Resources are values that you assign to systems and links. Resources include IPv4 addresses, IPv6
addresses, ASNs, VNIs, VLANs, and integers.
Resources are located inside groups (folders) that you create. (Resources can't be put directly in the
predefined Root group). If you haven't "created groups" on page 453 yet, create them before proceeding
here.
1. From the blueprint, navigate to Staged > Resource Management > Blueprint Resources.
2. Click the group where you want to put the new resource, then click Create (right-side) and select
Resource.
The group you selected appears in the immutable Parent field.
3. Enter a resource name and select a resource type (IPv4, Host IPv4, IPv6, Host IPv6, ASN, VNI, VLAN,
Integer).
4. To have Apstra automatically pull resources from pools, select the applicable allocation group from
the drop-down list.
5. To manually allocate a resource, enter the value in the Value (override) field.
6. Enter a subnet prefix length, as applicable.
7. Click Create to create the resource and return to the Blueprint Resources view.
Resource generators are located inside groups (folders) that you create. If you haven't "created groups"
on page 453 yet, create them before proceeding. To automate resource allocation, you'll also need to confirm that you've created allocation groups and that they map to a sufficient number of resources.
1. From the blueprint, navigate to Staged > Resource Management > Blueprint Resources.
2. Select the group where you want to put the new resource generator, then click Create (right-side)
and select Resource Generator.
The type and name of the container (group) appear in the immutable Container Type and Container
fields, respectively.
3. Enter a resource generator name, then enter the scope for your generator.
To assist with determining scope, you can use the Graph Explorer.
4. Click Create to create the resource generator and return to the Blueprint Resources view.
Allocation Groups
IN THIS SECTION
Allocation groups consist of one or more resource pools that you use to assign resources (IPv4, IPv6,
ASN, VNI, Integers).
Create Allocation Group (from Resource Management Tab) | 460
Create Allocation Group (from Topology View) | 461
(You can map additional resource pools to allocation groups at any time. If you run low on resources
and the global resource pools don't have enough resources defined, you can create more pools or add a
range of values to an existing pool. You can create allocation groups from the Resource Management tab
or from the topology view. You're just creating a group of already existing resource pools; an allocation
group is simply a way to combine them in one location.)
An allocation group consists of one or more "global resource pools" on page 780. You'll assign resources
later from one of these allocation groups. If you haven't created the resource pools you need, go do that
before proceeding here.
2. Enter an allocation group name and select the resource type (IPv4, IPv6, ASN, VNI, Integer).
3. Select one or more check boxes for the resource pools to include in the allocation group. (These
resource pools are from the global Resources catalog in the left navigation menu. You can add
resource pools at any time if you need more resources available for your allocation groups.) You can
create the group without selecting any resource pools, but of course, you'll need to add at least one
before you can assign resources from it.
4. Click Create to create the allocation group and return to the table view.
Next Steps: When you assign resources, you'll select an allocation group that you've created; then
Apstra will pull resources from the group and assign them, as needed.
2. Enter an allocation group name and select a resource type (IPv4, IPv6, ASN, VNI, Integer).
3. Select one or more check boxes for the resource pools to include in the allocation group. (These
resource pools are from the global Resources catalog in the left navigation menu. You can add
resource pools at any time if you need more resources available for your allocation groups.)
4. Click Create to create the allocation group and return to the Topology view.
When you assign resources, you'll select an allocation group that you've created; then Apstra will pull
resources from the group and assign them, as needed.
Local Pools
IN THIS SECTION
1. From the blueprint, navigate to Staged > Resource Management > Local Pools > Create Local Pool.
3. You'll be applying the integers to a system. Select the system from the Owner drop-down list.
4. Enter the range of integers for the pool.
5. If you want to add another range, click Add a range and enter the range.
6. Click Create to create the pool and return to the table view.
SUMMARY
Use local pools to allocate integer resources to a specific system. Create local pool generators to
automatically create and delete local pools based on your criteria.
1. From the blueprint, navigate to Staged > Resource Management > Local Pools > Local Pool Generator
> Create Local Pool Generator.
2. Enter a local pool generator name, then enter the scope for your generator.
To assist with determining scope, you can use the Graph Explorer.
3. Click Create to create the local pool generator and return to the Local Pools table view.
Catalog
IN THIS SECTION
Tags | 474
Config Templates
IN THIS SECTION
We recommend that you familiarize yourself with the Jinja Template Designer before working with
config templates.
Several predefined config templates are included with the Apstra product. To get familiar with the syntax
and how Jinja is used in config templates, check out the sections below.
protocols {
lldp {
port-id-subtype interface-name;
port-description-type interface-description;
neighbour-port-info-display port-id;
interface all;
}
}
This straightforward template doesn't include any variables or other conditions. It's nested inside the
config template junos_configuration.jinja, one of the other predefined config templates. You could create
your own config template and nest this basic one in it as well.
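Because config templates are standard Jinja, one way to nest a template inside another is Jinja's include directive. The sketch below is a hypothetical illustration only; the file name lldp_basic.jinja is an assumption, and the exact nesting mechanism Apstra uses may differ:

```
system {
    host-name {{hostname}};
}
{# Pull in the basic LLDP template shown above (hypothetical file name) #}
{% include 'lldp_basic.jinja' %}
```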
{% if hostname %}
system {
host-name {{hostname}};
}
{% endif %}
This template includes an if-then statement and the variable hostname. When configuration is rendered, if
the system device context includes a value for hostname, then the rendered configuration includes that
value.
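For example, if the device context supplied hostname = 'leaf1' (a hypothetical value), the rendered configuration would be:

```
system {
    host-name leaf1;
}
```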
{% if property_sets.get('ntp') %}
system {
ntp {
server {{property_sets['ntp']['ntp_server']}};
}
}
{% endif %}
The example below shows the syntax for the property set ntp that contains the IP address.
ntp_server = '1.2.3.4'
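With that property set in place, the ntp template above would render as:

```
system {
    ntp {
        server 1.2.3.4;
    }
}
```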
Creating config templates in the blueprint catalog (instead of the design catalog) gives you access to
device context for systems that you've already added to your blueprint. Device context groups relevant
information into one place, making it easier to get the information you need while creating config
templates.
1. From the blueprint, navigate to Staged > Catalog > Config Templates and click Create Config
Template.
2. In the dialog, enter a name for the config template including the .jinja extension. (The .jinja extension
is required even if you're not using Jinja.)
3. Enter or paste your content into the Template Text field. You can also import a config template that
you created in the design (global) catalog.
• To see device context for a specific system, select it from the System drop-down list.
• Preview and Preview Mode are available only when you're editing a config template.
4. Click Create to create the config template and return to the config template catalog view.
When you're ready you can "assign config templates" on page 411 to internal systems.
You can create config templates in the design (global) catalog, then import them into as many blueprints
as you want. (You can also create config templates directly in your blueprint, which gives you access to
device context, making it easier to write config templates.)
1. From the blueprint, navigate to Staged > Catalog > Config Templates and click Import Config
Template(s).
2. Select the check boxes for the config templates to import from the design (global) catalog.
3. Click Import to stage the import and return to the table view.
1. From the blueprint, navigate to Staged > Catalog > Config Templates to go to the table view.
2. Either from the table view or the details view click the Edit button for the config template to edit.
• To see device context for a specific system, select it from the System drop-down list.
• To see the full configuration, including the changes you're making, select Complete from the
Preview Mode drop-down list.
• To see only the configuration that you've changed, select Incremental from the Preview Mode
drop-down list.
4. Click Update (bottom-right) to update the config template and return to the table view.
If you create a config template directly in a blueprint, and you want to make it available to other
blueprints, you can export it to the design (global) catalog.
1. From the blueprint, navigate to Staged > Catalog > Config Templates to go to the table view.
2. Either from the table view or the details view click the Export config template button for the config
template to export.
3. Click Copy to copy the contents, Export to Global to export the config template to the design (global)
catalog, or click Save As File to download the file.
4. When you've copied, exported or downloaded the config template, close the dialog to return to the
table view.
1. From the blueprint, navigate to Staged > Catalog > Config Templates to go to the table view.
2. Either from the table view or the details view click the Delete button for the config template to
delete.
3. Click Delete to stage the deletion and return to the table view.
Device Profiles
IN THIS SECTION
Device Profiles define the capabilities of supported hardware devices. Apstra interacts with devices via
system agents. Device profiles don't include system IDs (serial numbers), which enables you to build your
network in the Apstra environment 'offline' before your devices are ready. In Freeform blueprints you
import device profiles to provide context for configuring systems with config templates.
1. From the blueprint, navigate to Staged > Catalog > Device Profiles and click Import Device profile(s)
(right-side).
2. Select one or more check boxes for the device profile(s) to import into the blueprint. Only supported
device profiles in Freeform appear in the list (currently only Juniper devices).
3. Click Import to stage the change and return to the table view. The newly imported device profile(s)
appear in the list.
Next Steps:
You're ready to "create internal systems" on page 402 and assign your imported device profiles to them.
If a device profile is not being used by a system, you can delete it from the Freeform blueprint catalog, as
of Apstra version 4.2.0.
1. From the blueprint, navigate to Staged > Catalog > Device Profiles and click the Delete button in the
Actions panel for the device profile to delete.
The Delete Device Profile dialog opens showing the device profile to be deleted.
2. Click Delete to stage the deletion and return to the Device Profile catalog view.
Property Sets
IN THIS SECTION
Property sets provide a valuable capability to fully parameterize config templates. Consisting of
key-value pairs, they enable you to separate the static portions of config templates from variables. You
create or clone property sets in the blueprint catalog. (Property sets used in Freeform blueprints are not
related to property sets in the design (global) catalog.) You'll include property set names in your config
templates, and the values in those property sets are used when configuration is rendered.
You can also create a property set and assign it directly to one system.
IN THIS SECTION
1. From the blueprint, navigate to Staged > Catalog > Property Sets to go to the table view.
2. Either from the table view or the details view, click the Edit button for the property set to edit.
3. Make your changes.
4. Click Update to stage your changes and return to the table view.
1. From the blueprint, navigate to Staged > Catalog > Property Sets and click the Delete button for the
property set to delete.
2. Click Delete to stage the deletion and return to the table view.
Tags
IN THIS SECTION
Tags Overview
You can tag systems, then use the Find by Tags feature to find them later.
You can include tags in config templates. Systems and links with those tags are rendered as specified in
the config template. For example, if you have bare metal servers with SR-IOV interfaces, and you need to
produce specific configuration for those interfaces, you can add the tag sriov, then specify that links with
that tag be configured per the config template.
Tags are a way for you to assign metadata to Apstra-managed resources. Tags can help you identify,
organize, search for, and filter Apstra systems and links. With tags, you can categorize resources by
purpose, owner, environment, or other criteria. Because tags are metadata, they are not used just for
visual labeling; they are also applied as properties of nodes in the Apstra graph database. This node
property (or device property) is then available for you to reference in Jinja for dynamic variables in config
generation, and in Apstra's real-time analytics via Apstra's Live Query technology and Apstra Intent-
Based Analytics.
Here is an example of using the tag firewall in a "config template" on page 466 to render a specific
description.
{% if has_tag(interface.link.neighbor_system.id, 'firewall') %}
description "this is a firewall facing interface";
{% endif %}
1. From the blueprint, navigate to Staged > Catalog > Tags and click Create Tag.
1. From the blueprint, navigate to Staged > Catalog > Tags and click the Edit button for the tag to edit.
2. Change the description.
3. Click Update to stage the change and return to the table view.
1. From the blueprint, navigate to Staged > Catalog > Tags and click the Delete button for the tag to
delete.
2. Click Delete to stage the deletion and return to the table view.
Tasks
IN THIS SECTION
Uncommitted (Blueprints)
IN THIS SECTION
Uncommitted Introduction
IN THIS SECTION
Warnings | 483
While you're staging your new blueprint (under the Staged tab), the status indicator on the Uncommitted
tab is red. When you've finished staging the blueprint and resolved any build errors, the indicator turns
yellow (or orange if you have warnings, as of Apstra version 4.0.1) and the Commit button turns from
gray to black, indicating that the blueprint is ready to be committed. When you commit your pending
changes, you push configuration to the active blueprint. The meanings of the status indicator colors are
shown below:
Red: The blueprint needs staging or has build errors that must be resolved before you can commit.
Orange: The blueprint has warnings to notify you of potential issues. The blueprint may or may not
have staged changes. You can commit to a blueprint that has warnings and pending changes.
Yellow: The blueprint has pending changes that you can commit to the blueprint.
Green: The blueprint does not have any pending changes, warnings, or errors. The blueprint is active
and there is nothing to commit.
The blueprint below has warnings and pending changes. You can commit these changes.
The blueprint below has warnings and no pending changes. There is nothing to commit.
You can review pending changes, and then decide to commit those changes or discard them. For more
information, see the sections below.
Logical Diff
1. From the blueprint top menu, click Uncommitted to go to pending changes. You can review Logical
Diff, Full Nodes Diff, Build Errors, and Warnings. Full nodes diff shows all uncommitted changes in
one place, organized by node type, change type, and raw data. You can sort and search the diffs, then
preview the changed element. Full nodes diff requires a fair amount of resources and time to generate.
2. From Logical Diff, click a name from the Name column to see detailed changes, additions or deletions
for that element.
In some cases, you have the option of viewing only the differences, as shown below.
The preview for config template changes is color-coded to easily see the content that has been
added (in green) and the content that has been removed (in red).
3. When you are finished reviewing your changes and you've resolved any build errors, proceed to
commit your changes to the blueprint or discard them, as applicable.
Build Errors
Warnings
You can check configuration for semantic errors and omissions before deploying Junos OS and Junos
Evolved devices, starting in Apstra version 4.2.0.
When the Commit button on the Uncommitted tab becomes clickable, Apstra has validated that
requirements are met and you can activate your blueprint changes.
4. If you decide to activate the blueprint changes, click Commit. We recommend that you enter the
optional revision description to identify your changes. These descriptions are displayed in the
Revisions section of Time Voyager, where you can roll back to a previous network state. If you don't
add a description now, you can always add one later. If you need to roll back to a previous revision,
this description helps you determine the appropriate revision. Specific diffs between revisions are not
displayed, so the description is the only change information available for that revision.
[IMAGE SHOWING COMMIT BUTTON TOOLTIP]
5. Click Commit to push the staged changes to the active blueprint and create a revision. The Apstra
engine validates all commits and makes sure everything works as it pushes configuration. Cabling
anomalies may appear until validation is complete.
[IMAGE SHOWING ACTIVE TASKS]
6. While the task is active, you can click Active Tasks at the bottom of the screen for information about
task progress. (Additional task history is available in the blueprint at Staged > Tasks.)
When a blueprint has been committed and devices have been deployed, the network is up and running.
However, networks are not static and can require modifications as they evolve. Because Juniper Apstra
treats the network as a single entity, this is straightforward: all required device configurations are
generated and pushed to the devices when you commit the change.
IN THIS SECTION
Query | 501
IN THIS SECTION
Selection Panel
When you select a node in the active Topology or Nodes view, information about telemetry, device,
properties, tags, and VMs for that node is available in the Selection panel on the right.
When you select a link in the active Topology or Links view, properties and tags information for that link
is available in the Selection panel on the right.
Status Panel
From the blueprint, navigate to Active > Physical to see the statuses for services and deploy modes; the
deployment statuses for discovery, drain, and service; and traffic heat.
Topology (Active)
IN THIS SECTION
You can look at topologies as 2D views or 3D views. When you select a node from a topology view (by
clicking its element in the topology, or by selecting it from the Selected Nodes drop-down list), details
for the selection are displayed. You can set the view to show neighbors, links, virtual network
endpoints (as of Apstra version 4.0.1), or headroom. Telemetry and other device properties are displayed
in the selection panel on the right side of the window.
• To make topology elements larger, click the Expand Nodes check box.
• To show the links between elements, click the Show Links check box.
• To show node name, hostname (and role and tags as of Apstra version 4.0.1) as applicable, hover over
an element.
• To display a different label (name, hostname, S/N), select a different label from the Topology Label
drop-down list.
• To show rack details, select a rack by either clicking its element or by selecting it from the Selected
Rack drop-down list.
• To show node details, select the node by either clicking its element in the topology or by selecting it
from the Selected Node drop-down list.
NOTE: This feature has been classified as a "Juniper Apstra Technology Preview" on page 1223.
These features are provided "as is" and their use is voluntary. "Juniper Technical Support" on page 893
will attempt to resolve any issues that customers experience when using these features and will create
bug reports on behalf of support cases. However, Juniper may not provide comprehensive support
services for Tech Preview features.
From the blueprint, navigate to Active > Physical > Topology and click 3D.
• You can zoom in and out, move left and right, and reset to the default size and orientation.
• To show rack details, select a rack by either clicking its element or by selecting it from the Selected
Rack drop-down list.
• To show node details, select a node by either clicking its element or by selecting it from the Selected
Node drop-down list.
• To show unused ports, click the Show Unused Ports check box.
• To show a different label (name, hostname, S/N), select a different label from the Topology Label
drop-down list (right side).
• To show a different layer, select a different layer from the Layer drop-down list.
• Choose to show all neighbors or only specific ones (generic, leaf, spine, and so on).
The traffic heat layer is shown below. The colors represent different available/used capacity based on
the current system-level TX/RX, averaged over two minutes by default. If the aggregated TX or RX across
all the device interfaces is less than 20%, the device is green. If it's between 21% and 40%, it's yellow,
and so on; each 20% band of capacity is shown with a different color. (Server color is calculated based
on the interface counters of the leaf ports facing that server.) To see RX/TX per interface for a single
node, click the node. If any of a device's deployed ports are above 81% of capacity in either RX or TX,
an "Alert" icon is shown on the device.
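The 20% banding described above can be sketched as follows. This is an illustration only, not Apstra's implementation; the band colors beyond green and yellow are assumptions made for the example:

```python
def heat_color(utilization_pct: float) -> str:
    """Map an aggregated TX/RX utilization percentage to a heat-layer
    color band, following the 20%-per-band scheme described above.
    Colors past yellow are assumptions for illustration."""
    bands = [
        (20, "green"),    # up to 20% used
        (40, "yellow"),   # 21-40%
        (60, "orange"),   # 41-60% (assumed color)
        (80, "red"),      # 61-80% (assumed color)
    ]
    for upper, color in bands:
        if utilization_pct <= upper:
            return color
    return "dark-red"     # above 80% (assumed color); may also trigger the Alert icon

print(heat_color(15))   # green
print(heat_color(35))   # yellow
```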
Headroom (Topology)
NOTE: To see the headroom view, the Device Traffic probe must be enabled. If you disable or
delete the probe, the traffic heat layer in the active topology is not available. For more
information, see "Device Traffic probe" on page 1047.
To view traffic history on top of the physical topology from the headroom view, select Time Series.
Nodes (Active)
IN THIS SECTION
1. From the blueprint, navigate to Active > Physical > Nodes and select the device.
2. From the selection panel (right-side) click Device, then click Rendered, Incremental, or Pristine to
review the different configurations.
3. Click Apply Full Config.
Links (Active)
IN THIS SECTION
NOTE: Cabling maps can also be exported from the Staged > Physical > Links view.
Racks (Active)
IN THIS SECTION
To go to rack details in the active blueprint, navigate to Active > Physical > Racks. You can change the
default view from a table to a list. You can search for specific racks by name or rack type.
1. From the blueprint, navigate to Active > Physical > Racks and select the rack that you want to
change.
2. In Rack Properties (right panel selection) click the Edit button for the rack name.
3. Change the name and click the Save button to stage the change.
NOTE: You can also change rack names from the staged blueprint.
Pods (Active)
From the blueprint, navigate to Active > Physical > Pods to see details about deployed pods. You can
search for specific nodes or links and select a layer to see anomalies, deploy modes, deployment status,
and more. 3-stage topologies have one pod, while 5-stage topologies have two or more pods. Click a pod
to see its details.
Query
You can search for MAC addresses, IP addresses and VMs by using the query feature in the active
blueprint.
Anomalies (Service)
IN THIS SECTION
This section covers service anomalies. For analytics anomalies, see "IBA Anomalies" on page 14.
Discovery Anomalies
To demonstrate anomalies during the discovery phase, cabling errors have been deliberately configured
in the example below to trigger alarms.
To see the list of the cabling anomalies, click the Cabling gauge on the dashboard.
To see the topology view of the anomalies affecting spine1, click Spine1 in the topology.
You can see the cabling violations on spine1. In the right panel, click the red status indicator for All
Services to see a comparison of expected vs. actual values. If other anomalies existed in addition to the
cabling anomalies, they would also be listed here.
To see how to resolve these cabling issues, see "Fetching Discovered LLDP Data" on page 140.
Configuration Deviation
IN THIS SECTION
Running configurations on devices are continuously compared with the "Golden Config" on page 519.
If a config deviation is found, a configuration anomaly is raised. Typically, such deviations are seen when
changes were made outside of Apstra (from the device CLI), or when Apstra attempts to deploy
configuration on a switch that is not able to take the change. These anomalies remain active until either
the anomalous configuration is removed from the device or the anomaly is suppressed.
1. From the blueprint dashboard, any configuration deviations are displayed in the Deployment Status
section.
3. Click a node name to see the device telemetry page, then click Config to see a side-by-side
comparison of the actual config to the golden config. (The difference is not shown in the image
below.)
4. To keep the configuration difference, click Accept Changes. This suppresses the configuration
anomaly, and does not affect "Intended" or Apstra-rendered config. The primary purpose of Accept
Changes is to mitigate cosmetic configuration anomalies.
NOTE: Out-of-band (OOB) changes to the fabric are not supported. Do not Accept Changes
to attempt to add OOB changes. For custom changes, use "configlets" on page 766.
CAUTION:
• Using Accept Changes does not make the OOB change persistent. In the event of a
full config push or Apstra writing to the same config, all OOB changes are
discarded.
5. To make the actual configuration conform to the intended configuration, click Apply Full Config, then
click Confirm. Applying the full config erases the device's current (unintended) configuration before
re-applying the complete intended configuration. A full configuration push does not include any OOB
changes, and therefore erases them, regardless of their "Accepted" state.
CAUTION: Never directly modify any Apstra-rendered config that affects routing and
connectivity. Doing so can potentially impact the network's operation. When in doubt,
contact "Juniper Support" on page 893.
6. After resolving the config deviation anomaly (accept changes or apply full config) the actual config
matches the golden config and the anomaly is cleared.
If an improperly-configured configlet causes Apstra deployment errors (when the device rejects the
command), a service config deployment failure occurs. In this case, follow the steps below to resolve the
anomaly.
1. From the blueprint, navigate to Staged > Catalog > Configlets and delete the configlet.
2. Click Uncommitted and commit the change. The configuration deviation remains because the golden
config is empty. The golden config is the running config of the device after successful deployment of
Apstra-rendered config. If deployment fails there is no golden config, thus causing the config
deviation.
3. Click Dashboard, then click Config Dev. (in the Deployment Status section).
4. Click the node name, then select Accept Changes to notify Apstra that the failure can be ignored.
Root Causes
IN THIS SECTION
Disconnection between two devices. Symptoms: union of symptoms for link broken, link miscabled, and
operator-shut interface, for all constituent links between a spine and a leaf.
2. Enter a Trigger Period or leave the default, and click Create to enable root cause analysis and return
to the table view.
Root cause analysis runs periodically and produces zero or more root causes. Each root cause found
includes a description, a timestamp of when it was detected, and a list of symptoms.
IN THIS SECTION
When you commit a staged blueprint (deploy updates to the network), the result might not be what you
expected. Maybe you've committed changes to a blueprint by mistake and you want to undo those
changes. Or maybe you've decided to return the network to the state it was in several revisions ago.
Depending on the level of complexity, manually staging and committing changes to undo what you've
done can be difficult and error-prone. In these cases you'll want to use Time Voyager to quickly restore
previous revisions of a blueprint.
You can roll back a blueprint to any retained revision. The five most recent blueprint commits are
retained, by default. When you commit a sixth time, the oldest revision is discarded: the second revision
becomes the first, the sixth revision becomes the fifth, and so on as additional blueprint changes are
committed.
You can change the number of automatically saved revisions to up to 100 revisions (as of Apstra version
4.2.0). In the Commit dialog, a message lets you know that if you've reached your limit and you commit
another change, the new revision will replace the oldest auto-saved revision. If you've reached the limit
when you want to commit, and you don't want any revisions deleted, you can close the commit dialog
without committing, then increase the number of auto-saved revisions in Time Voyager.
You can retain a particular revision indefinitely by keeping it, or manually saving it. When you keep a
revision it is not included in the 5 revisions that cycle out. You can keep up to 25 revisions, effectively
having 30 blueprint revisions to choose from, by default. (If you change the number of automatically
saved revisions to the maximum of 100, you could save up to 125 revisions.) Keep in mind that each
revision requires storage space. If you decide that you no longer want to keep a revision you can simply
delete it.
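The retention behavior described above can be sketched in Python. This is a simplified illustration of the rotation and keep semantics, not Apstra's implementation; the limits mirror the defaults described above:

```python
from collections import deque

class RevisionHistory:
    """Sketch of Time Voyager retention: auto-saved revisions rotate
    FIFO up to a limit (default 5), while 'kept' revisions are retained
    indefinitely until manually deleted (up to 25 by default)."""
    def __init__(self, auto_limit=5, keep_limit=25):
        self.auto_limit = auto_limit
        self.keep_limit = keep_limit
        self.auto = deque()   # rotating auto-saved revisions, oldest first
        self.kept = []        # manually kept revisions

    def commit(self, revision):
        self.auto.append(revision)
        if len(self.auto) > self.auto_limit:
            self.auto.popleft()  # oldest auto-saved revision cycles out

    def keep(self, revision):
        # A kept revision no longer counts against the rotating limit.
        if revision in self.auto and len(self.kept) < self.keep_limit:
            self.auto.remove(revision)
            self.kept.append(revision)

h = RevisionHistory()
for i in range(1, 7):          # six commits: rev1 cycles out
    h.commit(f"rev{i}")
print(list(h.auto))            # ['rev2', 'rev3', 'rev4', 'rev5', 'rev6']
```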
When committing a blueprint we recommend that you add a revision description to help identify the
changes made in that revision. These descriptions are displayed in the revision history section of the
blueprint as long as that revision is retained. If you don't add a description when you commit you can
always add one later (but you'll need to remember what the changes were). When jumping to a revision
(rolling back), this description helps you choose the correct one. Specific differences between revisions
are not displayed, so the description is the only change information available for that revision.
When jumping to a revision, any previously staged changes that haven't been committed are discarded.
If this is an issue, do not roll back until you've addressed the uncommitted changes.
Time Voyager is not just an UNDO function. When using Time Voyager you roll back to a previous
commit. This means that anything deleted on the last commit is re-applied when rolling back. There can
be many changes in-between revisions, both additions and removals, all of which would be included in
the rollback. Before committing a rollback, it's important that you review the pending changes in detail.
Time Voyager is better compared with a Revision Control System (for the whole network!) than an
UNDO function.
• After you've upgraded the Apstra server, you can't jump to a blueprint revision from an older version
because the blueprint revision history is discarded on upgrade. If you need to return to a database
backup that was taken prior to upgrading Apstra, refer to "Restore Database" on page 914. This method
could cause issues from a device config standpoint.
• It's not supported when the Pristine config has changed between revisions.
• It's not supported when the NOS versions are different between revisions. You could downgrade the
NOS version to the same version using the device manager, then roll back to a previous revision.
• Devices that were allocated in a previous revision that are no longer available result in the build error
system ID does not exist. (Conversely, adding a device and jumping to a previous revision without
that device will be successful. The added device will be removed.)
• Resources that were assigned in a previous revision that have been reassigned cause the build error
resource already in use. To resolve the build error, manually assign resources to each member in that
group or reset the resource group overrides. (Jumping to a previous revision after a previously
assigned global resource pool is modified may be successful, but it could cause an intent violation.)
• It's not supported if manual device config changes have been accepted.
• It's not supported in any other cases where the resulting device config state is different.
NOTE: Why not use Apstra server backup/restore to jump to a previous revision? Time Voyager
maintains synchronized configuration between the Apstra server and devices (as much as possible);
Apstra backup/restore does not. Effectively, Apstra backup/restore is an out-of-band change from a
device configuration standpoint. If a backup is restored, you would need to push a full config to make
sure the device configuration reflects what you restored from the database backup. This would most
likely be disruptive.
From the blueprint, click Time Voyager to go to the retained blueprint revisions. The first revision in the
list is the active one. Successive revisions are ordered by date from most recent to oldest.
NOTE: When you roll back to a previous revision, any previously staged changes that have not
been committed are discarded. If this is an issue, do not jump to a different revision until you've
committed the uncommitted changes.
1. From the blueprint, click Time Voyager, then click the Jump to this revision button for the revision to
jump to (first of four buttons in Actions section).
2. Any uncommitted changes in the staged area are discarded. If this is an issue, close the dialog and
address the uncommitted changes before proceeding. To proceed, click Rollback.
3. You can make additional changes to the blueprint before committing. For example, if you've replaced
a device, the device ID (serial number) will change, but the IP won't. You can create the device agent
and update the serial number in your blueprint before committing the revision change.
4. Click Uncommitted, then click the diff tabs to review the changes.
5. If you decide that you don't want to jump to this revision, click the Revert button to discard the
changes.
6. To proceed, click the Commit button (top-right) to see the dialog for committing changes and creating
a revision.
7. We recommend that you enter the optional revision description to identify the changes. Specific
differences between revisions are not displayed, so the description is the only change information
available for the revision.
8. Click Commit to commit your changes to the active blueprint and create a revision. In some cases,
you might also need to "reset resource group overrides" on page 33.
9. If you click Time Voyager you'll see the revision as the current one.
1. From the blueprint, click Time Voyager, then click the Keep this revision button for the revision to
keep (second of four buttons in Actions section).
2. Click Save to confirm and proceed. The button turns gray indicating that the revision has been saved
indefinitely. It won't be deleted until you manually delete it.
Five blueprint revisions are saved automatically by default. You can change the setting to save up to 100
revisions (as of Apstra version 4.2.0). When you commit a revision that exceeds the set number to save,
the oldest revision is automatically deleted.
1. From the blueprint, click Time Voyager, then click Settings (top-right) to go to the Update Settings
dialog.
2. Change the maximum number of automatically saved revisions, up to 100. Decreasing the number of
automatically saved revisions will delete older revisions, as needed.
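The retention behavior described above can be sketched as a bounded queue: committing past the configured maximum evicts the oldest revision. A minimal Python illustration (the revision names are invented):

```python
from collections import deque

# maxlen models the "maximum automatically saved revisions" setting;
# appending past the limit silently evicts the oldest entry.
revisions = deque(maxlen=5)

for i in range(1, 8):              # commit 7 revisions with a limit of 5
    revisions.append(f"revision-{i}")

print(list(revisions))             # the two oldest revisions were discarded
```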
1. From the blueprint, click Time Voyager, then click the Update description button for the revision to
update (third of four buttons in Actions section.)
1. From the blueprint, click Time Voyager, then click the Delete button for the revision to delete (fourth
of four buttons in Actions section). You can't delete a revision if there are five (5) or fewer of them in
the list.
2. Click Delete to delete the revision and return to the table view.
Devices
IN THIS SECTION
Terminology | 520
Telemetry | 678
Terminology
Configuration lifecycle stages are as follows:
Pristine Config When you install a device agent, configuration is added to the pre-existing config on
the device. Normally, the pristine config doesn't change throughout the device's
lifecycle.
Discovery 1 Config When you acknowledge a device, Apstra adds basic configuration, including enabling
LLDP on all interfaces.
Ready Config (previously known as Discovery 2 Config): When you assign a device to a blueprint
without deploying it (deploy mode: ready), Apstra adds basic configuration, including device hostnames,
interface descriptions, and port speed / breakout config.
Service Config When you deploy a device (deploy mode: deploy), Apstra adds configuration that's
required in the Apstra environment. Service Config consists of Discovery 1 config,
Ready (Discovery 2) config and this additional config.
Rendered Config Complete Apstra-rendered configuration for the device, per the Apstra Reference
Design.
Incremental Config The configuration that will be applied when you commit changes that you've made.
Golden Config When you commit config changes, Apstra collects a new running configuration called
Golden Config. Golden config serves as Intent: Apstra continuously compares the
running config against the Golden config. When a deployment fails, Apstra unsets the
Golden Config.
Install Apstra device system agent: Pristine Config (Factory + Pre-Apstra + Agent Install config); device
state: OOS-QUARANTINED; blueprint: Not Assigned
CAUTION: When you install an agent on a device, any configuration that was already
there becomes part of the Pristine Config, which means it's included in the device's
entire configuration lifecycle. Any corrections that you make will be service-impacting.
The lifecycle of a device begins with the factory default configuration stage.
Certain minimum base configuration is required for the entire configuration lifecycle. This includes
configuration for agent installation and device connectivity. You must configure management IP
connectivity between devices and the Apstra server out-of-band (OOB). Configuring it in-band is not
supported and could cause connectivity issues when changes are made to the blueprint.
You can bootstrap this User-required config with "Apstra ZTP" on page 694, or add it with scripts (or
other methods).
CAUTION: Only add configuration that's required for connectivity, for installing the
device agent, or that's known to be required throughout the device lifecycle (for
example Banners or NTP / SNMP / syslog server IP addresses). You can add required
configuration that's not rendered by Apstra with "configlets" on page 766.
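As an illustration only, a minimal user-required base config on a Junos device might look like the following sketch (the hostname, server IPs, and management interface name are all hypothetical; consult your platform documentation for the exact statements):

```
system {
    host-name leaf1;                       /* hypothetical hostname */
    ntp {
        server 192.0.2.10;                 /* NTP server */
    }
    syslog {
        host 192.0.2.20 {                  /* syslog server */
            any notice;
        }
    }
}
interfaces {
    em0 {                                  /* OOB management interface */
        unit 0 {
            family inet {
                address 192.0.2.101/24;    /* management IP */
            }
        }
    }
}
```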
When you install an onbox agent on a device (or an offbox agent on the server) the device connects and
registers with Apstra in the Quarantined state. Apstra applies partial configuration to the pre-Apstra
configuration. This configuration is called Pristine configuration. Pristine configuration is the basis for all
subsequent device configuration.
When you acknowledge a device, you're putting it in the Ready state. This acknowledgment signals your
intent to have Apstra manage the device. To the pristine config, Apstra adds minimal base configuration
that's essential to Apstra agent operation. This configuration is called Discovery 1 config. Discovery 1
applies a complete configuration (Full config push), overwriting all existing configuration to ensure config
integrity.
• All interfaces are rendered with interface speeds for the assigned device profile.
• All interfaces are no shutdown to allow you to view LLDP neighbor information.
• All interfaces are moved to L3 mode (default) to prevent the device from participating in the fabric.
NOTE: Devices that have been acknowledged can't simply be deleted. Because the device would
still have an active agent installed, it would re-appear within seconds. To remove a device from
Apstra management, see "Remove (Decommission) Device from Managed Devices" on page 552
for the complete workflow.
When you assign a device to a blueprint and set its Deploy Mode to Ready, you're putting it in the Ready
(Discovery 2) state. The device has been staged, but not yet committed (deployed) to the active
blueprint. Ready config applies a complete configuration (Full config push) to ensure config integrity.
Ready configuration brings up network interfaces, configures interface descriptions, and validates
telemetry (such as LLDP) to ensure the device is properly wired and configured. This configuration is
non-disruptive to other services in the fabric. Links are up, but they are configured in L3 mode to
prevent STP/L2 operations.
CAUTION: The first time you assign a device and deploy it (set deploy mode to Deploy
and commit the blueprint), you're triggering a full configuration push on the device. This
action overwrites the complete running configuration with the pristine configuration,
then adds the full rendered Apstra configuration. Apstra discards any configuration
that's not part of the Apstra-rendered configuration.
When you commit a device, it becomes Active, and Apstra deploys the service configuration, moving the
device into the Rendered configuration stage. Rendered config contents are derived from the pristine
config, selected reference design/topology, NOS, and device model. The first rendered config applies a
complete configuration (removing all existing configuration from the Apstra server per Jinja) to ensure
configuration integrity. This is the full end-state of Apstra. A full configuration has been pushed, all
interfaces are running, and routing within IP fabric is configured. Full configuration rendering, intent-
based telemetry, and standard service operations occur here.
After the full configuration is successfully deployed to the device, Apstra takes a snapshot of the device
configuration (for example, show running-config) and stores it as the Golden Configuration.
CAUTION: If you add configuration at this point, you'll raise configuration deviation
anomalies. The deviation is the difference between the current configuration and the
stored Golden configuration. Before you can proceed with deployment tasks, you must
correct any anomalies.
To see the rendered config file after committing the blueprint, select the device in the Active blueprint
and click Config (right-side).
You can modify a running configuration multiple ways. To modify a config that's not part of the reference
design, use "configlets" on page 766.
When you stage changes to a running blueprint, you're creating an Incremental configuration.
When you commit a change to a blueprint that affects the device's configuration, a partial config
updates the rendered config.
Click a node in the topology, then from the Device tab in the panel on the right, you can click links for
rendered, incremental, pristine, or device context in the Config section.
The device model is a nested dictionary of variables that you can leverage when creating configlets in
Datacenter blueprints or config templates in Freeform blueprints. Apstra version 4.2.0 adds information
to the device context that's useful when creating configlets in Data Center blueprints. In the interface
section you'll find tags for interface tags and intf_tags for link tags. In the main section you'll find
system_tags.
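As a rough sketch, the device context can be pictured as a nested dictionary like the following (hypothetical and heavily trimmed; real device models are internal and subject to change, and all values here are invented):

```python
# Hypothetical, heavily trimmed sketch of a device context; all keys and
# values are invented for illustration only.
device_context = {
    "hostname": "leaf1",
    "system_tags": ["border", "evpn"],          # tags on the system itself
    "interface": {
        "xe-0/0/1": {
            "tags": ["server-facing"],          # interface tags
            "intf_tags": ["link-to-server1"],   # link tags
        },
    },
}

# A configlet template could reference these nested keys, for example:
print(device_context["interface"]["xe-0/0/1"]["tags"])
```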
The query tab provides dynamic search capabilities to quickly search through keys or values and identify
the variables of interest. Syntax is case-sensitive. For example, a search for the keyword bgp provides
information on the BGP configuration of the switch as well as the BGP sessions (protocol sessions),
while a search for the keyword BGP provides the list of BGP route maps such as "BGP-AOS-Policy". The
use of these variables as built-in property sets inside a configlet must also respect the case-sensitivity
of the device model.
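The case-sensitive matching described above can be illustrated with a small recursive search over a nested dictionary (the sample data is invented, not a real device model):

```python
def search(node, needle, path=""):
    """Collect paths whose key or string value contains needle.
    Matching is case-sensitive, like the device-model query tab."""
    hits = []
    if isinstance(node, dict):
        for key, value in node.items():
            where = f"{path}/{key}"
            if needle in key:
                hits.append(where)
            hits += search(value, needle, where)
    elif isinstance(node, list):
        for i, item in enumerate(node):
            hits += search(item, needle, f"{path}[{i}]")
    elif isinstance(node, str) and needle in node:
        hits.append(path)
    return hits

# Hypothetical sample data, not a real device model.
model = {"bgp_sessions": {"spine1": "up"}, "route_maps": ["BGP-AOS-Policy"]}

print(search(model, "bgp"))   # matches the lowercase key only
print(search(model, "BGP"))   # matches the uppercase route-map value only
```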
CAUTION: Device models are an internal data model used in the Apstra environment.
They are subject to change without notice or documentation of schema changes.
Configuration Deviations
After each successful config deploy the running config is collected and stored internally as the Golden
configuration. Intent is the cornerstone of the Apstra product. Any difference between the actual
running config and this golden config results in a config deviation anomaly on the blueprint's dashboard.
The golden config is updated every time config is successfully applied to a device.
• If configuration deployment fails, Golden Config is not set. This means both a config deviation and
deployment failure anomaly are raised.
• Running configuration telemetry is continuously collected and matched against the Golden Config.
Any difference results in a deviation anomaly.
• Configuration anomalies can be 'suppressed' using the "Accept Changes feature". This does NOT
mean the change is added to golden config or Intent.
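Conceptually, a deviation anomaly is just the difference between the stored Golden Config and the collected running config. A toy illustration with Python's difflib (the config lines are invented):

```python
import difflib

# Invented config lines; the real comparison is against full device configs.
golden = ["hostname leaf1", "ntp server 192.0.2.10"]
running = ["hostname leaf1", "ntp server 192.0.2.99"]   # out-of-band edit

# An empty diff means the device matches Intent; any output is a deviation.
deviation = list(difflib.unified_diff(golden, running,
                                      fromfile="golden", tofile="running",
                                      lineterm=""))
print("\n".join(deviation) or "no deviation")
```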
NOTE: Perform a full configuration push with the utmost caution, as it is very likely to impact all
services running on the box. Exact impact depends on changes being pushed. Also note all Out of
Band changes are overwritten upon a full push.
Deploy Modes
IN THIS SECTION
Deploy | 530
Ready | 530
Drain | 530
Undeploy | 532
Not Set
Deploy
Ready
When you assign a device to a blueprint, its deploy mode changes to Ready; Apstra renders Ready
(Discovery 2) configuration (hostnames, interface descriptions, port speed / breakout configuration). The
device isn't active in the fabric. Changing from Deploy to Ready removes Apstra-rendered configuration.
Drain
"Draining a device" on page 539 for physical maintenance enables it to be taken out of service without
impacting existing TCP flows. Depending on the device being drained, Apstra uses one of two methods:
For L2 Servers
• MLAG peer-link port channels and bond interfaces on any NOS are not changed.
• For Arista EOS and Cisco NX-OS, all interfaces towards L2 servers in the blueprint are shut down.
The device uses 'deny' statements in inbound/outbound route maps to block any advertisements to
0.0.0.0/0 le 32. This allows existing L3 TCP flows to continue without interruption. After a second or
two, the src/dst devices re-establish the TCP sessions or negotiate a new TCP port. The new TCP port
forces the devices to be hashed onto a new ECMP path from the list of available links. Since no ECMP
routes to the destination are available in the presence of a route map, traffic does not flow through the
device that is in Drain mode. The device is effectively drained of traffic and can be removed from the
fabric (by changing its deploy mode to Undeploy).
While TCP sessions drain (which could take some time, especially for EVPN blueprints) BGP anomalies
are expected. When configuration deployment is complete, the temporary anomalies are resolved.
When you change the deploy mode to Drain on a device, neighboring device configuration may also be
affected, not just the device you're draining. For example, when you drain a spine device, the
configuration on all connected leaf devices changes. Neighboring leaf devices use 'reject (deny)'
statements in inbound/outbound route filters (route maps) to block any advertisements to 0.0.0.0/0
le 32, for both EVPN (overlay) and FABRIC (underlay).
Similarly, when you drain a leaf device, the configuration on connected spine devices changes.
Neighboring spine devices use 'reject (deny)' statements in inbound/outbound route filters (route maps)
to block any advertisements to 0.0.0.0/0 le 32, for both EVPN (overlay) and FABRIC (underlay).
In the case of an MLAG-based topology, in addition to the configuration on connected spine devices
changing, the configuration on the paired leaf device also changes.
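The drain mechanism described above can be pictured with an illustrative EOS-style route map (a hand-written sketch with invented names, ASN, and neighbor address, not the exact configuration Apstra renders):

```
ip prefix-list ALL-ROUTES seq 10 permit 0.0.0.0/0 le 32
!
route-map DRAIN deny 10
   match ip address prefix-list ALL-ROUTES
!
router bgp 65001
   neighbor 10.0.0.1 route-map DRAIN in
   neighbor 10.0.0.1 route-map DRAIN out
```

With the deny route map applied in both directions, no routes are exchanged with the drained neighbor, so ECMP hashing steers new flows onto the remaining paths.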
Undeploy
Undeploying a device removes the complete service configuration. If a device is carrying traffic, it's best
to put it in Drain mode first (and commit the change) before undeploying the device.
Managed Devices
IN THIS SECTION
Device | 533
Agent | 534
Telemetry | 536
Apstra software uses device system agents to manage devices. These agents manage configuration,
device-to-device communication and telemetry collection. You can use "Apstra Zero Touch Provisioning
(ZTP)" on page 694 to install agents and bring devices under Apstra management or you can use the
device installer.
From the left navigation menu in the Apstra GUI, navigate to Devices > Managed Devices to go to
managed devices.
Devices with installed agents appear in the table. The Managed Devices page is the hub for many
device-related tasks, which are described in later sections.
Click a management IP to go to details for its device, agent, pristine config and telemetry as shown
below.
Device
The device detail view shows the user config, the device status and other facts about the device. From
the device detail page you can edit and delete the device. You can also edit or delete a device from the
table view or any of the other detail views (Agent, Pristine Config, Telemetry).
Agent
Apstra device system agents handle configuration management, device-to-server communication, and
telemetry collection. If you're not using "Apstra ZTP" on page 694 to bootstrap your devices (or if you
have a one-off installation) you can use this device installer to automatically install and verify devices.
Depending on the device NOS, you can install device agents onbox (agent is installed on the device) or
offbox (agent is installed on the Apstra server and communicates with devices via API). For support
information, see the Device Management section of the "4.2.0 feature matrix" on page 989.
The device agent view shows the agent config, agent status, last job status, jobs history and telemetry
status. From the agent detail page you can perform various tasks similar to tasks in the table view. For
example, you can restore a device's pristine configuration by clicking the Revert to Pristine Config
button (as of Apstra version 4.0.1) as long as the device is not assigned to a blueprint.
Pristine Config
The pristine config view shows the pre-Apstra configuration on the device. You can edit the pristine
config manually or update it directly from the device. You can also edit or delete the device from this
view, the table view, or any of the other detail views (Device, Agent, Telemetry).
Telemetry
The telemetry view shows telemetry for the device. For more information, see "Telemetry Services" on
page 679.
NOTE: Each device is expected to have a unique management IP address. If you're replacing a
device (decommissioning for an RMA for example) and you want to use the same management IP
address on the replacement device, you must "remove (decommission) the device from Managed
Devices" on page 552 before adding the new device.
1. If you're using Juniper offbox agents, "increase the application memory usage" on page 846.
2. Create and install your "onbox" on page 614 device agent(s) or "offbox" on page 617 device
agent(s) for the devices to be managed in the Apstra environment. If you have many of the same
devices using the same configuration you might consider creating "agent profiles" on page 673
(Device > Agent Profiles), which can streamline the task of creating many agents.
3. If you're deploying modular devices, you may need to "change the default device profile" on page
541 that's assigned to your device.
4. Navigate to Devices > Managed Devices to see that the device state is Out of Service Quarantine.
Configuration at this point is called Pristine Config.
5. In the left column of the table, select the check box(es) for the device(s) to manage in the Apstra
environment.
6. Above where you just clicked, click the Acknowledge selected systems button (check mark) in the
Device action bar.
7. Click Confirm to acknowledge the device(s) and return to the table view. The device state changes to
Out of Service Ready. Configuration at this point is called Discovery 1 Config and you can now
manage the device(s) from the Apstra environment.
Next Steps:
If you'll be using a Datacenter blueprint, before creating the blueprint make sure you have all your
design elements ready, starting with logical devices.
If you'll be using a Freeform blueprint, you can "create the blueprint" on page 392 immediately.
You'll assign your devices to a blueprint during the build phase. For details, see "Assign Device
(Datacenter)" on page 37 or "Update System ID Assignment (Freeform)" on page 423, as applicable.
1. From the left navigation menu, navigate to Devices > Managed Devices and click the management IP
for the device.
2. On the device detail page that appears, click the Execute CLI Command button.
3. In the dialog that opens type show, then press the space bar. Available commands appear that you can
scroll through to select, or you can start typing the command and it will auto-fill. In our example
we're looking for BGP neighbors. We typed show, space, then b, which filtered the commands to only
include those with the letter b. We selected bgp, then pressed the space bar to show available
arguments for bgp. We typed n to show commands including the letter n. We'll select neighbor to
complete the command.
4. From the drop-down list, select how you want to view the results: text, XML or JSON.
5. Click Execute to return show command results. We used Text Mode for our example.
1. From the blueprint, navigate to Staged > Physical > Build > Devices and change the "deploy mode" on
page 58 on the device to Drain.
2. Click Uncommitted to review staged changes. The Logical Diff tab shows the changes that will be
made to the device, and possibly to its neighbors.
3. Commit staged changes to activate them. While draining is in progress (which could take some time,
especially for EVPN blueprints) BGP anomalies are expected. You can monitor draining progress from
various locations in the Apstra GUI. When drain configuration is complete, the temporary anomalies
are resolved.
• You can monitor drain status from the Deployment Status section of the blueprint dashboard
(Drain Config).
• You can monitor drain status from Active > Physical in the Status panel (Deployment Status:
Drain).
• If you instantiate the predefined Drain Validation dashboard, you can monitor drain status from
Analytics > Dashboards. (If you set the dashboard as default, you can see it on the blueprint
dashboard as well as on the analytics dashboard). In the image below, traffic is in the process of
draining.
After performing device maintenance, change the deploy mode back to Deploy and commit the change
to bring the device back into active service.
RELATED DOCUMENTATION
Analytics Introduction | 9
Commit / Revert Changes to Blueprint | 483
Edit Device
NOTE: You can also edit a device from any of the detail views (Device, Agent, Pristine Config,
Telemetry.)
1. From the left navigation menu, navigate to Devices > Managed Devices and select the check box(es)
for the device(s) to edit.
2. Click the Update user config button in the Device action bar (above the table), then change the
device profile, admin state, and/or location, as applicable.
3. Click Confirm to update the device and return to the list view.
An example of when you might need to edit a device is when one modular device has multiple
device profiles associated with it. Device profiles represent different line card configurations.
The first device profile that matches the device chassis model (based on the selector model field) is
associated with the device (DCS-7504N for example). If you're using a modular device in your
network, check that the correct device profile is associated with it. If it's not, edit the device to
update its device profile to the correct one before acknowledging the device and assigning it to a
blueprint.
2. In the Device action panel that appears above the table, click the button for the state to change the
selection(s) to.
• Set admin state to NORMAL for selected systems - If you're "upgrading a device network
operating system" on page 543, make sure the admin state is set to NORMAL before beginning
the process.
• Set admin state to DECOMM for selected systems - If you are decommissioning a device, setting
the admin state to DECOMM is part of a larger process. See "Remove Device from Managed
Devices" on page 552 for the workflow and more details.
• Set admin state to MAINT for selected items - this state is no longer used.
3. Click Confirm to set the admin state and return to the table view.
IN THIS SECTION
NOS Upgrade Overview | 543
Update User-defined Device Profiles | 544
Register / Upload OS Image | 546
Upgrade the network operating systems (NOS) of your Apstra-managed network devices from within
the Apstra environment.
We highly recommend that you become familiar with this procedure before upgrading a device NOS.
You can upgrade a device NOS within the Apstra environment with a few steps. If you've defined your
own device profiles, you may need to update them. Then you'll register the new OS image that you
obtained from the vendor, and click a button to start the upgrade. Apstra takes care of upgrade tasks
and other requirements and ensures that pristine config is updated.
NOTE: For information about supported upgrade paths, see "NOS Upgrade Paths" on page 1020
in the References section.
Apstra software ships with built-in device profiles that support specific OS versions. When you upgrade
the Apstra server, device profiles with the OS versions that are supported in the new Apstra version are
also updated. You can then upgrade the NOS to one of the newly supported versions.
For example, Apstra version 4.0.0 supports Arista EOS versions as shown in the OS version selector
(4.(18|20|21|22|23|24)) in the device profile. That is, it supports versions 4.18, 4.20, 4.21, 4.22, 4.23, and
4.24. Apstra version 4.0.2, by contrast, supports EOS versions 4.18, 4.20, 4.21, 4.22, 4.23, 4.24, and
4.25 (4.(18|20|21|22|23|24|25)); 4.25 is a newly supported version. If you upgrade the Apstra server to
version 4.0.2, you can upgrade Arista devices to EOS version 4.25.
However, device profiles that you've created (cloned) yourself, are not managed in the Apstra
environment, so when you upgrade the Apstra server those device profiles aren't automatically updated
with newly supported versions. You'll need to follow a few extra steps to add them as described in the
next section.
• Make sure that you understand the "device configuration lifecycle" on page 519 and that you're
comfortable with managing deploy modes.
• Make sure that Apstra software is managing the device you're upgrading. Navigate to Devices >
Managed Devices and confirm that your device is in the table and that it is acknowledged (with a
green check mark).
• Before upgrading NOS, delete any device AAA/TACACS+ configlets from the blueprint. After the
upgrade is complete, you can reapply them.
• Make sure that the Admin state of the device is set to normal. Navigate to Devices > Managed
Devices, click on the Management IP of the device to confirm the admin state. (Do NOT set the
Admin state to MAINT/DECOMM or the device could enter an unrecoverable state.)
• Make sure that the Apstra version specified is the same on both the Apstra server and the device. If
they are different, you can't upgrade the device. If you attempt to upgrade with different versions,
you will not receive a warning; the task status remains in the IN PROGRESS state indefinitely.
Make sure that your devices are in the appropriate states for upgrading as described in the overview
above.
If you've created (cloned) your own device profiles, you'll need to manually specify OS versions in the
device profile and the blueprint that uses that device profile. (If your devices use built-in device profiles,
then proceed to the next section to register the new OS image.)
1. From the left navigation menu in the Apstra GUI, navigate to Devices > Device Profiles, select your
device and update the OS version in the Selector section.
2. From the left navigation menu, navigate to Platform > Developers > Graph Explorer and find the ID
for the device profile. You can find it with the query variables { device_profile_nodes { id label } }
In this example, the "id" for the label "Clone DCS-7160-48YC6_abc" is "35a376ad-6ba1-42ec-
bfe9-7810c56003d3".
Example:
4. From the Apstra GUI, navigate to your blueprint, click Uncommitted and commit the changes.
5. Proceed to the next section to upgrade the OS in the same manner as for devices using predefined
device profiles.
CAUTION: Make sure to select a compatible device operating system image for the
device that you're upgrading. If you use an incompatible image and the upgrade fails,
the deployment lock is not released automatically, even if you recover the device. To
release the deployment lock and activate the device again, remove the device
assignment from the blueprint, decommission and normalize the device (from Devices
> Managed Devices), then reassign the device to the blueprint. For assistance, contact
"Juniper Support" on page 893.
2. From the left navigation menu, navigate to Devices > System Agents > OS Images and click Register
OS Image (top-right). You can see how much space is left for uploading new NOS images; if the
partition has under 5 GB of free space, a warning appears when you register.
3. Select the platform from the drop-down list (EOS, NXOS, SONIC, JUNOS) and enter a description.
4. Either upload the image directly to the Apstra server or provide a URL download link pointing to an
image file on an accessible HTTP server (described in sections below).
1. Select Upload Image, then either click Choose File and navigate to the image on your computer, or
drag and drop the image from your computer into the dialog window and click Open.
3. Click Upload to upload and register the image with the Apstra software. The image and image size
appear in the table view.
4. If the (optional) checksum is not verified, the upgrade process stops before the device reboots.
If another HTTP server is accessible to the devices being upgraded via their network management port,
you can register the OS Image instead of uploading it. Only HTTP URLs are supported. (HTTPS, FTP,
SFTP, SCP and others are not supported.)
2. Enter the URL that points to the image on the other server.
4. Click Register to register the image with the Apstra software. The image and image size appear in the
table view.
5. If the (optional) checksum is not verified, the upgrade process stops before the device reboots.
If the device vendor provides a checksum file, we recommend that you download the file and copy it to
the Checksum field. If a checksum file is not available, you can generate a checksum with the Linux
md5sum or shasum commands, as applicable, or with equivalent programs.
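For example, from a Linux shell (the stand-in filename is hypothetical; in practice you'd run the command against the NOS image downloaded from the vendor):

```shell
# Create a stand-in "image" file so this example is self-contained.
printf 'example image contents' > /tmp/example-image.bin

# md5sum prints "<checksum>  <filename>"; paste the checksum into the
# Checksum field when registering the image. Use shasum -a 512 for SHA-512.
md5sum /tmp/example-image.bin | awk '{print $1}'
```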
Upgrade OS Image
Make sure that your devices are in the appropriate states for upgrading as described in the overview
above, and that, if your device profiles are user-defined, you've updated them accordingly.
1. From the left navigation menu, navigate to Devices > Managed Devices, and select the check box(es)
for the device(s) to upgrade. (If you have many devices, use the query function to filter selections.) All
selected devices must be of the same type, and they must be upgraded to the same image and
version. To search for specific devices (such as for all EOS devices) enter a query.
2. Click the Upgrade OS Image button (above table in Agent section). The dialog lists the available OS
images that match the selected devices.
3. Select the appropriate image and click Upgrade OS Image. You can monitor the upgrade status from
the Active Jobs section at the bottom of the page.
4. After the image is uploaded, if a checksum is provided with the OS image, the image checksum is
verified. If the MD5/SHA512 checksum is incorrect, or if any other failures occur (such as for
insufficient disk space, incorrect remote URL, or when the device NOS version is not changed post
upgrade), the job state changes to FAIL and the device does not reboot.
NOTE: If an issue arises with the OS image (such as interrupted download or invalid URL)
during a NOS upgrade, you are informed before any device configuration is changed. You can
then resolve the issue and restart the upgrade process.
5. If the job fails, click the agent to view errors. You can also click the Show Log button to view the
detailed Ansible job. If an upgrade fails, you must manually resolve the issue causing the failure. For
example, with a checksum error, you must either correct the invalid checksum or register a new OS
image with a correct checksum, then repeat the upgrade process.
6. If the checksum is correct and no other failures occur, the job state changes to SUCCESS and the
device reboots.
7. When the device has rebooted with the new image and has reestablished its agent connection with
the controller, the upgrade is complete. The Managed Devices page displays the new OS version.
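The checksum verification in step 4 can be sketched in Python. The hashlib usage mirrors the MD5/SHA512 options mentioned above; the file path and expected digest are placeholders, not values from the Apstra software.

```python
import hashlib

def verify_image_checksum(image_path, expected_digest, algorithm="md5"):
    """Compare an OS image's computed digest against the registered checksum.

    The algorithms sketched here (md5, sha512) mirror the checksum types
    mentioned in step 4 above.
    """
    h = hashlib.new(algorithm)
    with open(image_path, "rb") as f:
        # Read in 1 MiB chunks so large NOS images don't load fully into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_digest.lower()
```

If this check returns False, the equivalent job in Apstra transitions to FAIL and the device is not rebooted.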
Delete Device
If you want to remove a device from Apstra management, see "Remove (Decommission) Device from
Managed Devices" on page 552 for the complete workflow. There are additional steps before deleting
the device.
If the device to be deleted has not been "acknowledged" on page 536, you can delete the device as
shown below.
1. From the left navigation menu, navigate to Devices > Managed Devices and check the box(es) for the
device(s) to delete.
2. In the Device Actions panel (above the table) click the Delete system(s) button, then in the dialog that
opens click Confirm to remove the device(s) from Apstra management and return to the table view. (If
the device is not in STOCKED or DECOMM stage, you can't delete the device.) Device(s) are
disconnected from the Apstra server and removed from the Apstra database.
NOTE: You can also delete a single device from the Device detail view by clicking on the
management IP address in the table.
Device AAA
IN THIS SECTION
Overview | 551
Overview
RADIUS and TACACS+ device AAA (authentication, authorization and accounting) frameworks are
supported on Juniper, Cisco and Arista devices. Device AAA is optional and correct implementation is
the responsibility of the end user. Minimum requirements for correct Apstra AAA implementations are
described below.
CAUTION: When using an AAA framework, we recommend adding a local Apstra user to
devices. If AAA authentication or authorization fails when Apstra performs a full
configuration push, manual recovery (config push) is required.
You can apply AAA configuration in one of two ways as described below:
Configlets (Recommended)
You add configuration to a configlet, then you import it into a blueprint. Local credentials must be
available from the Apstra environment so the device can be added and the configlet can be applied.
CAUTION: Before you upgrade the Apstra server, device agent, or NOS, you must
delete device AAA/TACACS configlets from blueprints. After the upgrade is complete,
you can re-apply them.
User-required
Instead of using configlets, you can add configuration before acknowledging a device, so it becomes part
of the Pristine Config. For more information, see "Device Configuration Lifecycle" on page 519.
Juniper Junos
CAUTION: Credentials for the Junos offbox system agent user must always be valid and
available. When using the AAA framework, we recommend that you add a local user to
devices and use it for Apstra offbox system agents. Always list "password" first in the
Junos authentication-order configuration, as follows:
Cisco NX-OS
CAUTION: A remote user could unexpectedly be removed from NX-OS devices, causing
authentication and authorization failures. The user (role 'network-admin') must exist on
the device in order for Apstra to manage the device. If not, Apstra functions such as agent
installation, telemetry collection, and device configuration may fail. The only known
workaround is to use local authentication.
The example NX-OS configuration below has been tested to work correctly with Apstra software. This
uses both authentication and authorization:
Arista EOS
CAUTION: When TACACS+ AAA is configured on EOS devices, device agent upgrades
could fail while files are copied from the Apstra server to the device. This commonly
happens if TACACS+ uses a custom password prompt. To prevent this type of failure,
temporarily disable all TACACS+ AAA so that device authentication uses an admin-level
username and password for any device agent operations, including upgrades.
RELATED DOCUMENTATION
Remove (Decommission) Device from Managed Devices
1. If the device is assigned to a blueprint, unassign it from your "datacenter blueprint" on page 54 or
"freeform blueprint" on page 423, as applicable.
2. From the left navigation menu, navigate to Devices > Managed Devices and check the box for the
device to remove from Apstra management.
3. In the Device Actions panel that appears above the table, click the Set admin state to DECOMM for
selected systems button, then click Confirm to set the admin state and return to the table. (If the
device is assigned to a blueprint, you can't decommission the device.)
4. Check the box for the device again, then in the Agent Actions panel that appears above the table,
click the Uninstall button, click Uninstall selected elements, then click Close.
NOTE: If the device is unreachable, the job will fail. You can force delete the agent (in the next
step).
5. Check the box for the device again, then in the Agent Actions panel that appears above the table,
click the Delete button, click Delete selected elements, then click Close.
If you weren't able to uninstall the agent in the previous step because the device is unreachable, a
dialog opens that gives you the option to force delete the agent. With the Force Delete box checked,
click Delete to force delete the agent and return to the table view.
6. Check the box for the device again, then in the Device Actions panel click the Delete system(s)
button, then in the dialog that opens click Confirm to remove the device(s) from Apstra management
and return to the table view. (If the device is not in STOCKED or DECOMM stage, you can't delete
the device.) Device(s) are disconnected from the Apstra server and removed from the Apstra
database.
NOTE: You can also delete a single device from the Device detail view by clicking on the
management IP address in the table.
If you're replacing the device you just removed, follow the steps to "add" on page 536 the replacement
device to Managed Devices.
Device Profiles
IN THIS SECTION
Summary | 556
Selector | 556
Capabilities | 556
Ports | 559
Device profiles define capabilities of supported hardware devices. Some feature capabilities have
different behaviors across NOS versions and thus, capabilities are expressed per NOS version. By
default, the version matches all supported versions. As additional hardware models are qualified, they
are added to the "list of qualified devices" on page 1010.
Device profiles are associated with logical devices (abstractions of physical devices) to create "interface
maps" on page 729.
Summary
Number of slots: Number of slots or modules on the device. Modular switches have multiple slots.
Start from ID
Selector
The Selector section contains device-specific information to match the hardware device to the device
profile as described below:
Model Determines whether a device profile can be applied to specific hardware. Selected from drop-
down list or entered as a regular expression (regex).
OS family Defines how configuration is generated, how telemetry commands are rendered, and how
configuration is deployed on a device. Selected from drop-down list.
Version Determines whether a device profile can be applied to specific hardware. Selected from drop-
down list or entered as regex.
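Conceptually, selector matching compares each field against the device's reported values, with the model and version fields treated as anchored regular expressions. A minimal sketch follows; the field names and example values are illustrative, not the Apstra API:

```python
import re

def profile_matches(device, selector):
    """Return True when every selector field matches the device's reported
    values; model and version may be literal strings or regular expressions."""
    return (
        re.fullmatch(selector["model"], device["model"]) is not None
        and device["os_family"] == selector["os_family"]
        and re.fullmatch(selector["version"], device["version"]) is not None
    )

# A regex model selector can cover several related hardware models at once.
device = {"model": "QFX10002-36Q", "os_family": "Junos", "version": "21.2R1"}
selector = {"model": r"QFX10002-(36|72)Q", "os_family": "Junos", "version": r"21\..*"}
```

Using a regular expression for the model (as above) lets one device profile match both the 36-port and 72-port variants.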
Capabilities
You can leverage the hardware and software capabilities defined in this section in other parts of the
Apstra environment to adapt the generated configuration, or to prevent an incompatible situation. With
the exception of ECMP, hardware capabilities modify configuration rendering or deployment.
Capabilities include the following details:
CPU (cpu: string): The CPU architecture of the device. For example: "x86".
Userland (bits) (userland: integer): Type of userland (application binary/kernel) the device supports. For example: "32" or "64".
RAM (GB) (ram: integer): Amount of memory on the device. For example: "16".
ECMP limit (ecmp_limit: integer): Maximum number of equal-cost multipath (ECMP) routes. For example: "64". This field changes BGP configuration on the device (ecmp max-paths).
Form factor (form_factor: string): Number of rack units (RUs) on the device. For example: "1RU", "2RU", "6RU", "7RU", "11RU", "13RU".
ASIC (asic: string): The switch chipset ASIC. For example: "T2", "T2(3)", "T2(6)", "Arad(3)", "Alta", "TH", "Spectrum", "XPliant XP80", "ASE2", "Jericho". Used to assist telemetry, configuration rendering, and VXLAN routing semantics.
CoPP - When Control Plane Policing (CoPP) is enabled, the strict CoPP profile config is rendered for the
specified NX-OS version, resulting in the following configuration:
terminal dont-ask
copp profile strict
The terminal dont-ask config is needed only when enabling the copp profile strict config, since we do
not want NX-OS to wait for confirmation.
CoPP is enabled by default, except on the Cisco 3172PQ NXOS. You can specify multiple versions.
Breakout - Enable breakout to indicate that ports on specified modules can be broken out into lower-speed
split ports.
Apstra software first removes breakout configuration from all breakout-capable ports, and then applies
the proper breakout commands according to intent. This is based on the assumption that the global
negation command no interface breakout module <module_number> can always be applied successfully to a
module with breakout-capable ports. (This is idempotent when applied on ports that are not broken
out.) However, we recognize that this assumption may be broken in future versions of NX-OS, or with
certain combinations of cables / transceivers inserted into breakout-capable ports.
The example below is for the negation command for a module (1) that is set to True:
Since the negation command is always applicable per module, each module is specified individually. The
advantages of this include:
• In modular systems, not all line cards have breakout capable ports.
• In non-modular systems, the breakout capable ports may not always be in module 1.
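The un-breakout-then-reapply sequence described above can be sketched as per-module command generation. The breakout command syntax here is illustrative, not verbatim NX-OS output:

```python
def breakout_commands(breakout_modules, intended_breakouts):
    """Generate the global negation for every breakout-capable module first
    (idempotent on ports that are not broken out), then the breakout
    commands according to intent.

    intended_breakouts: list of (module, port_range, mode) tuples.
    """
    cmds = ["no interface breakout module %d" % m for m in breakout_modules]
    for module, port_range, mode in intended_breakouts:
        # Illustrative NX-OS-style breakout command; exact syntax varies by release.
        cmds.append("interface breakout module %d port %s map %s"
                    % (module, port_range, mode))
    return cmds
```

Because the negation runs per module, a modular system with breakout-capable ports on only some line cards, or a fixed system whose breakout ports are not in module 1, is handled the same way.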
Breakout is enabled by default except for the following devices with modules incapable of breaking out
ports: 3172PQ NXOS, 9372TX NXOS, C9372PX NXOS, C9396PX NXOS, NXOSv.
Historical Context - With a particular version of NX-OS the POAP stage would apply breakout config on
those ports which are breakout capable. POAP behavior, introduced in 7.0(3)I4(1) POAP, determines
which breakout map (for example, 10gx4, 50gx2, 25gx4, or 10gx2) brings up the link connected to the
DHCP server. If breakout is not supported on any of the ports, POAP skips the dynamic breakout
process. After the breakout loop completes, POAP proceeds with the DHCP discovery phase as normal.
Apstra reverts any such breakout config that might have been rendered during the POAP stage to
ensure that the ports are put back to default speed by applying the negation command.
Sequence Numbers Support - Applicable to autonomous system (AS) path entries. Enable this when the
device supports sequence numbers. Apstra inserts sequence numbers into the entry list so that entries
can be resequenced, and generates config as follows:
The numbers 5 and 15 are sequence numbers applicable to devices that support AS sequencing.
Sequence numbers support is enabled for all Cisco device profiles by default (except the Cisco 3172PQ
NXOS, which does not support sequence numbers). For platforms that do not support sequence
numbers, disabling this feature ensures that the AS sequence numbers are removed from the device
model dictionary, avoiding addition and negation in the event that something is resequenced. Nothing
needs to be rendered on these platforms, because the entries can't be sequenced.
Other supported features that are not available from the Apstra GUI include "vxlan", "bfd", "vrf_limit",
"vtep_limit", "floodlist_limit", "max_l2_mtu", and "max_l3_mtu". They can be included in the backend using
the following format:
Ports
The ports section defines the types of available ports, their capabilities, and how they are organized.
Every port contains a collection of supported speed transformations. Each transformation represents a
breakout capability (such as one 40GbE port breaking out into four 10GbE ports), and hence contains a
collection of interfaces.
Example: If port 1 is a QSFP28 port capable of 100G->4x10G and 100G->1x40G breakouts, then port 1
has a collection of three transformations, one each for the 4x10, 1x40, and 1x100 breakouts. The
transformation representing the 4x10 breakout has a collection of four interfaces; the 1x40 and 1x100
transformations each have a collection of one interface.
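The example above can be expressed as a small data structure. The interface names and IDs here are illustrative, not taken from a real device profile:

```python
# Port 1: a QSFP28 port supporting 1x100G (native), 1x40G, and 4x10G breakouts.
port_1 = {
    "port_id": 1,
    "connector_type": "qsfp28",
    "transformations": [
        {"transformation_id": 1, "speed": "100G",
         "interfaces": ["Ethernet0"]},               # 1x100 -> one interface
        {"transformation_id": 2, "speed": "40G",
         "interfaces": ["Ethernet0"]},               # 1x40 -> one interface
        {"transformation_id": 3, "speed": "10G",
         "interfaces": ["Ethernet0", "Ethernet1",
                        "Ethernet2", "Ethernet3"]},  # 4x10 -> four interfaces
    ],
}
```

Each transformation entry stands for one supported speed, and the number of interfaces inside it follows directly from the breakout factor.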
Port Index (port_id: integer): Indicates a unique port in the collection of ports in the device profile.
Row Index (row_id: integer): Represents the top-to-bottom dimensions of the port panel. Shows where the port is placed in the device's panel. For instance, in a panel with two rows and many columns, the row index is either "1" or "2".
Column Index (column_id: integer): Represents the left-to-right dimensions of the port panel. Shows where the port is placed in the device's panel. For instance, in a panel with thirty-two ports and two rows, the column index is in the range "1" through "16".
Panel Index (panel_id: integer): Indicates the panel that the port belongs to, given the physical layout of ports in the device specification.
Slot ID (slot_id: integer): Represents the module that the port belongs to. A modular switch has more than one slot. In fixed-function devices, the slot ID is usually "0".
Failure Domain (failure_domain_id: integer): Indicates whether multiple panels rely on the same hardware components. Used when creating the cabling plan to ensure that two uplinks are not attached to the same failure domain.
Connector Type (connector_type: string): Port transceiver type. Speed capabilities of the port are directly related to the connector type, given that certain connector types can run at certain speeds. For instance, "sfp", "sfp28", "qsfp", "qsfp28".
Transformations (transformations: list): Possible breakouts for the port. Every entry is a specific supported speed. Each transformation has a collection of interfaces.
Number of interfaces (interfaces: list): Dependent on the breakout capability of the port. For a transformation representing a certain breakout speed, the interfaces contain information about the interface names and interface settings with which the device is to be configured. The "setting" information is crucial for configuring the interfaces correctly on the device.
Based on the OS information entered in the device profile's selector field, the Apstra GUI displays the
applicable settings fields. The fields vary with the vendor OS (as shown in the examples below). When a
device profile is created or edited, the "setting" is validated against the vendor-specific schema as listed
below:
eos_port_setting = Dict({
'interface': Dict({
'speed': Enum([
'', '1000full', '10000full', '25gfull', '40gfull',
'50gfull', '100gfull',
])}),
'global': Dict({
'port_group': Integer(),
'select': String()
})
})
nxos_port_setting = Dict({
'interface': Dict({
'speed': Enum([
'', '1000', '10000', '25000', '40000', '50000',
'100000',
])}),
'global': Dict({
"port_index": Integer(),
"speed": String(),
"module_index": Integer()
})
})
junos_port_setting = Dict({
'interface': Dict({
'speed': Enum([
'', 'disabled', '1g', '10g', '25g', '40g', '50g', '100g'
])}),
'global': Dict({
'speed': Enum([
'', '1g', '10g', '25g', '40g', '50g', '100g'
]),
"port_index": Optional(Integer()),
"fpc": Optional(Integer()),
"pic": Optional(Integer())
})
})
sonic_port_setting = Dict({
'interface': Dict({
"command": Optional(String()),
"speed": String(),
"lane_map": Optional(String())
})
})
Apstra does not necessarily use all the information above for modeling. It's made available to other
Apstra API orchestration tools for collection and use.
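As a minimal sketch of the kind of validation these schemas imply, the following checks a Junos interface setting against the allowed speed values. The Dict/Enum helpers above are internal to Apstra; plain Python is used here instead:

```python
# Allowed per-interface speed values, taken from junos_port_setting above.
JUNOS_INTERFACE_SPEEDS = {"", "disabled", "1g", "10g", "25g", "40g", "50g", "100g"}

def validate_junos_interface_setting(setting):
    """Return a list of validation errors; an empty list means the setting
    is valid against the (simplified) Junos interface schema."""
    errors = []
    interface = setting.get("interface")
    if not isinstance(interface, dict):
        errors.append("'interface' must be a dict")
    elif interface.get("speed") not in JUNOS_INTERFACE_SPEEDS:
        errors.append("unsupported speed: %r" % interface.get("speed"))
    return errors
```

A setting that names a speed outside the enum (for example "2g") would be rejected at create or edit time rather than failing later on the device.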
From the left navigation menu in the Apstra GUI, navigate to Devices > Device Profiles to go to the
device profile table view. You can create, clone, edit, and delete device profiles.
NOTE: When you upgrade the Apstra server, predefined device profile changes applicable to that
version are also updated and applied to the imported interface maps in blueprints. If you create
(or clone) device profiles, they are not managed or updated when you upgrade the Apstra server.
Device profiles contain extensive hardware model details. Make sure the profile accurately describes all
hardware characteristics. For assistance, contact "Juniper Support" on page 893.
1. From the left navigation menu, navigate to Devices > Device Profiles and click Create Device Profile.
2. If you've created a JSON payload, click Import Device Profile and select the file to import it.
Otherwise, continue to the next step.
3. Enter a unique device profile name.
4. Configure the device profile to match the characteristics of the physical device.
5. Click Create to create the device profile and return to the table view.
RELATED DOCUMENTATION
If a device profile is used in an "interface map" on page 729, you may not be able to change it if the
change would adversely affect that interface map. You can't change predefined profiles, since your
changes would be discarded when you upgrade the Apstra server. You can clone and edit a predefined
device profile instead.
CAUTION: Editing a device profile can lead to a mismatch between the profile's stated
abilities and the device's actual capabilities, potentially leading to unexpected results.
1. Either from the table view (Devices > Device Profiles) or the details view, click the Edit button for the
device profile to edit.
2. Make your changes.
3. Click Update (bottom-right) to update the device profile and return to the table view.
Predefined device profiles can't be deleted. Device profiles used in interface maps can't be deleted.
1. Either from the table view (Devices > Device Profiles) or the details view, click the Delete button for
the device profile to delete.
2. Click Delete to delete the device profile and return to the table view.
You can also use REST API to manage device profiles. Navigate to Platform > Developers for REST
API Documentation and tools.
IN THIS SECTION
Overview | 564
Overview
Predefined device profiles for most qualified Juniper devices ship with Apstra software. For a complete
list of qualified and recommended Juniper device series and Junos versions, see "Qualified Device and
NOS" on page 1010. Juniper device profile constraints are specified below.
Juniper QFX10002
The 36-port Juniper QFX10002-36Q and 72-port QFX10002-72Q are qualified devices. Both of these
models have a port constraint where only certain ports can be used with QSFP28 100G transceivers.
If these ports are used as 100G, then the adjacent QSFP 40G ports can't be used. The device profile
can't automatically disable the adjacent QSFP 40G ports. You must create an interface map with these
ports unused and disabled.
When you select the 100G ports while you're creating the interface map for the QFX10002, you are asked if
you want to select the disabled interfaces for unused device profile ports. For 100G ports on the
QFX10002, click OK so the unused QSFP ports are disabled and can't be used.
IN THIS SECTION
Background | 566
Solution | 566
Capabilities | 567
Troubleshooting | 568
Background
Devices are recognized in the Apstra environment through device profiles. Device profiles capture
device-specific semantics, which the Apstra software requires to discover devices and to render network
configs that work well for the datapath once the device is inside a blueprint.
Device profiles are REST entities that you can create, edit, delete, and list during the design phase.
Device profiles are used to create interface maps, which are used directly by the Apstra config
rendering engine when blueprints are deployed.
This document covers the knowledge required to create (and edit) a semantically correct SONiC device
profile (DP), so that it not only passes the validations in place in Apstra, which ensure the right DP is
created in the database, but also honors the vendor semantic requirements applicable to the device, so
the generated configuration does not cause a deploy failure when pushed to the network device.
Problem Statement
Device profiles are vendor semantics-aware data structures. To create a device profile, you need the
device specification from the vendor. To create a valid and config-friendly JSON, you'll need to translate
these specifications into the Apstra device profile data model.
Solution
The high-level data model is the same for all DPs; the same keys are used for every device profile. The
way the values are derived might differ, or might be subject to a vendor constraint. This document
covers the following:
• The schema of the DP and the nested elements inside the DP.
• Constraints and corner cases to consider, especially for port configurations of certain models (or
groups of models).
• Lessons learned while creating the DPs already in production that are useful when creating future
ones.
User Interface
When you create device profiles from the Apstra GUI, some of your entries are semantically validated,
although the validation can't ensure every deep vendor-specific constraint and requirement. With the
exact vendor specification, the GUI assists you with creating a semantically valid DP, which becomes
part of the Apstra database data model.
Alternatively, you can write your own Python code that contains the vendor specifications, normalize it
per the Apstra DP data model, and generate the JSON to then import with the GUI.
Selector information
Entering the correct information in all four of the selector fields is critical for the device to get matched
to the device profile.
Capabilities
If you have the device specification, you can obtain its hardware and software capabilities for entry into
the device profile.
The table below contains commonly found values in SONiC devices (based on qualified devices).
ram: 16 (integer, in GB). Does not affect config.
To create a SONiC device profile, you must read through the device-specific port_config.ini file (for example,
sonic-buildimage/device/mellanox/x86_64-mlnx_msn2100-r0/ACS-MSN2100/port_config.ini) and
follow the instructions in the above link to come up with the right interface names.
The port_config.ini file specifies the interface names that SONiC uses. The device profile must match
these interface names, which generate the PORT configs in the configuration file (config_db.json). For
the purposes of this document, port_config.ini and config_db.json use the same interface naming
standard. Use those interface names in your DP, along with the lane numbers provided in the
port_config.ini file. Once a device profile has been generated based on these steps, Apstra uses it along
with the logical device (LD) to generate the interface map (IM). As part of its validation, Apstra makes
sure that the ports and speeds the IM describes are indeed available and supported per "/usr/share/sonic/device/
x86_64-mlnx_msn2100-r0/ACS-MSN2100/port_config.ini". This validation ensures that the SONiC
NOS stack does not fail due to an unsupported port configuration (in config_db.json) being wrongly
generated by Apstra from a wrong DP. It is therefore important to make sure that the DP generated for
a SONiC platform has the correct interface names and lane maps as reflected in the port_config.ini file
for that particular platform. A platform may have several different port_config.ini files as part of
different HWSKUs for that platform; Apstra tries to validate the generated port configs against any of
the available options for that platform. Apstra currently does not use the Dynamic Port Breakout
feature, which is ongoing in the SONiC project.
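The validation described above (that every interface name and lane map in the DP appears in one of the platform's port_config.ini options) can be sketched as follows. The data shapes are illustrative, not Apstra's internal representation:

```python
def dp_matches_port_config(dp_interfaces, port_config_options):
    """Check a device profile's interfaces against a platform's HWSKU options.

    dp_interfaces: {interface_name: lane_map} extracted from the DP.
    port_config_options: one {interface_name: lanes} dict per port_config.ini
    (a platform may ship several, one per HWSKU).
    Returns True if any single option covers every DP interface exactly.
    """
    return any(
        all(option.get(name) == lanes for name, lanes in dp_interfaces.items())
        for option in port_config_options
    )
```

A DP whose names or lane maps match no HWSKU option would be the "wrong DP" case the text warns about, caught before config_db.json is generated.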
Troubleshooting
Device profile mismatch usually occurs at the beginning of a device's lifecycle. If the device is not
matching the device profile, check the four selector fields in the device profile.
If ports are configured with incorrect speeds, or if OS-specific port constraints were not handled in
the device profile or interface map, deploy errors could be raised.
• Check the DP for obvious port capability errors. Is the port really capable of the speeds the DP has
configured? The device-specific port_config.ini file in the SONiC open-source project is a good
resource, as is parsing device logs for ERROR messages.
• Check whether the DP has configured autoneg or disabled interfaces correctly. Autoneg and disabled
can both be expressed in the interface setting field.
• When debugging the interface names and lane mapping, take a look at the corresponding
port_config.ini file. For example, for the AS5712-54X Edgecore/Accton box, the port_config.ini file
with details such as lane/name/alias is at https://github.com/Azure/sonic-buildimage/tree/
master/device/accton/x86_64-accton_as5712_54x-r0/Accton-AS5712-54X
• You can find the naming constraints in the official SONiC documentation. For example, if you want to
generate the interface names for the Accton 5712-54X running SONiC, the port_config.ini file is the
authority: https://github.com/Azure/sonic-buildimage/blob/master/device/accton/x86_64-
accton_as5712_54x-r0/Accton-AS5712-54X/port_config.ini Sometimes the device might have inter-
port constraints; for SONiC, these are generally laid out in the port_config.ini file. A specific platform
can have multiple port_config.ini files, each residing in its own HWSKU folder in the SONiC image
(like the one referenced above). Trying out port speeds other than those listed in the port_config.ini
file requires knowledge of the chipset and of the physical switch manufacturer to see what can be
achieved. This information may not be available in any white papers unless requested from vendors.
Parse.py
=========
#!/usr/bin/python
# Copyright (c) 2017 Apstrktr, Inc. All rights reserved.
# Apstrktr, Inc. Confidential and Proprietary.
#
# This source code is licensed under End User License Agreement found in the
# LICENSE file at http://apstra.com/eula
# pylint: disable=line-too-long
import sys
from pprint import pprint

            d.update({four: lane[3]})
        else:
            d.update({words[0]: words[1]})
    return {'interface_names': interface_indices, 'lane_mapping': d}

def parse_portconfig(f):
    buf = ''
    with open(f, 'r') as stream:
        buf = stream.read()
    return {'<Platform>': get_lanemap(buf)}

if __name__ == '__main__':
    assert len(sys.argv) > 1, "Missing port_config.ini in cmdline"
    print("Collecting lane information from", sys.argv[1])
    pprint(parse_portconfig(sys.argv[1]))
    print("=========================================================================================")
    print(" Substitute <Platform> with an identifier for the platform")
    print(" Append the dump into sdk/device-profile/sonic.py's sonic_device_info dictionary")
    print("=========================================================================================")
To run parse.py
Example:
parse.py sonic-buildimage/device/dell/x86_64-dell_z9100_c2538-r0/Force10-Z9100-C32/
port_config.ini
'32',
'36',
'40',
'44',
'48',
'52',
'56',
'60',
'64',
'68',
'72',
'76',
'80',
'84',
'88',
'92',
'96',
'100',
'104',
'108',
'112',
'116',
'120',
'124'],
'lane_mapping': {'Ethernet0': '49',
'Ethernet1': '50',
'Ethernet10': '59',
'Ethernet100': '113',
'Ethernet101': '114',
'Ethernet102': '115',
'Ethernet103': '116',
'Ethernet104': '125',
'Ethernet105': '126',
'Ethernet106': '127',
'Ethernet107': '128',
'Ethernet108': '121',
'Ethernet109': '122',
'Ethernet11': '60',
'Ethernet110': '123',
'Ethernet111': '124',
'Ethernet112': '5',
'Ethernet113': '6',
'Ethernet114': '7',
'Ethernet115': '8',
'Ethernet116': '1',
'Ethernet117': '2',
'Ethernet118': '3',
'Ethernet119': '4',
'Ethernet12': '61',
'Ethernet120': '13',
'Ethernet121': '14',
'Ethernet122': '15',
'Ethernet123': '16',
'Ethernet124': '9',
'Ethernet125': '10',
'Ethernet126': '11',
'Ethernet127': '12',
'Ethernet13': '62',
'Ethernet14': '63',
'Ethernet15': '64',
'Ethernet16': '65',
'Ethernet17': '66',
'Ethernet18': '67',
'Ethernet19': '68',
'Ethernet2': '51',
'Ethernet20': '69',
'Ethernet21': '70',
'Ethernet22': '71',
'Ethernet23': '72',
'Ethernet24': '73',
'Ethernet25': '74',
'Ethernet26': '75',
'Ethernet27': '76',
'Ethernet28': '77',
'Ethernet29': '78',
'Ethernet3': '52',
'Ethernet30': '79',
'Ethernet31': '80',
'Ethernet32': '37',
'Ethernet33': '38',
'Ethernet34': '39',
'Ethernet35': '40',
'Ethernet36': '33',
'Ethernet37': '34',
'Ethernet38': '35',
'Ethernet39': '36',
'Ethernet4': '53',
'Ethernet40': '45',
'Ethernet41': '46',
'Ethernet42': '47',
'Ethernet43': '48',
'Ethernet44': '41',
'Ethernet45': '42',
'Ethernet46': '43',
'Ethernet47': '44',
'Ethernet48': '81',
'Ethernet49': '82',
'Ethernet5': '54',
'Ethernet50': '83',
'Ethernet51': '84',
'Ethernet52': '85',
'Ethernet53': '86',
'Ethernet54': '87',
'Ethernet55': '88',
'Ethernet56': '89',
'Ethernet57': '90',
'Ethernet58': '91',
'Ethernet59': '92',
'Ethernet6': '55',
'Ethernet60': '93',
'Ethernet61': '94',
'Ethernet62': '95',
'Ethernet63': '96',
'Ethernet64': '97',
'Ethernet65': '98',
'Ethernet66': '99',
'Ethernet67': '100',
'Ethernet68': '101',
'Ethernet69': '102',
'Ethernet7': '56',
'Ethernet70': '103',
'Ethernet71': '104',
'Ethernet72': '105',
'Ethernet73': '106',
'Ethernet74': '107',
'Ethernet75': '108',
'Ethernet76': '109',
'Ethernet77': '110',
'Ethernet78': '111',
'Ethernet79': '112',
'Ethernet8': '57',
'Ethernet80': '21',
'Ethernet81': '22',
'Ethernet82': '23',
'Ethernet83': '24',
'Ethernet84': '17',
'Ethernet85': '18',
'Ethernet86': '19',
'Ethernet87': '20',
'Ethernet88': '29',
'Ethernet89': '30',
'Ethernet9': '58',
'Ethernet90': '31',
'Ethernet91': '32',
'Ethernet92': '25',
'Ethernet93': '26',
'Ethernet94': '27',
'Ethernet95': '28',
'Ethernet96': '117',
'Ethernet97': '118',
'Ethernet98': '119',
'Ethernet99': '120'}}}
=========================================================================================
Substitute <Platform> with an identifier for the platform
Append the dump into sdk/device-profile/sonic.py's sonic_device_info dictionary
=========================================================================================
The output from above becomes a dictionary entry in sonic_device_info in the SONiC device profile
generator Python file.
{
"hardware_capabilities": {
"asic": "TH",
"cpu": "x86",
"ecmp_limit": 64,
"form_factor": "1RU",
"ram": 16,
"userland": 64
},
"id": "Force10-Z9100_SONiC",
"label": "Dell Force10-Z9100_SONiC",
"ports": [
{
"column_id": 1,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 0,
"row_id": 1,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet0",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"49,50,51,52\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet0",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"49,50,51,52\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 1,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 1,
"row_id": 2,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet4",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"53,54,55,56\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet4",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"53,54,55,56\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 2,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 2,
"row_id": 1,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet8",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"57,58,59,60\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet8",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"57,58,59,60\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 2,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 3,
"row_id": 2,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet12",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"61,62,63,64\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet12",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"61,62,63,64\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 3,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 4,
"row_id": 1,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet16",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"65,66,67,68\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet16",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"65,66,67,68\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 3,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 5,
"row_id": 2,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet20",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"69,70,71,72\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet20",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"69,70,71,72\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 4,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 6,
"row_id": 1,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet24",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"73,74,75,76\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet24",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"73,74,75,76\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 4,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 7,
"row_id": 2,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet28",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"77,78,79,80\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet28",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"77,78,79,80\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 5,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 8,
"row_id": 1,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet32",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"37,38,39,40\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet32",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"37,38,39,40\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 5,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 9,
"row_id": 2,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet36",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"33,34,35,36\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet36",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"33,34,35,36\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 6,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 10,
"row_id": 1,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet40",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"45,46,47,48\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet40",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"45,46,47,48\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 6,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 11,
"row_id": 2,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet44",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"41,42,43,44\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet44",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"41,42,43,44\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 7,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 12,
"row_id": 1,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet48",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"81,82,83,84\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet48",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"81,82,83,84\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 7,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 13,
"row_id": 2,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet52",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"85,86,87,88\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet52",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"85,86,87,88\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 8,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 14,
"row_id": 1,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet56",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"89,90,91,92\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet56",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"89,90,91,92\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 8,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 15,
"row_id": 2,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet60",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"93,94,95,96\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet60",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"93,94,95,96\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 9,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 16,
"row_id": 1,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet64",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"97,98,99,100\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet64",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"97,98,99,100\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 9,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 17,
"row_id": 2,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet68",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"101,102,103,104\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet68",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"101,102,103,104\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 10,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 18,
"row_id": 1,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet72",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"105,106,107,108\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet72",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"105,106,107,108\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 10,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 19,
"row_id": 2,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet76",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"109,110,111,112\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet76",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"109,110,111,112\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 11,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 20,
"row_id": 1,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet80",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"21,22,23,24\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet80",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"21,22,23,24\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 11,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 21,
"row_id": 2,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet84",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"17,18,19,20\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet84",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"17,18,19,20\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 12,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 22,
"row_id": 1,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet88",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"29,30,31,32\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet88",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"29,30,31,32\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 12,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 23,
"row_id": 2,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet92",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"25,26,27,28\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet92",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"25,26,27,28\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 13,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 24,
"row_id": 1,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet96",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"117,118,119,120\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet96",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"117,118,119,120\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 13,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 25,
"row_id": 2,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet100",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"113,114,115,116\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet100",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"113,114,115,116\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 14,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 26,
"row_id": 1,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet104",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"125,126,127,128\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet104",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"125,126,127,128\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 14,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 27,
"row_id": 2,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet108",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"121,122,123,124\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet108",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"121,122,123,124\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 15,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 28,
"row_id": 1,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet112",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\": \"5,6,7,8\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet112",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\": \"5,6,7,8\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 15,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 29,
"row_id": 2,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet116",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\": \"1,2,3,4\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet116",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\": \"1,2,3,4\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 16,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 30,
"row_id": 1,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet120",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"13,14,15,16\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet120",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\":
\"13,14,15,16\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
},
{
"column_id": 16,
"connector_type": "qsfp28",
"failure_domain_id": 1,
"panel_id": 1,
"port_id": 31,
"row_id": 2,
"slot_id": 0,
"transformations": [
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet124",
"setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\":
\"9,10,11,12\"}}",
"speed": {
"unit": "G",
"value": 100
},
"state": "active"
}
],
"is_default": true,
"transformation_id": 1
},
{
"interfaces": [
{
"interface_id": 1,
"name": "Ethernet124",
"setting": "{\"interface\": {\"speed\": \"40000\", \"lane_map\": \"9,10,11,12\"}}",
"speed": {
"unit": "G",
"value": 40
},
"state": "active"
}
],
"is_default": false,
"transformation_id": 2
}
]
}
],
"selector": {
"manufacturer": "Dell|DELL",
"model": "Z9100-ON",
"os": "SONiC",
"os_version": ".*"
},
"slot_count": 0,
"software_capabilities": {
"lxc_support": false,
"onie": true
}
}
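In this device profile, each interface's `setting` field is itself a JSON-encoded string embedded inside the outer JSON document, so consuming a profile programmatically requires a second parse. A minimal sketch in Python (the function name and the single-port excerpt are illustrative, not part of the product):

```python
import json

def default_transformations(profile):
    """Map each port's default transformation to (interface name, speed, lane map).

    Each interface's "setting" field is a JSON string embedded in the outer
    JSON document, so it must be parsed a second time with json.loads().
    """
    result = {}
    for port in profile["ports"]:
        for xform in port["transformations"]:
            if not xform["is_default"]:
                continue
            for iface in xform["interfaces"]:
                setting = json.loads(iface["setting"])["interface"]
                speed = f'{iface["speed"]["value"]}{iface["speed"]["unit"]}'
                result[port["port_id"]] = (iface["name"], speed, setting["lane_map"])
    return result

# Single-port excerpt of the profile above, expressed as a Python dict:
profile = {
    "ports": [{
        "port_id": 0,
        "transformations": [{
            "is_default": True,
            "transformation_id": 1,
            "interfaces": [{
                "interface_id": 1,
                "name": "Ethernet0",
                "setting": "{\"interface\": {\"speed\": \"100000\", \"lane_map\": \"49,50,51,52\"}}",
                "speed": {"unit": "G", "value": 100},
                "state": "active",
            }],
        }],
    }]
}
print(default_transformations(profile))  # {0: ('Ethernet0', '100G', '49,50,51,52')}
```

The same double-parse applies to any tool that reads or generates device profiles, since the `speed` and `lane_map` inside `setting` must agree with the outer `speed` object.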
System Agents
IN THIS SECTION
Agents Introduction
Apstra device system agents handle configuration management, device-to-server communication, and
telemetry collection. If you're not using "Apstra ZTP" on page 694 to bootstrap your devices (or if you
have a one-off installation), you can use the device installer to install and verify devices automatically.
Depending on the device NOS, you can install device agents on-box (the agent is installed on the device) or
off-box (the agent is installed on the Apstra server and communicates with devices via API). For support
information, see the device management section of the "4.2.0 feature matrix" on page 989.
When you install device agents, make sure the following configuration is not on the device:
• Loopback interfaces
• VLAN interfaces
• VXLAN interfaces
• AS-Path access-lists
• IP prefix-lists
• BGP configuration
During the agent install process, device configuration is validated, and if the device contains
configuration that could prevent the deployment of service configuration, the agent install process raises
an error (as of Apstra 4.0.1).
In this case, manually remove conflicting configuration and start the agent installation process again.
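Apstra performs this validation itself during agent installation; the following sketch is purely illustrative of the kind of check involved, scanning a running configuration text for the conflicting stanza types listed above (the patterns shown are assumptions, not the product's actual validation rules):

```python
import re

# Patterns loosely matching the conflicting configuration types listed above.
# These are illustrative only -- actual validation is performed by the agent
# installer, not by this snippet.
CONFLICT_PATTERNS = {
    "loopback interface": r"^interface [Ll]oopback",
    "VLAN interface": r"^interface [Vv]lan",
    "VXLAN interface": r"^interface [Vv]xlan",
    "AS-path access-list": r"^ip as-path access-list",
    "IP prefix-list": r"^ip prefix-list",
    "BGP configuration": r"^router bgp",
}

def find_conflicts(running_config):
    """Return the conflicting configuration types found in a config text."""
    found = []
    for name, pattern in CONFLICT_PATTERNS.items():
        if re.search(pattern, running_config, re.MULTILINE):
            found.append(name)
    return found

sample = """\
interface mgmt0
 ip address 10.0.0.5/24
router bgp 65001
 neighbor 10.0.0.1
"""
print(find_conflicts(sample))  # ['BGP configuration']
```

Running such a pre-check against a device's configuration before creating the agent can save a failed install-and-retry cycle.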
If you must complete the agent installation with configuration validation errors, you can disable pristine
configuration validation. To do this, from Devices > System Agents > Agents, select Advanced Settings,
then select Skip Pristine Configuration Validation.
For information about retaining pre-existing configuration when bringing devices under Apstra
management, see "Device Configuration Lifecycle" on page 519. For more information about managing
devices in the Apstra environment, see "Managed Devices" on page 532.
NOTE: On some platforms (Junos for example) you can configure rate-limiting for management
traffic (SSH for example). When the Apstra server interacts directly with devices it can be more
bursty than when it interacts with a user. Rate-limiting configurations that are used for hardening
security can impact device management, and lead to deployment failures and other agent-related
issues.
Parameter descriptions:

• Platform (off-box only): For off-box agents only; the drop-down list includes supported platforms.
• Username / Password: If you're not using an agent profile with credentials, check these boxes and add credentials.
• Agent Profile: If you don't want to manually enter credentials and packages, use agent profiles that you previously defined.
• Job to run after creation: Install (default) installs the agent on the device. Check creates the agent but does not install it; it appears in the list view where you can install it later.
• Install Requirements (servers only): If servers don't have Internet connectivity, uncheck the box.
• Packages: Before creating the agent, install required packages so they are available. Packages associated with selected agent profiles are listed here as well.
• Open Options (off-box only): Passes configured parameters to off-box agents. For example, to use HTTPS as the API connection from off-box agents to devices, use the key-value pair: proto-https - port-443. The following default values can be overridden with open options:
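As a rough illustration of how these parameters fit together when creating an off-box agent through the REST API, a request body might be shaped like the following. The field names and values here are assumptions for illustration; consult the Apstra REST API reference for the authoritative schema:

```python
import json

# Hypothetical off-box agent creation payload; field names are assumptions
# for illustration, not the authoritative Apstra API schema.
agent_request = {
    "management_ip": "192.0.2.10",       # device management address (example)
    "platform": "eos",                    # off-box only: device NOS platform
    "username": "admin",                  # or reference an agent profile instead
    "password": "admin-password",
    # Open options override the default HTTP API connection, here switching
    # the agent-to-device connection to HTTPS on port 443:
    "open_options": {"proto": "https", "port": "443"},
}

print(json.dumps(agent_request, indent=2))
```

The point of the sketch is the `open_options` map: each key-value pair (such as proto-https and port-443 above) overrides one default of the agent's API connection to the device.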
From the left navigation menu, navigate to Devices > Managed Devices to go to managed devices.
You can select one or more agents or select actions for individual agents.
You can delete an agent only if that agent has been uninstalled and is no longer running on a device.
Additional actions are available on the line with the agent. For example, if a device is not assigned to a
blueprint, you can restore the device's pristine configuration by clicking the Revert to Pristine Config
button (as of Apstra version 4.0.1).
• Configure management IP connectivity between devices and the Apstra server. You must do this
before installing agents, and the connectivity must be out-of-band (OOB). Configuring management
connectivity in-band (through the fabric) is not supported and could cause connectivity issues when
changes are made to the blueprint.
Before creating and installing onbox device agents on Cisco NX-OS and Arista EOS, apply the
minimum configuration shown below. (SONiC Enterprise has no specific configuration requirements
other than management network connectivity and privileged user access.)
!
copp profile strict
!
username admin password <admin-password> role network-admin
!
vrf context management
ip route 0.0.0.0/0 <management-default-gateway>
!
interface mgmt0
ip address <address>/<cidr>
!
!
service routing protocols model multi-agent
!
aaa authorization exec default local
!
username admin privilege 15 role network-admin secret <admin-password>
!
interface Management1
ip address <address>/<cidr>
!
ip route vrf management 0.0.0.0/0 <management-default-gateway>
!
• Loopback interfaces
• VLAN interfaces
• VXLAN interfaces
• AS-Path access-lists
• IP prefix-lists
• BGP configuration
During the agent install process, device configuration is validated, and if the device contains
configuration that could prevent the deployment of service configuration, the agent install process raises
an error (as of Apstra 4.0.1).
In this case, manually remove conflicting configuration and start the agent installation process again.
If you must complete the agent installation with configuration validation errors, you can disable pristine
configuration validation. To do this, from Devices > Managed Devices, click Advanced Settings (top-
right), select Skip Pristine Configuration Validation, then click Update.
For information about retaining pre-existing configuration when bringing devices under Apstra
management, see "Device Configuration Lifecycle" on page 519.
NOTE: On some platforms (Junos for example) you can configure rate-limiting for management
traffic (SSH for example). When the Apstra server interacts directly with devices it can be more
bursty than when it interacts with a user. Rate-limiting configurations that are used for hardening
security can impact device management, and lead to deployment failures and other agent-related
issues.
Parameter descriptions:

• Username / Password: If you're not using an agent profile with credentials, check these boxes and add credentials.
• Agent Profile: If you don't want to manually enter credentials and packages, use agent profiles that you previously defined.
• Job to run after creation: Install (default) installs the agent on the device. Check creates the agent but does not install it; it appears in the table view where you can install it later.
• Install Requirements (servers only): If servers don't have Internet connectivity, uncheck the box.
• Packages: Before creating the agent, install required packages so they are available. Packages associated with selected agent profiles are listed here as well.
1. Confirm that you've installed the minimum configuration as described above, and that the device
doesn't contain configuration that would raise validation errors.
2. From the left navigation menu, navigate to Devices > Managed Devices and click Create Onbox
Agent(s).
3. Specify agent details as described in the parameters table above.
4. Click Create. While the task is active you can view its progress at the bottom of the screen in the
Active Jobs section. The job status changes from Initialized to In Progress to Succeeded.
• Configure management IP connectivity between devices and the Apstra server. You must do this
before installing agents, and the connectivity must be out-of-band (OOB). Configuring management
connectivity in-band (through the fabric) is not supported and could cause connectivity issues when
changes are made to the blueprint.
• If you're using Juniper offbox agents, "increase the application memory usage" on page 846.
• On Juniper devices, add Junos license configuration. (This is not the preferred method for adding
license configuration. For more information, see "Juniper Device Agent" on page 627.)
Before creating and installing offbox device agents on Juniper Junos, Cisco NX-OS, and Arista EOS,
apply the minimum configuration shown below.
system {
login {
user aosadmin {
uid 2000;
class super-user;
authentication {
encrypted-password "xxxxx";
}
}
}
services {
ssh;
netconf {
ssh;
}
}
management-instance;
}
interfaces {
em0 {
unit 0 {
family inet {
address <address>/<cidr>;
}
}
}
}
routing-instances {
mgmt_junos {
routing-options {
static {
route 0.0.0.0/0 next-hop <management-default-gateway>;
}
}
}
}
!
feature nxapi
feature bash-shell
feature scp-server
feature evmed
copp profile strict
nxapi http port 80
!
username admin password <admin-password> role network-admin
!
vrf context management
ip route 0.0.0.0/0 <management-default-gateway>
!
interface mgmt0
ip address <address>/<cidr>
!
!
service routing protocols model multi-agent
!
aaa authorization exec default local
!
username admin privilege 15 role network-admin secret <admin-password>
!
vrf definition management
rd 100:100
!
interface Management1
ip address <address>/<cidr>
!
ip route vrf management 0.0.0.0/0 <management-default-gateway>
!
• Loopback interfaces
• VLAN interfaces
• VXLAN interfaces
• AS-Path access-lists
• IP prefix-lists
• BGP configuration
During the agent install process, device configuration is validated, and if the device contains
configuration that could prevent the deployment of service configuration, the agent install process raises
an error (as of Apstra 4.0.1).
In this case, manually remove conflicting configuration and start the agent installation process again.
If you must complete the agent installation with configuration validation errors, you can disable pristine
configuration validation. To do this, from Devices > Managed Devices, click Advanced Settings (top-
right), select Skip Pristine Configuration Validation, then click Update.
For information about retaining pre-existing configuration when bringing devices under Apstra
management, see "Device Configuration Lifecycle" on page 519.
NOTE: On some platforms (Junos for example) you can configure rate-limiting for management
traffic (SSH for example). When the Apstra server interacts directly with devices it can be more
bursty than when it interacts with a user. Rate-limiting configurations that are used for hardening
security can impact device management, and lead to deployment failures and other agent-related
issues.
Parameter descriptions:

• Platform (offbox only): For offbox agents only; the drop-down list includes supported platforms.
• Username / Password: If you're not using an agent profile with credentials, check these boxes and add credentials.
• Agent Profile: If you don't want to manually enter credentials and packages, use agent profiles that you previously defined.
• Job to run after creation: Install (default) installs the agent on the device. Check creates the agent but does not install it; it appears in the table view where you can install it later.
• Install Requirements (servers only): If servers don't have Internet connectivity, uncheck the box.
• Packages: Before creating the agent, install required packages so they are available. Packages associated with selected agent profiles are listed here as well.
• Open Options (offbox only): Passes configured parameters to offbox agents. For example, to use HTTPS as the API connection from offbox agents to devices, use the key-value pair: proto-https - port-443. The following default values can be overridden with open options:
1. Confirm that you've installed the minimum configuration as described above, and that the device
doesn't contain configuration that would raise validation errors.
2. From the left navigation menu, navigate to Devices > Managed Devices and click Create Offbox
Agent(s).
3. Specify agent details as described in the parameters table above.
4. Click Create. While the task is active you can view its progress at the bottom of the screen in the
Active Jobs section.
Edit Agent
IN THIS SECTION
You can edit one agent at a time, or you can edit multiple agents simultaneously to change details that
are common to all selected agents.
1. From the left navigation menu, navigate to Devices > Managed Devices to go to devices and agents.
2. Click the three dots in the Actions column (right side) for the device that you want to edit, then click
the Edit button in the Agent menu.
3. Make your changes (device addresses, operation mode, agent profile, packages, open-options, as
applicable).
4. Click Update to update the agent and return to the table view.
1. From the left navigation menu, navigate to Devices > Managed Devices and select one or more check
boxes for the device(s) to edit.
2. Click the Assign Profile button (in the menu that appears above the table after making a selection).
3. Make your changes (agent profile, clear existing packages, clear open options).
4. Click Assign System Agent Profile to save your changes and return to the table view.
Delete Agent
NOTE: Several steps are involved in removing a device from Apstra management, such as
unassigning it from a blueprint, setting the admin state, uninstalling and deleting the agent, and
deleting the device from the Managed Devices table. See "Remove Device" on page 552 for
details.
1. From the left navigation menu, navigate to Devices > Managed Devices, select the device(s) to
delete, then click the Delete button in the Agent section.
2. Click Delete to delete the agent(s) and return to the list view.
1. From the left navigation menu, navigate to Devices > Managed Devices to go to the managed
devices table view.
NOTE: When you uninstall a device agent, the pristine configuration is restored on the device
by default. If you want to retain existing configuration, click Advanced Settings and check the
box to Skip Revert to Pristine on Uninstall.
2. Check the box(es) for the device(s), then in the Agent Actions panel that appears above the table,
click the Uninstall button, click Uninstall selected elements, then click Close.
NOTE: If the device is unreachable, the job will fail. You can force delete the agent (in the next
step).
3. Check the box for the device(s) again, then in the Agent Actions panel that appears above the table,
click the Delete button, click Delete selected elements, then click Close.
If you weren't able to uninstall the agent in the previous step because the device is unreachable, a
dialog opens that gives you the option to force delete the agent. With the Force Delete box checked,
click Delete to force delete the agent and return to the table view.
Juniper ZTP
For an option that's simpler and easier to support at scale, see "Apstra ZTP" on page 694, which shows
you how to automatically boot and install Apstra device agents and prerequisite switch configuration.
Disable ZTP
If you want to install agents manually because a previous attempt to install them with Apstra ZTP failed,
you must first delete the ZTP mode (since it remains active) with the command delete chassis auto-image-
upgrade.
If you're going to provision the Juniper switch without ZTP (ZTP Disabled), make sure that the ZTP
process is disabled before proceeding. After logging into the switch for the first time and setting system
root-authentication, configure delete chassis auto-image-upgrade.
{master:0}
root> edit
Entering configuration mode
{master:0}[edit]
root# delete chassis auto-image-upgrade
{master:0}[edit]
root# commit and-quit
configuration check succeeds
commit complete
Exiting configuration mode
{master:0}
root>
Before installing Apstra device system agents on Juniper Junos devices, apply the minimum
configuration below to the devices.
system {
login {
user aosadmin {
uid 2000;
class super-user;
authentication {
encrypted-password "xxxxx";
}
}
}
services {
ssh;
netconf {
ssh;
}
}
management-instance;
}
interfaces {
em0 {
unit 0 {
family inet {
address <address>/<cidr>;
}
}
}
}
routing-instances {
mgmt_junos {
routing-options {
static {
route 0.0.0.0/0 next-hop <management-default-gateway>;
}
}
}
}
For the device system agent to connect to the Juniper Junos device, you must configure a local device
user with class super-user.
{master:0}
root> edit
Entering configuration mode
{master:0}[edit]
root# set system login user aosadmin class super-user
{master:0}[edit]
root# set system login user aosadmin authentication plain-text-password
New password:
Retype new password:
{master:0}[edit]
root# commit and-quit
configuration check succeeds
commit complete
Exiting configuration mode
{master:0}
root>
NOTE: If you intend to use a different authentication method for device access (such as
RADIUS), you must use local password authentication first.
Device system agents use the Junos mgmt_junos management-instance VRF and the management
interface (such as em0).
{master:0}
root> edit
Entering configuration mode
{master:0}[edit]
root# set system management-instance
{master:0}[edit]
root# set interfaces em0.0 family inet address 192.168.59.11/24
{master:0}[edit]
root# set routing-instances mgmt_junos routing-options static route 0.0.0.0/0 next-hop 192.168.59.1
{master:0}[edit]
root# commit and-quit
configuration check succeeds
commit complete
Exiting configuration mode
{master:0}
root>
If the Juniper device uses a different management interface (such as vme.0), configure the management
IP address on it instead.
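If your platform uses vme, the equivalent commands would be the following sketch (substitute your own management address, in the style of the em0 example above):

```
{master:0}[edit]
root# set interfaces vme.0 family inet address 192.168.59.11/24
{master:0}[edit]
root# commit and-quit
```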
Device system agents require Junos SSH and NETCONF access to be configured under system services.
{master:0}
root> edit
Entering configuration mode
{master:0}[edit]
root# set system services ssh
{master:0}[edit]
root# set system services netconf ssh
{master:0}[edit]
root# commit and-quit
configuration check succeeds
commit complete
Exiting configuration mode
{master:0}
root>
You can add license configuration before installing the system agent (to make it part of the pristine
configuration), but the preferred method is to add license configuration with "configlets" on page 766.
Although the preferred method of installing device system agents is from the Apstra GUI, you can
manually install Apstra agents from the CLI. Only in rare exceptions would you need to manually install
agents, which requires more effort and is error-prone. Before manually installing agents, you should have
an in-depth understanding of the various device states, configuration stages, and agent operations. For
assistance, contact "Juniper Support" on page 893.
NOTE: You can also use "Apstra ZTP" on page 694 to automatically boot and install agents and
prerequisite configuration on switches. Using Apstra ZTP is simpler and easier to support at scale
than manually installing agents.
The SONiC device agent manages the following files in the filesystem:
• /etc/sonic/config_db.json - The main configuration file for SONiC, specifying interfaces, IP addresses,
port breakouts etc.
• /etc/sonic/frr/frr.conf - frr.conf contains all of the routing application configuration for BGP on the
device.
CAUTION: Do not edit the config_db.json or frr.conf files manually at any time, before or
after device system agent installation. The agent overwrites any existing configuration
in these files.
SONiC automatically creates a management VRF for the "eth0" management interface. By default,
"eth0" gets a DHCP address from the management network. In most cases, no management
configuration should be needed.
However, if you need to manually configure a SONiC device management IP address, you must
configure it using the sonic-cli interface.
admin@sonic:~$ sonic-cli
sonic# show interface Management 0
eth0 is up, line protocol is up
Hardware is MGMT
Description: Management0
Mode of IPV4 address assignment: not-set
Mode of IPV6 address assignment: not-set
IP MTU 1500 bytes
LineSpeed 1GB, Auto-negotiation True
Input statistics:
11 packets, 1412 octets
0 Multicasts, 0 error, 4 discarded
Output statistics:
31 packets, 5290 octets
0 error, 0 discarded
sonic# configure terminal
sonic(config)# interface Management 0
sonic(conf-if-eth0)# ip address 192.168.59.7/24 gwaddr 192.168.59.1
sonic(conf-if-eth0)# exit
sonic(config)# exit
sonic# write memory
sonic# show interface Management 0
eth0 is up, line protocol is up
Hardware is MGMT
Description: Management0
IPV4 address is 192.168.59.7/24
Mode of IPV4 address assignment: MANUAL
Mode of IPV6 address assignment: not-set
IP MTU 1500 bytes
LineSpeed 1GB, Auto-negotiation True
Input statistics:
18 packets, 2494 octets
0 Multicasts, 0 error, 6 discarded
Output statistics:
38 packets, 6455 octets
0 error, 0 discarded
sonic#
You can check the Management VRF from the SONiC Linux command line.
ManagementVRF : Enabled
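The output above can be produced with the show utility from the SONiC Linux shell (assuming a SONiC build that includes the show mgmt-vrf command):

```
admin@sonic:~$ show mgmt-vrf

ManagementVRF : Enabled
```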
To manually install SONiC device agents, you'll download, install, and configure the agent software, then acknowledge it to bring it under Apstra management.
1. Download the Apstra agent with the sudo cgexec -g l3mdev:mgmt curl -o /tmp/aos.run -k -O https://{{aos-ip-address}}/device_agent_images/aos_device_agent{{aos-version}}-{{aos-build}}.run command.
2. Install the Apstra agent with the sudo /bin/bash /tmp/aos.run -- --no-start command.
+ UNKNOWN_PLATFORM=1
+ WRONG_PLATFORM=1
+ CANNOT_EXECUTE=126
+ '[' 0 -ne 0 ']'
+ arg_parse --no-start
+ start_aos=True
+ [[ 1 > 0 ]]
+ key=--no-start
+ case $key in
+ start_aos=False
+ shift
+ [[ 0 > 0 ]]
+ supported_platforms=(["centos"]="install_centos" ["eos"]="install_on_arista"
["nxos"]="install_on_nxos" ["opx"]="install_systemd_deb opx"
["trusty"]="install_sysvinit_deb" ["xenial"]="install_sysvinit_deb"
["icos"]="install_sysvinit_rpm" ["snaproute"]="install_sysvinit_deb"
["simulation"]="install_sysvinit_deb" ["sonic"]="install_systemd_deb sonic"
["bionic"]="install_sysvinit_deb")
+ declare -A supported_platforms
++ /tmp/selfgz334323135/aos_get_platform
+ current_platform=sonic
+ installer='install_systemd_deb sonic'
+ [[ -z install_systemd_deb sonic ]]
+++ readlink /sbin/init
++ basename /lib/systemd/systemd
+ [[ systemd == systemd ]]
+ systemd_available=true
+ [[ -x /etc/init.d/aos ]]
+ echo 'Stopping AOS'
Stopping AOS
+ true
+ systemctl stop aos
+ install_systemd_deb sonic
++ pwd
+ local pkg_dir=/tmp/selfgz334323135/sonic
+ install_deb /tmp/selfgz334323135/sonic
+ local pkg_dir=/tmp/selfgz334323135/sonic
+ dpkg -s aos-device-agent
+ dpkg --purge aos-device-agent
(Reading database ... 34189 files and directories currently installed.)
Removing aos-device-agent (3.3.0a-93) ...
Purging configuration files for aos-device-agent (3.3.0a-93) ...
Processing triggers for systemd (232-25+deb9u12) ...
+ dpkg -i /tmp/selfgz334323135/sonic/aos-device-agent-3.3.0a-93.amd64.deb
Selecting previously unselected package aos-device-agent.
(Reading database ... 34180 files and directories currently installed.)
Preparing to unpack .../aos-device-agent-3.3.0a-93.amd64.deb ...
Unpacking aos-device-agent (3.3.0a-93) ...
Setting up aos-device-agent (3.3.0a-93) ...
Synchronizing state of aos.service with SysV service script with /lib/systemd/systemd-sysv-
install.
Executing: /lib/systemd/systemd-sysv-install enable aos
/var/lib/dpkg/info/aos-device-agent.postinst: line 7: /usr/sbin/aosconfig: No such file or
directory
Processing triggers for systemd (232-25+deb9u12) ...
+ mkdir -p /opt/aos
+ cp aos_device_agent.img /opt/aos
+ post_install_common
+ /etc/init.d/aos config_gen
+ [[ False == \T\r\u\e ]]
+ true
+ systemctl enable aos
Synchronizing state of aos.service with SysV service script with /lib/systemd/systemd-sysv-
install.
Executing: /lib/systemd/systemd-sysv-install enable aos
admin@sonic:~$
3. Update /etc/aos/aos.conf with the sudo vi /etc/aos/aos.conf command to set the IP of the Apstra server
and enable configuration service.
• For the following, replace "aos-server" with the IP address or valid FQDN of your Apstra server.
[controller]
# <metadb> provides directory service for AOS. It must be configured properly
# for a device to connect to AOS controller.
metadb = tbt://aos-server:29731
• For example
[controller]
# <metadb> provides directory service for AOS. It must be configured properly
# for a device to connect to AOS controller.
metadb = tbt://172.20.74.3:29731
• For the following, set "enable_configuration_service" to 1 to enable "full control" mode from
Apstra.
[service]
# AOS device agent by default starts in "telemetry-only" mode. Set following
# variable to 1 if you want AOS agent to manage the configuration of your
# device.
enable_configuration_service = 1
• Add the following "credential" configuration, setting "username = " to the local Linux user to be used for the agent (usually "admin").
[credential]
username = admin
4. Start the agent with the sudo service aos start command and check its status with the sudo service aos
status command.
5. From the left navigation menu in the Apstra GUI, navigate to Devices > Managed Devices to
acknowledge the device, then you can assign it to a blueprint.
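Steps 3 and 4 can also be done non-interactively. The following sketch edits a scratch copy of the file so the commands can be tried safely; on a device you would target /etc/aos/aos.conf, and the server address 172.20.74.3 is a hypothetical placeholder for your Apstra server.

```shell
# Work on a scratch copy of aos.conf (on the device, use /etc/aos/aos.conf).
conf=aos.conf.demo
cat > "$conf" <<'EOF'
[controller]
metadb = tbt://aos-server:29731
[service]
enable_configuration_service = 0
EOF

# Point metadb at the Apstra server and enable "full control" mode.
sed -i -e 's|^metadb = .*|metadb = tbt://172.20.74.3:29731|' \
       -e 's|^enable_configuration_service = .*|enable_configuration_service = 1|' "$conf"
grep '^metadb' "$conf"

# On the device, follow with: sudo service aos start && sudo service aos status
```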
To manually uninstall SONiC Apstra device agents, you'll stop the Apstra agent, uninstall it, and remove any remaining Apstra files.
1. Stop the Apstra agent with the sudo service aos stop command.
2. Uninstall the Apstra agent with the sudo dpkg --purge --force-all aos-device-agent command.
3. Remove remaining Apstra files with the sudo rm -fr /etc/aos /var/log/aos /mnt/persist/.aos /opt/aos /run/
aos /run/lock/aos /tmp/aos_show_tech /usr/sbin/aos* command.
Although the preferred method of installing device system agents is from the Apstra GUI, you can
manually install Apstra agents from the CLI. Only in rare exceptions would you need to manually install
agents, which requires more effort and is error-prone. Before manually installing agents, you should have
an in-depth understanding of the various device states, configuration stages, and agent operations. For
assistance, contact "Juniper Support" on page 893.
NOTE: You can also use "Apstra ZTP" on page 694 to automatically boot and install agents and
prerequisite configuration on switches. Using Apstra ZTP is simpler and easier to support at scale
than manually installing agents.
Manually installing an agent for Cisco devices involves the following steps:
• Update the guestshell disk size, memory, and CPU, then enable/reboot the guestshell.
• Start the service.
CAUTION: The Cisco GuestShell is not partitioned exclusively for Apstra. If other applications are hosted in the guestshell, any changes in the guestshell could impact them.
Configure the device in the following order: management VRF, NXAPI, GuestShell. To allow for agent-server communication, Apstra's device agent uses the VRF named management. Ensure these lines appear in the running configuration.
!
no password strength-check
username admin password admin-password role network-admin
copp profile strict
!
vrf context management
ip route 0.0.0.0/0 <Management Default Gateway>
!
interface mgmt0
vrf member management
ip address <Management CIDR Address>
!
1. Run the following commands to resize the disk space, memory and CPU:
2. If the guestshell is not enabled, run the command guestshell enable to activate the changes.
3. If the guestshell was already enabled, run the command guestshell reboot to restart the shell and
activate the changes.
4. Run the command switch# show guestshell detail and verify that the guestshell has been activated.
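The resize commands in step 1 can be sketched as follows; the values shown match the resource reservation in the show guestshell detail output later in this section (verify the supported limits for your platform):

```
switch# guestshell resize rootfs 1024
switch# guestshell resize memory 3072
switch# guestshell resize cpu 6
```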
You can copy the installation agents over HTTPS from the Apstra server. After downloading, confirm the
MD5sum of your downloaded copy matches what Apstra stores.
NOTE: To retrieve the agent file, the Cisco device connects to the Apstra server using HTTPS.
Before proceeding, make sure this connectivity is functioning.
The agent ships with the Apstra server. You can copy it to the /volatile (volatile:) filesystem location on the device. Apstra also ships an md5sum file in the /home/admin folder on the Apstra server.
Replace the aos_server_ip variable and aos_version from the run file below. (To check the Apstra server
version from the Apstra GUI, navigate to Platform > About).
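As an illustration, the download URL can be assembled from the server address and agent version. All values below are hypothetical placeholders, and the exact file name pattern may differ by release; substitute your Apstra server IP and the version and build from Platform > About.

```shell
# Hypothetical values -- substitute your own.
aos_server_ip="172.20.74.3"
aos_version="4.2.0"
aos_build="100"

# Assemble the download URL for the agent .run file.
url="https://${aos_server_ip}/device_agent_images/aos_device_agent_${aos_version}-${aos_build}.run"
echo "$url"

# After copying the file to volatile: on the device, compare its md5sum
# against the value stored in /home/admin on the Apstra server.
```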
NOTE: We recommend that you run the command copy running-config startup-config to save your
latest changes, in case any issues arise.
From the Cisco NX-OS switch guestshell, run the command to install the agent as shown below:
After installing the agent and before starting service, update the aos.conf file so it will connect to the
server.
Configure the Cisco NX-OS device agent configuration file located at /etc/aos/aos.conf. See "Apstra
device agent configuration file" on page 1023 for parameters.
After updating the file, run the command service aos start to start the Apstra device agent.
When the Apstra device agent communicates with Apstra, it uses a ‘device key’ to identify itself. For
Cisco NXOS switches, the device key is the MAC address of the management interface ‘eth0’.
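As an illustration, the device key can be read from the management interface the same way you would from any Linux shell. The interface output and MAC address below are hypothetical; the parsing mirrors how a MAC is extracted from ip link output.

```shell
# Hypothetical "ip link show eth0" output from the guestshell.
sample='2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500
    link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff'

# The device key is the MAC address of the management interface.
device_key=$(printf '%s\n' "$sample" | awk '/link\/ether/{print $2}')
echo "$device_key"
```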
Deploy Device
From the left navigation menu of the Apstra GUI, navigate to Devices > Managed Devices. When the
agent is up and running it appears in this list, and can be acknowledged and assigned to a blueprint using
the GUI per standard procedure.
If you need to reset the Apstra agent for some reason (changing blueprints, redeploying, restoring device
from backup, etc.) it's best to clear the Apstra agent metadata, re-register the device, and redeploy to
the blueprint.
C9K-172-20-65-5# guestshell
[guestshell@guestshell ~]$ sudo su -
[root@guestshell ~]# systemctl stop aos
[root@guestshell ~]# rm -rf /var/log/aos/*
[root@guestshell ~]# systemctl start aos
To uninstall the agent, first undeploy and unassign it from the blueprint per standard procedures using
the GUI. You can also delete it entirely from the Managed Devices page.
To remove the Apstra package from NX-OS, destroy the guestshell. Do this only if no other applications
are using the guestshell:
The Apstra device agent installs some event manager applets to assist with telemetry. These can be safely removed.
The Apstra agent runs under the NXOS guestshell to interact with the underlying bash and Linux
environments. This is an internal Linux Container (LXC) in which Apstra operates. Under LXC, Apstra
makes use of the NXAPI and other methods to directly communicate with NXOS. For security reasons,
Cisco partitions much of the LXC interface away from the rest of the NXOS device, so we must drop to
the guest shell bash prompt to perform more troubleshooting commands.
Confirm that the Guest Shell is running on NX-OS. We are checking to make sure the guest shell is activated and running.
Method : SHA-1
Licensing
Name : None
Version : None
Resource reservation
Disk : 1024 MB
Memory : 3072 MB
CPU : 6% system CPU
Attached devices
Type Name Alias
---------------------------------------------
Disk _rootfs
Disk /cisco/core
Serial/shell
Serial/aux
Serial/Syslog serial2
Serial/Trace serial3
Within the guest shell, ping the Apstra server to check ICMP reachability. When running commands within the context of a VRF, use the command chvrf <vrf>. In this case, it's the management VRF.
Check if the Apstra device agent package is installed. In NXOS, the Apstra agent installs to /etc/rc.d/
init.d/aos to start when the guestshell instance starts.
Check the running system state with the ‘service’ command, and check running processes with the ‘ps’
command. We are looking to confirm aos_agent is running properly.
ListenAddress=localhost
38 ? Ss 0:00 /usr/sbin/crond -n
55 pts/1Ss+0:00 /sbin/agetty --noclear ttyS1
56 pts/0Ss+0:00 /sbin/agetty --noclear ttyS0
113 ? Sl 0:01 tacspawner --daemonize=/var/log/aos/aos.log --pidfile=/var/run/aos.pid --
name=C9K --hostname=localhost --domainSocket=aos_spawner_sock --hostSysdbAdd
115 ? S 0:03 tacleafsysdb --agentName=C9K-LocalTasks-C9K-0 --partition= --storage-
mode=persistent --eventLogDir=. --eventLogSev=TaccSpawner/error,Mounter/
116 ? Sl 0:01 /usr/bin/python /bin/aos_agent --
class=aos.device.common.ProxyDeploymentAgent.ProxyDeploymentAgent --name=DeploymentProxyAgent
device_type=Cisco serial_numbe
117 ? Sl 0:19 /usr/bin/python /bin/aos_agent --
class=aos.device.common.ProxyCountersAgent.ProxyCountersAgent --name=CounterProxyAgent
device_type=Cisco serial_number=@(SWI
118 ? Sl 0:02 /usr/bin/python /bin/aos_agent --
class=aos.device.cisco.CiscoTelemetryAgent.CiscoTelemetryAgent --name=DeviceTelemetryAgent
serial_number=@(SWITCH_UNIQUE_ID)
700 ? Ss 0:00 sshd: guestshell [priv]
702 ? S 0:00 sshd: guestshell@pts/4
703 pts/4Ss 0:00 bash -li
732 pts/4S 0:00 sudo su -
733 pts/4S 0:00 su -
734 pts/4S 0:00 -bash
823 pts/4R+ 0:00 ps wax
Under the guest shell, Apstra stores a number of configuration files under /etc/aos.
The Apstra agent version is available in /etc/aos/version. Before executing this command, we need to attach to the aos service.
The Apstra agent is sensitive to the DNS resolution of the metadb connection. Ensure that the IP and/or DNS name from /etc/aos/aos.conf is reachable from the device eth0 management port.
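A quick way to check this is to extract the metadb host from the file and then test reachability. The sketch below uses a scratch copy with a hypothetical address; on the device, point it at /etc/aos/aos.conf and follow with ping (via chvrf management if needed).

```shell
# Scratch copy standing in for /etc/aos/aos.conf (hypothetical address).
conf=aos.conf.check
printf '[controller]\nmetadb = tbt://172.20.74.3:29731\n' > "$conf"

# Pull the host portion out of the metadb URL.
host=$(sed -n 's|^metadb = tbt://\([^:]*\):.*|\1|p' "$conf")
echo "$host"

# On the device: chvrf management ping -c 2 "$host"
```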
<snip>
+ popd
/tmp/selfgz18527139
+ rpm -Uvh --nodeps --force /tmp/selfgz18527139/aos-device-agent-1.1.0-0.1.1108.x86_64.rpm
Preparing... ################################# [100%]
installing package aos-device-agent-1.1.0-0.1.1108.x86_64 needs 55MB on the / filesystem
It takes a few minutes for the GuestShell on Cisco NX-OS to initialize the NXAPI within the LXC
container. This is normal. To account for this delay, a wait-delay has been added to the Apstra script
initialization.
By default, we should not be able to ping the Apstra server with the ping command. Below, we expect a ping from the global default routing table to the Apstra server at 172.20.156.3 to fail, but to succeed under the guest shell.
Although the preferred method of installing device system agents is from the Apstra GUI, you can
manually install Apstra agents from the CLI. Only in rare exceptions would you need to manually install
agents, which requires more effort and is error-prone. Before manually installing agents, you should have
an in-depth understanding of the various device states, configuration stages, and agent operations. For
assistance, contact "Juniper Support" on page 893.
NOTE: You can also use "Apstra ZTP" on page 694 to automatically boot and install agents and
prerequisite configuration on switches. Using Apstra ZTP is simpler and easier to support at scale
than manually installing agents.
Disable ZTP
If you are provisioning the switch without ZTP (ZTP Disabled), ensure that the ZTP process is disabled
before proceeding. After logging into the switch for the first time, run the command zerotouch disable.
This requires a device reload.
To install or manage the agent, a network-admin user must be configured on the device with a known
password.
NOTE: If you are installing an onbox agent, you don't need to configure the management VRF. If
it's needed, the agent installer automatically configures the management VRF.
The agent uses the management VRF. Move any management interfaces from the default (none) VRF
into the management VRF.
The agent uses the Management1 interface by default. On modular chassis such as the Arista 7504 or 7508, the management interface is Management0; check your platform to see whether management interfaces appear as Management1 or Management0.
CAUTION: If you are logging into this switch remotely, make sure you have an out-of-
band connection prior to issuing the vrf forwarding management command under an
interface. This immediately removes the IP address from the NIC and potentially locks
you out of your system.
Apstra supports DNS-based server discovery if you are manually configuring the agent. By default, the aos-config file looks for tbt://aos-server:29731; accordingly, you can use a DNS nameserver to resolve aos-server.
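The hosts-file flavor of this can be sketched as follows; a scratch file and hypothetical address stand in for /etc/hosts and your real server IP, and the lookup mirrors what the resolver would do.

```shell
# Scratch hosts file standing in for /etc/hosts (hypothetical address).
hosts=hosts.demo
printf '172.20.74.3 aos-server\n' > "$hosts"

# Resolve "aos-server" the way a hosts-file lookup would.
resolved=$(awk '$2=="aos-server"{print $1}' "$hosts")
echo "$resolved"
```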
NOTE: If you are installing an onbox agent, you don't need to configure HTTP API. If it's needed,
the agent installer automatically configures the HTTP API.
HTTP API and Unix sockets are used to connect to the EOS API for configuration rendering and
telemetry commands. The API must be made available for both the default route and the management
VRF. The agent connects using the unix-socket locally on the filesystem.
vrf management
no shutdown
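A complete eAPI stanza covering both the unix-socket transport and the management VRF looks like the following sketch (verify the syntax against your EOS release):

```
management api http-commands
   protocol unix-socket
   no shutdown
   !
   vrf management
      no shutdown
```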
To run EVPN with Arista devices running EOS 4.22, you must configure service routing protocols model multi-agent. You must also reboot the device to apply the configuration.
To ensure that it is added to the pristine configuration of the device, we recommend that you add multi-
agent configuration to the device before installing the agent. After adding the configuration, save the
device configuration and reload the device.
localhost(config)#wr mem
Copy completed successfully.
localhost(config)#reload now
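Putting it together, the configuration command from the text followed by the save and reload shown above can be sketched as:

```
localhost(config)#service routing protocols model multi-agent
localhost(config)#wr mem
Copy completed successfully.
localhost(config)#reload now
```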
Decommission Device
1. From the left navigation menu of the Apstra GUI, navigate to Devices > Managed Devices and select
the check box for the device to decommission.
2. Click the DECOMM button (above the table), then click Confirm to change the admin state and
return to the table view.
3. With the device still selected, click the Delete system(s) button, then click Confirm to remove the
device and return to the table view.
Erasing the startup-configuration does not delete the installed EOS extension files. You must explicitly
remove the agent. Follow these steps in order.
To use the Bash CLI, you must edit /mnt/flash/boot-extensions to remove the reference to the extension, and delete the extension from /mnt/flash/.extensions/aos-device-agent.i386.rpm. This filename varies depending on the installed Apstra version.
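From the EOS Bash shell, that edit and deletion can be sketched as follows (the exact RPM filename depends on your installed Apstra version):

```
localhost#bash
[admin@localhost ~]$ sudo sed -i '/aos-device-agent/d' /mnt/flash/boot-extensions
[admin@localhost ~]$ sudo rm /mnt/flash/.extensions/aos-device-agent*.rpm
[admin@localhost ~]$ exit
```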
Apstra-related data is retained on the filesystem in a few locations. Manually remove this data as shown below:
CAUTION: If you don't remove Apstra files (especially /mnt/flash/.aos/ which includes
checkpoint files), the next time you install Apstra software, the last configuration that
was rendered (including any quarantine configuration) replaces the existing
configuration which could shut down all interfaces.
When you're removing Apstra data, be sure to remove /mnt/flash/.aos/.
For the extension to be removed from bootup, run the command wr mem, then ensure the extension no longer appears in boot-extensions. If the RPM is still installed in available extensions, the agent may start up again.
Restart System
After uninstalling the Apstra software, reboot the system. To ensure the extension is removed from boot-extensions, select 'yes' to save the configuration.
localhost#reload
System configuration has been modified. Save? [yes/no/cancel/diff]:yes
Proceed with reload? [confirm]
When you remove the agent, configuration that is running on the switch is not modified or changed in
any way; the network is not disrupted.
The agent is available over HTTPS from the Apstra server at the base URL https://aos-server/device_agent_images/aos_device_agent.run.
++ date
+ echo 'Device Agent Installation : Wed' Oct 18 20:34:11 UTC 2017
Device Agent Installation : Wed Oct 18 20:34:11 UTC 2017
+ echo
+ UNKNOWN_PLATFORM=1
+ WRONG_PLATFORM=1
+ CANNOT_EXECUTE=126
+ '[' 0 -ne 0 ']'
+ arg_parse
+ start_aos=True
+ [[ 0 > 0 ]]
+ supported_platforms=(["centos"]="install_sysvinit_rpm" ["eos"]="install_on_arista"
["nxos"]="install_on_nxos" ["trusty"]="install_sysvinit_deb" ["icos"]="install_sysvinit_rpm"
["snaproute"]="install_sysvinit_deb" ["simulation"]="install_sysvinit_deb")
+ declare -A supported_platforms
++ /tmp/selfgz726322812/aos_get_platform
+ current_platform=eos
+ installer=install_on_arista
+ [[ -z install_on_arista ]]
+ [[ -x /etc/init.d/aos ]]
+ echo 'Stopping AOS'
Stopping AOS
+++ readlink /sbin/init
++ basename upstart
+ [[ systemd == upstart ]]
+ /etc/init.d/aos stop
+ install_on_arista
++ pwd
+ local pkg_dir=/tmp/selfgz726322812/arista
+ local to_be_installed=
+ local flash_dir_from_bash=/mnt/flash/aos-installer
+ local flash_dir_from_cli=flash:/aos-installer
+ cp aos_device_agent.img /mnt/flash/
+ mkdir -p /mnt/flash/aos-installer
++ ls /mnt/flash/.extensions/aos-device-agent-2.0.0-0.1.138.i386.rpm
+ existing_aos=/mnt/flash/.extensions/aos-device-agent-2.0.0-0.1.138.i386.rpm
+ for aos_rpm in '${existing_aos}'
++ basename /mnt/flash/.extensions/aos-device-agent-2.0.0-0.1.138.i386.rpm
+ ip netns exec default FastCli -p15 -c 'no extension aos-device-agent-2.0.0-0.1.138.i386.rpm'
++ basename /mnt/flash/.extensions/aos-device-agent-2.0.0-0.1.138.i386.rpm
+ ip netns exec default FastCli -p15 -c 'delete extension:aos-device-
agent-2.0.0-0.1.138.i386.rpm'
+ pushd /tmp/selfgz726322812/arista
/tmp/selfgz726322812/arista /tmp/selfgz726322812
++ ls aos-device-agent-2.0.0-0.1.138.i386.rpm
+ aos_rpm=aos-device-agent-2.0.0-0.1.138.i386.rpm
+ cp aos-device-agent-2.0.0-0.1.138.i386.rpm /mnt/flash/aos-installer
+ ip netns exec default FastCli -p15 -c 'copy flash:/aos-installer/aos-device-
agent-2.0.0-0.1.138.i386.rpm extension:'
Copy completed successfully.
+ ip netns exec default FastCli -p15 -c 'extension aos-device-agent-2.0.0-0.1.138.i386.rpm force'
+ popd
/tmp/selfgz726322812
+ ip netns exec default FastCli -p15 -c 'copy installed-extensions boot-extensions'
Copy completed successfully.
+ rm -rf /mnt/flash/aos-installer
+ /etc/init.d/aos config_gen
+ [[ True == \T\r\u\e ]]
+ aos_starter -f
The Arista device agent manages the running-configuration file. No other configuration files are
modified throughout the agent lifecycle. You can directly edit the configuration file located at /mnt/flash/aos-config. See "Agent Configuration file" on page 1023 for parameters. After updating the file, restart
the agent.
Name: aos-device-agent-1.2.0-0.1.137.i386.rpm
Version: 1.2.0
Release: 0.1.137
Presence: available
Status: installed
Vendor:
Summary: AOS device agent package for Arista switches
RPMS: aos-device-agent-1.2.0-0.1.137.i386.rpm 1.2.0/0.1.137
localhost#dir flash:aos*
Directory of flash:/aos*
Directory of flash:/aos
localhost#dir file:/var/log/aos
Directory of file:/var/log/aos
The agent is sensitive to the DNS resolution of the metadb connection. Ensure that the IP and/or DNS
from the config file is reachable from the device management port.
18446744073709551613)
[2016/10/20 23:04:21.540444UTC@OutgoingMountConnectionError-'warning']:(connectionName=--
NONE--,localPath=/Metadb/ReplicaStatus,remotePath=tbt://aos-server:29731/Data/ReplicaStatus?
flags=i,msg=Tac::ErrnoException: Dns lookup issue "Temporary failure in name resolution" Unknown
error 18446744073709551613)
[2016/10/20 23:04:21.541174UTC@event-'warning']:(textMsg=Failing outgoing mount to <'tbt://aos-
server:29731/Data/ReplicaStatus?flags=i','/Metadb/ReplicaStatus'>' due to code 'resynchronizing'
and reason 'Dns lookup issue "Temporary failure in name resolution" Unknown error
18446744073709551613)
List the Apstra agent processes that run alongside other management components on the switch with
the ps wax command.
When you install an Arista EOS device agent, you might receive an Unable to connect: Connection refused
error.
Agent Profiles
Agent profiles enable the logical link between device credentials, a device configuration key-value store,
and a selection of user-uploaded packages. With agent profiles, you can configure parameters for a
certain class of devices that exist in the network and edit their device agent settings as a group. Agent
profiles include the following details:
Open Options (offbox only): Passes configured parameters to offbox agents. For example, to use HTTPS as the API connection from offbox agents to devices, use the key-value pair: proto-https - port-443. You can override the following default values with open options:

Packages: Admin-provided software packages stored on the Apstra server that you can apply to each device agent that you create using the profile.
From the left navigation menu, navigate to Devices > System Agents > Agent Profiles to go to the agent
profile table view. You can create, clone, edit, and delete agent profiles.
Before creating an agent profile, upload any "packages" on page 675 that are to be included in the
agent profile.
1. From the left navigation menu, navigate to Devices > System Agents > Agent Profiles and click
Create Agent Profile.
2. Enter a unique agent profile name.
3. Select the platform from the drop-down list (optional).
4. Set a username and password (optional).
5. Add open options (optional).
6. Select package(s) (optional).
7. Click Create to create the agent profile and return to the table view.
1. Either from the table view (Devices > System Agents > Agent Profiles) or the details view, click the
Edit button for the profile to edit.
2. Make your changes.
3. Click Update to update the profile and return to the table view.
1. Either from the table view (Devices > System Agents > Agent Profiles) or the details view, click the
Delete button for the profile to delete.
2. Click Delete to delete the profile and return to the table view.
Packages (Devices)
Packages Overview
You can extend Apstra capabilities by adding support for network operating systems (NOS), new
telemetry collectors, third party software, and more. You upload packages (sometimes referred to as
plugins) to the Apstra server, then include them in device agents and "agent profiles" on page 673. Valid
package types include .egg, .whl (Python wheel package) and .gz. One package can include one or more
collectors for one or more OS platforms.
Upload Packages
3. For each package to upload, either click Choose File and navigate to the downloaded file, or drag and
drop the file into the dialog window.
4. Click Upload, then close the dialog to return to the table view.
Pristine Config
IN THIS SECTION
CAUTION: Manual modifications to the Pristine Config are not validated. Mistakes can
lead to full erasure of the device, potentially causing a service-impacting outage. Never
modify the pristine config directly unless there is no alternative. For assistance, contact
"Juniper Support" on page 893.
1. From the left navigation menu, navigate to Devices > Managed Devices and click the Management IP
of the device to edit.
2. Click the Pristine Config tab (top-left), then click the Edit pristine config button (under checkpoint on
the left).
3. Make your changes to the configuration. (For information about what should and should not be
included in pristine configurations, see "Create Onbox Agent" on page 614 and "Create Offbox
Agent" on page 617, as applicable.)
4. If the device is deployed, and you absolutely need to change pristine config, you can force the update
without undeploying first (as of Apstra version 4.2.0). Never use this option unless you are otherwise
directed by the Juniper support team. This update has the power to impact your entire network. To
proceed, check the Force Update check box.
3. From the left navigation menu in the Apstra GUI, navigate to Devices > Managed Devices and click
the Management IP of the device to edit.
4. Click the Pristine Config tab (top-left), then click the Update From Device button (top-right).
5. Click Update to update Pristine Config from the device.
Verify the Pristine Config. You have copied the running config of the device in the out of service state,
which should be Discovery 1 config. It may include additional configuration such as interface "speed"
commands. You can edit Pristine Config again and delete the additional configuration manually. Contact
"Juniper Support" on page 893 for assistance as needed.
Telemetry
IN THIS SECTION
Services | 679
Telemetry (Devices)
Services
From the left navigation menu, navigate to Devices > Telemetry > Services to go to a summary of
telemetry services.
Service Description
ARP: ARP telemetry shows an ARP table. You can query this information via API. Anomalies are not generated.
LAG: LAG telemetry shows the health of all the LACP bonds facing servers and between MLAG switches.
LLDP (Cabling): When you assign a device with deploy mode Ready to a blueprint, the device enters the Ready stage (previously known as Discovery 2). Every node is part of intent. On each link, there are expected neighbor hostnames, interfaces, and connections. Physical cabling and links must match the specified intent. Any deviations result in anomalies that you must correct either by recabling to match the blueprint or by modifying the blueprint to match the cabling already in place.
Utilization (Onbox agents only): Utilization telemetry allows the network operator to view vital statistics on the device: CPU and memory utilization. No anomalies are generated.
Service Registry
IN THIS SECTION
From the left navigation menu, navigate to Devices > Service Registry to go to the service registry. You
can view, import and delete telemetry service schemas via the GUI (as of Apstra version 4.0.1). For
information about developing extensible telemetry, see the "Extensible Telemetry Guide" on page 931.
1. From the left navigation menu, navigate to Devices > Service Registry and click Import Service
Schemas.
2. Either click Choose File and navigate to the file on your computer, or drag and drop the file from your
computer into the dialog window and click Import.
1. Either from the table view (Devices > Service Registry) or the details view, click the Delete button for
the service to delete.
2. Click Delete Service Schema to remove the schema from the system and return to the service
registry screen.
Interval: How frequently the service is configured to run on the device (in seconds)
Input: The input that is provided to the service for its processing
Max Run Count: User-specified maximum number of times for the collector to run
Execution Time: The time it took for collection during the last iteration (in milliseconds)
Waiting Time: A device runs multiple collectors. If some collectors monopolize CPU, other collector executions are deferred. Waiting time is the amount of time that the collector was deferred (in milliseconds).
Last Run Timestamp: Timestamp at which the collector was scheduled to run
Last Error Timestamp: Timestamp at which the collector last reported an error
From the collection statistics screen, you can see if there are any service errors that were generated
during the telemetry collection process (in the Error message column). Click the Show error link to see
its details.
From this screen you can also go to all telemetry services for a specific device by clicking the device name.
To go to collection statistics for all services on a specific device, click Collection Statistics.
686
Telemetry Streaming
The Apstra server transmits the following content to user-defined end-hosts for further processing of
data and for use within your own internal systems:
Data streams are implemented with Google Protocol Buffers (GPB). GPBs define and implement the
format of data streams. GPBs allow software developers to use a language-agnostic definition of events
and data types.
GPB offers support for C++, Python, and Go, and possibly more languages in the future. Example Python code named "AOSOM Streaming" on page 972 is available for GPBs. The AOSOM Streaming demo software is open source and you can download it from GitHub: https://github.com/Apstra/aosom-streaming.
Because GPB supports C++, Python, and Go, it integrates nicely with Apstra's C++ infrastructure, while infrastructure engineers can use Python or Go for the client.
{
"items": [
{
"actual": {
"value": "missing"
},
"anomaly_type": "route",
"expected": {
"value": "up"
},
"id": "547bcbc9-963f-4477-904b-712482aa6428",
"identity": {
"anomaly_type": "route",
"destination_ip": "0.0.0.0/0",
"system_id": "000C29202526"
},
"last_modified_at": "2017-06-09T17:28:13.773324Z",
"role": "unknown",
"severity": "critical"
},
{
"actual": {
"value": "partial"
},
"anomaly_type": "route",
"expected": {
"value": "up"
},
"id": "92a6804a-42ff-4cbd-a52b-5c6acadc1d23",
"identity": {
"anomaly_type": "route",
"destination_ip": "0.0.0.0/0",
"system_id": "000C29EA59A7"
},
"last_modified_at": "2017-06-09T17:28:44.787604Z",
"role": "unknown",
"severity": "critical"
},
{
"actual": {
"value": "partial"
},
"anomaly_type": "route",
"expected": {
"value": "up"
},
"id": "25886eb7-e629-4f56-9479-686fe1e53c64",
"identity": {
"anomaly_type": "route",
"destination_ip": "0.0.0.0/0",
"system_id": "000C29E808A1"
},
"last_modified_at": "2017-06-09T17:28:13.773423Z",
"role": "unknown",
"severity": "critical"
},
{
"actual": {
"value": "partial"
},
"anomaly_type": "route",
"expected": {
"value": "up"
},
"id": "2b7a77ac-fd12-41fe-acfc-a53678b177ed",
"identity": {
"anomaly_type": "route",
"destination_ip": "0.0.0.0/0",
"system_id": "000C2982786A"
},
"last_modified_at": "2017-06-09T17:28:13.773389Z",
"role": "unknown",
"severity": "critical"
},
{
"actual": {
"value": "partial"
},
"anomaly_type": "route",
"expected": {
"value": "up"
},
"id": "50a1e0d6-e483-4bc4-bed8-cbc5666569f8",
"identity": {
"anomaly_type": "route",
"destination_ip": "0.0.0.0/0",
"system_id": "000C2998C7E7"
},
"last_modified_at": "2017-06-09T17:28:13.773453Z",
"role": "unknown",
"severity": "critical"
},
{
"actual": {
"value": "down"
},
"anomaly_type": "bgp",
"expected": {
"value": "up"
},
"id": "ab9f4273-e86f-456c-8cc7-7115f3aafa45",
"identity": {
"anomaly_type": "bgp",
"destination_asn": "1",
"destination_ip": "10.1.1.1",
"source_asn": "65417",
"source_ip": "10.0.0.5",
"system_id": "000C29202526"
},
"last_modified_at": "2017-06-09T17:28:13.727949Z",
"role": "to_external_router",
"severity": "critical"
}
],
"count": 6
}
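A response like the one above is straightforward to post-process. The sketch below counts anomalies by type and by system; the field names match the JSON shown, and the embedded sample is an abridged version of it:

```python
import json
from collections import Counter

# Abridged version of the anomaly payload shown above.
payload = json.loads("""
{
  "items": [
    {"anomaly_type": "route", "severity": "critical",
     "identity": {"system_id": "000C29202526"}},
    {"anomaly_type": "route", "severity": "critical",
     "identity": {"system_id": "000C29EA59A7"}},
    {"anomaly_type": "bgp", "severity": "critical",
     "identity": {"system_id": "000C29202526"}}
  ],
  "count": 3
}
""")

# Tally anomalies by type and by the system that raised them.
by_type = Counter(item["anomaly_type"] for item in payload["items"])
by_system = Counter(item["identity"]["system_id"] for item in payload["items"])
```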
Apstra uses the CLI to retrieve telemetry from Junos OS and Junos OS Evolved devices.
Service Command
/network-instances/network-instance/mac-table/entries/entry
Cisco telemetry is derived from the NX-API with 'show' commands and embedded event manager
applets that provide context data to the device agent while it is running. Most commands are run as
their CLI version wrapped into JSON output.
Service Command
Arista EOS uses a few techniques from the EOS SDK API to subscribe directly to event notifications from the switch, for example 'interface down' or 'new route' notifications. With event-based notification, you do not have to continually run 'show' commands every few seconds; the EOS SDK provides the information as soon as the switch has the status.
CAUTION: Event-based subscription requires the EOSProxySDK agent. For details, see
"Arista Device Agents" on page 652.
When the Arista API does not provide information (LLDP statistics, for example), Apstra runs CLI commands at a regular interval to derive telemetry expectations.
Service Command
Hostname: hostname
ARP: ip -4 neigh
MLAG: clagctl -j
Debugging Telemetry
Enable trace options to debug telemetry output. On the device agent, set the following options in /etc/aos.conf (the usual location), then restart the agent.
[DeviceTelemetryAgent]
log_config = aos.infra.core.entity_util:DEBUG,aos.device.DeviceTelemetryAgent:DEBUG
trace_config = MountFacility/0-8,DHT,AgentHeartbeat,TelemetryProxy
Log files containing trace information for telemetry agents are then viewable in /var/log/aos/DeviceTelemetryAgent.<pid>.<timestamp>.log. These log files are verbose, but they may point to various rendering and parsing issues in the environment. When you finish troubleshooting, be sure to disable logging.
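Because each agent restart produces a fresh DeviceTelemetryAgent.<pid>.<timestamp>.log file, it is handy to locate the most recently modified one. A small sketch following the naming convention above (the helper itself is illustrative, not an Apstra tool):

```python
import glob
import os

def latest_telemetry_log(log_dir="/var/log/aos"):
    """Return the most recently modified DeviceTelemetryAgent log, or None."""
    pattern = os.path.join(log_dir, "DeviceTelemetryAgent.*.log")
    logs = glob.glob(pattern)
    # Newest by modification time; empty directory yields None.
    return max(logs, key=os.path.getmtime) if logs else None
```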
Apstra ZTP
IN THIS SECTION
NOTE: This document applies to Apstra ZTP 4.2 versions. Use the Apstra ZTP version
corresponding to the Juniper Apstra version you are using.
Apstra ZTP is a Zero-Touch-Provisioning server for data center infrastructure systems. Apstra ZTP
enables you to bootstrap Apstra data center devices without considering the differences in underlying
NOS mechanisms. ZTP, from an Apstra perspective, is a process that takes a device from initial boot to a
point where it is managed by Apstra via device system agents.
Depending on how ZTP is configured, the process may include the following capabilities:
• A DHCP service
NOTE: To prevent being locked out of a device when there is a problem during the ZTP process,
ZTP uses default, hard-coded credentials. These credentials are:
• root / admin
• aosadmin / aosadmin
You can use an Apstra-provided VM image (.ova, .qcow2.gz, .vhdx.gz) or build your own ZTP server and use
the Apstra-provided device provisioning scripts as part of the existing ZTP/DHCP process to
automatically install agents on devices as part of the boot process. The Apstra ZTP reference
implementation consists of the following three phases:
• The device receives the assigned IP address and a pointer to a script to execute (or an OS image
to install if using the Apstra-provided VM image).
2. Initialization Phase
• The device executes the downloaded script to prepare it to be managed. This includes verifying
that the device is running a supported OS.
• The ZTP script makes an API call to install a device system agent on the device.
Apstra ZTP runs as an Ubuntu 22.04.3 LTS server running MySQL, DHCP, HTTP, and TFTP servers. You can configure the Apstra ZTP server by editing configuration files such as dhcpd.conf and ztp.json. In addition, you can edit the Apstra ZTP configuration via the Apstra GUI. The table below shows the minimum server specifications for a production environment:
Resource Setting
Memory 2 GB
CPU 1 vCPU
Disk Storage 64 GB
Device agents → DHCP Server, udp/67 → udp/68: DHCP client renewals and broadcast requests
Device agents → Apstra ZTP, any → tcp/80: Bootstrap and API scripts
Arista and Cisco device agents → Apstra ZTP, any → udp/69: TFTP for POAP and ZTP
Apstra ZTP → Controller, any → tcp/443: Device System Agent Installer API
In addition to the ZTP-specific network requirements, the Apstra ZTP server and device agents require
connectivity to the controller. Refer to Required Communication Ports in the Juniper Apstra Installation
and Upgrade Guide for more information.
You can monitor device ZTP status from the Apstra GUI. From the left navigation menu, navigate to
Devices > ZTP Status > Devices.
Each device interacting with DHCP and ZTP is listed along with its System ID (serial number) if known,
ZTP Status, ZTP Latest Event and when the device status was last updated.
To see the full DHCP and ZTP log for the device, click the "Show Log" icon.
Any device that interacts with DHCP or ZTP is listed. If you don't need the logs for a device anymore,
click the Delete button.
root@apstra-ztp:/containers_data/logs# ls -l
total 7132
-rw-r--r-- 1 root root 6351759 Oct 28 17:47 debug.log
drwxr-xr-x 2 root root 4096 Oct 27 19:20 devices
-rw------- 1 root root 0 Oct 23 20:02 dhcpd.leases
-rw-r--r-- 1 root root 926980 Oct 28 17:39 info.log
-rw------- 1 root root 58 Oct 23 20:02 README
-rw------- 1 root root 469 Oct 27 02:13 rsyslog.log
root@apstra-ztp:/containers_data/logs# tail info.log
2020-10-28 17:16:38,786 root.status INFO Incoming: dhcpd dhcpd[18]: DHCPACK on
192.168.59.9 to 04:f8:f8:6b:36:91 via eth0
2020-10-28 17:18:04,299 root.status INFO Incoming: dhcpd dhcpd[18]: DHCPREQUEST for
192.168.59.9 from 04:f8:f8:6b:36:91 via eth0
2020-10-28 17:18:04,300 root.status INFO Incoming: dhcpd dhcpd[18]: DHCPACK on
192.168.59.9 to 04:f8:f8:6b:36:91 via eth0
2020-10-28 17:19:29,250 root.status INFO Incoming: dhcpd : -- MARK --
2020-10-28 17:19:29,442 root.status ERROR Failed to update status of all
containers: /api/ztp/service 404 b'{"errors":"Resource not found"}'
2020-10-28 17:33:29,353 root.status INFO Incoming: tftp : -- MARK --
2020-10-28 17:33:29,538 root.status ERROR Failed to update status of all
containers: /api/ztp/service 404 b'{"errors":"Resource not found"}'
2020-10-28 17:33:34,768 root.status INFO Incoming: status : -- MARK --
2020-10-28 17:39:29,349 root.status INFO Incoming: dhcpd : -- MARK --
2020-10-28 17:39:29,539 root.status ERROR Failed to update status of all
containers: /api/ztp/service 404 b'{"errors":"Resource not found"}'
root@apstra-ztp:/containers_data/logs#
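Log lines like those above follow a regular shape, so DHCP lease activity can be pulled out with a simple pattern. A sketch run against the sample lines (the regular expression is an assumption based on the dhcpd output shown):

```python
import re

# Matches dhcpd DHCPACK lines as they appear in info.log.
ACK = re.compile(r"DHCPACK on (\d+\.\d+\.\d+\.\d+) to ([0-9a-f:]+)")

def leases(log_text):
    """Return (ip, mac) pairs for every DHCPACK entry in the log text."""
    return ACK.findall(log_text)

sample = (
    "2020-10-28 17:16:38,786 root.status INFO Incoming: dhcpd dhcpd[18]: "
    "DHCPACK on 192.168.59.9 to 04:f8:f8:6b:36:91 via eth0\n"
    "2020-10-28 17:19:29,250 root.status INFO Incoming: dhcpd : -- MARK --\n"
)
pairs = leases(sample)
```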
You can monitor the ZTP services on the Apstra ZTP server from the Apstra GUI. From the left
navigation menu, navigate to Devices > ZTP Status > Services.
[image]
Each service name includes its Docker IP address, service status and when the service status was last
updated.
RELATED DOCUMENTATION
admin@apstra-ztp:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES
5f6609074deb apstra/nginx:4.2.0-34 "sh /init.sh" 29 hours ago Up 29 hours
0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp nginx
3f0dbe2be17b apstra/tftp:4.2.0-34 "sh /init.sh" 29 hours ago Up 29 hours
0.0.0.0:69->69/udp, :::69->69/udp tftp
1e05ab10a552 apstra/status:4.2.0-34 "sh /init.sh" 29 hours ago Up 29 hours
8080/tcp status
cd7aa8ad372b apstra/dhcpd:4.2.0-34 "sh /init.sh" 29 hours ago Up 28
hours dhcpd
12e35bc71b20 mysql:8.0.33 "docker-entrypoint.s…" 29 hours ago Up 29 hours
3306/tcp, 33060/tcp db
admin@apstra-ztp:~$
5. If you don't want to use the Apstra ZTP DHCP server, stop and disable the dhcpd container.
dhcpd
admin@apstra-ztp:~$
a. SSH into the Apstra server as user admin. (ssh admin@<apstra-server-ip>, where <apstra-server-ip> is the IP address of the Apstra server.)
b. Edit the /etc/netplan/01-netcfg.yaml file to configure the static management IP address. See example
below. (For more information about using netplan, see https://netplan.io/examples)
device_ztp. The device_ztp role allows users with that role to make API calls to the controller to
request device system agent installation.
For example, if you’re using Juniper Junos OS or Junos OS Evolved devices, you must ensure the
server contains the following, so the device loads the proper configuration file.
DHCP configuration files are on the Apstra ZTP VM in the /containers_data/dhcp directory.
NOTE: All configuration files are owned by root. Run commands as root with sudo, or become root first with the sudo -s command.
group {
option tftp-server-name "192.168.59.4";
subnet 192.168.59.0 netmask 255.255.255.0 {
range 192.168.59.21 192.168.59.99;
option routers 192.168.59.1;
}
host my-switch {
hardware ethernet 34:17:eb:1e:41:80;
fixed-address 192.168.59.100;
}
}
range: Range of dynamic DHCP IP addresses. Ensure the full range is available and that no statically configured IP addresses from that range are in use.
fixed-address: Static IP address for the device with the specified hardware ethernet MAC address. Use the switch MAC address.
ddns-update-style none;
option domain-search "example.internal";
option domain-name "example.internal";
option domain-name-servers 8.8.8.8, 8.8.4.4;
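The guidance above, that statically configured addresses must not fall inside the dynamic range, is easy to check programmatically. A sketch using Python's ipaddress module with the values from the example dhcpd.conf:

```python
import ipaddress

def fixed_address_ok(fixed, range_start, range_end):
    """True if the fixed address lies outside the dynamic DHCP range."""
    addr = ipaddress.ip_address(fixed)
    start = ipaddress.ip_address(range_start)
    end = ipaddress.ip_address(range_end)
    return not (start <= addr <= end)

# Values from the example dhcpd.conf above: range 192.168.59.21-99,
# fixed-address 192.168.59.100 for my-switch.
ok = fixed_address_ok("192.168.59.100", "192.168.59.21", "192.168.59.99")
bad = fixed_address_ok("192.168.59.50", "192.168.59.21", "192.168.59.99")
```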
d. If you're using ZTP with SONiC, you must edit the following:
class "sonic" {
match if (substring(option host-name, 0, 5) = "sonic");
option sonic-provision-url "tftp://192.168.59.4/ztp.py";
}
e. After modifying any DHCP configuration, restart the Apstra ZTP DHCP process with the sudo
docker restart dhcpd command.
Configure the controller IP and Apstra ZTP username in the /containers_data/status/app/aos.conf file on
the Apstra ZTP server.
{
"ip": "192.168.59.3",
"user": "ztp",
"password": "ztp-user-password"
}
The ztp.json file contains all configuration for the Apstra ZTP script ztp.py.
More specific data takes precedence over other data. For example, data for a specific serial
number takes precedence over any other data, then model, then platform, then finally default
data.
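That precedence order (serial number, then model, then platform, then default data) can be sketched as a simple lookup. The section keys mirror the ztp.json examples in this guide; the helper itself and the "defaults" key are illustrative, not part of Apstra ZTP:

```python
def resolve(ztp, serial, model, platform, key):
    """Return the most specific value for key: serial > model > platform > defaults."""
    for section in (serial, model, platform, "defaults"):
        values = ztp.get(section, {})
        if key in values:
            return values[key]
    return None

# Sections modeled on the ztp.json examples; "defaults" is a hypothetical
# fallback section for illustration.
ztp = {
    "junos": {"device-user": "admin"},
    "QFX10002-36Q": {"junos-versions": ["21.2R1-S2.2"]},
    "defaults": {"device-user": "aosadmin"},
}
user = resolve(ztp, "WS3718350232", "QFX10002-36Q", "junos", "device-user")
```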
junos-image: Filename of the Juniper Junos TGZ image to load if the running version does not match a version in the junos-versions list. By default, the image is loaded via TFTP from the ZTP server's /container_data/tftp/ directory. For example: "junos-image": "jinstall-host-qfx-5-20.2R2-S3.5-signed.tgz"
sonic-image: Filename of the SONiC ONIE BIN image to load if the running version does not match a version in the sonic-versions list. By default, the image is loaded via TFTP from the ZTP server's /container_data/tftp/ directory. For example: "sonic-image": "sonic-3.1.0a-bcm.bin"
nxos-image: Filename of the NX-OS image to load if the running version does not match a version in the nxos-versions list. By default, the image is loaded via TFTP from the ZTP server's /container_data/tftp/ directory. For example: "nxos-image": "nxos.9.3.6.bin"
eos-image: Filename of the Arista EOS SWI image to load if the running version does not match a version in the eos-versions list. By default, the image is loaded via TFTP from the ZTP server's /container_data/tftp/ directory. For example: "eos-image": "EOS-4.24.5M.swi"
{
"junos": {
"junos-versions": ["21.2R1-S2.2"],
"junos-image": "http://10.85.24.52/
juniper/21.2R1-S2.2/jinstall-host-qfx-5e-
x86-64-21.2R1-S2.2-secure-signed.tgz",
"device-root-password": "root123",
"device-user": "admin",
"device-user-password": "admin",
"system-agent-params": {
"platform": "junos",
"agent_type": "offbox",
"job_on_create": "install"
}
},
"QFX10002-36Q": {
"junos-versions": ["21.2R1-S2.2"],
"junos-image": "http://10.85.24.52/
juniper/21.2R1-S2.2/jinstall-host-qfx-10-f-
x86-64-21.2R1-S2.2-secure-signed.tgz"
},
"JNP10002-60C [QFX10002-60C]": {
"junos-versions": ["21.2R1-S1.3"],
"junos-image": "http://10.85.24.52/
juniper/21.2R1-S1.3/junos-vmhost-install-qfx-
x86-64-21.2R1-S1.3.tgz"
}
}
For REST API documentation for all available system-agent-params options in /api/system-agents, refer
to Swagger.
RELATED DOCUMENTATION
IN THIS SECTION
EX switches require Junos OS version 21.2 or higher. The Python module that's required for ZTP is
missing on EX switches using Junos OS versions below 21.2.
Apstra ZTP manages the bootstrap and lifecycle of Juniper Junos devices. It uses a custom script to create offbox agents, create local users, and set other system configuration. The ZTP process copies a new OS image to the switch. Before installing Apstra ZTP, ensure that the switch has sufficient disk space for the OS image.
{
"junos": {
"junos-versions": [ "20.2R2-S3.5" ],
"junos-image": "http://192.168.59.4/jinstall-host-qfx-5-20.2R2-S3.5-signed.tgz",
"device-root-password": "root-password",
"device-user": "admin",
"device-user-password": "admin-password",
"custom-config": "junos_custom.sh",
"system-agent-params": {
"platform": "junos",
"agent_type": "offbox",
"job_on_create": "install"
}
}
}
IN THIS SECTION
{
"junos-evo": {
"junos-evo-versions": [ "20.4R3-S1.3-EVO" ],
"junos-evo-image": "http://192.168.59.4/junos-evo-install-qfx-ms-fixed-x86-64-20.4R3-S1.3-
EVO.iso",
"device-root-password": "root-password",
"device-user": "admin",
"device-user-password": "admin-password",
"custom-config": "junos_custom.sh",
"system-agent-params": {
"platform": "junos",
"agent_type": "offbox",
"job_on_create": "install"
}
}
}
The following additional fields can be used for dual RE platforms, such as PTX10004.
"dual-routing-engine": true,
"management-ip": "10.161.37.7",
"management-gw-ip": "10.161.39.254",
"management-subnet-prefixlen": "21",
"management-master-ip": "10.161.37.8",
"management-backup-ip": "10.161.37.9",
Apstra ZTP uses a Python script to provision the device during ZTP. To allow the Python script (ztp.py) to
run on a device that is not Junos OS Evolved, additional configuration is required. Use the
junos_apstra_ztp_bootstrap.sh script to bootstrap Apstra ZTP on Junos. It downloads and runs the ZTP
script.
Junos OS Evolved devices don't require this bootstrap; they run the Apstra ZTP python script (ztp.py)
directly.
When configuring custom-config for Juniper Junos devices, refer to the example junos_custom.sh, a bash
executable file executed during the ZTP process. It can set system configuration (such as Syslog, NTP,
SNMP authentication) prior to device system agent installation.
NOTE: Junos OS and Junos OS Evolved platforms with dual-RE setups require the set system
commit synchronize command. Without this configuration, the ZTP process fails. We recommend
adding the command to the junos_custom.sh file.
#!/bin/sh
SOURCE_IP=$(cli -c "show conf interfaces em0.0" | grep address | sed 's/.*address \([0-9.]*\).*/\1/')
# Syslog
SYSLOG_SERVER="192.168.59.4"
SYSLOG_PORT="514"
# NTP
NTP_SERVER="192.168.59.4"
# SNMP
SNMP_NAME="SAMPLE"
SNMP_SERVER="192.168.59.3"
# Syslog
cli -c "configure; \
set system syslog host $SYSLOG_SERVER any notice ; \
set system syslog host $SYSLOG_SERVER authorization any ; \
set system syslog host $SYSLOG_SERVER port $SYSLOG_PORT ; \
set system syslog host $SYSLOG_SERVER routing-instance mgmt_junos ; \
commit and-quit"
cli -c "configure; \
set system syslog file messages any notice ; \
set system syslog file messages authorization any ; \
commit and-quit"
# NTP
cli -c "configure; \
set system ntp server $NTP_SERVER routing-instance mgmt_junos ; \
set system ntp source-address $SOURCE_IP routing-instance mgmt_junos ; \
commit and-quit;"
# SNMP
cli -c "configure; \
set snmp name $SNMP_NAME; \
set snmp community public clients $SNMP_SERVER/32 ; \
set snmp community public routing-instance mgmt_junos ; \
set snmp routing-instance-access access-list mgmt_junos ; \
commit and-quit"
To erase (zeroize) the device and restart the Juniper Junos ZTP process:
When in ZTP mode, the Juniper switch downloads the ztp.py and ztp.json files to the /var/preserve/apstra
directory. For diagnostics, take note of the /var/preserve/apstra/aosztp.log file.
You can find additional useful messages in /var/log/messages (search for 'ztp').
IN THIS SECTION
NOTE: Apstra ZTP 4.0 used with Apstra version 4.0 supports SONiC Enterprise Distribution devices. Earlier versions of Apstra ZTP and Apstra software do not support SONiC devices.
Apstra ZTP manages the bootstrap and life-cycle of Enterprise SONiC devices with onbox agents
installed. It uses a custom script to create onbox agents, create local users and set other system
configuration.
As part of the ZTP process a new OS image is copied to the switch. Before installing Apstra ZTP ensure
that the switch has sufficient disk space for the OS image.
NOTE: If you are using ONIE to install Enterprise SONiC on a device, you must copy the image to
the /containers_data/tftp directory and rename it to onie-installer or another ONIE download name
(onie-installer-x86_64-dell_z9100_c2538-r0 for example). When rebooting in ONIE, the device
searches for this file on the HTTP then TFTP server. If the file is not found, ZTP fails. Once ONIE
SONiC installation successfully completes, the SONiC device starts ZTP automatically.
{
"sonic": {
"sonic-versions": [ "SONiC-OS-3.2.0-Enterprise_Advanced" ],
"sonic-image": "http://192.168.59.4/sonic-3.2.0-GA-adv-bcm.bin",
"device-root-password": "root-password",
"device-user": "admin",
"device-user-password": "admin-password",
"custom-config": "sonic_custom.sh",
"system-agent-params": {
"agent_type": "onbox",
"job_on_create": "install"
}
}
}
NOTE: If you use a device-user other than admin (aosadmin for example), Apstra ZTP creates the new user, but it does not change the password for the default SONiC admin user (set to YourPaSsWoRd by default).
When configuring custom-config for Enterprise SONiC devices, refer to the example sonic_custom.sh, a bash
executable file executed during the ZTP process. It can set system configuration (such as Radius
authentication) prior to device system agent installation.
#!/bin/bash
To restart the SONiC ZTP process, use the sudo ztp enable and sudo ztp run commands.
IN THIS SECTION
As part of the ZTP process, a new OS image is copied to the switch. Before installing Apstra ZTP, ensure that the switch has sufficient disk space for the OS image.
If ZTP is installing a Cisco NX-OS image, you must copy the image (nxos.7.0.3.I7.7.bin for example) to the /containers_data/tftp directory, ensuring correct file permissions.
{
"nxos": {
"nxos-versions": [ "9.2(2)" ],
"nxos-image": "http://192.168.0.6/nxos.9.2.2.bin",
"device-root-password": "admin-password",
"custom-config": "nxos_custom.sh",
"device-user": "admin",
"device-user-password": "admin-password",
"system-agent-params": {
"agent_type": "onbox",
"job_on_create": "install"
}
}
}
This configuration enables secure offbox agent HTTPS (port 443) between the offbox agent on the
server and the device API.
When configuring custom-config for Cisco NX-OS devices, refer to the example nxos_custom.sh, a bash
executable file executed during the ZTP process. It can execute NX-OS configuration commands that set
system configuration, such as the SSH login banner, before installing the device system agent.
NOTE: You must add copp profile strict via the NX-OS custom-config file.
#!/bin/sh
If you're using Apstra ZTP to prepare a Cisco NX-OS device for use with offbox agents, the custom-config file must enable the following NX-OS configuration commands.
feature nxapi
feature bash-shell
feature scp-server
feature evmed
copp profile strict
nxapi http port 80
You can use the following nxos_custom.sh to add these along with a banner.
#!/bin/sh
/isan/bin/vsh -c "conf ; feature nxapi ; nxapi http port 443 ; feature bash-shell ; feature scp-
server ; feature evmed ; copp profile strict ; banner motd ~
########################################################
BANNER BANNER BANNER BANNER BANNER BANNER BANNER BANNER
########################################################
Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Donec gravida, arcu vitae tincidunt sagittis, ligula
massa dignissim blah, eu sollicitudin nisl dui at massa.
Aliquam erat volutpat. Vitae pellentesque elit at
pulvinar volutpat. Etiam lacinia derp lacus, non
pellentesque nunc venenatis rhoncus.
########################################################
~"
NOTE: If an agent is already installed on the device, remove the agent before you restart the device ZTP process, either via the UI device agent installer or manually via the device CLI.
IN THIS SECTION
Arista EOS
NOTE: Apstra ZTP has limited support and known issues for virtual Arista EOS (vEOS) devices.
• ZTP EOS upgrades are not supported on vEOS devices. EOS versions for vEOS devices must match the eos-versions set in the ztp.json file.
• ZTP logging to the controller does not work for vEOS devices due to the lack of a device serial number. This will be addressed in a future version.
As part of the ZTP process a new OS image is copied to the switch. Before installing Apstra ZTP ensure
that the switch has sufficient disk space for the OS image.
switch1#dir flash:
Directory of flash:/
<...>
If ZTP is installing an Arista EOS image, you must copy the image (EOS-4.22.3M.swi for example) to the /containers_data/tftp directory.
{
"eos": {
"eos-versions": [ "4.24.5M" ],
"eos-image": "http://192.168.59.3/EOS-4.24.5M.swi",
"custom-config": "eos_custom.sh",
"device-root-password": "admin-password",
"device-user": "admin",
"device-user-password": "admin-password",
"system-agent-params": {
"agent_type": "onbox",
"job_on_create": "install"
}
}
}
When configuring custom-config for Arista EOS devices, refer to the example eos_custom.sh, a bash executable file executed during the ZTP process. It can execute EOS configuration commands to set the SSH login banner or other system configuration prior to device system agent installation.
#!/bin/sh
FastCli -p 15 -c $'conf t\n service routing protocols model multi-agent\n hardware tcam\n system
profile vxlan-routing\n banner login\n
########################################################
UNAUTHORIZED ACCESS TO THIS DEVICE IS PROHIBITED
########################################################\n EOF\n'
NOTE: During the ZTP process, the EOS banner login is set to text saying "The device is in Zero Touch Provisioning mode ...". By default, the ZTP script copies this to the permanent configuration.
To prevent this, you must configure custom-config to point to a script (eos_custom.sh for example) that configures a different banner login or configures no banner login.
NOTE: If you're using EOS 4.22, Apstra recommends adding the service routing protocols model multi-agent command to the device configuration during ZTP, along with any other configuration that requires a device reboot to activate (system profile vxlan-routing for example). This ensures that this configuration is applied on reboot and added to the device pristine configuration.
CAUTION: If an agent is already installed on the device, remove the agent extension before you restart the device ZTP process, either via the UI Device Agent Installer or manually via the device CLI.
l2-virtual-001-leaf1#sho extensions
Name Version/Release Status Extension
----------------------------------------- ------------------ --------- ---------
aos-device-agent-3.1.0-0.1.205.i386.rpm 3.1.0/0.1.205 A, I 1
Design
IN THIS SECTION
Templates | 754
Tags | 777
Logical Devices
IN THIS SECTION
• Specifying speed and roles for specific ports (For example, the 48th port is always a leaf, or the speed
of the 10th port is always 1 Gbps).
• Preparing for port speed transformations (For example, transforming one 40 GbE port into four 10 GbE ports).
• Using non-standard port speeds (For example, for a 1 GbE SFP in a 10 GbE port, the underlying
hardware is automatically configured correctly.)
• Solving for automatic cable map generation that takes into account failure domains on modular
systems (for example, a line card).
Name Description
Logical device name - A unique name to identify the logical device, 64 characters or fewer
Panel - Port layout based on IP fabric, forwarding engine, line card (slot) or physical layout. A panel contains one or more port groups. A logical device includes one or more panels.
Port Group - A collection of ports with the same speed and role(s)
• Superspine - used for superspines facing spines on 5-stage Clos data center fabric
• Spine - used for spines facing leafs, or for spines facing superspines on 5-stage Clos data
center fabric
• Access (Junos only) - Port is configured to face an access device. To learn more about this
feature and its limitations, contact "Juniper Support" on page 893.
• Peer (link between two leaf devices) - used for MLAG domains to provide a trunk between
two leaf switches
• Unused - configuration is not rendered and ports are not allocated (use to specify a dead
port, for example)
• Generic - Certain roles are not specified in logical devices (for example, a firewall, external
router, bare metal server, or load balancer).
From the left navigation menu, navigate to Design > Logical Devices to go to logical devices in the global
catalog. Apstra ships with many predefined logical devices. Click a logical device name in the table to see
its details. For our example, we'll use a logical device consisting of 7 ports with varying roles.
Logical devices are mapped to device profiles (specific vendor models) and they're used in rack types and
rack-based templates.
RELATED DOCUMENTATION
1. From the left navigation menu, navigate to Design > Logical Devices and click Create Logical Device.
2. Enter a unique logical device name.
3. The default panel layout consists of 24 ports (2 rows of 12 ports each). For a different layout, select
the number and arrangement of ports to match your requirements by dragging from the bottom-right
corner of the layout.
4. Select the ports for the port group by dragging to select contiguous ports, or by clicking individual
ports. Clicking a port again deselects it.
5. Select port speed, and applicable role(s) for the selected ports.
6. Click Create Port Group (bottom-middle) to create the port group.
7. If unassigned ports remain, repeat the previous two steps until all ports are assigned. For any ports
that will not be used, assign them the Unused role.
8. To add a panel, click Add Panel (bottom-middle) and repeat the steps as for the first panel.
9. Click Create (bottom-right) to create the logical device and return to the table view.
RELATED DOCUMENTATION
1. From the left navigation menu, navigate to Design > Logical Devices and click Create Logical Device.
2. Enter a descriptive name; it's helpful when referring to the logical device later. For our example we
entered 96x10-8x40-2, which represents the following characteristics:
[image]
3. For the port group in the first panel, drag the bottom-right corner of the port layout to change the
default 2x12 configuration to a 3x32 configuration. Leave the number of ports (96) and speed (10
Gbps) as is, and select the Generic port role (Connected to).
[image]
4. Click Create Port Group (bottom-middle), then click Add Panel (bottom-middle).
5. Drag the bottom-right corner of the port layout to change the configuration to 2x4. Leave the
number of ports (8) as is, change the speed to 40 Gbps, and connect them to Superspine, Spine, and
Generic.
6. Click Create Port Group, then click Create (bottom-right). The new logical device appears in the table
view. (In the overview above, it's the first one in the table.)
RELATED DOCUMENTATION
1. Either from the table view (Design > Logical Devices) or the details view, click the Edit button for the
logical device to edit.
[image: logical-device-table-edit]
2. Make your changes.
• To change port group details, access the dialog by clicking its description.
• To add or remove ports from a port group, drag from the bottom-right corner of the port group
layout to resize it. If you added ports, enter port speed and role(s).
• To add a panel, click Add Panel and enter relevant port group details.
3. Click Update (bottom-right) to update the logical device in the global catalog and return to the table
view.
Editing a logical device in the global catalog doesn't affect rack types and templates that previously
embedded that logical device. This prevents existing rack types and templates from unintentionally
being changed. If your intent is for a rack type or template to use a modified logical device, you must
"update the rack type in the template" on page 763.
RELATED DOCUMENTATION
1. Either from the table view (Design > Logical Devices) or the details view, click the Delete button for
the logical device to delete.
2. Click Delete Logical Device to delete the logical device from the global catalog and return to the
table view.
RELATED DOCUMENTATION
Interface Maps
IN THIS SECTION
IN THIS SECTION
To create dense server connectivity, let's create an interface map that breaks out the twenty-four 40
GbE transformable ports of an Arista DCS-7050QX-32 physical device to ninety-six 10 GbE ports of a
96x10-8x40-2 logical device.
96x10-8x40-2 is not one of the predefined logical devices that ships with Apstra software, so if you
have not created it you won't find it in the drop-down list. If you'd like to follow along with this example,
you can create the logical device before continuing.
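The arithmetic behind the 96x10-8x40-2 name can be sketched as follows. This is a hypothetical helper, not part of Apstra; it just shows how 24 transformable 40 GbE ports yield the 96 logical 10 GbE ports:

```python
# Illustrative breakout arithmetic for the interface map example above.
transformable_qsfp = 24   # 40 GbE ports that support 4x10 GbE breakout
breakout_factor = 4       # each transformable port becomes four 10 GbE ports
fixed_qsfp = 8            # remaining 40 GbE ports, not transformable

logical_10g = transformable_qsfp * breakout_factor
print(logical_10g, fixed_qsfp)  # 96 logical 10 GbE ports, 8 40 GbE ports
```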
1. From the left navigation menu, navigate to Design > Interface Maps and click Create Interface Map.
Leave the name blank. It will populate automatically as you enter more information.
2. From the Logical Device drop-down list, select 96x10-8x40-2. This logical device has 96 x 10 GbE
ports for servers and 8 x 40 GbE ports for uplinks to spine switches or external routers.
3. From the Device Profile drop-down list, select Arista DCS-7050QX-32. This device has 24 x 40 GbE
QSFP+ ports that are transformable (4x10 GbE or 1x40 GbE) and 8 x 40 GbE QSFP+ ports that are not
transformable. As soon as both the logical device and device profile are selected, the interface map
name is automatically populated.
4. Under Device profile interfaces (middle-right) click Select Interfaces for the 10 GbE logical ports. This
displays the port layout.
5. Drag to select the first 24 ports. As the ports are selected the white numbers turn gray. When all
interfaces are selected the red circle turns green.
6. Under Device profile interfaces (middle-right) click Select Interfaces for the 40 GbE ports. This
displays the port layout.
7. Drag to select the remaining 8 ports. As the ports are selected the white numbers turn gray. When all
interfaces are selected the red circle turns green.
8. Click Create to create the interface map and return to the table view. The new interface map is
shown in the overview screenshot above.
IN THIS SECTION
(Cumulus is no longer supported as of Apstra version 4.1.0, although Cumulus examples remain for
illustrative purposes.) Inter port constraints for Cumulus devices are handled in both the device profile
and the interface map. For Apstra to generate the correct ports.conf file with these constraints, the
unused interfaces must be disabled in the interface map.
For example, if each of the top (odd-numbered) QSFP28 ports in a Mellanox 2700 device is split into
four SFP28 ports, the bottom (even-numbered) QSFP28 ports are blocked. (Source:
https://docs.mellanox.com/display/sn2000pub/Cable+Installation) The blocked interfaces must be disabled.
When creating an interface map that requires disabling ports for inter port constraints, the prompt Do
you want to select the disabled interfaces for unused device profile ports? is displayed. To disable the
• Provision QSFP+ breakout ports to transform ports, such as 40 GbE ports to 10 GbE, 100 GbE ports to
25 GbE, and so on.
• Port breakouts and available speeds affect possible values of the mapping fields.
• The logical device enables you to plan port and panel mappings accordingly. For example, you can
assign a network policy that ensures that spine uplink ports on a leaf switch are always the furthest
right ports on a panel.
• If a smaller logical device is mapped to a larger physical device, the unmapped ports in the device
profile are marked as Unused in the interface map.
From the left navigation menu, navigate to Design > Interface Maps to go to interface maps in the global
catalog. You can create, clone, edit and delete interface maps.
Click a port to go to interface details. Interface maps include the following details:
Interfaces - Mapping between logical devices and physical devices (device profile)
2. Enter a unique name (64 characters or fewer). If you leave this field blank, a name is created
for you by concatenating the names of the selected logical device and device profile.
3. Select a logical device from the drop-down list. If you don't see a logical device that fits your
requirements, you can create one.
4. Select a device profile from the drop-down list. If you don't see a device profile that fits your
requirements, you can create one.
5. Map the logical device to the device profile. See example below for details.
6. Click Create to create the interface map and return to the table view.
RELATED DOCUMENTATION
CAUTION: Any changes made to predefined interface maps (the ones that ship with
Apstra software) are discarded when Apstra is upgraded. To retain a customized
interface map through Apstra upgrades, clone the predefined interface map, give it a
unique name, and customize it instead of changing the predefined one directly.
1. Either from the table view (Design > Interface Maps) or the details view, click the Edit button for the
interface map to edit.
2. Make your changes.
3. Click Update (bottom-right) to update the interface map and return to the table view.
RELATED DOCUMENTATION
Rack Types
IN THIS SECTION
IN THIS SECTION
Summary | 743
Rack types define the type and number of leaf devices, access switches and/or generic systems that are
used in rack builds. Since rack types don't define specific vendors or their devices, you can design your
network before choosing hardware. If you need to create a template, you'll use rack types to build the
structure of your network. Rack types include the details in the following sections:
Table 15: Predefined L3 Clos Rack Types without Access Switches - columns: Rack Type Name, Number and Type of Leafs, Leaf Details, Number of Generic Systems, Generic System Details. Leaf roles include Spine / Generic; generic system groupings include combinations such as 2 access/generic and 1 generic.
Predefined L3 Clos Rack Types with Access Switches - columns: Rack Type Name, Number and Type of Leafs, Leaf Details, Number of Access Switches, Access Switch Details, Number of Generic Systems, Generic System Details. For example, the Collapsed 1xleaf rack type:
• Leafs: 1 single leaf; two 10 Gbps mesh links; port roles include spine/leaf, peer, access/generic, and generic ports
• Access switches: 1 single switch; one 10 Gbps leaf link, single-homed at leaf; LAG Mode: LACP (Active); port roles include leaf/access and generic ports
• Generic systems: 2; one 10 Gbps link each, single-homed at access; LAG Mode: No LAG
Summary
Summary Description
Name (and optional description) - A unique name to identify the rack type, 17 characters or fewer
Leaf Devices
Redundancy Protocol - CAUTION: Make sure that the intended platform supports the chosen redundancy protocol. For example, L3 MLAG peers are not supported on SONiC, and ESI is supported on Junos only.
• MLAG - For dual-homed connections. Both switches use the same logical device.
• MLAG Keepalive VLAN ID - If left blank during rack type creation, 2999 is assigned
to the peer link during the build phase. If 2999 conflicts with vendors' reserved
ranges, enter a different ID.
NOTE: Network device vendors have varying requirements for "reserved" VLAN ID
ranges. For example, Cisco NX-OS reserves the VLAN ID range from 3968 to 4094.
Arista, by default, uses a VLAN ID range from 1006 to 4094 for internal VLANs for
routed ports.
• Peer Links, and Link speed - Number of links between the MLAG devices, and their
speed
• L3 peer links, and Link speed - Used mainly for BGP peering between border MLAG
leaf devices in non-default routing zones, and for routed L3 traffic to solve
EVPN blackhole issues when upstream routers go down. L3 peer links act as backup
paths for north-south traffic. Besides border leaf devices, they can also be used on any other
ToR leaf devices to avoid blackholing traffic for a VRF.
• ESI (Junos only) - Ethernet Segment ID assigned to the bundled links. Specifying device
platforms other than Juniper Junos (such as Cisco, Arista) results in blueprint build
errors. For information about Juniper ESI, see "Juniper EVPN Support" on page 950,
and "Update ESI MAC msb" on page 328.
Tags User-specified. Select tags from drop-down list generated from global catalog or create
tags on-the-fly (which then become part of the global catalog). Tags used in rack types are
embedded, so any subsequent changes to tags in the global catalog do not affect the rack
type.
Access Switches
ESI is supported at the access layer. You can dual-home generic systems (servers) to access
switches. Apstra leverages EVPN at the access layer to enable ESI-LAG towards the generic system
while keeping the L2-only nature of the access switch role.
• Each member of an access switch pair dual-attached to the leaf pair is supported.
• Each member of an access switch pair single-attached to the leaf pair is supported.
• One member of an access switch pair dual-attached to the leaf pair and the other member of an
access switch pair single-attached to the leaf pair is not supported.
This is supported on 3-Stage, 5-Stage, and collapsed fabric blueprints. Day 2 topology changes are
available through Add/Edit/Remove Racks.
• Only L2 VXLAN is required; L3 VXLAN (RIOT) is not required and continues to be available only at
the leaf layer.
When creating and managing access switches, follow the general workflow for building a network while
taking into account the following options and design considerations.
1. When creating logical devices, on leaf switches facing an access switch, select the port role access,
and configure ports in the access switch logical device.
5. Create a blueprint and build it following the general "workflow" on page 2. You can perform the same
tasks as for other blueprints.
Access Switch count - Number of access switches. These switches share the same logical link group.
• Link speed
Generic Systems
Generic Systems Description
Port Channel ID Min, and Max - Port channel IDs are used when rendering leaf device port-channel configuration towards generic systems. Default: 1-4096. You can customize this field. (Prior to Apstra version 4.2.0, all non-default port channel numbers had to be unique per blueprint; port channel ranges could not overlap. This requirement has been relaxed, and now they need only be unique per system.)
Tags User-specified. Select tags from drop-down list generated from global catalog or create tags on-
the-fly (which then become part of the global catalog). Useful for specifying generic systems as
servers or external routers on nodes and links. Tags used in rack types are embedded, so any
subsequent changes to tags in the global catalog do not affect the rack type.
• LAG Mode
• LACP (Active) - Link Aggregation Control Protocol (LACP) in active mode - This mode
actively advertises LACP BPDUs even when the neighbor does not.
• LACP (Passive) - Link Aggregation Control Protocol (LACP) in passive mode - This mode
doesn't generate LACP BPDUs until it sees one from a neighbor.
• Static LAG (no LACP) - Static LAGs don't participate in LACP and unconditionally
operate in forwarding mode.
• Physical link count per individual leaf, and Link speed - Number of links from each generic
system to each leaf and their speed. If using dual leaf switches, this number should be half of
the total links attached to the generic system.
• Tags - User-specified. Select tags from drop-down list generated from global catalog or
create tags on-the-fly (which then become part of the global catalog). Useful for specifying
generic systems as servers or external routers on nodes and links. Tags used in rack types are
embedded, so any subsequent changes to tags in the global catalog do not affect the rack
type.
NOTE: You can also add generic systems to blueprints as a Day 2 operation. For more
information, see "Add Generic System" on page 60.
From the left navigation menu, navigate to Design > Rack Types to go to rack types in the design (global)
catalog. Click a rack type name to see its details. You can create, clone, edit, and delete rack types.
RELATED DOCUMENTATION
• See rack type introduction for parameter details and the example for a specific use case.
• To clone or delete a logical link or generic system group within a rack type, click the Clone button
or Delete button (top-right of section).
RELATED DOCUMENTATION
1. From the left navigation menu, navigate to Design > Rack Types and click Create Rack Type.
2. Enter a unique name (RackType1 in this example), then select L3 Clos fabric connectivity design.
3. In the Leafs section, enter a name (MyLeaf1 in this example) and select AOS-48x10+6x100-1 from
the Leaf Logical Device drop-down list.
NOTE: Instead of scrolling through the list in the Leaf Logical Device drop-down list you can
start typing in the field to filter the list based on your input.
4. Change the Links per spine to 2. Notice the Topology preview on the right side shows the first leaf.
5. Click Add new leaf and enter a name for the second leaf (MyLeaf2 in this example), select
AOS-48x10+6x100-1 from the Leaf Logical Device drop-down list, then change the Links per spine
to 2. Notice the Topology preview on the right side now shows both leaf devices.
6. Click Generic Systems, click Add new generic system group and enter a name (MySystemGroup1 in
this example), change the Generic system count to 20, then select AOS-2x10-1 from the Logical
Device drop-down list. Notice that the Topology preview changes as you configure the rack type.
7. Click Add logical link, enter a name (MyLogicalLink1 in this example), select MyLeaf1 from the
Switch drop-down list, select LACP (Active) for LAG Mode, then change Physical link count per leaf
to 2.
8. Click Add new generic system group, and enter a name (MySystemGroup2 in this example), change
the Generic system count to 20, then from the Logical Device drop-down list, select AOS-2x10-1.
9. Click Add logical link, enter a name (MyLogicalLink2 in this example), select MyLeaf2 from the
Switch drop-down list, select LACP (Active) for LAG Mode then change Physical link count per leaf
to 2.
10. If you'd like to see a preview of the logical devices that you've configured in the rack type, click
Logical Devices in the Preview section.
11. Click Create to create the rack type in the global catalog and return to the table view.
RELATED DOCUMENTATION
When you change a rack type in the global catalog, it doesn't affect rack types that have already been
embedded into templates (or blueprints that were created from those templates). If your intent is for a
template to use a modified rack type, then after editing the rack type in the global catalog you must edit
the template to use it. To change the rack type used in a blueprint, you would edit the rack to replace
the rack type with the modified one.
RELATED DOCUMENTATION
1. To delete a rack type in the global catalog, either from the table view (Design > Rack Types) or the
details view, click the Delete button for the rack type to delete.
2. Click Delete to delete the rack type and return to the table view.
RELATED DOCUMENTATION
Templates
IN THIS SECTION
Templates Introduction
IN THIS SECTION
Templates are used to create blueprints. They define a network's policy intent and structure. The global
catalog (Design > Templates) includes predefined templates based on common designs.
From the left navigation menu, navigate to Design > Templates to go to the templates table view. Many
predefined templates are provided for you. Click a template name to see its details. You can create,
clone, edit, and delete templates.
Rack-based Template
Rack-based templates define the type and number of racks to connect as top-of-rack (ToR) switches (or
pairs of ToR switches). Rack-based templates include the following details:
Policy Options
ASN Allocation Scheme (spine)
• Unique - applies to 3-stage designs. Each spine is assigned a different ASN.
• Single - applies to 5-stage designs. All spine devices in each pod are assigned the same ASN, and all superspine devices are assigned another ASN.
Overlay Control Protocol
• Defines the inter-rack virtual network overlay protocol in the fabric. The overlay control protocol on deployed blueprints can't be changed.
• Static VXLAN - uses static VXLAN routing with Head End Replication (HER) flooding to distribute Layer 2 virtual network traffic between racks.
• MP-EBGP EVPN - uses EVPN family eBGP sessions between device loopbacks to exchange EVPN routes for hosts (Type 2) and networks (Type 5). Only homogeneous, single-vendor EVPN fabrics are supported. EVPN-VXLAN capabilities for inter-rack virtual networks depend on the make and model of network devices used. See "Virtual Networks" on page 177 for more information. External systems must be connected to racks (not spine devices).
Spine to Leaf Links Underlay Type
• IPv4 - uses addresses from "IPv4 resource pools" on page 784.
• IPv6 RFC-5549 - uses addresses from "IPv6 resource pools" on page 786. Not supported when overlay control protocol is MP-EBGP EVPN.
Structure Options
Rack Types - Type of rack and number of each selected rack type. ESI-based rack types in rack-based templates without EVPN are invalid.
Spines
• Spine Logical Device and Count - Type and number of spine logical devices
• Links per Superspine Count and Speed - Number and speed of links to any superspine devices
• Tags - User-specified. Select tags from drop-down list generated from global catalog or create tags on-the-fly (which then become part of the global catalog). Useful for specifying external routers. Tags used in templates are embedded, so any subsequent changes to tags in the global catalog do not affect templates.
Pod-based Template
Pod-based templates are used to create large, 5-stage Clos networks, essentially combining multiple
rack-based templates using an additional layer of superspine devices. The following images show
examples of 5-stage Clos architectures built using pod-based templates (Superspine links are not shown
for readability purposes). See "5-Stage Clos Architecture" on page 946 for more information.
4 x plane, 4 x superspine
Policy Options
Spine to Superspine Links
• IPv4 - uses addresses from "IPv4 resource pools" on page 784.
• IPv6 RFC-5549 - uses addresses from "IPv6 resource pools" on page 786. Not supported when overlay control protocol is MP-EBGP EVPN.
Overlay Control Protocol
• Defines the inter-rack virtual network overlay protocol used in the fabric. The overlay control protocol on deployed blueprints can't be changed.
• Static VXLAN - uses static VXLAN routing with Head End Replication (HER) flooding to distribute Layer 2 virtual network traffic between racks.
• MP-EBGP EVPN - uses EVPN family eBGP sessions between device loopbacks to exchange EVPN routes for hosts (Type 2) and networks (Type 5). Only homogeneous, single-vendor EVPN fabrics are supported. EVPN-VXLAN capabilities for inter-rack virtual networks depend on the make and model of network devices used. See "Virtual Networks" on page 177 for more information. External systems must be connected to racks (not spine devices).
Structure Options
• Plane Count and Per Plane Count - Number of planes and number of superspine devices per
plane
• Tags - User-specified. Select tags from drop-down list generated from global catalog or create
tags on-the-fly (which then become part of the global catalog). Useful for specifying external
routers. Tags used in templates are embedded, so any subsequent changes to tags in the global
catalog do not affect templates.
Collapsed Template
Collapsed templates allow you to consolidate leaf, border leaf and spine functions into a single pair of
devices. A full mesh topology is created at the leaf level instead of at leaf-spine connections. This
spineless template uses L3 collapsed rack types. Collapsed templates have the following limitations:
• No support for upgrading collapsed L3 templates to L3 templates with spine devices (To achieve the
same result you could move devices from the collapsed L3 blueprint to an L3 Clos blueprint.)
• You can't mix vendors inside redundant leaf devices - the two leaf devices must be from the same
vendor and model.
Policy Options
Overlay Control Protocol
• Defines the inter-rack virtual network overlay protocol used in the fabric. The overlay control protocol on deployed blueprints can't be changed.
• Static VXLAN - uses static VXLAN routing with Head End Replication (HER) flooding to distribute Layer 2 virtual network traffic between racks.
• MP-EBGP EVPN - uses EVPN family eBGP sessions between device loopbacks to exchange EVPN routes for hosts (Type 2) and networks (Type 5). Only homogeneous, single-vendor EVPN fabrics are supported. EVPN-VXLAN capabilities for inter-rack virtual networks depend on the make and model of network devices used. See "Virtual Networks" on page 177 for more information. External systems must be connected to racks (not spine devices).
Structure Options
Rack Types - Type of L3 collapsed rack and number of each selected rack type.
Mesh Links Count and Speed - Defines the link set created between every pair of physical devices, including devices in redundancy groups (MLAG / ESI). These links are always physical L3. No logical links are needed at the mesh level.
RELATED DOCUMENTATION
1. If your design requires rack types and/or logical devices that are not in the global catalog, create
them before proceeding.
2. From the left navigation menu, navigate to Design > Templates and click Create Template.
3. Enter a unique name (64 characters or fewer).
4. Select RACK BASED.
5. Select applicable policies.
6. Select a rack type from the drop-down list and select the number of that type to include in the
template. Notice that as you enter information, the topology preview on the right changes
accordingly.
RELATED DOCUMENTATION
1. If your design requires templates, rack types and/or logical devices that are not in the global
catalog, create them before proceeding.
2. From the left navigation menu, navigate to Design > Templates and click Create Template.
3. Enter a unique name (64 characters or fewer).
4. Select POD BASED.
• To add another type of pod, click Add pods and select another pod from the drop-down list.
7. Select a Superspine Logical Device from the drop-down list.
8. Select the number of planes and the number of superspine devices per plane.
9. Select tags, as applicable (to specify external routers for example), from the drop-down list or create
them on-the-fly.
10. Click Create to create the template.
The example below shows a pod-based template with three pods and two planes, each containing
two superspine devices:
RELATED DOCUMENTATION
5. Select a rack type from the drop-down list (only L3 collapsed rack types are available for selecting)
and select the number of that type to include in the template. Notice that as you enter information,
the topology preview on the right changes accordingly.
6. Click Create to create the template and return to the table view.
RELATED DOCUMENTATION
Edit Template
1. From the left navigation menu, navigate to Design > Templates and click the Edit button (top-right)
for the template to update.
2. Make your changes.
• To update a rack type in a rack-based template, first update the rack type in the global catalog,
then delete the original rack type from the template (click the X to the right of the rack type). Before
clicking Update, select the same (modified) rack type from the drop-down list.
3. Click Update (bottom-right) to update the template and return to the table view.
Changes made to a template in the global catalog don't affect blueprints that were previously created
with that template, thereby preventing potentially unintended changes to those blueprints.
RELATED DOCUMENTATION
Delete Template
1. From the left navigation menu, navigate to Design > Templates and click the Delete button for the
template to delete.
2. Click Delete to delete the template and return to the table view.
RELATED DOCUMENTATION
Config Templates
IN THIS SECTION
IN THIS SECTION
Config templates are text files used to configure internal systems in Freeform. You'll assign a config
template to every internal system. You could paste configuration directly from your devices into a config
template to create a static config template, but then you wouldn’t be using the potential of config
templates. With some Jinja2 knowledge (and maybe some Python), you can parametrize config
templates to do powerful things.
For more information about config templates, see "Config Templates (Freeform Blueprint)" on page 466.
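As an illustration of a parametrized config template, a minimal sketch might look like the following. The hostname and ntp_servers variables are assumptions for this example (supplied via property sets or the device context), not names defined by this guide:

```jinja
{# Illustrative Freeform config template (would be saved with a .jinja extension). #}
{# Variable names here are example assumptions, not fixed Apstra properties. #}
hostname {{ hostname }}
{% for server in ntp_servers %}
ntp server {{ server }}
{% endfor %}
```

Rendering the same template against each internal system's context produces per-device configuration from one source file.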
1. From the left navigation menu of the Apstra GUI, navigate to Design > Config Templates and click
Create Config Template.
2. Enter a unique name for the config template including the .jinja extension. (The .jinja extension is
required even if you're not using Jinja.)
3. Enter or paste your content into the Template Text field.
4. Click Create to create the config template and return to the config template table view. Your newly
created config template is available to be imported into any blueprint catalog.
NOTE: You can also create config templates directly in the blueprint catalog. If you've already
created your internal systems in your blueprint, you'll have access to its Device Context all in
one place which makes it easier to get device information that you need for config templates.
1. From the left navigation menu of the Apstra GUI, navigate to Design > Config Templates to go to the
table view.
2. Either from the table view or the details view, click the Edit button for the config template to edit.
3. Make your changes.
4. Click Update to update the config template and return to the table view.
1. From the left navigation menu of the Apstra GUI, navigate to Design > Config Templates to go to the
table view.
2. Either from the table view or the details view, click the Delete button for the config template to
delete.
3. Click Delete to stage the deletion and return to the table view.
Configlets (Datacenter)
IN THIS SECTION
Configlets Introduction
IN THIS SECTION
Configlets are configuration templates that augment Apstra’s reference design with non-native device
configuration. They consist of one or more generators. Each generator specifies a NOS type (config
style), when to render the configuration, and CLI commands (and file name as applicable). The section
that you select when creating the configlet determines when the configuration is rendered.
When you want to use a configlet, you import it from the global catalog into a blueprint catalog and
assign it to one or more roles and/or deployed devices. You can edit the roles and/or devices in a
blueprint configlet, but if you want to change the configlet itself, you must export it to the global
catalog, modify it, and re-import it into the blueprint.
You can use the same configlets across the entire enterprise, but we recommend creating and applying
regionally-specific "property sets" on page 773 instead.
NOTE: Improperly configured configlets may not raise warnings or restrictions. Testing and
validating configlets for correctness is the responsibility of the end user. We recommend that you
test configlets on a separate dedicated service to ensure that the configlet performs exactly as
intended.
Passwords and other secret keys are not encrypted in configlets.
Configlet Applications
• Syslog
• TACACS / RADIUS
• Management ACLs
• NTP
• Username / password
Don't use configlets to replace reference design configuration, such as for routing or connectivity. If you
change interface configuration, the Apstra-intended interface configuration could be overwritten. For
example, if a configlet creates a network span port, you must apply the configlet to an Unused port, or it
might inadvertently overwrite one that is already in use.
On Cisco NX-OS and Arista EOS devices, do not use configlets to configure multi-line banners (such as
banner motd) because of a problematic extra non-ASCII character that cannot be entered. Instead,
configure multi-line banners with Cisco POAP (Power-on Auto Provisioning) or ZTP (Arista Zero Touch
Provisioning) before installing the device agent. The banner configuration becomes part of the device's
pristine configuration and persists throughout the Apstra configuration. Another option is to manually
configure multi-line banners on the device. This method causes a configuration deviation anomaly that
you can clear by accepting the new configuration as the golden config. For more information, see
"Configuration Deviation" on page 502.
Configlet Parameters
Configlets include the following details. The selected config style (NOS type) and section determine
whether template text, negation template text and filename are required:
• Section: System (NX-OS, EOS, SONiC) / Section: Top-Level: Hierarchical (previously called System) (Junos) -
  • Runs commands as root user. Improper changes could break the functionality of the reference design and take down a network.
  • When a device is unassigned from a node, the negation template text removes configuration. For example, if the template text is username example privilege 15 secret 0 MyPassword, the negation template text might be no username example.
  • For NX-OS and EOS, the appropriate configure terminal context is applied. It doesn't need to be part of the configlet.
• Section: Top-Level: Set / Delete (Junos) - Author configlets using Juniper "Set" style rather than structured JSON.
• Section: Interface-Level: Set (Junos) - Author configlets using Juniper "Set" commands rather than structured JSON. Text is validated to begin with 'set'.
• Section: Interface-Level: Delete (Junos) - Author configlets using Juniper "Delete" commands rather than structured JSON. Text is validated to begin with 'delete'.
• Section: File (SONiC) -
  • The entire contents of the file must be present within the configlet because the entire file is overwritten; there is no versioning or storing of the original file contents, so you can't restore it to its original content. Improper use can take down a network. Do not use on config files of critical processes (such as /etc/frr/frr.conf or /etc/network/interfaces/).
  • Contents are written, as root user, to the /etc directory file (because of Apstra's Docker container host mount). To write to a file outside of /etc (/usr for example), build the File configlet, then use a System configlet to move the file afterwards.
• Section: System Top (NX-OS, EOS) - Ensures that you can overwrite a setting to implement programmed intent. When the reference design is applied, any needed features that were "turned off" in this configlet are re-enabled.
• Section: FRR (SONiC) -
  • Configlet configuration is appended to the end of the Apstra-generated /etc/frr/frr.conf file and becomes part of FRR intent. Configuration is incrementally included in frr-reload.
  • Template text is not validated. Errors are likely to cause deployment errors, unintended configuration, and device impact.
• Template Text - CLI commands to add configuration to devices. Issued directly to devices without validation.
• Negation Template Text - CLI commands to disable configlet functionality (when a device is unassigned). Issued directly to devices without validation.
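To make the template/negation pair concrete, a hypothetical NX-OS or EOS configlet that manages an NTP server (the server address is a placeholder, not from this guide) could use:

```text
! Template Text
ntp server 192.0.2.10

! Negation Template Text
no ntp server 192.0.2.10
```

When the configlet is deployed, the template text is rendered on the device; when the device is unassigned, the negation template text removes that configuration.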
To control the order of operations within a section, create configlets with numeric names. For example,
01_syslog renders before 02_ntp. Configlets are then ordered based on the condition of the configlet (for
example, the spine or leaf role), and then by the Node ID of the configlet.
From the left navigation menu, navigate to Design > Configlets to go to configlets in the design (global)
catalog. You can create, clone, import, export, edit and delete configlets.
RELATED DOCUMENTATION
1. From the left navigation menu, navigate to Design > Configlets and click Create Configlet.
2. If you've created a JSON payload, click Import Configlet and select the file to import it. Otherwise,
continue to the next step.
3. Enter a unique configlet name.
4. Select a NOS type (config style).
5. Select the section where you want to render the configlet. Available choices depend on the
selected config style. (OSPF for external routers is no longer supported. While OSPF configlets still
appear in the Apstra GUI, they should not be used.)
6. In the Template Text and Negation Template Text fields (as applicable), enter CLI commands. For
Interface-Level Set or Delete configlets, do not include set or delete in the text. See Configlet
examples in the Reference section. Avoid using shortened versions of commands. Jinja syntax is
highlighted with color coding to improve readability, especially for complex configlets with multiple
property set variables or when Jinja control structures (such as loops and conditionals) are used.
Jinja syntax is validated. If Jinja syntax is incorrect, a validation error is raised.
CAUTION: It is critical to use a raw text editor (such as OSX TextEdit or Windows
Notepad++). Hidden characters can cause unforeseen issues when the configlet is deployed.
NOTE: Instead of hard-coding data into a configlet, you can refer to a "property set" on page
773 (key-value pairs). For an example, see the "Arista NTP example" on page 1180 in the
References section.
7. If Negation Template Text is required, enter the CLI commands to remove the configuration.
8. For File configlets, enter the filename in the Filename field.
9. To add another generator, click Add a style and enter details. (Tip: Configlets can contain syntax for
multiple vendors. Create one single-purpose configlet with a generator for each vendor NOS type,
each with its own syntax.)
10. Click Create to add the configlet to the global catalog.
When you’re ready to use the configlet in a blueprint, "import" on page 336 it into the blueprint's
catalog.
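The multi-vendor tip in step 9 can be sketched as a configlet payload with two generators. This sketch is modeled on the exported-JSON format for configlets; the NTP commands, the configlet name, and the exact section identifiers here are illustrative assumptions, not values from this guide:

```json
{
  "display_name": "ntp_example",
  "generators": [
    {
      "config_style": "eos",
      "section": "system",
      "template_text": "ntp server 192.0.2.10",
      "negation_template_text": "no ntp server 192.0.2.10",
      "filename": ""
    },
    {
      "config_style": "junos",
      "section": "system",
      "template_text": "system {\n    ntp {\n        server 192.0.2.10;\n    }\n}",
      "negation_template_text": "",
      "filename": ""
    }
  ]
}
```

Because each generator carries its own NOS type, the same logical configlet can be applied in blueprints that mix vendors.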
1. From the table view (Design > Configlets) or the details view, click the Edit button for the configlet to
edit.
2. Make your changes (name, config style, section, template text, negation template text, filename, as
applicable).
3. Click Update (bottom-right) to update the configlet in the global catalog and return to the table view.
1. Either from the table view (Design > Configlets) or the details view, click the Delete button for the
configlet to delete.
2. Click Delete to delete the configlet from the global catalog and return to the table view.
IN THIS SECTION
But first, you need to write the property set (and the configlet or probe that'll use it). You can write it in
JSON or YAML. You can use key-value pairs, lists, dictionaries, and any combination of these data
structures by nesting them.
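For instance, a hypothetical property set written in YAML might combine key-value pairs, a list, and a nested dictionary (all names and values here are illustrative):

```yaml
ntp_domain: example.com          # simple key-value pair
dns_servers:                     # list
  - 192.0.2.53
  - 198.51.100.53
syslog:                         # nested dictionary
  host: 192.0.2.99
  port: 514
```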
Below is an example of a property set and configlet that uses it to change the SNMP location field based
on a provided list of system_name to location mapping.
Property Set
{
  "created_at": "2022-08-26T13:20:04.488463+0000",
  "updated_at": "2022-08-28T18:57:41.169692+0000",
  "values_yaml": "PS_SNMP_Locations:\n leaf1: DC1-Room1-Rack32\n leaf2: DC1-Room1-Rack34\n leaf3: DC1-Room1-Rack33\n spine1: DC1-Room1-Rack30\n spine2: DC1-Room1-Rack31\n",
  "values": {
    "PS_SNMP_Locations": {
      "spine1": "DC1-Room1-Rack30",
      "spine2": "DC1-Room1-Rack31",
      "leaf1": "DC1-Room1-Rack32",
      "leaf3": "DC1-Room1-Rack33",
      "leaf2": "DC1-Room1-Rack34"
    }
  },
  "label": "PS_SNMP_Locations",
  "id": "c4006bb8-f8f4-4aa7-82c3-8da5dfc03c43"
}
Configlet
{
  "ref_archs": [
    "two_stage_l3clos"
  ],
  "generators": [
    {
      "config_style": "junos",
      "section": "system",
      "template_text": "{% if PS_SNMP_Locations[hostname] is defined %}\nsnmp {\n    location \"{{PS_SNMP_Locations[hostname]}}\";\n}\n{% endif %}\n",
      "negation_template_text": "",
      "filename": ""
    }
  ],
  "created_at": "2022-08-26T13:23:57.272014Z",
  "id": "b2739659-897d-4fa2-a8e9-2060ae1c045f",
  "last_modified_at": "2022-08-26T13:29:40.192438Z",
  "display_name": "SNMP_location"
}
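To see how the template above resolves per device, the Jinja conditional can be emulated in plain Python. This is a local sketch for illustration only; the function name is ours, and Apstra itself renders the template server-side:

```python
# Property set data copied from the example above.
PS_SNMP_Locations = {
    "spine1": "DC1-Room1-Rack30",
    "spine2": "DC1-Room1-Rack31",
    "leaf1": "DC1-Room1-Rack32",
    "leaf3": "DC1-Room1-Rack33",
    "leaf2": "DC1-Room1-Rack34",
}

def render_snmp_location(hostname):
    # Mirrors: {% if PS_SNMP_Locations[hostname] is defined %} ... {% endif %}
    if hostname in PS_SNMP_Locations:
        return 'snmp {\n    location "%s";\n}\n' % PS_SNMP_Locations[hostname]
    return ""  # hosts not in the property set get no snmp stanza

print(render_snmp_location("leaf1"))
```

Running this for leaf1 prints an snmp block whose location is DC1-Room1-Rack32; for a hostname missing from the property set, nothing is rendered, so the configlet is a no-op on that device.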
From the left navigation menu, navigate to Design > Property Sets to go to property sets in the Design
catalog. You can create, clone, edit and delete property sets.
3. You can define property sets with YAML or JSON. (With the YAML option, if you were previously
using Ansible, you can now import your Ansible variables defined in host_vars and group_vars.) Enter
your content directly via the Editor, or for YAML you can also use the builder.
4. To add another property, click Add a Property.
5. Click Create to create the property set and return to the table view.
TCP/UDP Ports
IN THIS SECTION
Port aliases can be entered instead of the port numbers. For example, you could create an alias with name SSH and a
value of 22.
From the left navigation menu, navigate to Design > TCP/UDP Ports to go to TCP/UDP ports. You can
create, clone, edit and delete port aliases.
RELATED DOCUMENTATION
Tags
IN THIS SECTION
Tags Introduction
Tags add user-defined information to nodes and links. You can add tags to the following elements:
For example, you assign servers and external routers the generic port role in logical devices, and then tag
them with their specific roles when you design rack types and templates. When you create a blueprint,
tags from the relevant design elements are embedded into the tag section of the blueprint catalog.
Changes you may subsequently make to tags in the design elements don't affect the blueprint that had
previously used those tags. If you want a blueprint to use revised tags from a design element, you can
"import" on page 344 them.
You can "export" on page 344 tags that you created in a blueprint to the global catalog (as long as they
have a unique name) where they can be used in subsequent design elements.
Tags are part of the graph. They appear in the device context, which makes them easier to find and use
programmatically.
• Name - Case-insensitive. Names must be unique across all tags defined in the design.
• Description - Optional field to add any details (for example, server roles, external router roles or
customer name).
From the left navigation menu, navigate to Design > Tags to go to tags in the global catalog. Four tags
(Bare Metal, Firewall, Hypervisor, Router) are predefined for you. You can create, clone, edit and delete
tags in the global catalog.
RELATED DOCUMENTATION
NOTE: You can change a tag name indirectly by creating a tag with the preferred name, applying
the tag to the rack type or template, then deleting the tag with the original name from the rack
type or template (then deleting the original tag).
To change a tag name indirectly:
1. Create a tag with the preferred name.
2. Apply the new tag to the rack type or template.
3. Delete the tag with the original name from the rack type or template.
4. Delete the original tag.
1. Either from the table view (Design > Tags) or the details view, click the Edit button for the tag to
change.
2. Change the description.
3. Click Update to update the tag description and return to the table view.
RELATED DOCUMENTATION
1. Either from the table view (Design > Tags) or the details view, click the Delete button for the tag to
delete.
2. Click Delete to delete the tag and return to the table view.
RELATED DOCUMENTATION
Resources
IN THIS SECTION
Resources Introduction
IN THIS SECTION
NOTE: If you need to assign a specific ASN to a specific device, you can assign the ASN
individually from the staged blueprint in the Properties panel of a selection.
• Total Usage - Percentage of ASNs in use for all ranges in the resource pool. (Hover over the status bar to see
the number of ASNs in use and the total number of ASNs in the pool.)
• Range Usage - The ASNs included in the range and the percentage that are in use. (Hover over the status bar to
see the number of ASNs in use and the total number of ASNs in that range.)
From the left navigation menu in the Apstra GUI, navigate to Resources > ASN Pools to go to ASN pools
in the design (global) catalog. You can create, clone, edit and delete ASN pools.
When you're building your blueprint, you'll "assign resources" on page 33 from these pools in the Staged
> Physical view of the blueprint.
IN THIS SECTION
NOTE:
Properties
• Total Usage - Percentage of VNIs in use for all ranges in the resource pool. (Hover over the status bar to see the
number of VNIs in use and the total number of VNIs in the pool.)
• Range Usage - The VNIs included in the range and the percentage that are in use. (Hover over the status bar to
see the number of VNIs in use and the total number of VNIs in that range.)
From the left navigation menu, navigate to Resources > VNI Pools to go to VNI pools in the design
(global) catalog. You can create, clone, edit and delete VNI pools.
When you've created your blueprint, you'll "assign resources" on page 33 from these pools in the Staged
> Virtual view of the blueprint.
IP Pools (Resources)
IN THIS SECTION
IP Pool Overview
IP addresses are used in the following situations:
Loopback IPs - Spines/Leafs/Generics - The loopback IP is used as the BGP router ID.
SVI Subnets - MLAG Domain - A switch virtual interface (SVI) subnet for an MLAG domain is used to
allocate an IP address between MLAG leaf switches.
Link IPs - Spines <-> Leafs - Link IPs are used between spine devices and leaf devices to build the
L3-CLOS fabric. These IPs are necessary for BGP peering between spine devices and leaf devices, and
represent the 'fabric' of the network.
Link IPs - Generics - IP addresses facing generic systems are used to statically route the generic system
loopback and route across that link.
When you're building your blueprint you'll specify which resource pool to use for assigning IP addresses.
NOTE: If you need to assign a specific IP address to a specific device, you can assign the IP
address individually from the staged blueprint in the Properties panel of a selection.
• Total Usage - Percentage of IP addresses in use for all subnets in the resource pool. (Hover over the status
bar to see the number of IP addresses in use and the total number of IP addresses in the pool.)
• Per Subnet Usage - The IP addresses included in the subnet and the percentage that are in use. (Hover over the
status bar to see the number of IP addresses in use and the total number of IP addresses in that subnet.)
From the left navigation menu, navigate to Resources > IP Pools to go to IP pools in the design (global)
catalog. You can create, clone, edit and delete IPv4 pools.
CAUTION: IP address ranges are not validated. It is your responsibility to specify valid
IP addresses. If you configure a switch with an invalid IP block you may receive an error
during the deploy phase. For example, specifying the erroneous multicast subnet
224.0.0.0/4 would be accepted, but it would result in an unsuccessful deployment. If
you assign the same range (or overlapping range) of IP addresses to a blueprint, the
duplicate assignment is detected and you'll receive a warning in the blueprint. You can
commit changes to blueprints with warnings without resolving the issues.
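Because ranges aren't validated, it can be worth pre-checking a subnet before adding it to a pool. The following is a minimal sketch using Python's standard ipaddress module; the helper name is ours, not an Apstra API:

```python
import ipaddress

def subnet_is_usable(cidr):
    """Reject subnets that would be accepted by the GUI but cannot
    deploy successfully (multicast, loopback, link-local, reserved)."""
    net = ipaddress.ip_network(cidr, strict=True)
    return not (net.is_multicast or net.is_loopback
                or net.is_link_local or net.is_reserved)

print(subnet_is_usable("10.0.0.0/24"))   # usable unicast range
print(subnet_is_usable("224.0.0.0/4"))   # the erroneous multicast subnet above
```

Running such a check before creating the pool catches mistakes like 224.0.0.0/4 at design time instead of during the deploy phase.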
1. From the left navigation menu, navigate to Resources > IP Pools and click Create IP Pool.
2. Enter a unique name and valid subnet. To add another subnet, click Add a Subnet and enter a subnet.
3. Click Create to create the pool and return to the table view.
When you've created your blueprint, you'll "assign resources" on page 33 from these pools in the Staged
> Physical view of the blueprint.
IN THIS SECTION
When you're building your blueprint you'll specify which resource pool to use for assigning IP addresses.
NOTE: If you need to assign a specific IP address to a specific device, you can assign the IP
address individually from the staged blueprint in the Properties panel of a selection.
• Total Usage - Percentage of IPv6 addresses in use for all subnets in the resource pool. (Hover over the
status bar to see the number of IPv6 addresses in use and the total number of IPv6 addresses in the pool.)
• Per Subnet Usage - The IPv6 addresses included in the subnet and the percentage that are in use. (Hover over the
status bar to see the number of IPv6 addresses in use and the total number of IPv6 addresses in that subnet.)
From the left navigation menu, navigate to Resources > IPv6 Pools to go to IPv6 pools in the design
(global) catalog. The pool fc01:a05:fab::/48 is predefined. You can create, clone, edit and delete IPv6
pools.
When you've created the blueprint, you'll "assign resources" on page 33 from these pools in the Staged
> Virtual view of the blueprint.
IN THIS SECTION
Providers | 789
Providers
IN THIS SECTION
From the left navigation menu, navigate to External Systems > Providers to go to providers. You can
create, clone, edit and delete providers.
LDAP Provider
IN THIS SECTION
• Hostname FQDN IP(s) - The fully qualified domain name (FQDN) or IP address of the LDAP
server. For high availability (HA) environments, specify multiple LDAP servers using the same
settings. If the first server cannot be reached, connections to succeeding ones are attempted in
order.
4. For Provider-specific Parameters enter/select the following, as appropriate:
• Groups Search DN - The LDAP Distinguished Name (DN) path for the RBAC Groups
Organizational Unit (OU)
• Users Search DN - The LDAP Distinguished Name (DN) path for the RBAC Users Organizational
Unit (OU)
• Bind DN - The LDAP Distinguished Name (DN) path for the active server user that the Apstra
server will connect as
• Password - The password of the LDAP user that the Apstra server connects as
• Advanced Config
• Timeout (seconds)
• Username Attribute Name - The LDAP attribute from the user entry that the Apstra server uses
for authentication (usually cn or uid)
To authorize Apstra users via an LDAP provider, the LDAP server must be configured to properly return a
provider group attribute. This attribute must be mapped to a defined Apstra Role. The example
configuration below is for the open-source OpenLDAP server.
dn: ou=People,dc=example,dc=com
objectClass: organizationalUnit
ou: People
dn: ou=Groups,dc=example,dc=com
objectClass: organizationalUnit
ou: Groups
dn: cn=user,ou=Groups,dc=example,dc=com
gidNumber: 5000
cn: user
objectClass: posixGroup
memberUid: USER1
dn: cn=USER1,ou=People,dc=example,dc=com
cn: USER1
givenName: USER1
loginShell: /bin/sh
objectClass: inetOrgPerson
objectClass: posixAccount
uid: USER1
userPassword: USER1
uidNumber: 10000
gidNumber: 5000
sn: USER1
homeDirectory: /home/users/USER1
mail: USER1@example.com
After configuring and activating a provider, you must "map" on page 799 that provider to one or more
user roles to give access permissions to users with those roles.
IN THIS SECTION
Active Directory (AD) is a database-based system that provides authentication, directory, policy, and
other services in a Windows environment.
1. From the left navigation menu, navigate to External Systems > Providers and click Create Provider.
2. Enter a Name (64 characters or fewer), select Active Directory, and if you want Active Directory to
be the active provider, toggle on Active?.
3. For Connection Settings, enter/select the following:
• Hostname FQDN IP(s) - The fully qualified domain name (FQDN) or IP address of the AD server.
For high availability (HA) environments, specify multiple AD servers using the same settings. If the
first server cannot be reached, connections to succeeding ones are attempted in order.
4. For Provider-specific Parameters enter/select the following, as appropriate:
• Groups Search DN - The AD Distinguished Name (DN) path for the RBAC Groups Organizational
Unit (OU)
• Users Search DN - The AD Distinguished Name (DN) path for the RBAC Users Organizational Unit
(OU)
• Bind DN - The AD Distinguished Name (DN) path for the active server user that the Apstra server
will connect as
• Advanced Config
• Timeout (seconds)
• Username Attribute Name - The AD attribute from the user entry that the Apstra server uses
for authentication. (usually cn or uid)
After configuring and activating a provider, you must "map" on page 799 that provider to one or more
user roles to give access permissions to users with those roles.
TACACS+ Provider
IN THIS SECTION
1. From the left navigation menu, navigate to External Systems > Providers and click Create Provider.
2. Enter a Name (64 characters or fewer), select TACACS+, and if you want TACACS+ to be the active
provider, toggle on Active?.
3. For Connection Settings, enter/select the following:
• Hostname FQDN IP(s) - The fully qualified domain name (FQDN) or IP address of the TACACS+
server. For high availability (HA) environments, specify multiple TACACS+ servers using the same
settings. If the first server cannot be reached, connections to succeeding ones are attempted in
order.
4. For Provider-specific Parameters enter/select the following, as appropriate:
CAUTION: Shared key is not displayed when editing a configured TACACS+ provider. If you do
not change it, the previously configured shared key is retained. If you test the provider and you
have not re-entered the shared key, a null shared key is used for the test and may not work.
• Auth Mode - Authentication mode - ASCII (clear-text), PAP (Password Authentication Protocol),
or CHAP (Challenge-Handshake Authentication Protocol)
5. You can Check provider parameters and Check login (to verify authentication with the remote user
credentials) before creating the provider.
6. Click Create to create the provider and return to the table view.
To authorize Apstra users via a TACACS+ provider, the TACACS+ server must be configured to properly
return an aos-group attribute. This attribute must be mapped to a defined Apstra Role. The example
configuration below is for the open-source tac_plus TACACS+ server.
user = jdoe {
default service = permit
name = "John Doe"
member = admin
login = des LQqpIWvpxDXDw
}
group = admin {
service = exec {
priv-lvl = 15
}
cmd=show {
permit .*
}
service = aos-exec {
default attribute = permit
priv-lvl = 15
aos-group = apstra-admins
}
}
After configuring and activating a provider, you must "map" on page 799 that provider to one or more
user roles to give access permissions to users with those roles.
RADIUS Provider
IN THIS SECTION
Remote Authentication Dial-In User Service (RADIUS). See below for limitations.
RADIUS Limitations
• No support for changing the RADIUS user's password on a remote RADIUS server.
• RADIUS authentication does not control Linux user login via SSH.
• Nested groups are not allowed. You must explicitly assign each group to a role.
• When a user logs in, only a username and password are required for authenticating against the remote
RADIUS server. Login credentials are not cached. Therefore, when a user logs in, a connection
between Apstra and the remote RADIUS server is required.
1. From the left navigation menu, navigate to External Systems > Providers and click Create Provider.
2. Enter a Name (64 characters or fewer), select RADIUS, and if you want RADIUS to be the active
provider, toggle on Active?.
3. For Connection Settings, enter/select the following:
• Port - The TCP port used by the server, default is 1812 as specified in RFC 2865.
• Hostname FQDN IP(s) - The fully qualified domain name (FQDN) or IP address of the RADIUS
server. For high availability (HA) environments, specify multiple RADIUS servers using the same
settings. If the first server cannot be reached, connections to succeeding ones are attempted in
order.
4. For Provider-specific Parameters enter/select the following, as appropriate:
• Shared Key (64 characters or fewer) - shared key configured on the server
CAUTION: Shared key is not displayed when editing a configured RADIUS provider.
If you do not change it, the previously configured shared key is retained. If you test
the provider and you have not re-entered the shared key, a null shared key is used
for the test and may not work.
The following example of a pre-shared key configuration, from Ubuntu FreeRADIUS (an open-source
RADIUS server), tests successfully with Apstra software. The shared key given in the RADIUS
server configuration must be provided in Apstra.
home_server localhost {
ipaddr = 127.0.0.1
port = 1812
type = "auth"
secret = "testing123"
response_window = 20
max_outstanding = 65536
}
• Advanced Config
• Group Name Attribute Name - To specify a role that a user belongs to, the RADIUS server
must specify the users’ group. The user group information must be specified with Framed-
Filter-ID as the attribute. It is used to assign users to different RADIUS groups.
For example, the FreeRADIUS config below specifies the Framed-Filter-ID attribute to be
freerad. In this case, when mapping later, you would enter freerad for the Provider Group.
/etc/freeradius/users
freerad Cleartext-Password := "testing123"
Framed-Filter-Id = "freerad"
So that the user can be mapped to an existing group in the Apstra environment, the RADIUS
server must return the Apstra group name as part of the authentication response.
After configuring and activating a provider, you must "map" on page 799 that provider to one or more
user roles to give permissions to users with those roles.
CAUTION: Any users who are logged into Apstra software when a setting is changed in
an active RBAC provider are immediately logged out without notification. To continue,
the user must log back into the Apstra server. This does not affect users who are
defined locally on the Apstra server (for example, admin).
1. Either from the table view (External Systems > Providers) or the details view, click the Edit button for
the provider to edit.
2. Make your changes.
3. Click Update (bottom-right) to edit the provider and return to the table view.
IN THIS SECTION
• You can map more than one Apstra role to the same provider group (new in version 4.0).
• When the same username exists both locally and in the RBAC provider, the local user is used to
authenticate login attempts.
• Changing users with the web-based RBAC feature does not modify accounts on the Apstra server
VM. To change these credentials, use standard Linux CLI commands: "useradd", "usermod", "userdel",
"passwd".
From the left navigation menu, navigate to External Systems > Providers > Provider Role Mapping to go
to provider role mapping.
2. Click Add mapping, select a role from the drop-down list, then enter a provider group. The following
is an example for mapping the apstra-admins group that was configured in TACACS+ configuration.
TIP: To see user role details, navigate to Platform > User Management > Roles. From there,
you can also create new roles, as needed.
3. To add another role mapping, click Add mapping and select an Apstra Role and Provider Group. You
can have more than one role associated with the same provider group.
4. Click Update to create the role map. If the provider that you mapped is the active provider, then
users with the mapped roles can log in with their usernames and passwords defined in the RBAC
server.
CAUTION: Changing role mappings for an active provider causes all remotely logged in
users to be logged out (because the session tokens are cleared when changes are
made). Users will need to log back into the system. This includes user admin, if admin is
not logged in locally.
1. From the left navigation menu, navigate to External Systems > Providers > Provider Role Mapping
and click the Edit button (top-right).
2. Edit role mapping as needed.
3. Click Update to update the role map.
Platform
IN THIS SECTION
Security | 816
Developers | 847
IN THIS SECTION
Users | 810
Roles | 814
IN THIS SECTION
Overview | 802
Overview
To work in the Apstra GUI environment, you need a user profile. Apstra ships with one predefined
profile for admin. As an admin, you can create users and assign one or more roles to them. Roles provide
various access and change permissions. They can be blueprint-specific or more general in nature. You
can assign custom roles that you've created, or start with one of the four predefined roles that ship with
Apstra, as described below:
• administrator role - includes all permissions. Users with the administrator role can create, clone, edit
and delete user roles. The admin user is assigned the administrator role.
• device_ztp role - includes one permission, to edit ZTP. For setting up the Apstra ZTP server, we
recommend creating a dedicated user and assigning only this role.
You can't change permissions in predefined roles. If you want different permissions, you can create roles
and select permissions from lists of permissions as shown in the next sections.
Global Permissions
Blueprints
Devices
Design
Resources
AAA
External Systems
Platform
Other
Per-Blueprint Permissions
Common Permissions
• Read blueprint
• Commit changes
Datacenter-specific Permissions
Freeform-specific Permissions
• Manage resources
The blueprint locking feature prevents restricted users (based on their roles) from making changes that
are not permitted. In particular, a restricted user should not be able to commit changes made
by another user.
If you have permission (based on your assigned roles) to create/update/delete virtual networks, for
example, and another user has made uncommitted changes to the blueprint, the blueprint is locked. You
can't create/update/delete virtual networks until the changes are committed or reverted by the locking
user who made the uncommitted changes, unless you are the one who made the changes.
If you have permission (based on your assigned roles) to see the name of the user who created the
pending changes, the name is displayed.
A user with "Allow overriding other users staged changes" permission can make any changes to, apply
changes for, and revert changes for any blueprint.
You can map roles to external groups used by authentication providers such as LDAP, Active Directory,
TACACS+, and RADIUS.
With Enhanced Role Based Access Control, you can create blueprint-specific roles with specific
privileges allowing limited control to associated users. This allows you to create more hierarchical roles
and protect against accidental changes to the network.
For example, a user assigned the role Manage generic systems can add generic systems, copy existing
generics, add links to generic systems, add links to leaf devices, and update node tags. A user assigned
the role Manage racks and links can perform all those operations plus they can change rack speeds and
delete links. A user with the Manage racks and links role essentially has permissions for all FE/FFE
operations. If you want to restrict a user to physical server operations only, assign them the Manage
generic systems role, and not the Manage racks and links role.
Use Cases
These use cases are meant to give you an idea of how to work with roles and users. Specific steps for
creating roles and users are described in later sections.
To allow a user to read, write and commit specific blueprints, create a per-blueprint permissions role for
the specified blueprint(s). Toggle on Read blueprint, Make any change to staging blueprint, and Commit
changes. These permissions include Manage virtual networks and Manage virtual network endpoints
even though those permissions may or may not be toggled on. Assign the role to the user.
To allow a user to only manage virtual network endpoints on specific blueprints, select Per-Blueprint
Permissions, select one or more blueprint IDs (or All for all blueprints), then toggle on Manage virtual
network endpoints. Assign the role to the user.
To allow a user to read and write resources on any blueprint, create a global permissions role. Toggle on
Resources for Read and Write to toggle on all resources at once. Assign the role to the user.
To limit a user's role to only create virtual networks and look at blueprint details, create a role for Per-
Blueprint Permissions, and either select specific blueprints or all blueprints. Then toggle on Read
Blueprint, Commit changes, Manage virtual networks, and Manage virtual network endpoints. By not
selecting Make any change to staging blueprint you are limiting the changes that can be made to virtual
networks only. Assign the role to the user.
To be able to create virtual networks and allocate resources to them, you can assign several roles as
follows:
• Create Virtual Networks Only (not Including Allocating Resources) (described in previous section)
with the addition of toggling on Make any change to staging blueprint. This also permits a user with
this role to make other changes besides virtual network changes.
RELATED DOCUMENTATION
Users
IN THIS SECTION
Creating a user profile enables a user to access the Apstra platform via its GUI. (To enable a user to
access the Apstra platform via SSH, create a local Linux system user.)
1. From the left navigation menu, navigate to Platform > User Management > Users and click Create
User.
2. Enter a username, then enter a password that meets password complexity requirements. (You can
change requirements from Platform > Security > Password Complexity Parameters.)
3. Re-enter the password.
4. Select one or more roles, as required. If custom roles have been created, they appear as options along
with predefined roles. (You can see permissions included for each of the roles at Platform > User
Management > Roles.)
For example, you can create a user with the predefined user role plus a custom role that lets the user
see who has staged any blueprint changes and override those changes. Select the role user and a
custom role with the additional permissions. (See "Create User Role" on page 814 for Override
Changes role example.)
5. Click Create to create the user profile and return to the table view.
RELATED DOCUMENTATION
1. From the left navigation menu, navigate to Platform > User Management > Users, click the username
to change, then click the Change Password button (top-right).
2. Enter a new password that meets password complexity requirements. (You can change requirements
from Platform > Security > Password Complexity Parameters.)
3. Re-enter the new password.
4. Click Change Password to update the password.
RELATED DOCUMENTATION
From the left navigation menu, navigate to Platform > User Management > Users and click the Log Out
button for the user. The user is logged out of the Apstra environment.
1. Either from the table view (Platform > User Management > Users) or the details view, click the Edit
button for the user profile to change.
RELATED DOCUMENTATION
1. Either from the table view (Platform > User Management > Users) or the details view, click the Delete
button for the user profile to delete. (User admin can't be deleted.)
2. Click Delete to delete the user profile and return to the table view.
RELATED DOCUMENTATION
Roles
IN THIS SECTION
User roles specify permissions for working in the different areas of the Apstra environment. They can be
blueprint-specific or more general in nature. To customize a user's access and edit capability you'll assign
roles to user profiles. Start by creating roles based on the permissions you want to control.
1. From the left navigation menu of the Apstra GUI, navigate to Platform > User Management > Roles
and click Create Role.
3.
NOTE: Roles are either global or per-blueprint; they can't be both. If you select permissions in
one type, then click the radio button for the other type, you lose the permissions you already
set.
Global Permissions pertain to Apstra details other than blueprint details. They include general
blueprint read, write, commit and delete permissions as well as permissions for platform, external
systems, resources, design, devices, and more. To add global permissions, select Global Permissions
and select one or more permissions.
For example, if another user has staged changes in a blueprint, that blueprint is locked for additional
changes until that (unidentified) user commits or reverts the changes (as of Apstra version 4.2.0). You
can create and assign a role that allows a user to see who made the changes and/or allow them to
override those changes, as shown below. (The admin role already has these permissions by default.)
4. To grant permissions pertaining to blueprint details instead, select Per-Blueprint Permissions, select
either specific blueprints or All blueprints, then select one or more permissions that are datacenter-
specific, freeform-specific, or common to all blueprints.
5. Click Create to create the role and return to the Roles view.
RELATED DOCUMENTATION
1. Either from the table view (Platform > User Management > Roles) or the details view, click the Edit
button for the user role to edit. The four built-in user roles (administrator, device_ztp, user, viewer)
can't be modified.
RELATED DOCUMENTATION
1. Either from the table view (Platform > User Management > Roles) or the details view, click the Delete
button for the user role to delete. You can't delete a role if it's assigned to a user. The four predefined
user roles (administrator, device_ztp, user, viewer) can't be deleted.
2. Click Delete to delete the role and return to the table view.
RELATED DOCUMENTATION
Security
IN THIS SECTION
Security (Platform)
Allowed List
IN THIS SECTION
You can add trusted IP/subnets to the allowed list so they are never locked out, even if they violate rate
limit rules. You can add and change comments about those IP/subnets. Changes to the allowed list are
recorded in the event log (Platform > Event Log).
From the left navigation menu, navigate to Platform > Security > Allowed List. You can search and sort
the list. You can add, edit, and delete IP/subnets.
1. From the left navigation menu, navigate to Platform > Security > Allowed List and click Add IP/
Subnet.
1. From the left navigation menu, navigate to Platform > Security > Allowed List and click the Edit
button for the IP/subnet to edit.
2. Change the comment.
3. Click Update to complete the change and return to the table view.
1. From the left navigation menu, navigate to Platform > Security > Allowed List.
2. Select the IP/subnet(s) to delete.
• To delete a single IP/subnet, click the Delete button for the IP/subnet (right-side).
• To delete one or more IP/subnets, click the checkbox (left-side) for one or more IP/subnets and
click the Delete button above the list.
3. Click Update to complete the deletion and return to the table view.
Banned List
IN THIS SECTION
IP/subnets that violate rate limit rules are automatically added to the banned list and are locked out for
the configured lockout period, or until an admin removes them from the banned list. The banned list has
a lower precedence than the allowed list, so an IP/subnet that also appears on the allowed list is still allowed access.
Changes to the banned list are recorded in the event log (Platform > Event Log).
From the left navigation menu, navigate to Platform > Security > Banned List to go to IP/subnets on the
banned list. You can search and sort the list. You can remove IP/subnets from the banned list.
1. From the left navigation menu, navigate to Platform > Security > Banned List and click the Delete
button to the right of the IP/subnet(s) to delete.
2. Click Delete to remove the IP/subnet from the banned list and immediately allow logins from that IP/
subnet.
ACL Rules
IN THIS SECTION
Overview | 819
Overview
Subnet-based access control for Apstra GUI access (whitelisting) is part of platform security
enhancements. You can configure Access Control List (ACL) rules for IPv4 networks. (IPv6 is not
supported on the Apstra web framework.) When you create and enable rules, the rules are automatically
sorted from more specific to less specific, and IP addresses are checked against them in that order. If the
rule allows access to a subnet, any IP address within that subnet is allowed access. If the rule denies
access to a subnet, any IP address within that subnet is denied access.
Access Control List rules are disabled by default. If you enable rules, make sure you always allow access
to a subnet that your IP address is a part of, so you don't lock yourself out.
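The most-specific-first evaluation described above can be sketched in Python. This is an illustration of the matching order only, not Apstra's actual implementation; the rule list and the default-allow fallback are assumptions for the example:

```python
import ipaddress

def evaluate_acl(rules, client_ip, default=True):
    """rules: list of (subnet, allow) pairs, e.g. ("10.0.0.0/8", True).
    Rules are checked from most specific (longest prefix) to least
    specific; the first rule whose subnet contains client_ip wins."""
    addr = ipaddress.ip_address(client_ip)
    for subnet, allow in sorted(
            rules,
            key=lambda r: ipaddress.ip_network(r[0]).prefixlen,
            reverse=True):
        if addr in ipaddress.ip_network(subnet):
            return allow
    return default  # no rule matched

rules = [
    ("10.0.0.0/8", False),   # deny the whole 10/8 block...
    ("10.1.2.0/24", True),   # ...but allow this more specific subnet
]
evaluate_acl(rules, "10.1.2.7")   # True: the /24 matched first
evaluate_acl(rules, "10.9.9.9")   # False: only the /8 matched
```

Because the /24 is checked before the /8, an address inside the allowed subnet gets through even though the broader deny rule also covers it.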
1. From the left navigation menu, navigate to Platform > Security > ACL to go to the table view.
2. Click the toggle to enable or disable the rules, as applicable.
1. From the left navigation menu, navigate to Platform > Security > ACL and click Add ACL rule.
2. Enter an IP subnet and select whether to allow or deny access to IP addresses within that subnet.
You also have the option of adding a comment.
3. Click Create to create the rule and return to the table view.
1. From the left navigation menu, navigate to Platform > Security > ACL and click the Edit button for
the rule to edit.
2. Change the policy, as applicable. You also have the option of adding/editing/deleting a comment.
3. Click Update to change the rule and return to the table view.
1. From the left navigation menu, navigate to Platform > Security > ACL and click the Delete button for
the rule to delete.
2. Click Delete to delete the rule and return to the table view.
IN THIS SECTION
Default settings allow 5 login attempts within 60 seconds. After the fifth failed attempt, the IP/subnet is
blocked and added to the banned list for 3 minutes (found at Platform > Security > Banned List), or until
an admin removes it from the list. When you change rate limit configuration, any banned IP/subnets are
immediately affected. For example, if you change the lockout period from 3 minutes to 5 minutes, an IP/
subnet that's already on the banned list would remain on the banned list for an additional 2 minutes.
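The default behavior above (5 attempts within 60 seconds, then a 3-minute lockout) can be sketched as a sliding-window limiter. This is a behavioral illustration, not Apstra's implementation:

```python
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    """Sketch: 5 failed attempts within 60 s bans the IP for 180 s."""
    def __init__(self, max_attempts=5, window=60, lockout=180):
        self.max_attempts, self.window, self.lockout = max_attempts, window, lockout
        self.attempts = defaultdict(deque)   # ip -> failed-attempt timestamps
        self.banned_until = {}               # ip -> time the ban expires

    def allowed(self, ip, now=None):
        now = time.monotonic() if now is None else now
        return now >= self.banned_until.get(ip, 0)

    def record_failure(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.attempts[ip]
        q.append(now)
        while q and now - q[0] > self.window:   # drop attempts outside the window
            q.popleft()
        if len(q) >= self.max_attempts:         # fifth failure -> start lockout
            self.banned_until[ip] = now + self.lockout
            q.clear()

limiter = LoginRateLimiter()
for t in range(5):                        # five failures within 60 seconds
    limiter.record_failure("10.0.0.5", now=float(t))
limiter.allowed("10.0.0.5", now=10.0)     # False: banned until t = 4 + 180
limiter.allowed("10.0.0.5", now=200.0)    # True: lockout expired
```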
1. From the left navigation menu, navigate to Platform > Security > Ratelimit Configuration and click
the Edit button (top-right).
1. From the left navigation menu, navigate to Platform > Security > Password Complexity Parameters
and click the Edit button (top-right).
2. Add, change and/or delete requirements, as applicable. Different Apstra versions have different
options as shown in the list and screenshots below:
• Password History Length - User is not allowed to re-use a certain number of previous passwords
(including the current one). For example, if you don't want the user to use their previous two
passwords, you would enter 3 in this field.
• To add a rule, click Add and enter a regular expression and error message.
• To change a rule, change values as appropriate and update the error message.
• To delete a rule, click the red X to the right of the rule to delete.
3. Click Update to complete the change and close the dialog. When you create or update passwords,
the new requirements will take effect.
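The Password History Length semantics described above (enter 3 to block the current password plus the previous two) can be sketched as follows. The hashing and history structure are illustrative assumptions:

```python
import hashlib

def is_reuse(new_password, password_history, history_length=3):
    """password_history: most-recent-first list of password hashes,
    including the current password's hash. A history_length of 3
    rejects the current password and the previous two."""
    new_hash = hashlib.sha256(new_password.encode()).hexdigest()
    return new_hash in password_history[:history_length]

history = [hashlib.sha256(p.encode()).hexdigest()
           for p in ("current!1", "older!2", "oldest!3")]
is_reuse("older!2", history)      # True: within the last 3 passwords
is_reuse("brand-new!4", history)  # False: not in the history window
```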
IN THIS SECTION
Syslog Overview
System Log (syslog) is a running list of everything that's going on in your system. You can use these logs
to audit events or review anomalies. You can configure syslog to send messages for specific types of
systems (facilities) to external syslog servers. (You can also "export event logs to a CSV file" on page
836.)
Name Description
Time Zone The syslog message time zone. With proper time zone
translation, you don't need to sync the system time zone (or
Docker time zone) with your external syslog server. Rather
than assuming the message time is in Zulu/UTC-0, the time
zone translation appends the correct time zone information to
the timestamp, so you can better correlate Apstra events in
your external message systems.
Syslog messages follow Common Event Format (CEF) conventions as shown below:
NOTE: {host} is the Apstra server hostname. If you want to change the hostname, you must
use the procedure on the "Change Apstra Server Hostname" on page 928 page. If you change
the hostname with any other method, the new hostname won't be included in syslog entries.
'{timestamp} {host} CEF:{version}|{device_vendor}|{device_product}|{device_version}|{device_event_class_id}|{name}|{severity}|{extension}'
Where:
"OperationModeChangeToMaintenance", "OperationModeChangeToNormal", "OperationModeChangeToReadOnly",
"RatelimitExceptionAdd", "RatelimitExceptionDelete", "RatelimitClear",
"SystemChangeApiOperationModeToMaintenance", "SystemChangeApiOperationModeToNormal",
"UserCreate", "UserUpdate", "UserDelete", "SyslogCreate", "SyslogUpdate", "SyslogDelete",
"AuthAclEnable", "AuthAclDisable", "AuthAclRuleAdd", "AuthAclRuleUpdate", and "AuthAclRuleDelete".
src : Source IP of the client making HTTP requests to perform the activity.
suser : Who performed the activity.
act : Outcome of the activity - free-form string. In the case when the activity was
performed successfully, the value stored is “Success“. In case of error, include error string.
Ex: Unauthorized
cs1Label : The string “Blueprint Name”. Only exists if activity is associated with a
blueprint (optional)
cs1 : Name of the blueprint on which action was taken. Only exists if activity is
associated with a blueprint (optional)
cs2Label : The string “Blueprint ID”. Only exists if activity is associated with a blueprint
(optional)
cs2 : Id of the blueprint on which action was taken. Only exists if activity is
associated with a blueprint (optional)
cs3Label : The string “Commit Message”. Only exists if user has added a commit message
(optional)
cs3 : Commit Message. Only exists if user has added a commit message (optional)
deviceExternalId : Id (typically serial number) of the managed device on which action was
taken. Only exists if activity is associated with a device such as for “DeviceConfigChange”
(optional)
deviceConfig : Config that is pushed and applied on the device where “#012” is used to
indicate a line break to log collectors and parsers. Only exists if activity is associated with
a device such as for “DeviceConfigChange” (optional)
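Assembling a message in the CEF layout shown above can be sketched in Python. The vendor/product/version defaults and the sample field values are illustrative, not values Apstra is guaranteed to emit:

```python
def format_cef(timestamp, host, device_event_class_id, name, severity,
               extension, version=0, device_vendor="Apstra",
               device_product="AOS", device_version="4.2.0"):
    """Build '{timestamp} {host} CEF:{version}|...|{extension}' where
    extension is a dict of key=value pairs (src, suser, act, ...)."""
    header = "|".join(str(v) for v in (
        f"CEF:{version}", device_vendor, device_product, device_version,
        device_event_class_id, name, severity))
    ext = " ".join(f"{k}={v}" for k, v in extension.items())
    return f"{timestamp} {host} {header}|{ext}"

line = format_cef(
    "2023-09-26T10:00:00Z", "aos-server", 101, "UserLogin", 5,
    {"src": "192.0.2.10", "suser": "admin", "act": "Success"})
# '2023-09-26T10:00:00Z aos-server CEF:0|Apstra|AOS|4.2.0|101|UserLogin|5|src=192.0.2.10 suser=admin act=Success'
```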
CEF:0|Apstra|AOS|4.1.2-269|101|Alert|10|msg={u'blueprint_label': u'rack-based-
blueprint-33ded50f', u'timestamp': 1679002754682990, u'origin_name':
u'50540015FA9D', u'alert': {u'first_seen': 1679002749600167, u'raised': False, u'severity': 3,
u'hostname_alert': {u'expected_hostname': u'leaf-3',
u'actual_hostname': u''}, u'id': u'0457a759-7d3a-4bf8-97e8-e13e518cf267'}, u'origin_hostname':
u'', 'device_hostname': '<device hostname unknown>', u'origin_role': u'leaf'}
From the left navigation menu, navigate to Platform > External Services > Syslog Configuration to see
configurations. You can create, clone, edit and delete syslog configurations.
Receivers (Platform)
IN THIS SECTION
• Hostname - The receiver's hostname
From the left navigation menu, navigate to Platform > Streaming > Receivers to go to receivers. You can
create and delete receivers.
Create Receiver
1. From the left navigation menu of the Apstra GUI, navigate to Platform > Streaming > Receivers and
click Create Receiver.
2. Enter/select required values.
3. Click Create to create the receiver and return to the table view.
Delete Receiver
1. From the left navigation menu of the Apstra GUI, navigate to Platform > Streaming > Receivers and
click the delete button for the receiver to delete.
2. Click Delete to delete the receiver from the system and return to the table view.
The configuration described here assumes you are using the Apstra Telegraf input plugin. You can
configure streaming receivers in Apstra with the Telegraf plugin by providing it Apstra credentials. We
recommend that you use a separate Apstra account with only the streaming credentials. If you configure
through the GUI, then there is no need to supply credentials in the Telegraf config file.
The easiest way to run the Telegraf receiver is in a docker container. The docker-compose.yml snippet below
shows the configuration for the Telegraf container. This pulls the latest Apstra supported Telegraf
container from Docker Hub.
• port - specifies the port that the streaming receiver will be listening on
• streaming_type - specifies the type of data to be streamed from Apstra to this receiver
The remaining parameters are only necessary if you want the Apstra Telegraf plugin to configure the
streaming receivers in Apstra via the API.
The input and output plugin configurations are shown in the snippet below. The output plugin is
configured for the Prometheus client and listens on port 9126. The input plugin is configured for Apstra.
[[inputs.aos]]
address = "10.1.1.200"
port = 9999
streaming_type = [ "perfmon", "alerts", "events" ]
aos_server = "$AOS_SERVER"
aos_port = $AOS_PORT
aos_login = "$AOS_LOGIN"
aos_password = "$AOS_PASSWORD"
Global statistics include information that is unrelated to any specific receiver. These statistics provide
crucial information required for better planning of receivers. Whenever you reset the Apstra server,
these global statistics are reset.
From the left navigation menu, navigate to Platform > Streaming > Global Statistics to see global
statistics.
IN THIS SECTION
• Blueprint deletion
• Normal - when disk usage and memory is under the utilization threshold, the operation mode is in
read/write mode
• Maintenance - when utilization threshold is surpassed, the system moves API layer to read-only
mode
Each event includes the following information which is searchable and sortable:
• Time - when the event occurred (hover over time field to see date and time)
• Source IP - The source IP address of the client making the HTTP request
• Device ID (as applicable) - typically the serial number of the managed device on which the action was
taken
• Device Config (as applicable) - The config that is pushed and applied on the device
• Blueprint ID (as applicable) - The ID of the blueprint on which action was taken
• Blueprint name (as applicable) - The blueprint label on which action was taken
• Result - The outcome of the activity. Success means operation is accepted by the system. In the case
of an error, the error string is included (unauthorized, for example)
From the left navigation menu, navigate to Platform > Event Log to go to the table of events that have
been logged.
• To view device details (info, pristine config, telemetry), click a device ID.
Audit events are written to log-rotated files as a second repository. You can configure logrotate
parameters in the Apstra server configuration file (/etc/aos/aos.conf). You can export and ship audit events
to syslog.
Apstra VM Clusters
IN THIS SECTION
Apstra VM Clusters
You can monitor and manage different aspects of the Apstra environment, such as its configuration,
usage, and containers. If your network includes many devices with offbox agents, or if you are taking
advantage of Apstra’s Intent Based Analytics feature, you might need more resources than can be
provided from just one virtual machine (VM). To increase resource capacity, you can add worker node
VMs to create a cluster with the Apstra controller node VM.
IN THIS SECTION
Nodes Overview
The Apstra controller acts as the cluster manager. When you add a worker VM to the main Apstra
controller VM, it registers with the Apstra server VM through sysDB. It collects facts about the VM (such
as core/memory/disk configuration and usage), and launches a local VM container. The Apstra controller
VM reacts to REST API requests, configures the worker VM for joining or leaving the cluster, and keeps
track of cluster-wide runtime information. It also reacts to container configuration entities and
schedules them to the worker VM.
Name Description
Name Apstra VM name, such as controller (the main Apstra controller node) or worker - iba (a worker
node)
Tags The controller node and any worker nodes that you add are tagged with iba and offbox, by
default. If you delete one or both of these tags or delete a worker node with one or both of
these tags, any IBA and/or offbox containers in that node automatically move to a VM with
those tags. Make sure there is another node with the tag(s) you’re deleting or the containers
will be deleted when you delete the tag or node.
Name Description
Capacity Score Apstra uses the capacity score for load balancing new containers across the cluster of available
nodes. It's calculated in relation to the configured application weight of each container based
on allocated memory.
Example calculation - 64GB of memory allocated for the VM and an application weight of
250MB configured for offbox agents:
• (64GB / 250MB) * 5 capacity score of each offbox agent = 1280 total capacity score
• Controller nodes have half the capacity score available due to overhead (1280 / 2 = 640 in
above example) but worker nodes have the full capacity score available (1280 in above
example)
The capacity score changes only if the memory allocated to the VM is changed, or if the
application weight is changed.
Errors As applicable. An example of an error is when an agent process has restarted because an agent
has crashed.
• Disk Usage - Current VM disk usage per logical volume (GB and percentage)
• Container Service Usage - derived from the required resources and the size of the
container. For example, if an offbox agent that needs 250 MB is running in a 500MB
worker node, the container service usage is 50%. (An IBA container may require 1GB.) A
controller node begins at 50% usage because it includes its own processing agents that
perform controller-specific processing logic.
Containers The containers running on the node and the resources that each container uses
* If memory utilization exceeds 80%, a warning message appears at the top of all GUI pages. This lets
you know that you need to free up or add disk space and/or memory soon, to avoid a critical resource
shortage.
If memory utilization exceeds 90%, a critical message appears at the top of all GUI pages. Before you
can make any more changes to the fabric, you must address the shortage by adding disk space to the
problematic filesystem(s) or by adding memory, as needed. You can click the link to go to Apstra Cluster
Management for more information.
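The Capacity Score example above (64 GB of VM memory, a 250 MB application weight, a score of 5 per offbox agent) works out as follows. Treating 64 GB as 64,000 MB reproduces the documented figure; the helper function is an illustration of the arithmetic, not an Apstra API:

```python
def capacity_score(vm_memory_mb, app_weight_mb, per_agent_score=5,
                   is_controller=False):
    """(memory / application weight) * per-agent score; controller
    nodes have half the score available due to overhead."""
    score = (vm_memory_mb // app_weight_mb) * per_agent_score
    return score // 2 if is_controller else score

capacity_score(64_000, 250)                      # 1280 for a worker node
capacity_score(64_000, 250, is_controller=True)  # 640 for the controller
```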
Click the Nodes tab, then click the IP address of the controller for details.
• Remove the iba tag from the controller VM so that IBA units are rescheduled to worker nodes, thus
reducing both memory and disk space usage.
• Create worker nodes to spread out the load for IBA units and/or offbox device agents.
You can change the default thresholds that trigger warnings and critical messages. In the "Apstra server
configuration file" on page 1196 (/etc/aos/aos.conf) change the options for
system_operation_filesystem_thresholds and/or system_operation_memory_thresholds. Then, send SIGHUP to the
ClusterManager Agent. You can set disk space utilization thresholds on a per-filesystem basis. For
example, you might want to be more conservative with /var/lib/aos/db which contains MainSysdb's
persistence files and Time Voyager revisions, so crossing a lower usage threshold (such as 85%) triggers
the read-only mode.
To access Apstra VMs, from the left navigation menu, navigate to Platform > Apstra Cluster. Click a node
address to see its details. You can create, clone, edit and delete Apstra nodes.
At the bottom left section of every page, you have continuous visibility of platform health. Green
indicates the active state. Red indicates an issue, such as missing agent, the disk being in read only
mode, or an agent rebooting (after the agent has rebooted, the status returns to active). If IBA Services
or Offbox Agents is green, all containers are launched. If one of them is red, at least one container has
failed. From any page, click one of the dots, then click a section for details. Clicking Controller, IBA
Services, and Offbox Agents all take you to Nodes details.
The controller node and worker nodes must use the same Apstra version (4.2.0, for example).
1. Install Apstra software on the VMs to cluster.
2. From the left navigation menu, navigate to Platform > Apstra Cluster and click Add Node.
3. Enter a name, tags (optional), address (IP or FQDN), and Apstra Server VM SSH username/password
login credentials. (iba and offbox tags are added by default.)
4. Click Create. As the main Apstra controller connects to the new Apstra VM worker node, the state of
the new Apstra VM changes from INIT to ACTIVE.
1. Either from the table view (Platform > Apstra Cluster) or the details view, click the Edit button for the
VM to edit.
2. Make your changes. If you delete iba and/or offbox tags from the node, the IBA and/or offbox
containers (as applicable) are moved to another node with those tags. Make sure the cluster has
another node with those tags, or the containers will be deleted instead of moved.
CAUTION: To prevent containers from being deleted, don’t delete tags unless another
node in the cluster has the same tags.
When you delete a node that includes iba and/or offbox tags, the IBA and/or offbox containers (as
applicable) are moved to another node with those tags. Make sure the cluster has another node with
those tags, or the containers will be deleted instead of moved.
CAUTION: To prevent containers from being deleted, don’t delete nodes with iba
and/or offbox tags unless another node in the cluster has the same tags.
1. Either from the table view (Platform > Apstra Cluster) or the details view, click the Delete button for
the Apstra VM to delete.
2. Click Delete to delete the Apstra VM.
Apstra admins may want to temporarily block all users (including themselves) from performing design
and blueprint changes in the Apstra environment because they're troubleshooting something, or want to
perform some maintenance operations on the Apstra server (backups, VM migration, VM OS updates
and so on). Admins can change the operation mode from Normal to Read-only to block users from API
and WebUI (PUT/POST). By default, only admins have permission to enable/disable the read-only mode.
At the bottom left section of every page, you have continuous visibility of platform health. Green
indicates the active state. Red indicates some kind of issue, such as a missing agent, the disk being in
read only mode, or an agent rebooting (after the agent has rebooted, the status returns to active). From
any page, click one of the dots, then click the section that you want details for. Clicking Operation Mode
takes you to cluster management details.
If you're using Juniper offbox agents, increase memory allocation to 500 MB (from the 250 MB default).
A single API call applies to all offbox agents.
1. From the left navigation menu in the Apstra GUI, navigate to Platform > Developers and click REST
API Documentation.
The Swagger API developer tool for the Apstra environment appears.
2. Click cluster, click GET /api/cluster/application-weight, then click Execute.
The current values for offbox and iba appear in the response body.
3. Click PUT / api/cluster/application-weight, then click Try it out.
The parameters become editable.
4. Enter values for both offbox and iba, then click Execute. (The values must be positive and multiples
of 50.) Juniper offbox agents require 500 MB.
5. To confirm your changes, click cluster, click GET /api/cluster/application-weight, then click Execute.
6. You can close the window at any time to leave the tool.
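The constraints in step 4 (values must be positive and multiples of 50; Juniper offbox agents require 500 MB) can be validated client-side before calling PUT /api/cluster/application-weight. The request-body field names `offbox` and `iba` are assumed from the response body described above:

```python
import json

def application_weight_payload(offbox_mb, iba_mb):
    """Validate and serialize the assumed PUT body for
    /api/cluster/application-weight."""
    for name, value in (("offbox", offbox_mb), ("iba", iba_mb)):
        if value <= 0 or value % 50:
            raise ValueError(f"{name} must be a positive multiple of 50")
    return json.dumps({"offbox": offbox_mb, "iba": iba_mb})

application_weight_payload(500, 1000)
# '{"offbox": 500, "iba": 1000}'
```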
Developers
IN THIS SECTION
Developers (Platform)
From the left navigation menu, navigate to Platform > Developers to go to developer documentation
and tools.
• Platform REST API Documentation includes API documentation for APIs used outside of Apstra
blueprints (such as Apstra global catalog logical devices).
• Reference Designs L3 Clos includes API documentation for APIs used in standard Apstra L3 Clos
blueprints (such as Apstra blueprint virtual networks).
4. Copy the token from the response body, scroll to the top, then click Authorize (top-right, shown in
the first step).
The Authorize dialog appears.
IN THIS SECTION
This reference demonstrates resource group API usage with parity to the GUI. For full API
documentation, view the Platform REST API documentation in the Apstra GUI.
To list resource group slots in a blueprint, perform an authenticated HTTP GET to https://aos-server/api/blueprints/<blueprint_id>/resource_groups
Both ASN pools and IP pools must be assigned in order for a blueprint to complete the build phase.
If an ID is not specified, one will be created and returned in the HTTP response.
{
"id": "RFC6996-Private",
"display_name": "RFC6996-Private",
"tags": [ "default" ],
"ranges": [
{
"last": 65534,
"first": 64512
}
]
}
curl 'https://192.168.25.250/api/resources/asn-pools?comment=create'
-H 'AuthToken: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6
ImFkbWluIiwiY3JlYXRlZF9hdCI6IjIwMTctMDUtMzFUMDA6MjI6MDcuNTIwMTgzWiIsIn
Nlc3Npb24iOiJjOTliOGVlOS05Y2NjLTRjZTAtYTY5NS0wODI3N2ZkYjA0ZDYifQ.FnJMR3
crPoD0-lQRXnpPOJ8TCsRG9Wr-DaddnAIj6ko' --data-binary '{"display_name"
:"Example","ranges":[{"first":100,"last":200}],"tags":[]}' --compressed
--insecure
{
"items": [
{
"created_at": "2017-05-30T12:56:07.293082Z",
"display_name": "Private ASN",
"id": "c23ea447-8f37-419a-9b1c-c48cc55d5b9c",
"last_modified_at": "2017-05-30T12:56:07.293082Z",
"ranges": [
{
"first": 65412,
"last": 65534,
"status": "pool_element_in_use"
}
],
"status": "in_use",
"tags": []
}
]
}
curl
'https://192.168.25.250/api/resources/asn-pools/d0312b4a-017e-4478-8b8d-df0417ce8d3b'
-X DELETE -H 'AuthToken: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2Vybm
FtZSI6ImFkbWluIiwiY3JlYXRlZF9hdCI6IjIwMTctMDUtMzFUMDA6MjI6MDcuNTIwMTgzW
iIsInNlc3Npb24iOiJjOTliOGVlOS05Y2NjLTRjZTAtYTY5NS0wODI3N2ZkYjA0ZDYifQ.FnJ
MR3crPoD0-lQRXnpPOJ8TCsRG9Wr-DaddnAIj6ko' --compressed --insecure
For instance, to post a resource pool to spine_loopback_ips, first obtain the ID of the resource pool and
append it to a list for slot assignment. When updating the IP Pool resource group, specify all pools in the
payload at the same time. You can't add single pools incrementally, so PUT them all at once.
Payload:
curl
'https://192.168.25.250/api/blueprints/4c1e69c6-97bd-4c99-9504-7818f138b17f/resource_groups/asn/
spine_asns'
-X PUT -H 'AuthToken: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2Vyb
mFtZSI6ImFkbWluIiwiY3JlYXRlZF9hdCI6IjIwMTctMDUtMzFUMDA6MjI6MDcuNTI
wMTgzWiIsInNlc3Npb24iOiJjOTliOGVlOS05Y2NjLTRjZTAtYTY5NS0wODI3N2ZkYj
A0ZDYifQ.FnJMR3crPoD0-lQRXnpPOJ8TCsRG9Wr-DaddnAIj6ko' --data-binary
'{"pool_ids":["c23ea447-8f37-419a-9b1c-c48cc55d5b9c"]}' --compressed --insecure
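Because pools can't be added one at a time, a client that wants to "add" a pool must merge it into the currently assigned list and PUT the whole list back. A minimal sketch (the pool IDs below are illustrative):

```python
import json

def merged_pool_payload(current_pool_ids, new_pool_id):
    """Merge a new pool ID into the currently assigned list and
    serialize the full PUT body for the resource group."""
    pool_ids = list(current_pool_ids)
    if new_pool_id not in pool_ids:
        pool_ids.append(new_pool_id)
    return json.dumps({"pool_ids": pool_ids})

merged_pool_payload(["c23ea447-8f37-419a-9b1c-c48cc55d5b9c"],
                    "56e8e0dc-babd-4652-92a5-fc37294a7b26")
```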
When removing IP pools from a blueprint, PUT an empty pool_ids list to the blueprint:
{ "pool_ids": [] }
curl
'https://192.168.25.250/api/blueprints/4c1e69c6-97bd-4c99-9504-7818f138b17f/resource_groups/asn/
spine_asns'
-X PUT -H 'AuthToken: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFt
ZSI6ImFkbWluIiwiY3JlYXRlZF9hdCI6IjIwMTctMDUtMzFUMDA6MjI6MDcuNTIwMTgzWi
IsInNlc3Npb24iOiJjOTliOGVlOS05Y2NjLTRjZTAtYTY5NS0wODI3N2ZkYjA0ZDYifQ.FnJ
MR3crPoD0-lQRXnpPOJ8TCsRG9Wr-DaddnAIj6ko' --data-binary '{"pool_ids":[]}'
--compressed --insecure
Available ASN Pool resource groups for assignment can be shown with an HTTP GET to https://aos-server/api/blueprints/<blueprint_id>/resource_groups
curl
'https://192.168.25.250/api/blueprints/4c1e69c6-97bd-4c99-9504-7818f138b17f/resource_groups'
-H 'AuthToken: eyJhbGciOiJIUzI1NwMTctMDUtMzFUMDA6MjI6MDcuNTIwMTgz
WiIsInNlc3Npb24iOiJjOTliOGVlOS05Y2NjLTRjZTAtYTY5NS0wODI3N2ZkYjA0ZD
YifQ.FnJMR3crPoD0-lQRXnpPOJ8TCsRG9Wr-DaddnAIj6ko' --compressed --insecure
| python -m json.tool
{
"items": [
{
"name": "leaf_asns",
"pool_ids": [
"c23ea447-8f37-419a-9b1c-c48cc55d5b9c"
],
"type": "asn"
},
{
"name": "spine_asns",
"pool_ids": [
"c23ea447-8f37-419a-9b1c-c48cc55d5b9c"
],
"type": "asn"
},
{
"name": "leaf_loopback_ips",
"pool_ids": [
"56e8e0dc-babd-4652-92a5-fc37294a7b26"
],
"type": "ip"
},
{
"name": "mlag_domain_svi_subnets",
"pool_ids": [
"ed7d8830-c703-4ac0-8252-77e0f272a677"
],
"type": "ip"
},
{
"name": "spine_leaf_link_ips",
"pool_ids": [
"ed7d8830-c703-4ac0-8252-77e0f272a677"
],
"type": "ip"
},
{
"name": "spine_loopback_ips",
"pool_ids": [
"56e8e0dc-babd-4652-92a5-fc37294a7b26"
],
"type": "ip"
}
]
}
API - IP Pools
Create IP Pool
{
"id": "example_ip_pool",
"display_name": "example_ip_pool",
"tags": ["default"],
"subnets": [
{"network": "10.0.0.0/8"}
]
}
The subnets section requires a list of dictionaries with the keyword network and a value in CIDR notation. Subnets in the same pool cannot overlap with each other; for example, 192.168.10.0/24 and
192.168.0.0/16 cannot be configured in the same pool.
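The overlap rule can be checked client-side before POSTing. A minimal sketch using the standard ipaddress module; the helper name and pool contents are illustrative, not part of the Apstra API:

```python
# Sketch: validate that no two subnets in one IP pool overlap, mirroring
# the API's rejection of overlapping networks. Illustrative helper only.
import ipaddress

def pool_subnets_valid(subnets):
    """Return True if no two 'network' entries in the pool overlap."""
    nets = [ipaddress.ip_network(s["network"]) for s in subnets]
    return not any(
        a.overlaps(b) for i, a in enumerate(nets) for b in nets[i + 1:]
    )

# 192.168.10.0/24 falls inside 192.168.0.0/16, so this pool is invalid.
bad = [{"network": "192.168.10.0/24"}, {"network": "192.168.0.0/16"}]
good = [{"network": "10.0.0.0/8"}, {"network": "192.168.0.0/16"}]
```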
Tags are optional and are not currently used in Apstra. If an ID is specified, it is saved; otherwise, an ID is generated and returned in the HTTP response after the pool is created.
An HTTP POST to https://aos-server/api/resources/ip-pools with JSON payload will reply with the ID of
the new IP pool.
curl 'https://192.168.25.250/api/resources/ip-pools' -X
POST -H 'AuthToken: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmF
tZSI6ImFkbWluIiwiY3JlYXRlZF9hdCI6IjIwMTctMDUtMzFUMDA6MjI6MDcuNTIwMTgzWi
IsInNlc3Npb24iOiJjOTliOGVlOS05Y2NjLTRjZTAtYTY5NS0wODI3N2ZkYjA0ZDYifQ.Fn
JMR3crPoD0-lQRXnpPOJ8TCsRG9Wr-DaddnAIj6ko' --data-binary '{"display_name":
"example_ip_pool","subnets":[{"network":"10.0.0.0/8"},{"network":
"192.168.0.0/16"}],"tags":[]}' --compressed --insecure
{"id": "d0312b4a-017e-4478-8b8d-df0417ce8d3b"}
List IP Pools
{
"items": [
{
"created_at": "2017-05-31T03:48:38.562331Z",
"display_name": "example_ip_pool",
"id": "d5046aa6-eab2-4990-9816-0a519ce1a8db",
"last_modified_at": "2017-05-31T03:48:38.562331Z",
"status": "not_in_use",
"subnets": [
{
"network": "10.0.0.0/8",
"status": "pool_element_available"
},
{
"network": "192.168.0.0/16",
"status": "pool_element_available"
}
],
"tags": []
},
{
"created_at": "2017-05-30T12:56:50.576598Z",
"display_name": "L3-CLOS",
"id": "ed7d8830-c703-4ac0-8252-77e0f272a677",
"last_modified_at": "2017-05-30T12:56:50.576598Z",
"status": "in_use",
"subnets": [
{
"network": "10.16.0.0/16",
"status": "pool_element_in_use"
}
],
"tags": []
},
{
"created_at": "2017-05-30T12:56:24.222906Z",
"display_name": "Loopbacks",
"id": "56e8e0dc-babd-4652-92a5-fc37294a7b26",
"last_modified_at": "2017-05-30T12:56:24.222906Z",
"status": "in_use",
"subnets": [
{
"network": "10.254.0.0/16",
"status": "pool_element_in_use"
}
],
"tags": []
},
{
"created_at": "2017-05-31T03:49:15.485164Z",
"display_name": "example_ip_pool",
"id": "d0312b4a-017e-4478-8b8d-df0417ce8d3b",
"last_modified_at": "2017-05-31T03:49:15.485164Z",
"status": "not_in_use",
"subnets": [
{
"network": "10.0.0.0/8",
"status": "pool_element_available"
},
{
"network": "192.168.0.0/16",
"status": "pool_element_available"
}
],
"tags": []
}
]
}
Delete IP Pool
curl
'https://192.168.25.250/api/resources/ip-pools/d0312b4a-017e-4478-8b8d-df0417ce8d3b'
-X DELETE -H 'AuthToken: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZS
I6ImFkbWluIiwiY3JlYXRlZF9hdCI6IjIwMTctMDUtMzFUMDA6MjI6MDcuNTIwMTgzWiIsInNl
c3Npb24iOiJjOTliOGVlOS05Y2NjLTRjZTAtYTY5NS0wODI3N2ZkYjA0ZDYifQ.FnJMR3crPoD0
-lQRXnpPOJ8TCsRG9Wr-DaddnAIj6ko' --compressed --insecure
Assign IP to Blueprint
For instance, to associate the resource pool spine_loopback_ips with a blueprint, first obtain the ID of the resource pool and include it in the pool_ids list. When updating the IP Pool resource group,
specify all pools in the payload at the same time; single pools cannot be added individually, so PUT the complete list.
The following instructs Apstra to associate the IP pool with ID 'ed7d8830-c703-4ac0-8252-77e0f272a677' with the blueprint. You may have to GET the existing pool IDs before adding a new one to avoid deleting existing
pools.
Payload:
curl
'https://192.168.25.250/api/blueprints/4c1e69c6-97bd-4c99-9504-7818f138b17f/resource_groups/ip/
spine_loopback_ips'
-X PUT -H 'AuthToken: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImF
kbWluIiwiY3JlYXRlZF9hdCI6IjIwMTctMDUtMzFUMDA6MjI6MDcuNTIwMTgzWiIsInNlc3Npb2
4iOiJjOTliOGVlOS05Y2NjLTRjZTAtYTY5NS0wODI3N2ZkYjA0ZDYifQ.FnJMR3crPoD0-lQRXnp
POJ8TCsRG9Wr-DaddnAIj6ko' --data-binary '{"pool_ids":["ed7d8830-c703-4ac0-825
2-77e0f272a677"]}' --compressed --insecure
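Because the PUT replaces the entire list, a client typically merges the new pool ID into the IDs it fetched from the resource group first. A sketch of that merge; the helper name is illustrative, and the IDs are taken from the example above:

```python
# Sketch: resource-group PUT replaces the whole pool list, so merge the
# currently assigned pool_ids with the one being added before sending.
def merged_pool_ids(current, new_id):
    """Append new_id unless it is already present, preserving order."""
    return current if new_id in current else current + [new_id]

current = ["56e8e0dc-babd-4652-92a5-fc37294a7b26"]      # from a prior GET
payload = {"pool_ids": merged_pool_ids(
    current, "ed7d8830-c703-4ac0-8252-77e0f272a677")}   # full list to PUT
```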
To remove IP pools from the blueprint, PUT an empty pool_ids list with the payload:
{ "pool_ids": [] }
CURL Example
curl
'https://192.168.25.250/api/blueprints/4c1e69c6-97bd-4c99-9504-7818f138b17f/resource_groups/ip/
spine_loopback_ips'
-X PUT -H 'AuthToken: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZ
SI6ImFkbWluIiwiY3JlYXRlZF9hdCI6IjIwMTctMDUtMzFUMDA6MjI6MDcuNTIwMTgzWiIsI
nNlc3Npb24iOiJjOTliOGVlOS05Y2NjLTRjZTAtYTY5NS0wODI3N2ZkYjA0ZDYifQ.FnJMR3cr
PoD0-lQRXnpPOJ8TCsRG9Wr-DaddnAIj6ko' --data-binary '{"pool_ids":[]}'
--compressed --insecure
curl
'https://192.168.25.250/api/blueprints/4c1e69c6-97bd-4c99-9504-7818f138b17f/resource_groups'
-H 'AuthToken: eyJhbGciOiJIUzI1NwMTctMDUtMzFUMDA6MjI6MDcuNTIwMTgzWiIsInNlc3
Npb24iOiJjOTliOGVlOS05Y2NjLTRjZTAtYTY5NS0wODI3N2ZkYjA0ZDYifQ.FnJMR3crPoD
0-lQRXnpPOJ8TCsRG9Wr-DaddnAIj6ko' --compressed --insecure | python -m json.tool
{
"items": [
{
"name": "leaf_asns",
"pool_ids": [
"c23ea447-8f37-419a-9b1c-c48cc55d5b9c"
],
"type": "asn"
},
{
"name": "spine_asns",
"pool_ids": [
"c23ea447-8f37-419a-9b1c-c48cc55d5b9c"
],
"type": "asn"
},
{
"name": "leaf_loopback_ips",
"pool_ids": [
"56e8e0dc-babd-4652-92a5-fc37294a7b26"
],
"type": "ip"
},
{
"name": "mlag_domain_svi_subnets",
"pool_ids": [
"ed7d8830-c703-4ac0-8252-77e0f272a677"
],
"type": "ip"
},
{
"name": "spine_leaf_link_ips",
"pool_ids": [
"ed7d8830-c703-4ac0-8252-77e0f272a677"
],
"type": "ip"
},
{
"name": "spine_loopback_ips",
"pool_ids": [
"56e8e0dc-babd-4652-92a5-fc37294a7b26"
],
"type": "ip"
}
]
}
Configlets (API)
IN THIS SECTION
For full API documentation, view the Platform API reference from the web interface. This section demonstrates the configlet API in a manner similar to the UI. The main difference between the web UI and the
REST API is that the Apstra API does not use the configlets stored under api/design/configlets when working with a blueprint. Design configlets are meant for consumption by the UI.
When working with configlets via the API, work directly with the blueprint.
{
"ref_archs": [
"two_stage_l3clos"
],
"created_at": "string",
"last_modified_at": "string",
"id": "string",
"generators": [
{
"config_style": "string",
"template_text": "string",
"negation_template_text": "string"
}
],
"display_name": "string",
"section": "string"
}
A POST creates a new configlet. A PUT overwrites an existing configlet; it requires the URL of
the configlet: https://aos-server/api/design/configlets/{id}
The response contains the ID of the newly created configlet: {"id": "995446c7-de7d-46bb-a88a-786839556064"}
Assigning a configlet to a blueprint requires specifying device conditions as well as embedding the
configlet details. When assigning a configlet to a blueprint, the configlets available as design resources
are not needed; these are used only for UI purposes.
The JSON syntax for PUT-ting a configlet to a blueprint is an items dictionary element
containing a list of configlet schemas.
{
"items": [
{
"template_params": [
"string"
],
"configlet": {
"generators": [
{
"config_style": "string",
"template_text": "string",
"negation_template_text": "string"
}
],
"section": "string",
"display_name": "string"
},
"condition": "string"
}
]
}
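As a sketch of the schema above, the payload can be assembled with a small helper. The helper name, generator text, and match condition below are illustrative placeholders, not values from a live system:

```python
# Sketch: build the "items" payload for assigning a configlet to a
# blueprint. All concrete values here are invented for illustration.
def blueprint_configlet_item(display_name, section, condition, generators,
                             template_params=None):
    return {
        "template_params": template_params or [],
        "configlet": {
            "generators": generators,
            "section": section,
            "display_name": display_name,
        },
        "condition": condition,
    }

payload = {"items": [
    blueprint_configlet_item(
        display_name="ntp_servers",
        section="system",
        condition='role in ["spine", "leaf"]',
        generators=[{"config_style": "eos",
                     "template_text": "ntp server 203.0.113.1",
                     "negation_template_text": "no ntp server 203.0.113.1"}],
    )
]}
```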
Response
To unassign a configlet, remove it from the items list and PUT the updated list; to remove all configlets, PUT an empty items list.
Once the configlet is deleted, the response is an empty JSON set: {"items": []}
IN THIS SECTION
For full API documentation, view the Platform API reference from the web interface. This section demonstrates the property sets API in a manner similar to the web interface.
{
"items": [
{
"label": "string",
"values": {
"additionalProp1": "string",
"additionalProp2": "string",
"additionalProp3": "string"
},
"id": "string"
}
]
}
A POST will create a new property set. A PUT will overwrite an existing property set. PUT requires the
URL of the property set. https://aos-server:8888/api/design/property-sets/{id}
The response will contain the ID of the newly created property-set {"id": "73223e81-a451-4e7f-91fb-
fb476f4b9fc8"}
Deleting a property set requires an HTTP DELETE to the property set by URL http://aos-
server:8888/api/design/property-sets/{id}
Assigning a property set to a blueprint requires an HTTP POST to the blueprint by URL http://aos-
server:8888/api/blueprints/{blueprint_ID}/property-sets
{
"id": "73223e81-a451-4e7f-91fb-fb476f4b9fc8"
}
The response will contain the ID of the assigned property-sets {"id": "73223e81-a451-4e7f-91fb-
fb476f4b9fc8"}
curl "http://aos-server:8888/api/blueprints/e4068e99-813c-4290-b7cc-e145d85a98a8/property-sets/
73223e81-a451-4e7f-91fb-fb476f4b9fc8" -X DELETE -H "AuthToken: EXAMPLE"
Response
{"id": "73223e81-a451-4e7f-91fb-fb476f4b9fc8"}
Deleting a property set requires an HTTP DELETE to the blueprint property set by URL http://aos-
server:8888/api/blueprints/{blueprint_ID}/property-sets{id}
curl "http://aos-server:8888/api/blueprints/e4068e99-813c-4290-b7cc-e145d85a98a8/property-sets/
73223e81-a451-4e7f-91fb-fb476f4b9fc8" -X DELETE -H "AuthToken: EXAMPLE"
IN THIS SECTION
Besides the main parameters of network interfaces, such as name, speed, and port mode, Apstra also configures a
description for physical interfaces and aggregated logical interfaces (so-called port channels). The interface
description is automatically generated if the following conditions are met:
3. The peer interface belongs to a leaf, spine, or generic system with a virtual network endpoint on this
server.
• facing_spine2:Ethernet1/2
• to.server1:eth0
• to.server2
The prefix of the name is facing_ if the peer is a leaf, spine, or external router. The prefix is to. if the peer
device is an L2 or L3 server. The peer interface name part is present only when the peer device is
controlled by Apstra.
The Apstra API can change the auto-generated interface description; the Apstra UI has no such
functionality.
The interface description may contain ASCII characters with codes 33-126 and spaces, except "?", which
is interpreted as command completion. The description length is limited to 240 characters, which is
the longest possible length across supported switch models.
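A minimal client-side check for those constraints can be sketched as follows; the helper name is hypothetical and not part of the Apstra API:

```python
# Sketch: check a candidate interface description against the documented
# constraints: printable ASCII 33-126 plus spaces, no "?" (command
# completion on switch CLIs), and at most 240 characters.
def valid_interface_description(desc):
    if len(desc) > 240:
        return False
    return all(c == " " or (33 <= ord(c) <= 126 and c != "?")
               for c in desc)
```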
Interfaces are stored internally as graph nodes with a certain set of properties, one of which is the
description. To modify the description, use the generic API for interacting with graph nodes.
Request:
{
"description": "facing_dkl-2-leaf:Ethernet1/2",
"mlag_id": null,
"tags": null,
"if_name": "swp2",
"label": null,
"port_channel_id": null,
"ipv4_addr": "203.0.113.10/31",
"mode": null,
"if_type": "ip",
"type": "interface",
"id": "interface-id-1",
"protocols": "ebgp"
}
Response:
{
"description": "New description I want!",
"mlag_id": null,
"tags": null,
"if_name": null,
"label": null,
"port_channel_id": null,
"ipv4_addr": null,
"mode": null,
"if_type": "ip",
"type": "interface",
"id": "interface-id-1",
"protocols": "ebgp"
}
To delete a custom interface description and return to automatic description generation, set the
description to an empty value.
Request:
Response:
{
"description": "",
"mlag_id": null,
"tags": null,
"if_name": null,
"label": null,
"port_channel_id": null,
"ipv4_addr": null,
"mode": null,
"if_type": "ip",
"type": "interface",
"id": "interface-id-1",
"protocols": "ebgp"
}
A subsequent GET request shows that the description was automatically regenerated.
Request:
Response:
{
"description": "facing_dkl-2-leaf:Ethernet1/2",
"mlag_id": null,
"tags": null,
"if_name": "swp2",
"label": null,
"port_channel_id": null,
"ipv4_addr": "203.0.113.10/31",
"mode": null,
"if_type": "ip",
"type": "interface",
"id": "interface-id-1",
"protocols": "ebgp"
}
Probes (API)
IN THIS SECTION
The information below describes as much of the API as necessary to understand how to use IBA, for
someone already familiar with Apstra API conventions. Formal API documentation is reserved for the API
documentation itself.
We will walk through the API as it is used for the example workflow described in the introduction,
demonstrating its general capability by specific example.
Create Probe
To create a probe, the operator POSTs to /api/blueprints/<blueprint_id>/probes with the following form:
{
"label": "server_tx_bytes",
"description": "Server traffic imbalance",
"tags": ["server", "imbalance"],
"disabled": false,
"processors": [
{
"name": "server_tx_bytes",
"outputs": {
"out": "server_tx_bytes_output"
},
"properties": {
"counter_type": "tx_bytes",
"graph_query": "node('system',
name='sys').out('hosted_interfaces').node('interface', name='intf').out('link').node('link',
link_type='ethernet', speed=not_none()).in_('link').node('interface',
name='dst_intf').in_('hosted_interfaces').node('system', name='dst_node',
role='server').ensure_different('intf', 'dst_intf')",
"interface": "intf.if_name",
"system_id": "sys.system_id"
},
"type": "if_counter"
},
{
"inputs": {
"in": "server_tx_bytes_output"
},
"name": "std",
"outputs": {
"out": "std_dev_output"
},
"properties": {
"ddof": 0,
"group_by": []
},
"type": "std_dev"
},
{
"inputs": {
"in": "std_dev_output"
},
"name": "server_imbalance",
"outputs": {
"out": "std_dev_output_in_range"
},
"properties": {
"range": {
"max": 100
}
},
"type": "range_check"
},
{
"inputs": {
"in": "std_dev_output_in_range"
},
"name": "server_imbalance_anomaly",
"outputs": {
"out": "server_traffic_imbalanced"
},
"type": "anomaly"
}
],
"stages": [
{
"name": "server_tx_bytes_output",
"description": "Collect server tx_bytes",
"tags": ["traffic counter"],
"units": "Bps"
}
]
}
As seen above, the endpoint is given an input of probe metadata, a processor instance list, and output
stage list.
disabled: an optional boolean that tells whether the probe should be disabled. Disabled probes don't
provide any data and don't consume any resources. By default, the probe is not disabled.
Each processor instance contains an instance name (defined by user), processor type (a selection from a
catalog defined by the platform and the reference design), and inputs and/or outputs. All additional fields
in each processor are specific to that type of processor, are specified in the properties sub-field, and can
be learned by introspection via our introspection API at /api/blueprints/<blueprint_id>/telemetry/processors;
we will go over this API later.
Matching our working example, we will go through each entry we have in the processor list in the above
example.
In the first entry, we have a processor instance of type if_counter that we name server_tx_bytes. It takes as
input a query called graph_query which is a graph query. It then has two other fields named interface and
system_id. These three fields together indicate that we want to collect a (first time-derivative of) counter
for every server-facing port in the system. For every match of the query specified by graph_query, we
extract a system_id by taking the system_id field of the sys node in the resulting path (as specified in the
system_id processor field) and an interface name by taking the if_name field of the intf node in the resulting
path (as specified in the interface processor field). The combination of system ID and interface is used to
identify an interface in the network, and its tx_bytes counter (as specified by counter_type) is put into the
output of this processor. The output of this processor is of type "Number Set" (NS); stage types are
discussed exhaustively later. This processor has no inputs, so we do not supply an input field. It has one
output, labeled out (as defined by the if_counter processor type); we map that output to a stage labeled
server_tx_bytes_output.
The second processor is of type std_dev and takes as input the stage we created before called
server_tx_bytes_output; see the processor-specific documentation for the meaning of the ddof field. Also,
see the processor-specific documentation for the full meaning of the group_by field. It will suffice to say
for now that in this case group_by tells us to construct a single output "Number" (N) from the input NS;
that is, this processor outputs a single number: the standard deviation taken across the many
input numbers. This output is named "std_dev_output".
The third processor is of type range_check and takes as input std_dev_output. It checks whether the input is out
of the expected range specified by range: in this case, whether the input is ever greater than 100 (we have
chosen this arbitrary value to indicate when the server-directed traffic is unbalanced). This processor has
a single output we choose to label std_dev_output_in_range. This output (as defined by the range_check
processor type) is of type DS (Discrete State) and can take values either true or false, indicating whether
or not a value is out of the range.
Our final processor is of type anomaly and takes as input std_dev_output_in_range. It raises an Apstra anomaly
when the input is in the true state. This processor has a single output we choose to label
server_traffic_imbalanced. This output (as defined by the anomaly processor type) is of type DS (Discrete
State) and can take values either true or false, indicating whether or not an anomaly is raised. We do not
do any further processing with this anomalous state data in this example, but that does not preclude its
general possibility.
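The four-processor pipeline above can be simulated on static numbers to show the data flow. The traffic values below are invented; a real probe runs on live telemetry:

```python
# Sketch: simulate the example probe pipeline on made-up per-server
# tx_bytes rates (bytes per second).
import statistics

tx_bytes = {("server1", "eth0"): 400.0,
            ("server2", "eth0"): 150.0,
            ("server3", "eth0"): 120.0}

# std_dev processor with ddof=0 and group_by=[]: one number for the set.
std = statistics.pstdev(tx_bytes.values())

# range_check: output is "true" when the value exceeds the range (max=100).
out_of_range = std > 100

# anomaly: raised while the range_check output is in the "true" state.
anomaly_raised = out_of_range
```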
Finally, we have a stages field. This is a list of a subset of output stages, with each stage indicated by the
name field which refers to the stage label. This list is meant to add metadata to each output stage that
cannot be inferred from the DAG itself. Currently, supported fields are:
units: a string that describes the units of the stage data.
This stage metadata is returned when fetching data from that stage via the REST API and used by the
GUI in visualization.
The endpoint returns a UUID, as do most of the other creation endpoints in Apstra, which can be used for further
operations.
Changed in version 2.3: To get a predictable probe ID instead of the UUID described above, specify it
by adding an "id" property to the request body.
{
"id": "my_tx_bytes_probe",
"label": "server_tx_bytes",
"processors": [],
"rest_of_the": "request_body"
}
Changed in version 2.3: Previously, stage definitions were inlined into processor definitions like this:
{
"label": "test probe",
"processors": [
{
"name": "testproc",
"outputs": {"out": "test_stage"},
"stages": [{"name": "out", "units": "pps"}]
}
]
}
This no longer works, and stage name should refer to the stage label instead of the internal stage name.
So the example above should look this way:
{
"stages": [{"name": "test_stage", "units": "pps"}]
}
Note: it is recommended not to inline stage definitions into processor definitions; instead, place
them as stand-alone elements as in the POST example above.
id: the ID of the probe (or a UUID if one was not specified at creation time).
state: the actual state of the probe; possible values are "created" for a probe being configured,
"operational" for a successfully configured probe, and "error" if probe configuration has
failed.
last_error: a detailed error description for the most recent error, for probes in the "error" state. It
has the following sub-fields:
The complete list of probe messages can be obtained by issuing an HTTP GET request to /api/blueprints/
<blueprint_id>/probes/<probe_id>/messages.
Additionally, an HTTP GET can be sent to /api/blueprints/<blueprint_id>/probes to retrieve all the probes for
blueprint <blueprint_id>.
HTTP PATCH and PUT methods for probes are available since Apstra version 2.3.
{
"label": "new server_tx_bytes",
"description": "some better probe description",
"tags": ["production"],
"stages": [
{
"name": "server_tx_bytes",
"description": "updated stage description",
"tags": ["server traffic"],
"units": "bps"
}
]
}
This example updates probe metadata for the probe that was created with the POST request listed
above. All fields here are optional, values that were not specified remain unchanged.
Every stage instance is also optional, that is, only specified stages will be updated, and not specified
stages remain unchanged.
The tags collection is replaced entirely; that is, if it was tags: ["a", "b"] and the PATCH payload specifies tags:
["c"], then the resulting collection will be tags: ["c"] (not tags: ["a", "b", "c"]).
With PATCH it is not possible to change a probe's set of processors and stages. See the PUT
description below, which allows that.
This is very similar to POST, with the difference being that it replaces the old configuration for probe
<probe_id> with the new one specified in the payload. Payload format for this request is the same as for
POST, but id is not allowed.
Inspect Probe
Stages are implicitly created by being named in the input and output of various processors. You can
inspect the various stages of a probe. The API for reading a particular stage is /api/blueprints/
<blueprint_id>/probes/<probe_id>/stages/<stage_name>
NOTE: Each stage has a type. This is a function of the generating processor and the input stage(s)
to that processor. The types are: Number (N); Number Time Series (NTS), Number Set (NS);
Number Set Time Series (NSTS); Text (T); Text Time Series (TTS); Text Set (TS); Text Set Time
Series (TSTS); Discrete State (DS); Discrete State Time Series (DSTS); Discrete State Set (DSS);
Discrete State Set Time Series (DSSTS)
A NS is exactly that: a set of numbers.
Similarly, a DSS is a set of discrete-state variables. Part of the specification of a DSS (and DSSTS)
stage is the possible values the discrete-state variable can take.
A NSTS is a set of time-series with numbers as values. For example, a member of this set would
be: (time=0 seconds, value=3), (time=3 seconds, value=5), (time=6 seconds, value=23), and so-on.
Number (N), Discrete-State (DS), and Text (T) are simply Number Sets, Discrete State Sets, and
Text Sets guaranteed to be of length one.
NTS, DSTS, and TTS are the same as above, but are time series instead of single values.
Let's consider the first stage - "server_tx_bytes". This stage contains the tx_bytes counter for every
server-facing port in the system. We can get it from the url /api/blueprints/<blueprint_id>/probes/<probe_id>/
stages/server_tx_bytes_output
{
"properties": [
"interface",
"system_id"
],
"type": "ns",
"units": "bytes_per_second",
"values": [
{
"properties": {
"interface": "intf1",
"system_id": "spine1"
},
"value": 22
},
{
"properties": {
"interface": "intf2",
"system_id": "spine1"
},
"value": 23
},
{
"properties": {
"interface": "intf1",
"system_id": "spine3"
},
"value": 24
}
]
}
As we know from our running example, the "server_tx_bytes" stage contains the tx_bytes value for every
server-facing interface in the network. Looking at the above example, we can see that this stage is of
type "ns", indicating NS or Number-Set. As mentioned before, data in stages is associated with context.
This means that every element in the set of a stage is associated with a group of key-value pairs. Per
every stage, the keys are the same for every piece of data (or, equivalently, item in the set). These keys
are listed in the "properties" field of a given stage, and are generally a function of the generating
processor. Each of the items in "values" assigns a value to each of the properties of the stage and
provides a value (the "Number" in the "Number Set"). The meaning of this data in this stage is that
tx_bytes on intf1 of spine1 is 22, on intf2 of spine1 is 23, and on intf1 of spine3 is 24 bytes per second.
Notice that "units" is set for this stage as specified in the running example.
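A stage response of this shape can be indexed by its property values on the client side. A sketch using the example data above:

```python
# Sketch: index a Number-Set stage response by its property values.
# The dict mirrors the example stage response shown above.
stage = {
    "properties": ["interface", "system_id"],
    "type": "ns",
    "values": [
        {"properties": {"interface": "intf1", "system_id": "spine1"}, "value": 22},
        {"properties": {"interface": "intf2", "system_id": "spine1"}, "value": 23},
        {"properties": {"interface": "intf1", "system_id": "spine3"}, "value": 24},
    ],
}

# Key each item by its property tuple, in the order "properties" declares.
by_key = {
    tuple(item["properties"][p] for p in stage["properties"]): item["value"]
    for item in stage["values"]
}
```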
To query the second stage in our probe, send an HTTP GET to the std endpoint /api/blueprints/
<blueprint_id>/probes/<probe_id>/stages/std_dev_output.
{
"type": "n",
"units": "",
"value": 1
}
This stage is a number. It has no context, only a single value. In our example, this is the standard
deviation across all spines.
The penultimate stage in our probe can be queried at the endpoint /api/blueprints/<blueprint_id>/probes/
<probe_id>/stages/server_traffic_imbalanced.
{
"possible_values": [
"true",
"false"
],
"type": "ds",
"units": "",
"value": false
}
As shown, this stage indicates whether server traffic is imbalanced ("true") or not ("false") by indicating
whether the standard deviation of tx_bytes across all server-facing ports is greater than 100. Note that the
"possible_values" field describes all values that the discrete-state "value" can take.
The final stage of our example probe raises an Apstra anomaly (and sets its output to "true") when
the standard deviation of tx_bytes across server-facing interfaces is greater than 100.
You can query probe anomalies via the standard anomaly API at /api/blueprints/<blueprint_id>/anomalies?
type=probe.
Following is the JSON form of an anomaly that would be raised by our example probe (with ellipses for
data we don't care about for this example):
{
"actual": {
"value_int": 101
},
"anomaly_type": "probe",
"expected": {
"value_int": 100
},
"id": "...",
"identity": {
"anomaly_type": "probe",
"probe_id": "efb2bf7f-d8cc-4a55-8e9b-9381e4dba61f",
"properties": {},
"stage_id": "server_traffic_imbalanced"
},
"last_modified_at": "...",
"severity": "critical"
}
As seen in the above example, the identity contains the probe_id and the name of the stage on which
the anomaly was raised, which requires further inspection by the operator. If
the type of the stage were a set-based type, the "properties" field of the anomaly would be filled with
the properties of the specific item in the set that caused the anomaly. This brings up the important point
that multiple anomalies can be raised on a single stage, as long as each is on a different item in the set.
In our example, since the stage in question is of type DS, not a set-based type, the "properties" field is not set.
Introspect Processors
The set of processors available to the operator is a function of the platform and the reference design.
Apstra provides an API for the operator to list all available processors, learn what parameters they take,
and learn what inputs they require and outputs they yield.
It yields a list of processor descriptions. In the following example, we show the description for the
std_dev processor.
{
"description": "Standard Deviation Processor.\n\n Groups as described by group_by, then
calculates std deviation and\n outputs one standard deviation for each group. Output is NS.\n
Input is an NS or NSTS.\n ",
"inputs": {
"in": {
"required": true,
"types": [
{
"keys": [],
"possible_values": null,
"type": "ns"
},
{
"keys": [],
"possible_values": null,
"type": "nsts"
}
]
}
},
"outputs": {
"out": {
"required": true,
"types": [
{
"keys": [],
"possible_values": null,
"type": "ns"
}
]
}
},
"label": "Standard Deviation",
"name": "std_dev",
"schema": {
"additionalProperties": false,
"properties": {
"ddof": {
"default": 0,
"description": "Standard deviation correction value, is used to correct divisor
(N - ddof) in calculations, e.g. ddof=0 - uncorrected sample standard deviation, ddof=1 -
corrected sample standard deviation.",
"title": "ddof",
"type": "integer"
},
"enable_streaming": {
"default": false,
"type": "boolean"
},
"group_by": {
"default": [
"system_id"
],
"items": {
"type": "string"
},
"type": "array"
}
},
"type": "object"
}
}
As seen above, there is a string-based description and the name of the processor type (as supplied to the
REST API in probe configuration). The set of parameters specific to a given processor is described in the
"schema".
Special notice must be paid to "inputs" and "outputs", which are present on every type of processor. Each processor can take zero or more input stages
and must output one or more stages. Optional stages have "required" set to false. The names of the
stages they take (relative to a particular instance of a processor) are described in these variables. We can
see that the "std_dev" processor takes a single input named "in" and a single output named "out". This is
reflected in our usage of it in the previous example.
Some processor types declare a wildcard input, as in the following definition:
"inputs": {
"*": {
"required": true,
"types": [
{
"keys": [],
"possible_values": null,
"type": "ns"
},
{
"keys": [],
"possible_values": [],
"type": "dss"
},
{
"keys": [],
"possible_values": null,
"type": "ts"
}
]
}
}
This means the processor accepts one or more inputs of the specified types, with arbitrary names.
Changed in 3.0: Previously, the inputs and outputs sections didn't specify whether specific inputs or outputs
were required, so the format was changed from the following:
"inputs": {
"in": [
{
"data_type": "ns",
"keys": [
"system_id"
],
"value_map": null,
"value_type": "int64"
}
...
]
}
Stream Data
Any processor instance in any probe can be configured to have its output stages streamed in the
"perfmon" channel of Apstra streaming output. If the property "enable_streaming" is set to "true" in the
configuration for any processor, its output stages will have all their data streamed.
Non-Time-Series-based stages generate a message whenever their value changes. Time-Series-based stages generate a message whenever a new entry is made into the time
series. For Set-based stages, each item in the set generates a message according to the two prior
rules.
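For a non-time-series item, the emit rule amounts to change detection. A purely illustrative sketch of when a message would be generated for one item's successive values:

```python
# Sketch: emit-on-change rule for one non-time-series stage item. A
# message goes out only when the observed value differs from the last
# emitted one; the first value always emits.
def messages_for(values):
    out, last = [], object()  # unique sentinel: first value never matches
    for v in values:
        if v != last:
            out.append(v)
            last = v
    return out
```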
Each message that is generated has a value, a timestamp, and a set of key-value pairs. The value is self-
explanatory. The timestamp is the time at which the value changed for Non Time-series-based stages
and the timestamp of the new entry for Time-series based stages. The key-value pairs correspond to the
"properties" field we observed earlier in the "values" section of stages, thus providing context.
Below is the format for messages from IBA, which are encapsulated in a PerfMon message (and that,
in turn, in an AosMessage). The key-value pairs of context are put into the "property" repeated field (with
"name" as the key and "value" as the value), while the value is put into the "value" field. "probe_id" and
"stage_name" are as they appear. The blueprint_id is put into the "origin_name" of the encapsulating
AosMessage. Similarly, the timestamp is put into the generic "timestamp" field.
message ProbeProperty {
required string name = 5;
required string value = 6;
}
message ProbeMessage {
repeated ProbeProperty property = 1;
oneof value {
int64 int64_value = 2;
float float_value = 3;
string string_value = 4;
}
required string probe_id = 5;
}
IN THIS SECTION
You can access complete Apstra API documentation from the web interface in the Platform >
Developers section.
• Root Cause Identification instances are enabled (created) and disabled (deleted) via a CRUD API for the Root
Cause Identification sub-resource under the blueprint.
• The instances that can be created depend on the reference design of the blueprint. In this first
phase of Root Cause Identification, only two_stage_l3clos has Root Cause Identification support, and
it currently allows only one Root Cause Identification instance per blueprint.
POST /api/blueprints/<blueprint_id>/arca
Request Payload schema
{
"model_name": s.String() # Name of ARCA instance's system fault model (ref
design specific)
"trigger_period": s.Float(min=10.0) # ARCA instance runs every <trigger_period>
seconds.
}
{
"model_name": "default",
"trigger_period": 10.0
}
Return values:
201 - Successfully created the RCI instance. Response payload:
{"error": <message>}
Using the PUT API, you can tweak the execution frequency of the Root Cause Identification instance.
PUT /api/blueprints/<blueprint_id>/arca/<arca_id>
Request Payload schema
{
"trigger_period": s.Float(min=10.0)
}
Return values:
200 - Update succeeded.
404 - ARCA instance not found.
422 - Validation error. Response payload:
{"error": <message>}
Using the GET API, you can obtain the current status (set of root causes) of the Root Cause
Identification instance.
GET /api/blueprints/<blueprint_id>/arca/<arca_id>
Return values:
200 - see response schema below
404 - ARCA instance not found
{
"id": String, # Unique ID for the root cause in the ARCA instance
"context": String, # Encoded context such as references to graph nodes
"description": String, # Human-readable text, e.g. "link <blah> broken"
"timestamp": Timestamp, # of when RC is detected (ISO8601 format)
"symptoms": List(SYMPTOM_OBJ), # List of symptoms; always non-empty
}
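As a sketch of consuming a response shaped like the schema above, the following summarizes each root cause. The helper and the sample data are invented for illustration; only the field names come from the schema.

```python
# Hypothetical sketch: reduce root cause objects (shaped like the schema
# above) to (id, description, symptom count) tuples for quick review.
def summarize_root_causes(root_causes):
    return [(rc["id"], rc["description"], len(rc["symptoms"])) for rc in root_causes]

sample = [{
    "id": "link-123/broken",
    "context": "...",
    "description": "link between spine1 and leaf1 broken",
    "timestamp": "2023-09-26T00:00:00Z",
    "symptoms": [{"id": "intf-1/down", "context": "...",
                  "description": "interface swp1 on leaf1 is down"}],
}]
```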
Notes on root cause detection and IDs: A root cause may be detected multiple times over the blueprint's
lifetime. For instance, a root cause is defined for a broken cable between spine1 and leaf1. This root cause
can appear at any time, and it may disappear once the problem is fixed. A root cause has a unique ID
scoped to the ARCA instance. This means that the ID may appear and disappear according to
whether the problem occurs or gets fixed, e.g. the cable gets broken or reconnected. What to expect as a
root cause ID: in two_stage_l3clos, the root cause ID is a composition of graph node and relationship
IDs and an immutable but readable name for the root cause. Example: <graph link node id>/broken.
{
"id": String, # Unique ID for the symptom in the ARCA instance
"context": String, # Encoded context such as system ID, service name
"description": String, # Readable, e.g. "interface swp1 on leaf1 is down"
}
Given the same ARCA system fault model, the set of symptom IDs is always the same for a given root
cause. However, the context may differ. For instance, the symptom “interface swp1 on leaf1 is
down” is the same, while the context of different instances of this symptom may have different system IDs
depending on which system ID is assigned to leaf1 when the root cause for this symptom is detected.
Example symptom ID: <graph interface node id>/down
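The ID composition described above (a graph node ID plus a readable name, joined by "/") can be taken apart with a one-liner. This hypothetical helper is only an illustration of the format, not an Apstra utility.

```python
# Hypothetical sketch: split a root cause or symptom ID of the form
# "<graph node id>/<readable name>" into its two parts.
def split_arca_id(arca_id):
    node_id, _, name = arca_id.rpartition("/")
    return node_id, name
```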
GET /api/blueprints/<blueprint_id>/arca
Return values
200 - see response schema below
404 - blueprint not found or blueprint not deployed
Response schema:
{
"items": List(ARCA_INSTANCE_DIGEST), # list may be empty
}
ARCA_INSTANCE_DIGEST has the same schema as the response payload of GET individual ARCA
instance, except that it does not contain the “root_causes” key.
In this phase, for two_stage_l3clos blueprints, there is at most 1 element in the list, because only 1
ARCA instance is allowed per blueprint.
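The relationship between the two schemas described above (a digest is the full instance payload minus the "root_causes" key) can be sketched as a hypothetical helper:

```python
# Hypothetical sketch: derive an ARCA_INSTANCE_DIGEST from a full ARCA
# instance payload by dropping the "root_causes" key.
def arca_digest(instance):
    return {k: v for k, v in instance.items() if k != "root_causes"}
```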
NOTE: You can also check the health of Apstra VMs from the Apstra GUI.
From the left navigation menu of the Apstra GUI, navigate to Platform > Developers to access REST API
documentation. From there you can access cluster APIs.
A healthy node reports an "active" state with an empty errors list:
{
  "state": "active",
  "errors": []
}
A node with detected issues lists error signatures in the "errors" field:
{
  "state": "active",
  "errors": [
    "agentReboot"
  ]
}
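A small sketch of evaluating a health response shaped like the examples above follows. The helper name and the healthiness rule (state "active" and no errors) are assumptions for illustration, not an Apstra API.

```python
# Hypothetical sketch: a node is treated as healthy when its state is
# "active" and its "errors" list is empty.
def is_healthy(status):
    return status.get("state") == "active" and not status.get("errors")
```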
IN THIS SECTION
API - Blueprints
# Assumes 'response' holds the result of an authenticated GET request to the /api/blueprints endpoint
blueprint_id = response.json()['items'][0]['id']
blueprint_name = response.json()['items'][0]['label']
print(blueprint_name, blueprint_id)
Run Python
The preceding Python3 code can be run on the Cloudlabs AOS Server. Use the python3.6 command to
run the Python script.
From the left navigation menu, navigate to Platform > Developers > REST API Explorer to see the screen
as shown below. The left column contains a list of API categories from which you can browse. You can
also search for a specific endpoint by entering a query in the Quick Search field. The details view of an
endpoint includes information about the URL, method, summary, parameters and responses. The
example below shows the model for checking provider settings by logging in with a username and password.
Technical Support
If you require assistance with registration or with opening a technical support case via phone, call
Juniper Customer Care at +1-888-314-5822 (toll free, US & Canada). If you are outside the US or
Canada, call +1-408-745-9500 or a country number listed on the Contact Support page.
To aid the support process, we ask that you provide Juniper Support with diagnostic information from
the Apstra environment. Separate show tech files are needed from the Apstra controller and from each
of the affected device agents. You can obtain show tech files from the GUI (recommended) or the CLI,
as described in the next sections. You may also be asked for a "backup" on page 913 of your Apstra
database.
NOTE: For Apstra server controllers with large databases, the operation may timeout. If this
happens, you must "collect show tech using the CLI" on page 898.
3. You can collect show_tech from the Apstra GUI that includes a copy of the backup. If Juniper
Support requests a backup, check the Include Backup check box. This backup provides information
for Support and Engineering. It doesn't include credentials, so it's not suitable for restoring your
production environment. (Use backups from the "Back Up Apstra Database" on page 913 procedure
instead.)
4. Check the box for Managed Devices to see the list of managed devices (devices with agents that
have been acknowledged).
5. Select the devices that need show tech collected.
NOTE: When device show tech is collected, the configured device system agent username
and password authentication are used. If you've configured the device to use a different
authentication (AAA) method with a different username and password (such as RADIUS and
TACACS) you can't collect show tech from the Apstra GUI. You must "collect show tech with
CLI" on page 900.
TIP: If the image below appears, you still need to configure local credentials on the node.
Click the link to go to the controller node screen, click the Edit button (right side), then enter
the username and password you use for the VM console or SSH.
7. After the jobs are complete and marked SUCCESS, click the download button for each of the files
(under Logs).
TIP: After the files have been downloaded, you can free up disk space by deleting jobs.
8. From a computer with the ability to upload, upload the show tech files to your customer case.
NOTE: If your offbox agents are for infra, you'll collect show tech with a different method. Refer
to "Show-Tech: Infra Offbox Agents (CLI)" on page 898 for details.
1. SSH into the Apstra server that the offbox agent is running on. (ssh admin@<apstra-server-ip> where
<apstra-server-ip> is the IP address of the Apstra server.)
2. To copy the show tech file(s) to your user directory, run the aos_offbox_show_tech_collector command
with the following arguments:
• --ips <ip address of one or more devices> (for example: 11.29.53.7 11.29.53.8 11.29.53.9)
admin@aos-server:~$ ls -l
total 217440
-rw-r--r-- 1 root root 75958 Nov 15 22:27 11.29.53.7-5254009E6B20-junos-show-tech.tar.gz
-rw-r--r-- 1 root root 76180 Nov 15 22:26 11.29.53.8-52540039A6F3-junos-show-tech.tar.gz
-rw-r--r-- 1 root root 107620 Nov 15 22:25 11.29.53.9-5254001A5CEB-junos-show-tech.tar.gz
-rw------- 1 root root 8737 Nov 15 22:27 aos_di_11.29.53.7_show_tech_run.log
admin@aos-server:~$
3. Copy the show tech file(s) to a local computer with the ability to upload.
4. Upload the show tech file to your customer case.
4. Run the docker cp command to copy the show tech file from the offbox agent Docker
container to the /tmp directory of the Apstra server. For example:
5. Locate the file archive in the /tmp directory and copy it to a local computer with the ability to upload.
Then upload the show tech file to your customer case.
2. Run the sudo aos_show_tech command to generate and copy the show tech file to the current working
directory of the Apstra server. For example:
3. Locate the file archive in the /tmp directory (for example, aos_show_tech_20200401_033431.tar.gz), and via
SCP, copy the file to a local computer with the ability to upload.
5. Locate the file archive in the /tmp directory (for example, aos_show_tech_20200401_034527.tar.gz) and copy
it, via SCP, to a local computer with the ability to upload.
6. Upload the show tech file to your customer case.
From the Apstra GUI, from the left navigation menu, navigate to Platform > About to see the Juniper
Apstra versions. This page also includes the U.S. patent numbers that apply to the Juniper Apstra
product.
You can return quickly to frequently visited pages by saving them as favorites. From your user profile
page, you can manage favorites; change your password, username, and email; and log out of the Apstra
software.
Manage Favorites
• To add a favorite - click the star in the upper-left corner of the page that you want to save. Leave the
default name or rename it, then click Add. The outlined star becomes a shaded star to indicate that
the page is saved as a favorite.
• To remove a favorite - click the shaded star on the saved page. The star becomes an outline.
• To go to your list of favorites from anywhere in the Apstra GUI, click Favorites in the left navigation
menu.
• To go to a favorite page from the Favorites menu - click its name. Up to five saved pages appear in
the drop-down list.
• To go to your list of favorites from the Favorites menu - click Show more to go to your profile
page, where you can link to all favorite pages and change their names.
• To go to your profile page to see all your favorites, click your user name in the left navigation menu
(bottom), then click Profile.
• To change the name of a link from your profile page - click the Edit label button, change the name,
then click Update.
• To remove a favorite page from your profile page - click the Remove button (trash can) and click
Delete.
1. From any page, click your username in the left navigation menu (bottom) and click Profile to see your
profile page.
2. Click the Change Password button (top-right), enter your current password, then enter your new
password that meets password complexity requirements, twice.
3. Click Change Password to update your password and return to your profile.
1. From any page, click your username in the left navigation menu (bottom) and click Profile to go to
your profile page.
2. Click the Edit button (top-right), then change your name and/or email, as applicable.
3. Click Save to update your details and return to your profile.
Log Out
From any page, click your username in the left navigation menu (bottom) and click Log Out. Your viewing
preferences (visible fields, show links) are saved so when you log in again, you'll have the same
customized views.
The information in this section is about managing the Apstra server. For information about installing and
upgrading the Apstra server, see the Juniper Apstra Installation and Upgrade Guide.
As of Apstra version 4.2.0, the Apstra server base OS uses Ubuntu 22.04 LTS to pick up the latest Linux
OS improvements. Previous Apstra versions use Ubuntu 18.04 LTS.
As of Apstra version 4.2.0, the Apstra server backend has been completely migrated from Python 2 to
Python 3. Python 2 has been fully deprecated to allow long-term support and security compliance.
1. To check general status from the Apstra server CLI, run the command sudo service aos status.
Docs: man:systemd-sysv-generator(8)
Tasks: 0 (limit: 4915)
CGroup: /aos.service
Jul 28 00:35:35 aos-server systemd[1]: Starting LSB: Start AOS management system...
Jul 28 00:35:36 aos-server aos[1040]: net.core.wmem_max = 33554432
Jul 28 00:35:37 aos-server aos[1040]: Creating aos_sysdb_1 ...
Jul 28 00:35:37 aos-server aos[1040]: Creating aos_nginx_1 ...
Jul 28 00:35:37 aos-server aos[1040]: Creating aos_auth_1 ...
Jul 28 00:35:37 aos-server aos[1040]: Creating aos_controller_1 ...
Jul 28 00:35:37 aos-server aos[1040]: Creating aos_metadb_1 ...
Jul 28 00:35:38 aos-server aos[1040]: [240B blob data]
Jul 28 00:35:38 aos-server systemd[1]: Started LSB: Start AOS management system.
admin@aos-server:~$
2. To troubleshoot, run the aos_controller_health_check script. It searches for known error signatures in the
Apstra server logs (such as agent crashes) and returns the output. If no errors are found, no output is
returned. See below for a sample command.
To restart the Apstra server you can reboot the VM or run the following commands.
1. Run the command sudo service aos stop.
When the Apstra server is down, device agents may temporarily log "liveness" telemetry alarms.
2. Run the command sudo service aos start.
After services are restored (in a minute or two) the "liveness" telemetry alarm resets.
If you lose your admin password for the Apstra server VM, and you still have console access to the
Apstra server VM, you can reset your password.
1. Attach to the Apstra server console and send a "reset" signal to the VM. To access the GRUB menu,
immediately press the esc or shift key in the console on reboot.
2. Select Advanced options for Ubuntu.
4. At the next GRUB menu, select the first (recovery mode) option.
5. From the Recovery Menu, select root, then press Enter to enter a root shell prompt.
After reboot, you can log in to the Apstra server VM Linux CLI as user admin with the new password.
CAUTION: Reinstalling the Apstra server removes ALL Apstra data from the Apstra
server VM and reinstalls a fresh version. Use with care. This is mostly helpful for proof
of concepts or demo installs. If you have problems that require you to reinstall the
software, contact "Juniper Technical Support" on page 893.
1. If you want to retain the Apstra database, "back it up" on page 913 now.
2. Download the "Installer" .run file from Juniper Support Downloads.
3. Run the command service aos stop to stop Apstra service, if possible.
You can now "restore" on page 914 a database backup or build a new blueprint.
The Apstra server and related databases run in Docker containers. The database is stored in a single
folder in the Apstra server at /var/lib/aos/db. You can copy the database between Apstra servers.
Source and Target database versions must be the same version. If versions are different, contact "Juniper
Technical Support" on page 893 for assistance before proceeding.
To ensure that device agents can 'call home' properly after database restoration, the Source and Target must
have the same IP address when starting the Apstra server. You can restore the software to a different IP
address, but then you must reconfigure each device agent (/mnt/flash/aos-config, /etc/aos/aos.conf) to point
to the new Apstra server IP address.
CAUTION: Any changes you make within the Apstra server are not stored in the
backup.
You can back up the database while the Apstra server is running. Device/OS image information is not
included in backups. When restoring a database, any device/OS image information is discarded.
Before backing up your database, disable any active IBA probes and wait until any database "write" tasks
have completed.
1. Run the command aos_backup to back up the database. Backups are saved as dated snapshots
(/var/lib/aos/snapshot/<date>/aos.data.tar.gz) in the Apstra server.
If all IBA probes have been disabled and all "write" tasks have completed, the following message
appears.
If many IBA probes are enabled or if any other DB "write" tasks are in progress, they may not be
included in the backup, and the following message appears.
to capture these changes right now instead of waiting for the next
backup operation.
=====================================================================
New AOS snapshot: 2023-06-29_16-15-57
admin@aos-server:~$
If this message appears, disable your IBA probes and run the aos_backup command again.
2. Backups are stored on the Apstra server itself. If the server needs to be restored or if its disk image
becomes corrupt, any backups/restores are lost along with the Apstra server. We recommend that
you periodically move backups/restores off of the Apstra server to a secure location. Also, if you've
scheduled cron jobs to periodically back up the database, make sure to rotate those files off of the
Apstra server to keep the Apstra server VM disk from becoming full. Copy the contents of the
snapshot directory to your backup infrastructure.
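The rotation advice above can be sketched as a small script. This is a hypothetical helper, not an Apstra tool; it assumes the dated snapshot layout under /var/lib/aos/snapshot described earlier, and it deletes local copies, so run it only after snapshots have been copied to your backup infrastructure.

```python
# Hypothetical rotation helper: keep only the newest <keep> dated snapshot
# directories under the snapshot root. Snapshot names such as
# 2023-06-29_16-15-57 sort chronologically, so a lexical sort works.
import shutil
from pathlib import Path

def prune_snapshots(snapshot_root, keep=5):
    snapshots = sorted(p for p in Path(snapshot_root).iterdir() if p.is_dir())
    removed = []
    # With keep=0, remove every snapshot (an empty negative slice would skip all).
    for old in (snapshots[:-keep] if keep else snapshots):
        shutil.rmtree(old)
        removed.append(old.name)
    return removed
```

For example, prune_snapshots("/var/lib/aos/snapshot", keep=5) would delete all but the five most recent snapshot directories and return the names it removed.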
CAUTION: Always restore a database from a new "backup" on page 913, never from
older backups or from the backup included in a show_tech.
If you make changes after you back up the database, those changes aren't included in
the restore. This could create differences between device configs and the Apstra
environment. If this happens, you must perform a full config push, which is service-
impacting.
Don't restore a database using the backup included in a show_tech. Juniper Support and
Engineering use it for analysis. It doesn't include credentials, so it's not suitable for
restoring your production environment.
NOTE: If you're restoring a backup to a new Apstra server that uses a different network interface
for access (eth1 vs eth0 for example), you must update the metadb variable in the [controller]
section of the /etc/aos/aos.conf configuration file, then restart the Apstra server.
1. Verify that the contents of the snapshot folder are on the filesystem. Backups are saved as dated
snapshots (/var/lib/aos/snapshot/<date>/aos.data.tar.gz). The file must be named aos.data.tar.gz.
2. Run the aos_restore command as illustrated below. The restore process first backs up the current
database.
Stopped
10.7s
(Reading database ... 83485 files and directories currently installed.)
Removing aos-compose (99.0.0-5949) ...
tar: Removing leading `/' from member names
/var/lib/aos/db/
/var/lib/aos/db/_AosAuth-0000000064947a9d-000b0094-log
/var/lib/aos/db/_Main-0000000064947aa0-000a1865-log-valid
/var/lib/aos/db/_AosAuth-0000000064947a9d-000b0094-checkpoint-valid
/var/lib/aos/db/_Main-0000000064947aa0-000a1865-log
/var/lib/aos/db/_Main-0000000064947aa0-000a1865-checkpoint-valid
/var/lib/aos/db/_Central-0000000064947a9e-000b9681-log
/var/lib/aos/db/_AosSysdb-0000000064947a9d-000c83d2-checkpoint-valid
/var/lib/aos/db/_AosAuth-0000000064947a9d-000b0094-log-valid
/var/lib/aos/db/_AosAuth-0000000064947a9d-000b0094-checkpoint
/var/lib/aos/db/_AosSysdb-0000000064947a9d-000c83d2-log-valid
/var/lib/aos/db/.devpi/
/var/lib/aos/db/.devpi/server/
/var/lib/aos/db/.devpi/server/.nodeinfo
/var/lib/aos/db/.devpi/server/.sqlite
/var/lib/aos/db/.devpi/server/.serverversion
/var/lib/aos/db/.devpi/server/.event_serial
/var/lib/aos/db/_Main-0000000064947aa0-000a1865-checkpoint
/var/lib/aos/db/_Metadb-0000000064947a9d-000b82ea-log
/var/lib/aos/db/_Metadb-0000000064947a9d-000b82ea-log-valid
/var/lib/aos/db/_AosSysdb-0000000064947a9d-000c83d2-log
/var/lib/aos/db/blueprint_backups/
/var/lib/aos/db/blueprint_backups/configlets/
/var/lib/aos/db/blueprint_backups/configlets/167/
/var/lib/aos/db/blueprint_backups/configlets/167/graph.md5sum
/var/lib/aos/db/blueprint_backups/configlets/167/graph.json.zip
/var/lib/aos/db/blueprint_backups/configlets/161/
/var/lib/aos/db/blueprint_backups/configlets/161/graph.md5sum
/var/lib/aos/db/blueprint_backups/configlets/161/graph.json.zip
/var/lib/aos/db/blueprint_backups/configlets/166/
/var/lib/aos/db/blueprint_backups/configlets/166/graph.md5sum
/var/lib/aos/db/blueprint_backups/configlets/166/graph.json.zip
/var/lib/aos/db/blueprint_backups/configlets/164/
/var/lib/aos/db/blueprint_backups/configlets/164/graph.md5sum
/var/lib/aos/db/blueprint_backups/configlets/164/graph.json.zip
/var/lib/aos/db/blueprint_backups/configlets/163/
/var/lib/aos/db/blueprint_backups/configlets/163/graph.md5sum
/var/lib/aos/db/blueprint_backups/configlets/163/graph.json.zip
/var/lib/aos/db/_Central-0000000064947a9e-000b9681-log-valid
/var/lib/aos/db/_AosSysdb-0000000064947a9d-000c83d2-checkpoint
/var/lib/aos/db/_Metadb-0000000064947a9d-000b82ea-checkpoint-valid
/var/lib/aos/db/_AosController-0000000064947aa0-000d40b6-log
/var/lib/aos/db/_AosController-0000000064947aa0-000d40b6-checkpoint
/var/lib/aos/db/_Auth-0000000064947a9e-000a44d7-checkpoint
/var/lib/aos/db/_Central-0000000064947a9e-000b9681-checkpoint
/var/lib/aos/db/_Central-0000000064947a9e-000b9681-checkpoint-valid
/var/lib/aos/db/_AosController-0000000064947aa0-000d40b6-log-valid
/var/lib/aos/db/_Auth-0000000064947a9e-000a44d7-checkpoint-valid
/var/lib/aos/db/_Auth-0000000064947a9e-000a44d7-log
/var/lib/aos/db/_Auth-0000000064947a9e-000a44d7-log-valid
/var/lib/aos/db/_Metadb-0000000064947a9d-000b82ea-checkpoint
/var/lib/aos/db/_AosController-0000000064947aa0-000d40b6-checkpoint-valid
/var/lib/aos/anomaly/
/var/lib/aos/anomaly/_Anomaly-0000000064947a9e-000c9d0a-checkpoint-valid
/var/lib/aos/anomaly/_Anomaly-00000000649452ff-00034e81-checkpoint
/var/lib/aos/anomaly/_Anomaly-00000000649452ff-00034e81-checkpoint-valid
/var/lib/aos/anomaly/_Anomaly-0000000064947a9e-000c9d0a-checkpoint
/var/lib/aos/anomaly/_Anomaly-00000000649452ff-00034e81-log-valid
/var/lib/aos/anomaly/_Anomaly-0000000064947a9e-000c9d0a-log
/var/lib/aos/anomaly/_Anomaly-00000000649452ff-00034e81-log
/var/lib/aos/anomaly/_Anomaly-0000000064947a9e-000c9d0a-log-valid
/etc/aos/aos.conf
/etc/aos-img-chksum/
/etc/aos-img-chksum/checksums.signed
/etc/aos-img-chksum/checksums
/etc/aos-img-chksum/key.pub
/opt/aos/aos-compose.deb
/opt/aos/frontend_images/
/opt/aos/frontend_images/jinja_docs.zip
/opt/aos/frontend_images/sdt_docs.zip
/opt/aos/frontend_images/aos-web-ui.zip
/etc/aos/version
/etc/aos-auth/secret_key
/etc/aos-credential/secret_key
Selecting previously unselected package aos-compose.
(Reading database ... 83454 files and directories currently installed.)
Preparing to unpack /opt/aos/aos-compose.deb ...
Unpacking aos-compose (99.0.0-5949) ...
Setting up aos-compose (99.0.0-5949) ...
Verifying checksums for docker images...
Signature Verified Successfully
Verified.
[+] Building 0.0s
(0/0)
3. When the database has been restored and migrated to a new server, the entire system state has
been copied from the backed up installation to the new target. Run the command service aos status to
validate the restoration.
Jun 22 16:45:14 aos-server systemd[1]: Started LSB: Start AOS management system.
admin@aos-server:~$
4. The database is stored on the Apstra server itself. If the server needs to be restored or if its disk
image becomes corrupt, any backups/restores are lost along with the Apstra server. We recommend
that you periodically move backups/restores off of the Apstra server to a secure location. Also, if
you've scheduled cron jobs to periodically back up the database, make sure to rotate those files off of
the Apstra server to keep the Apstra server VM disk from becoming full. Copy the contents of the
snapshot directory to your backup infrastructure.
The commands below reset the Apstra server to a fresh state by deleting all of its data.
1. Run the command service aos stop.
2. Run the command rm -rf /var/lib/aos/db/*.
3. Run the command service aos start.
CAUTION: If you bring up a new Apstra server with the same IP address as your old
Apstra server without any configuration, when the device agents re-register with the
new Apstra server they will revert to an unconfigured "Quarantined" state. You must
isolate the new Apstra server from the network while you change its IP address, restore
the database and restart the Apstra server.
If you want to maintain the same IP address on the new Apstra server, bring up a new Apstra
server VM (with the same version as the original Apstra server) with a temporary IP address. After
migrating an aos_backup to the new Apstra server, shut down the original Apstra server and change the IP
address on the new server to the original IP address. We recommend this process if
you're using onbox device system agents.
If you want to use a new IP address on the new Apstra server, you must manually reconfigure the
aos.conf file for each onbox device system agent. This is not required for offbox device system agents.
1. Run the command sudo aos_backup to back up the original Apstra server.
2. Copy the snapshot to the new server using a temporary IP address on the new Apstra server.
3. Compress and move the snapshot directory to the new Apstra server. This example uses the scp
command to copy the file to the new Apstra server using a different IP address.
Password:
aos_backup.tar.gz 100% 20MB 140.9MB/s 00:00
admin@aos-server:~$
4. After the snapshot has been moved off of the old Apstra server, stop the service (or completely shut
down the Apstra server VM) to disconnect the old Apstra server.
5. If you want to use the same IP address, you must manually reconfigure the eth0 interface on the new
Apstra server to the IP address of the old Apstra server. For more information, see the Configuration
section of the Juniper Apstra Installation and Upgrade guide.
6. On the new Apstra server, uncompress the tar.gz file.
7. Run the command aos_restore to restore the database on the new Apstra server. This command
automatically starts the service after restoring the database.
admin@aos-server:~$ cd 2020-07-27_22-49-34
admin@aos-server:~/2020-07-27_22-49-34$ sudo bash aos_restore
[sudo] password for admin:
====================================================================
Backup operation completed successfully.
====================================================================
New AOS snapshot: 2020-07-27_23-07-13
Stopping aos_sysdb_1 ... done
Stopping aos_auth_1 ... done
Stopping aos_controller_1 ... done
Stopping aos_nginx_1 ... done
Stopping aos_metadb_1 ... done
(Reading database ... 110457 files and directories currently installed.)
Removing aos-compose (3.3.0-658) ...
Processing triggers for ureadahead (0.100.0-21) ...
Processing triggers for systemd (237-3ubuntu10.41) ...
/var/lib/aos/db/_AosController-000000005f1f376f-0003998b-log
/var/lib/aos/db/_Main-000000005f1f376f-000569a8-checkpoint-valid
/var/lib/aos/db/_Metadb-000000005f1f376d-000cb9a9-log-valid
/var/lib/aos/db/_AosAuth-000000005f1f376d-000a40ff-checkpoint
/var/lib/aos/db/_AosController-000000005f1f376f-0003998b-checkpoint-valid
/var/lib/aos/anomaly/
/var/lib/aos/anomaly/_Anomaly-000000005f1f36a4-000aaa68-checkpoint-valid
/var/lib/aos/anomaly/_Anomaly-000000005f1f331b-0000e8eb-checkpoint
/var/lib/aos/anomaly/_Anomaly-000000005f1f376f-00002176-checkpoint
/var/lib/aos/anomaly/_Anomaly-000000005f1f376f-00002176-log
/var/lib/aos/anomaly/_Anomaly-000000005f1f331b-0000e8eb-log
/var/lib/aos/anomaly/_Anomaly-000000005f1f2abc-0000a867-log
/var/lib/aos/anomaly/_Anomaly-000000005f1f331b-0000e8eb-checkpoint-valid
/var/lib/aos/anomaly/_Anomaly-000000005f1f2abc-0000a867-checkpoint
/var/lib/aos/anomaly/_Anomaly-000000005f1f36a4-000aaa68-checkpoint
/var/lib/aos/anomaly/_Anomaly-000000005f1f376f-00002176-log-valid
/var/lib/aos/anomaly/_Anomaly-000000005f1f36a4-000aaa68-log
/var/lib/aos/anomaly/_Anomaly-000000005f1f331b-0000e8eb-log-valid
/var/lib/aos/anomaly/_Anomaly-000000005f1f2abc-0000a867-checkpoint-valid
/var/lib/aos/anomaly/_Anomaly-000000005f1f2abc-0000a867-log-valid
/var/lib/aos/anomaly/_Anomaly-000000005f1f36a4-000aaa68-log-valid
/var/lib/aos/anomaly/_Anomaly-000000005f1f376f-00002176-checkpoint-valid
/opt/aos/aos-compose.deb
/opt/aos/frontend_images/
/opt/aos/frontend_images/aos-web-ui.zip
Selecting previously unselected package aos-compose.
(Reading database ... 110440 files and directories currently installed.)
Preparing to unpack /opt/aos/aos-compose.deb ...
Unpacking aos-compose (3.3.0-658) ...
Setting up aos-compose (3.3.0-658) ...
Processing triggers for ureadahead (0.100.0-21) ...
Processing triggers for systemd (237-3ubuntu10.41) ...
Starting aos_nginx_1 ... done
Starting aos_sysdb_1 ... done
Starting aos_controller_1 ... done
Starting aos_metadb_1 ... done
Starting aos_auth_1 ... done
admin@aos-server:~/2020-07-27_22-49-34$
8. Run the command service aos status and verify that the Apstra server is running.
9. From the Apstra GUI, from the left navigation menu, navigate to Devices > Managed Devices to
verify that your devices are online in the "Active" state.
When you boot up the Apstra server for the first time, a unique self-signed certificate is automatically
generated and stored on the Apstra server at /etc/aos/nginx.conf.d (nginx.crt is the public key for the
webserver and nginx.key is the private key.) The certificate is used for encrypting Apstra server GUI and
REST API traffic. It's not used for any internal device-server connectivity. Since the HTTPS certificate is
not retained when you back up the system, you must manually back up the /etc/aos folder. We recommend
replacing the default SSL certificate. Web server certificate management is the responsibility of the end
user. Juniper support is best effort only.
admin@aos-server:/$ sudo -s
[sudo] password for admin:
root@aos-server:/# cd /etc/aos/nginx.conf.d
root@aos-server:/etc/aos/nginx.conf.d# cp nginx.crt nginx.crt.old
root@aos-server:/etc/aos/nginx.conf.d# cp nginx.key nginx.key.old
2. Create a new OpenSSL private key with the built-in openssl command.
3. Create a certificate signing request. If you want to create a signed SSL certificate with a Subject
Alternative Name (SAN) for your Apstra server HTTPS service, you must manually create an
OpenSSL template. For details, see Juniper Support Knowledge Base article KB37299.
CAUTION: If you have created custom OpenSSL configuration files for advanced
certificate requests, don't leave them in the Nginx configuration folder. On startup,
Nginx will attempt to load them (*.conf), causing a service failure.
4. Submit your Certificate Signing Request (nginx.csr) to your Certificate Authority. The required steps
are outside the scope of this document; CA instructions differ per implementation. Any valid SSL
certificate will work. The example below is for self-signing the certificate.
5. Verify that the SSL certificates match: private key, public key, and CSR.
7. Confirm that the new certificate is in your web browser and that the new certificate common name
matches 'aos-server.apstra.com'.
When you boot up the Apstra server for the first time, a unique self-signed certificate is automatically
generated and stored on the Apstra server at /etc/aos/nginx.conf.d (nginx.crt is the public key for the
webserver and nginx.key is the private key.) The certificate is used for encrypting the Apstra server and
REST API. It's not used for any internal device-server connectivity. Since the HTTPS certificate is not retained
when you back up the system, you must manually back up the /etc/aos folder. We support and
recommend replacing the default SSL certificate.
admin@aos-server:/$ sudo -s
[sudo] password for admin:
root@aos-server:/# cd /etc/aos/nginx.conf.d
root@aos-server:/etc/aos/nginx.conf.d# cp nginx.crt nginx.crt.old
root@aos-server:/etc/aos/nginx.conf.d# cp nginx.key nginx.key.old
2. If a Random Number Generator seed file .rnd doesn't exist in /home/admin, create one.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:California
Locality Name (eg, city) []:Menlo Park
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Apstra, Inc
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:aos-server.apstra.com
Email Address []:support@apstra.com
root@aos-server:/etc/aos/nginx.conf.d#
You have the option of changing the default Apstra server hostname (aos-server).
1. SSH into the Apstra server as user admin (ssh admin@<apstra-server-ip> where <apstra-server-ip> is
the IP address of the Apstra server.)
2. As root user, run the command aos_hostname <hostname> where <hostname> is the new hostname.
The new hostname will display the next time you log in.
NOTE: Do not use /etc/hostname to change the Apstra server hostname. With this method, if you
configure syslog to be forwarded to an external server, the default hostname will be entered into
the log instead of the new one.
Install Apstra-CLI
1. Download the Apstra CLI Utility for your Apstra version from the Application Tools section of Juniper
Support Downloads.
2. Copy the Apstra CLI Docker container tar.gz file to the Apstra server. (The file name is something like
apstracli-release_4.2.0.11.tar.gz.) For example:
3. Load the provided Docker image into Docker with the docker image load command. For example:
4. Start the Apstra CLI Docker container with the docker run command. In the example below, replace
4.2.0.11 with your Apstra CLI version, and replace 10.28.65.3 with the IP address of your Apstra server.
The password is your Apstra GUI password (not the VM password).
5. Apstra CLI comes with a built-in feature that auto-completes commands. Press the TAB key, then the
up and down arrow keys to explore this tool and its functionality. You can also type --help for
descriptions of each function.
For examples of how to use apstra-cli, see "Apstra-CLI Commands" on page 1186 in the References
section. For assistance with using Apstra CLI, contact "Juniper Support " on page 893.
Guides
IN THIS SECTION
Mixed Uplink Speeds between Leaf Devices and Spine Devices | 985
IN THIS SECTION
Telemetry collectors are Python modules that help collect extended telemetry. The following sections
describe the pipeline for creating telemetry collectors and extending Apstra with new collectors. You
need familiarity with Python to be able to develop collectors.
To keep your system environment intact, we recommend that you use a virtual environment to isolate
the required Python packages (for development and testing). You can download the base development
environment, aos_developer_sdk.run, from https://support.juniper.net/support/downloads/?p=apstra/.
To load the environment, execute:
4.096kB
e2e40f457231: Loading layer [==================================================>] 1.771MB/
1.771MB
Loaded image: aos-developer-sdk:2.3.1-129
================================================================================
Loaded AOS Developer SDK Environment Container Image
aos-developer-sdk:2.3.1-129.
================================================================================
This command loads the aos_developer_sdk Docker image. After the image load is complete, the
command to start the environment is printed. Start the container environment as specified by the
command. To install the dependencies, execute:
root@f2ece48bb2f1:/# cd /aos_developer_sdk/
root@f2ece48bb2f1:/aos_developer_sdk# make setup_env
...
The environment is now set up for developing and testing the collectors. Apstra SDK packages, such as
device drivers and REST client, are also installed in the environment.
Develop Collector
To develop a telemetry collector, specify the following in order.
1. Service for which the collector is developed - Identify what the service is. For example, the service
could be to collect received and transmitted bytes from the switch interfaces. Identify a name for the
service. Using service names that are reserved for built-in services (ARP, BGP, interface, hostname,
route, MAC, XCVR, LAG, MLAG) is prohibited.
2. The schema of the data provided to Apstra - Identify how the collector output is to be structured. A
collection of key-value pairs should be posted to Apstra. Identify what each item is, that is, what the
key and value are syntactically and semantically. For the above-mentioned example, the key is a string
that identifies the interface name. The value is a JSON string with two keys, 'rx' and 'tx', both
having integer values.
3. Network Operating System (NOS) for which the collector is developed - The collector plugins are
NOS-specific. Before writing a collector, identify the NOS(s) for which collector(s) are required.
4. How the required data can be obtained from the device - Identify the commands that can be used in
the device to retrieve the required information. For example, 'show interfaces' command gives
received and transmitted bytes from an Arista EOS device.
5. Storage Schema Path - The type of key and value in each item determines the storage schema path. The
type of collector selected determines the storage schema for the application. The storage schema
defines the high-level structure of the data returned by the service. The storage schema path
for your collector can be determined using the following table:
Table 28: Determining Storage Schema Path
6. Application Schema - The application schema defines the schema for each item posted to the framework.
Application schema is expressed using draft 4 of JSON Schema. Each item comprises a key and a
value. The following table specifies two sample items.
Table 29: Sample item with its storage schema path

Storage schema path: aos.sdk.telemetry.schemas.generic

Sample item:

{
    "identity": "eth0",
    "value": "up"
}

Storage schema path: aos.sdk.telemetry.schemas.iba_string_data

Sample item:

{
    "key": {
        "source_ip": "10.1.1.1",
        "dest_ip": "10.1.1.2"
    },
    "value": "up"
}
NOTE: * An item returned by collectors with generic storage schema should specify the key
value using the key 'identity' and the value using the key 'value'.
* An item returned by collectors with IBA-based schemas should specify the key value using
the key 'key' and the value using the key 'value'.
Using this information, you can write the JSON schema. The following table maps the sample
item specified above to its corresponding JSON schema.
Sample item:

{
    "identity": "eth0",
    "value": "up"
}

Corresponding JSON schema:

{
    "type": "object",
    "properties": {
        "identity": {
            "type": "string"
        },
        "value": {
            "type": "string"
        }
    }
}

Sample item:

{
    "key": {
        "source_ip": "10.1.1.1",
        "dest_ip": "10.1.1.2"
    },
    "value": "up"
}

Corresponding JSON schema:

{
    "type": "object",
    "properties": {
        "key": {
            "type": "object",
            "properties": {
                "source_ip": {
                    "type": "string",
                    "format": "ipv4"
                },
                "dest_ip": {
                    "type": "string",
                    "format": "ipv4"
                }
            },
            "required": ["source_ip", "dest_ip"]
        },
        "value": {
            "type": "string"
        }
    }
}
You can specify more complex schema using the constructs available in JSON schema. Update the
schema in the file aos_developer_sdk/aosstdcollectors/aosstdcollectors/json_schemas/<service_name>.json
NOTE: As of Apstra version 4.0.1, you can "import the service schema" on page 682 via the
GUI.
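As a sketch, the application schema for the interface example earlier in this section could be written out with plain json.dump so the file is guaranteed to be valid JSON (the directory below is a temporary stand-in for the aos_developer_sdk tree; the file name follows the <service_name>.json convention):

```python
import json
import os
import tempfile

# Application schema for the interface_in_out_bytes example service.
schema = {
    "type": "object",
    "properties": {
        "identity": {"type": "string"},
        "value": {
            "type": "object",
            "properties": {
                "rx": {"type": "number"},
                "tx": {"type": "number"},
            },
            "required": ["rx", "tx"],
        },
    },
}

# Stand-in for aos_developer_sdk/aosstdcollectors/aosstdcollectors/json_schemas/.
schemas_dir = tempfile.mkdtemp()
path = os.path.join(schemas_dir, "interface_in_out_bytes.json")
with open(path, "w") as f:
    json.dump(schema, f, indent=4)
```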
Write Collector
IN THIS SECTION
The device driver instance inside the collector provides methods to execute commands against the
devices. For example, most Apstra device drivers provide methods get_json and get_text to execute
commands and return the output.
NOTE: The device drivers for aos_developer_sdk environment are preinstalled. You can explore
the methods available to collect data. For example:
'get_aos_version_related_info', 'get_device_aos_version',
'get_device_aos_version_number', 'get_device_info', 'get_json',
'get_text', 'ip_address', 'onbox', 'open', 'open_options', 'password',
'probe', 'set_device_info', 'upload_file', 'username']
Parse Data
The collected data needs to be parsed and re-formatted per the Apstra framework and the service
schema identified above. Collectors with the generic storage schema post data in the following structure:
{
"items": [
{
"identity": <key goes here>,
"value": <value goes here>,
},
{
"identity": <key goes here>,
"value": <value goes here>,
},
...
]
}
Collectors with IBA-based storage schemas post a list of items:
[
{
"key": <key goes here>,
"value": <value goes here>,
},
{
"key": <key goes here>,
"value": <value goes here>,
},
...
]
In the structures above, the data posted has multiple items. Each item has a key and a value. For
example, to post interface specific information, there would be an identity/key-value pair for each
interface you want to post to the framework.
NOTE: If you want to use a third-party package to parse data obtained from a device, list the
Python package and version in the file
<aos_developer_sdk>/aosstdcollectors/requirements_<NOS>.txt. Make sure the packages installed
by the dependency do not conflict with packages that the Apstra software uses. The Apstra-installed
packages are listed at /etc/aos/python_dependency.txt in the development environment.
When the data has been collected and parsed per the required schema, post it to the framework using
the post_data method available in the collector. It accepts one argument: the data to post to the
framework.
In addition to defining the collector class, define the function collector_plugin in the collector file. The
function takes one argument and returns the collector class that is implemented.
"""
Service Name: interface_in_out_bytes
Schema:
Key: String, represents interface name.
Value: Json String with two possible keys:
rx: integer value, represents received bytes.
tx: integer value, represents transmitted bytes.
NOS: eos
Data collected using command: 'show interfaces'
Type of Collector: BaseTelemetryCollector
Storage Schema Path: aos.sdk.telemetry.schemas.generic
Application Schema: {
    'type': 'object',
    'properties': {
        'identity': {
            'type': 'string',
        },
        'value': {
            'type': 'object',
            'properties': {
                'rx': {
                    'type': 'number',
                },
                'tx': {
                    'type': 'number',
                }
            },
            'required': ['rx', 'tx'],
        }
    }
}
"""
import json
from aos.sdk.system_agent.base_telemetry_collector import BaseTelemetryCollector
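The body of the interface_in_out_bytes collector is illustrated in the original guide; as a hedged sketch, its parsing step can be written as a plain function. The 'interfaces'/'interfaceCounters'/'inOctets'/'outOctets' keys match the EOS 'show interfaces' JSON used in the unit-test example later in this section; in the real collector this logic would live in a BaseTelemetryCollector subclass that calls self.device.get_json and then self.post_data, with a module-level collector_plugin function returning the class:

```python
import json

def parse_interface_counters(collected):
    """Convert EOS 'show interfaces' JSON into generic-schema items."""
    return {
        "items": [
            {
                "identity": name,
                # The value is a JSON string with 'rx'/'tx' keys, per the
                # application schema in the service docstring.
                "value": json.dumps({
                    "rx": data["interfaceCounters"]["inOctets"],
                    "tx": data["interfaceCounters"]["outOctets"],
                }),
            }
            for name, data in sorted(collected["interfaces"].items())
        ]
    }
```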
"""
Service Name: iba_bgp
Schema:
Key: JSON String, specifies local IP and peer IP.
Value: String. '1' if state is established, '2' otherwise.
NOS: eos
Data collected using command: 'show ip bgp summary vrf all'
Storage Schema Path: aos.sdk.telemetry.schemas.iba_string_data
Application Schema: {
    'type': 'object',
    'properties': {
        'key': {
            'type': 'object',
            'properties': {
                'local_ip': {
                    'type': 'string',
                },
                'peer_ip': {
                    'type': 'string',
                }
            },
            'required': ['local_ip', 'peer_ip'],
        },
        'value': {
            'type': 'string',
        }
    }
}
"""
def parse_text_output(collected):
    result = [
        {'key': {'local_ip': str(vrf_info['routerId']), 'peer_ip': str(peer_ip)},
         'value': str(
             1 if session_info['peerState'] == 'Established' else 2)}
        for vrf_info in collected['vrfs'].itervalues()
        for peer_ip, session_info in vrf_info['peers'].iteritems()]
    return result
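Since the packaged collectors are Python 2 (hence itervalues/iteritems), here is the same parsing logic in Python 3 spelling, with sample data shaped like the parsed 'show ip bgp summary vrf all' output. This is a sketch for checking the logic only; the vrfs/routerId/peers/peerState keys follow the comprehension above:

```python
def parse_text_output_py3(collected):
    # Python 3 equivalent of parse_text_output: dict.values()/.items()
    # replace the Python 2 itervalues()/iteritems().
    return [
        {"key": {"local_ip": str(vrf_info["routerId"]),
                 "peer_ip": str(peer_ip)},
         "value": str(1 if session_info["peerState"] == "Established" else 2)}
        for vrf_info in collected["vrfs"].values()
        for peer_ip, session_info in vrf_info["peers"].items()
    ]

# Sample data shaped like the parsed command output.
sample = {
    "vrfs": {
        "default": {
            "routerId": "10.0.0.1",
            "peers": {
                "10.1.1.2": {"peerState": "Established"},
                "10.1.1.3": {"peerState": "Idle"},
            },
        }
    }
}
```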
The folder aos_developer_sdk/aosstdcollectors/test in the repository contains folders based on the NOS. Add
your test to the folder that matches the NOS. For example, a test to a collector for Cumulus is added to
aos_developer_sdk/aosstdcollectors/test/cumulus. We recommend that you name the unit test with the prefix
test_.
The existing infrastructure implements a Pytest fixture collector_factory that is used to mock the device
driver command response. The general flow for test development is as follows.
1. Use the collector factory to get a collector instance and mocked Apstra framework. The collector
factory takes the collector class that you have written as input.
import json
from aosstdcollectors.eos.interface_in_out_bytes import InterfaceRxTxCollector
command_response = {
'interfaces': {
'Ethernet1': {
'interfaceCounters': {
'inOctets': 10,
'outOctets': 20,
}
},
'Ethernet2': {
'interfaceCounters': {
'inOctets': 30,
'outOctets': 40,
}
}
}
}
# Set the device get_json method to retrieve the command response.
collector.device.get_json.side_effect = lambda _: command_response
expected_data = [
{
'identity': 'Ethernet1',
'value': json.dumps({
'rx': 10,
'tx': 20,
}),
},
{
'identity': 'Ethernet2',
'value': json.dumps({
'rx': 30,
'tx': 40,
})
}
]
# validate the data posted by the collector
data_posted_by_collector = json.loads(mock_framework.post_data.call_args[0][0])
assert sorted(expected_data) == sorted(data_posted_by_collector["items"])
Package Collector
All the collectors are packaged based on the NOS. To generate all packages, execute make at
aos_developer_sdk. You can find the built packages at aos_developer_sdk/dist. The packages can be
broadly classified as:
Built-In Collector Packages: These packages have the prefix aosstdcollectors_builtin_. To collect
telemetry from a device per the reference design, Apstra requires services as listed in the
<deviceblah> section. Built-In collector packages contain collectors for these services. The packages
are generated on a per-NOS basis.

Custom Collector Packages: These packages have the prefix aosstdcollectors_custom_ in their names. The
packages are generated on a per-NOS basis. The package named aosstdcollectors_custom_<NOS>-0.1.0-
py2-none-any.whl contains the developed collector.

Apstra SDK Device Driver Packages: These packages have the prefix apstra_devicedriver_. These packages
are generated on a per-NOS basis. Packages are generated for NOSs that are not available by default in
Apstra.
Upload Packages
If the built-in collector packages and the Apstra SDK Device Driver for your Device Operating System
(NOS) were not provided with the Apstra software, you must upload them to the Apstra server.
If you are using an offbox solution and your NOS is not EOS, you must upload the built-in collector
package.
Upload the package containing your collector(s) and assign them to a Device System Agent or System
Agent Profile.
IN THIS SECTION
The registry maps the service to its application schema and the storage schema path. You can manage
the telemetry service registry with the REST endpoint /api/telemetry-service-registry. You can't enable the
collector for a service without adding a registry entry for the particular service. The registry entry for a
service cannot be modified while the service is in use.
NOTE: When executing make, all application schemas are packaged together to a tar file
(json_schemas.tgz) in the dist folder. With apstra-cli, you have the option of importing all the
schemas in the .tgz file.
Start Collector
To start a service, use the POST API /api/systems/<system_id>/services with the following three arguments:
NOTE: You can also manage collectors via the apstra-cli utility.
Delete Collector
To retrieve collected data, use the GET API /api/systems/<system_id>/services/<service_name>/data. Only the
data collected in the last iteration is saved. Data does not persist over Apstra restart.
To retrieve the list of services enabled on a device, use the GET API /api/systems/<system_id>/services.
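As a convenience sketch, the endpoints described in this section can be gathered into small URL helpers (base is your Apstra server URL; authentication and the HTTP request itself are omitted, and the helper names are illustrative):

```python
def start_service_url(base, system_id):
    # POST here (with the service arguments) to start a collector service.
    return f"{base}/api/systems/{system_id}/services"

def service_data_url(base, system_id, service_name):
    # GET here returns only the data from the last collection iteration;
    # data does not persist over an Apstra restart.
    return f"{base}/api/systems/{system_id}/services/{service_name}/data"

def list_services_url(base, system_id):
    # GET here lists the services enabled on a device.
    return f"{base}/api/systems/{system_id}/services"

def registry_url(base):
    # Telemetry service registry endpoint.
    return f"{base}/api/telemetry-service-registry"
```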
IN THIS SECTION
5-stage Clos architecture allows for large-scale topologies. With its additional aggregation layer, you can
interconnect multiple pods into a single fabric. Superspine devices provide the additional layer that
interconnects multiple pods. Planes are groups of superspine devices. Each 5-stage topology consists of
one or more planes. Each plane consists of one or more superspine devices. See below for an example.
Careful planning and consideration are required to build large 5-stage Clos networks. Refer to the
limitations below when you're designing and validating your 5-stage topology. For assistance, contact
"Juniper Support" on page 893.
• You must use the same overlay control protocol (static VXLAN or MP-EBGP-EVPN, specified during
template creation) for all rack types in all pods.
• IPv6 support in the underlay depends on the NOS. See the "4.2.0 feature matrix" on page 989.
• The entire fabric across all pods must be either all IPv4, all IPv6, or all dual-stack.
• External generic systems on spine devices and leaf devices in the same pod
Extending EVPN networks across multiple pods within the same blueprint adds the following value:
• Scaling: provide any-to-any connectivity for applications distributed across multiple pods.
• Redistributing Workloads: To load-balance applications, you can migrate a group of applications from
one pod to another pod while preserving application IP and MAC addresses.
• Performing pod maintenance: Migrate all applications from one pod to another, while preserving the
application IP and MAC addresses.
• Active / Standby applications across sites / pods: Deploy A/S applications across multiple pods to
provide high availability at pod level, or as part of application migration tasks.
• Facilitate external connectivity for a virtual network from a remote pod without external
connectivity.
5-stage Clos networks support the Junos QFX series of switches. You can use the ESI redundancy
protocol in rack types, create templates from those rack types, and then use those templates as pods in
5-stage Clos networks. For more information about working with Juniper devices with EVPN, see "Juniper
EVPN Support" on page 950.
Just like in other Apstra-managed networks, required configuration is rendered to bring up multi-pod
networks, and with proprietary Intent-based Networking technology the networks are validated to
ensure they operate as designed.
The method for creating cross-pod "virtual networks" on page 177 is the same method as for 3-stage
networks.
• Make sure that devices have a sufficient number of ports and port groups; the exact number
depends on your design.
• Spine logical devices require a leaf-facing port group, and if they will be facing a superspine device
they also require a Superspine port role in that port group.
• Superspine logical devices require a Spine port role in the port group.
2. Confirm that the global catalog includes interface maps (Design > Interface Maps) that map the
logical devices to the correct device profiles; create them if necessary. The required number of
interface maps depends on your design; each device model used requires its own interface map. At a
minimum, if you are using only one model, you need two interface maps as listed below:
SEE ALSO
IN THIS SECTION
Overview | 950
Overview
The Junos EVPN ESI multi-homing feature enables you to directly connect end servers to leaf devices
and provide redundant connectivity via multi-homing. This feature is supported only on LAGs that span
two leaf devices in the fabric. EVPN ESI also removes the need for a "peer-link", and hence facilitates
a clean leaf-spine design.
Blueprints using the MP-EBGP EVPN Overlay Control Protocol can use Juniper Junos devices. Racks
with leaf-pair redundancy can implement EVPN ESI multi-homing.
EVPN ESI multi-homing helps maintain EVPN service and traffic forwarding to and from the multi-homed
site, avoiding a single point of failure in the event of the following types of network failures:
• Link failure from one of the leaf devices to end server device
• Fast convergence on the local VTEP by changing next-hop adjacencies and maintaining end host
reachability across multiple remote VTEPs
EVI - EVPN instance that spans between the leaf devices making up the EVPN. It's represented by the
Virtual Network Identifier (VNI). EVI is mapped to VXLAN-type virtual networks (VN).
MAC-VRF - A virtual routing and forwarding (VRF) table to house MAC addresses on the VTEP leaf
device (often called a "MAC table"). A unique route distinguisher and VRF target is configured per MAC-
VRF.
Ethernet Segment (ES) - The Ethernet links that span from an end host to multiple ToR leaf devices form
an ES. It constitutes a set of bundled links.
Ethernet Segment Identifier (ESI) - Represents each ES uniquely across the network. ESI is only
supported on LAGs that span two leaf devices on the fabric.
ESI helps with end host level redundancy in an EVPN VXLAN-based blueprint. Ethernet links from each
Juniper ToR leaf connected to the server are bundled as an aggregated Ethernet interface. LACP is
enabled for each aggregated Ethernet interface of the Juniper devices. Multi-homed interfaces into the
ES are identified using the ESI.
• ESI-based ToR leaf devices cannot have any L2/L3 peer links, as EVPN multi-homing eliminates the peer
links used by MLAG/vPC.
• A bond of two physical interfaces towards a single leaf is not supported in the ESI implementation
(version 3.3.0); make sure the server with LAG in that rack type spans two leaf devices.
• L2 External Connectivity Points (ECPs) with an ESI-based rack type is not supported. Only L3 ECPs
are supported.
• Per-leaf VN assignment - having different VLAN sets among individual leaf devices for an ESI-based
port channel is not supported.
• Connecting a single server to a single leaf using a bond of two physical interfaces cannot use an ESI.
• ESI is supported only on LAGs (port-channels) and not directly on physical interfaces. This has no
functional impact, as leaf local port-channels for multi-home links are automatically generated.
• Only ESI active-active redundancy mode is supported. Active-standby mode is not supported.
• active-active redundancy mode is only supported for Juniper EVPN multi-homing where each
Juniper ToR leaf attached to an ES is allowed to forward traffic to and from a given VLAN.
• More than two leaf devices in one ESI segment using ESI-based rack types is not supported.
• Switching from an ESI to MLAG rack type or vice versa is not supported under Flexible Fabric
Expansion (FFE) operations.
Topology Specification
In the example below Leaf1 and Leaf2 are part of the same ES, and Leaf3 is the switch sending traffic
towards the ES.
NOTE: In Junos, the MAC/IP Type 2 route doesn't contain the VNI and RT for the IP part of the
route; they are derived from the accompanying Type 5 route.
Type 1 routes are used for per-ES auto-discovery (A-D) to advertise the EVPN multi-homing mode. Remote
ToR leaf devices in the EVPN network use the EVPN Type 1 route functionality to learn the EVPN
Type 2 MAC routes from other leaf devices. In this route type, the ESI and the Ethernet Tag ID are
considered to be part of the prefix in the NLRI. Upon a link failure between the ToR leaf and the end
server, the VTEP withdraws the per-ES Ethernet Auto-Discovery (Type 1) routes. The Juniper EVPN
multi-homing Ethernet Tag value is set to the VLAN ID for the ES auto-discovery/ES route types.
Mass Withdrawal - Used for fast convergence during link-failure scenarios between leaf devices and the
end server, using Type 1 EAD/ES routes.
DF Election - Used to prevent forwarding loops and duplicates, as only a single switch is allowed to
decapsulate and forward the traffic for a given ES. The Ethernet Segment route is exported and imported
when an ESI is locally configured under the LAG. The Type 4 NLRI is mainly used for designated
forwarder (DF) election and to apply split-horizon filtering.
Split Horizon - Used to prevent forwarding loops and duplicates for Broadcast, Unknown-unicast, and
Multicast (BUM) traffic. Only BUM traffic that originates from a remote site is allowed to be forwarded
to a local site.
EVPN Services
IN THIS SECTION
EVPN VLAN-Aware
At a high level, Ethernet Services can be (1) VLAN-based, (2) VLAN Bundle or (3) VLAN-Aware. Only
VLAN-Aware is supported on Junos. With the EVPN VLAN-Aware Service each VLAN is mapped
directly to its own EVPN instance (EVI). The mapping between VLAN, Bridge Domain (BD) and EVPN
instance (EVI) is N:1:1. For example, N VLANs are mapped into a single BD, which is mapped into a
single EVI. In this model all VLAN IDs share the same EVI, as shown below:
VLAN-aware Ethernet services in Junos have a separate route target for each VLAN (a Juniper-internal
optimization), so each VLAN has a label to mimic VLAN-based implementations.
From the control plane perspective EVPN MAC/IP routes (Type 2) for VLAN-aware services carry VLAN
ID in the Ethernet Tag ID attribute that is used to disambiguate MAC routes received.
From the data plane perspective - every VLAN is tagged with its own VNI that is used during packet
lookup to place it onto the right Bridge Domain(BD)/VLAN.
Creating an EVPN network follows the same workflow as for other networks.
1. Create/Install "offbox device agents" on page 617 for all switches. (Onbox agents are not supported
on Junos.)
2. Confirm that the global catalog includes logical devices (Design > Logical Devices) that meet Juniper
device requirements; create them if necessary:
3. Confirm that the global catalog includes interface maps (Design > Interface Maps) that map the
logical devices to the correct device profiles for the Juniper devices; create them if necessary.
• For single leaf racks, specify redundancy protocol None in the Leaf section.
• When specifying the end server in the Server section, specify the attachment type as Dual-Homed
towards ESI-based ToR leaf devices. EVPNs using ESs have a link aggregation option. Select the LAG
mode LACP (Active).
7. Create resource pools for "ASNs" on page 780, "IP addresses" on page 784, and "VNIs" on page 782.
8. Create a "blueprint" on page 5 based on the ESI-based template, then build the EVPN-based network
topology for the Juniper devices by assigning "resources" on page 33, "device profiles" on page 36,
and "device IDs" on page 37.
Configuration Rendering
IN THIS SECTION
Limitations | 957
Reference Design
• Underlay - The underlay in the data center fabric is Layer-3 configured using standard eBGP over the
physical interfaces of Juniper devices.
• Overlay - The overlay is configured with eBGP over the lo0.0 address. EVPN VXLAN is used as the
overlay protocol. All the ToR devices are enabled with L2 VNs. Each of these L2 VNs can have its
default gateway hosted on connected ToR leaf devices. For inter-VN traffic, VXLAN routing is done in
the fabric using L3 VNIs on the border leaf devices as per the standard design.
• VXLAN VTEPs - On Juniper leaf devices, one IP address on lo0.0 is rendered, which is used as the
VTEP address. The VTEP IP address is used to establish the VXLAN tunnel.
• EVPN multi-homing LAG - A unique ESI value and LACP system ID are used per EVPN LAG. The
multi-homed links are configured with an ESI, and a LACP system identifier is specified for each link.
The ESI is used to identify LAG groups and for loop prevention. To support Active/Active multi-homing,
Juniper leaf devices are configured with the same LACP parameters for a given ESI.
ESI MAC addresses are auto-generated internally. You can "configure the value of the most
significant byte (msb)" on page 328 used in the generated MAC. A facade API is available to update the
MSB value, and a node containing the MAC MSB value is added to the rack-based template. The default
value of this byte is 2, and you can change it to any even number up to 254. Updating this value
results in regeneration of all ESI MACs in the blueprint. This is exposed to address DCI use cases
where ESIs must be unique across multiple blueprints (IP fabrics).
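Purely as an illustration of the MSB constraint described above (the actual ESI MAC generation is internal to Apstra; this sketch only shows the even-MSB rule applied to a generated address, and the function name is hypothetical):

```python
import random

def make_esi_mac(msb=2):
    """Illustrative only: build a MAC address whose most significant byte
    is the configured value (default 2, any even number up to 254)."""
    if msb % 2 != 0 or not 2 <= msb <= 254:
        raise ValueError("MSB must be an even number between 2 and 254")
    # Remaining five bytes are random here; Apstra derives them internally.
    rest = [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [msb] + rest)
```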
• L3VNIs - L3VNI is rendered as a routing zone per VRF. Multi-tenancy functionality is available to
ensure that workloads remain logically separated within a VN (overlay) construct using routing zone.
• Route Target (RT) for L2/L3 VNIs - Auto-generated for L2/L3 VNIs in the format VNI:1. There is 1
(fabric-wide) RT per MAC-VRF (that is, L3VNI). The value must be the same across all switches
participating in one EVI. You can find the RT in the blueprint by navigating to Staged > Virtual >
Virtual Networks and clicking the VN name. RT is in the parameters section.
• Route Distinguisher (RD) for L2/L3 VNIs - For the Junos VLAN-Aware based model, the RD is per EVI
(switch). There is no RD for each L2 VNI. The RD exists only for the routing zone VRF in the format
{primary_loopback}:vlan_id.
• Virtual Switch Configuration - Under the switch-options hierarchy for Juniper devices the vtep-
source-interface parameter is rendered, then the VTEP IP address used to establish the VXLAN
tunnel is specified. Reachability to loopback interface (for example, lo0.0) is provided by the underlay.
The RD here defines the EVI specific RD carried by Type 1, Type 2, Type 3 routes. RD for the global
switch options is provided in the format {loopback_id}:65534.
The RT here defines the global RT inherited by EVPN routes. It is used by Type 1 routes. A default RT
value is rendered for it (100:100) for global switch options across all switches.
• MTU - The MTU values that are rendered for Juniper Devices:
• L2 ports: 9100
• L3 ports: 9216
• Anycast Gateway - The same IP on IRB interfaces of all the leaf devices is configured and no virtual
gateway is set. Every IRB interface that participates in the stretched L2 service has the same IP/MAC
configured as below:
In this model, all default gateway IRB interfaces in an overlay subnet are configured with the same IP
and MAC address. A benefit of this model is that only a single IP address is required per subnet for
default gateway IRB interface addressing, which simplifies gateway configuration on end systems.
Limitations
The following limitations apply to EVPN multi-homing topologies for Juniper devices as of version 3.3.0:
• Only two-way multi-homing is supported. More than two Juniper leaf devices in a multi-homed
group is not supported.
• Juniper EVPN with EVPN on other network vendors in the same blueprint is not supported.
• In Juniper EVPN multi-homing, L3 External Connectivity Points (ECP) towards generic systems are
supported; L2 ECP is not supported.
• BGP routing from Junos leaf devices to Apstra-managed Layer 3 servers is not supported.
SEE ALSO
IN THIS SECTION
NOTE: The apstra-cli utility is an experimental tool and has limited support. Do not use it in
production environments unless advised by Juniper Support. Some versions of apstra-cli are not
intended for certain Apstra releases. Some apstra-cli commands may or may not work between
different Apstra releases. It's always best to test a version of apstra-cli with a specific Apstra
release in a non-production environment, or contact "Juniper Support" on page 893 for
assistance.
The apstra-cli utility enables you to extract information from the Apstra server for analytics (and other
functionalities). The workflow for IBA probes is as follows:
1. Install apstra-cli.
2. Install packages.
After probes are instantiated you can use "Syslog" on page 823 to send messages to Syslog servers.
Install apstra-cli
"Install the apstra-cli utility" on page 929.
Install Packages
1. Download the latest Apstra SDK package from Juniper Support Knowledge Base article KB37156.
2. Custom collector packages enable the collection of telemetry from devices. Extract the collector for
your platform (for example, aosstdcollectors_custom_eos-0.1.0.post10-py2-none-any.whl where eos is the
platform and 10 is the version).
3. Collectors require specific Python library packages. If the Apstra environment has Internet access,
the files are automatically installed. If the environment doesn't have Internet access, download the
following files from the official Python repository. Make sure to download the correct versions:
• netaddr-0.7.17-py2.py3-none-any.whl
• gtextfsm-0.2.1.tar.gz
• pyeapi-0.8.2.tar.gz
4. From the left navigation menu in the Apstra GUI, navigate to Devices > System Agents > Packages
and click Upload Packages.
5. Either click Choose File and navigate to the custom collector package (and if the Internet is
inaccessible, the three (3) Python packages), or drag and drop the file(s) into the dialog window. See
example below for Arista devices in an environment without Internet access:
6. Click Upload to upload the packages to the Apstra server, then close the dialog to return to the
summary table view.
1. From the left navigation menu, navigate to Devices > System Agents > Agent Profiles and click
Create Agent Profile.
2. For this example, select EOS from the platform drop-down list.
3. In the Packages section, select the four uploaded packages to associate them with the agent profile.
(If your environment has Internet access, you only need to include the custom collector package.)
4. Click Create to create the agent profile and return to the summary table view.
For more information about agent profiles, see "Agent Profiles" on page 673.
Create Agents
Now let's create agents for Arista devices and use the agent profile to associate the packages to them.
We recommend that you use agent profiles to associate custom collector packages so you can bulk
update agents later, as needed, with a single command.
1. From the left navigation menu, navigate to Devices > System Agents > Agents and click Create
Onbox Agent(s).
2. Enter details for the agent and select the agent profile from the drop-down list as shown in the image
below:
3. To verify that packages have been successfully installed on agents, from the left navigation menu,
navigate to Devices > Managed Devices and click the management IP of the device. Click the Agent
tab. The Config section lists any installed packages. If you manually uploaded the Python packages
(netaddr, gtextfsm, and pyeapi), they are listed. If the Apstra server has Internet access, the packages
were installed automatically and won't be listed here. (To see all packages installed on the device, log
in to the device itself.)
To update agents by IP range with a specific agent profile, use the system-agents update-profile
command. When setting the --profile option, apstra-cli lists the available agent profiles; use the up
and down arrow keys to select one.
All probes described in this document are included in apstra-cli build 412 and later. If a probe is not
built into your apstra-cli build, its .j2 file may be made available separately.
Some of these probes require an updated service registry. Download the latest Apstra SDK and extract
the json-schemas.tar.gz file. Copy the file to the /home/admin directory of the Apstra server so it is available
in the apstra-cli /mytmp directory.
To create probes, use the probe create apstra-cli command. You'll be prompted for additional options.
To list available probes supplied with apstra-cli, use --file and tab-completion. Scroll through the list
with the up and down arrow keys.
memory_usage_threshold_anomalies.j2
bandwidth_utilization_history.j2
power_supply_anomalies.j2
virtual_infra_vlan_mismatch.j2
hardware_vtep_counters_enabled.j2
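For example, a probe-creation session might begin as follows (a sketch; apstra-cli prompts interactively for the blueprint and any probe-specific options, so confirm flag spellings with apstra-cli's built-in help):

```
apstra-cli> probe create --file bandwidth_utilization_history.j2
```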
To see installed IBA probes in the blueprint, navigate to Analytics > Probes.
The following section describes how to install some of the most interesting probes, which are not
available by default.
Packet Drops
Packet drop IBA probes detect an abnormal amount of packet drops on device interfaces that the Apstra
software manages, based on interface telemetry that device agents collect.
Switch Memory Leak
Switch Memory Leak IBA probes detect abnormal memory leaks in specified processes on devices that
the Apstra software manages, based on system telemetry that device agents collect. This probe requires
device user credentials, configured in the device agent configuration, that provide login access to the
device Bash prompt.
The memory_usage_threshold_anomalies.j2 IBA probe requires additional "Probe template variables" for
os_family and process.
The only option for os_family is eos (Arista EOS). The two options for process are edac-poller and either
fastcapi or configagent.
NOTE: The "fastcapi" service process is valid only for EOS version 4.18. For newer versions of
EOS (for example, 4.20 and later), only configagent is valid. Make sure the service name is
lowercase during probe creation: configagent, not ConfigAgent.
To install the IBA probe for a second process, repeat the probe create command for the other process.
You can edit the IBA probe name to include the process name.
Fault Tolerance
Filename Description
spine_fault_tolerance.j2: Finds out whether failure of a given number of spines in the fabric will be
tolerated. Raises an anomaly if total traffic on all spines exceeds the available spine capacity with the
specified number of spine failures.
lag_link_fault_tolerance.j2: Finds out whether failure of one link in a server LAG will be tolerated.
Monitors total traffic in each LAG against the total available capacity of the bond with one link failure.
Raises an anomaly for racks with more than 50% of such overused bonds, sustained for a certain
duration.
AOSOM-Streaming Guide
IN THIS SECTION
Troubleshooting | 984
AOSOM-Streaming Overview
IN THIS SECTION
Grafana | 973
Prometheus | 974
InfluxDB | 976
NOTE: AOSOM streaming is demonstration software, not intended for production environments.
You can configure Apstra to generate Google Protocol Buffer (protobuf) streams for counter data
(perfmon), alerts, and events. Each data type is sent to a streaming receiver over its own TCP socket.
Even if all three data types are configured for the same streaming receiver, three connections are
created between the Apstra server and the streaming receiver. This also allows for all three types to be
sent to three different streaming receivers. You can choose from many open-source projects, or
develop your own solutions, to capture, store, and inspect the protobuf data. Apstra has developed a
project available on GitHub called AOSOM-Streaming to demonstrate how this can be achieved using
several open-source components. The AOSOM-Streaming project is meant to help you understand how
to consume the AOS protobuf stream. It is for demonstration purposes only, except for the Apstra
Telegraf input plugin, which Apstra software fully supports for use as part of your streaming telemetry
solution.
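To illustrate what a custom receiver has to do on each TCP socket, here is a minimal Python sketch that splits a byte stream into individual protobuf payloads. It assumes each message is preceded by a 2-byte big-endian length header, the framing used by the AOSOM-Streaming Telegraf input plugin; verify this assumption against your Apstra release before relying on it:

```python
import struct

def split_messages(buf):
    """Split a streaming buffer into protobuf payloads.

    Assumes each payload is prefixed with a 2-byte big-endian length.
    Returns the complete payloads plus any trailing partial bytes that
    should be kept until more data arrives on the socket.
    """
    messages = []
    offset = 0
    while offset + 2 <= len(buf):
        (length,) = struct.unpack_from(">H", buf, offset)
        if offset + 2 + length > len(buf):
            break  # incomplete message; wait for the rest of the bytes
        messages.append(buf[offset + 2 : offset + 2 + length])
        offset += 2 + length
    return messages, buf[offset:]
```

Each extracted payload would then be handed to the generated protobuf bindings for decoding. One such loop runs per connection, since perfmon, alerts, and events each arrive on their own socket.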
The Aosom-Streaming project provides a packaged solution to collect and visualize telemetry streaming
information coming from an Apstra server. It provides a web interface and example queries for handling
alerts, counters, and Apstra events. This open-source project officially lives on GitHub
at https://github.com/Apstra/aosom-streaming.
Grafana
From a web browser, enter the URL http://<aosom-streaming>:3000 and log in with the default
username admin and default password admin.
The Grafana GUI includes two main dashboards (top left). Apstra AOS Blueprint describes overall telemetry
alerts and traffic throughput, as well as individual devices for interface telemetry. Blueprints are learned
automatically using the Apstra ‘telegraf’ Docker container; no further configuration is necessary.
In the screenshot above, we can observe traffic in the demo Apstra environment, and aggregate CPU,
traffic, and errors.
To filter telemetry events for specific, individual devices, change the dashboard at the top to
Apstra AOS Device. Here we can observe that there are two active route anomalies in the blueprint.
Prometheus
Prometheus is used for alerts and device telemetry counter storage in the Aosom-streaming appliance.
From a web browser enter the URL http://<aosom-streaming>:9090 to access the Prometheus GUI.
When incoming events appear, Apstra dynamically builds each of the queries. To see example query
names, begin typing under ‘Execute’; typing ‘alert’ tab-completes the available alert queries.
InfluxDB
InfluxDB is used to store Apstra events from telemetry streaming. From a web browser enter the URL
http://<aosom-streaming>:8083 to access InfluxDB.
We can show the available InfluxDB keys with queries such as show field keys or show measurements.
Once we know a measurement, we can view the data and keys with select * from <measurement>. In
this case, we’ll capture the LAG interface status.
Configure Aosom-Streaming
To configure telemetry streaming as part of this project, edit variables.env, run make start, and restart
the containers. No Apstra server configuration is required. Documentation for starting, stopping,
and clearing data is available at https://github.com/Apstra/aosom-streaming
The telegraf container connects to the Apstra API and registers an IP:port to which Apstra streams
real-time telemetry data.
2. Configure variables.env.
AOS_SERVER=192.168.57.250
LOCAL_IP=192.168.57.128
INPUT_PORT_INFLUX=4444
INPUT_PORT_PROM=6666
AOS_LOGIN=admin
AOS_PASSWORD=admin
AOS_PORT=443
GRAFANA_LOGIN=admin
GRAFANA_PASSWORD=admin
• AOS_SERVER - the IP address of the Apstra server that sends telemetry data to the aosom-streaming
server.
• LOCAL_IP - the IP address assigned to ens33 (the first Ethernet interface). In this case, it is learned via
DHCP on this VM. See ip addr show dev ens33.
• GRAFANA_LOGIN, GRAFANA_PASSWORD - the username and password for the Grafana web interface.
• AOS_LOGIN, AOS_PASSWORD, AOS_PORT - You can customize username, port and password information.
3. Run the command make start to set up the project, or if you're making configuration changes, run make
update.
admin@aeon-ztps:~$ docker ps
CONTAINER ID IMAGE
4edf204e7be9 apstra/telegraf:latest
You can check the different Telegraf versions in the Apstra Docker Hub.
3. If required, modify the docker-compose.yml file and point to the correct Docker image.
4. Run the command docker-compose up -d to restart the service.
5. Run the docker ps command to verify that the container is running with the new image.
NOTE: For assistance regarding which version to install or if you have any questions about
the procedure, contact "Juniper Support" on page 893.
You can build your own Aosom-Streaming VM to host the project's Docker containers. These steps
show you how to set up a basic Docker server.
Download the Ubuntu 16.04.2 ISO and provision a new VM. The default username is aosom and the
password is admin.
For larger blueprints, we recommend changing RAM to at least 8GB and CPU to at least 2 vCPU. More
disk space may also be required.
Resource Quantity
RAM 8GB
CPU 2 vCPU
Network 1 vNIC
Install Packages
apt-get update
aosom@ubuntu:~$ sudo apt-get install docker docker-compose git make curl openssh-server
[sudo] password for aosom:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
bridge-utils cgroupfs-mount containerd dns-root-data dnsmasq-base docker.io
git-man liberror-perl libnetfilter-conntrack3 libperl5.22 libpython-stdlib
libpython2.7-minimal libpython2.7-stdlib libyaml-0-2 patch perl
perl-modules-5.22 python python-backports.ssl-match-hostname
python-cached-property python-cffi-backend python-chardet
python-cryptography python-docker python-dockerpty python-docopt
python-enum34 python-funcsigs python-functools32 python-idna
python-ipaddress python-jsonschema python-minimal python-mock
python-ndg-httpsclient python-openssl python-pbr python-pkg-resources
python-pyasn1 python-requests python-six python-texttable python-urllib3
python-websocket python-yaml python2.7 python2.7-minimal rename runc
ubuntu-fan xz-utils
Suggested packages:
mountall aufs-tools btrfs-tools debootstrap docker-doc rinse zfs-fuse
| zfsutils git-daemon-run | git-daemon-sysvinit git-doc git-el git-email
git-gui gitk gitweb git-arch git-cvs git-mediawiki git-svn diffutils-doc
perl-doc libterm-readline-gnu-perl | libterm-readline-perl-perl make
python-doc python-tk python-cryptography-doc python-cryptography-vectors
python-enum34-doc python-funcsigs-doc python-mock-doc python-openssl-doc
python-openssl-dbg python-setuptools doc-base python-ntlm python2.7-doc
binutils binfmt-support make
The following NEW packages will be installed:
bridge-utils cgroupfs-mount containerd dns-root-data dnsmasq-base docker
docker-compose docker.io git git-man liberror-perl libnetfilter-conntrack3
libperl5.22 libpython-stdlib libpython2.7-minimal libpython2.7-stdlib
libyaml-0-2 patch perl perl-modules-5.22 python
python-backports.ssl-match-hostname python-cached-property
python-cffi-backend python-chardet python-cryptography python-docker
python-dockerpty python-docopt python-enum34 python-funcsigs
python-functools32 python-idna python-ipaddress python-jsonschema
python-minimal python-mock python-ndg-httpsclient python-openssl python-pbr
Add the aosom user to the Docker group. This allows ‘aosom’ to make Docker configuration changes
without having to escalate to sudo.
The AOSOM-Streaming package does not set the Docker restart policy; this is up to your orchestration
toolchain. Open aosom-streaming/docker-compose.yml and add restart: always to each of the service
directives. This ensures that Docker containers are online after a service reboot.
# -------------------------------------------------------------------------
# Prometheus -
@@ -30,6 +31,7 @@ services:
- '-config.file=/etc/prometheus/prometheus.yml'
ports:
- '9090:9090'
+ restart: always
# -------------------------------------------------------------------------
# influxdb
@@ -43,6 +45,7 @@ services:
ports:
- "8083:8083"
- "8086:8086"
+ restart: always
# -------------------------------------------------------------------------
# Telegraf - Prom
@@ -57,6 +60,7 @@ services:
- /etc/localtime:/etc/localtime
ports:
- '6666:6666'
+ restart: always
# -------------------------------------------------------------------------
# Telegraf - Influx
@@ -71,3 +75,4 @@ services:
- /etc/localtime:/etc/localtime
ports:
- '4444:4444'
+ restart: always
Set up variables.env and start the containers per the Aosom-Streaming application setup section.
Modify /etc/hostname to aosom, and change the hostname mapped to the loopback IP in /etc/hosts from
ubuntu to aosom.
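The two file edits can be scripted as follows (a sketch; run with root privileges, and adjust if your /etc/hosts layout differs):

```shell
# Rename the VM from 'ubuntu' to 'aosom'.
echo aosom | sudo tee /etc/hostname
sudo sed -i 's/ubuntu/aosom/g' /etc/hosts   # loopback entry now resolves aosom
sudo hostname aosom                          # apply immediately, no reboot
```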
Troubleshooting
While most troubleshooting information is included on the GitHub main page at https://github.com/
Apstra/aosom-streaming, you can run some simple commands to make sure the environment is healthy.
You should see a blueprint ID, and some InfluxDB ‘write’ events when telemetry events occur on AOS
(BGP, liveness, config deviation, and so on).
GetBlueprints() - Id 0033cf3f-41ed-4ddc-91f5-ea68318fba9b
2017-07-31T23:59:13Z D! Finished to Refresh Data, will sleep for 20 sec
2017-07-31T23:59:15Z D! Output [influxdb] buffer fullness: 11 / 10000 metrics.
2017-07-31T23:59:15Z D! Output [influxdb] wrote batch of 11 metrics in 5.612057ms
2017-07-31T23:59:20Z D! Output [influxdb] buffer fullness: 4 / 10000 metrics.
2017-07-31T23:59:20Z D! Output [influxdb] wrote batch of 4 metrics in 5.349171ms
2017-07-31T23:59:25Z D! Output [influxdb] buffer fullness: 11 / 10000 metrics.
2017-07-31T23:59:25Z D! Output [influxdb] wrote batch of 11 metrics in 4.68295ms
2017-07-31T23:59:30Z D! Output [influxdb] buffer fullness: 4 / 10000 metrics.
2017-07-31T23:59:30Z D! Output [influxdb] wrote batch of 4 metrics in 5.007029ms
GetBlueprints() - Id 0033cf3f-41ed-4ddc-91f5-ea68318fba9b
2017-07-31T23:59:33Z D! Finished to Refresh Data, will sleep for 20 sec
To ensure that all the expected containers are running, run docker ps:
aosom@ubuntu:~/aosom-streaming$ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
e03d003a2ef9 grafana/grafana:4.1.2 "/run.sh" 3 minutes ago Up 3
Mixed Uplink Speeds between Leaf Devices and Spine Devices
The leaf devices in your racks can have different uplink speeds to a spine. When designing for mixed
speeds, make sure you plan sufficient ports for spine-to-leaf connections with mixed link speeds for Day
0, and for adding racks as a Day 2 operation. The spine logical device must have mixed port speeds
defined that specify the port role as Leaf for the required number of ports. The following limitations
apply:
• Parallel links between the same devices cannot have mixed speeds.
• You can't update spine logical devices if they're used in a blueprint. You might be able to use the
AOS-CLI utility for manual patching; however, AOS-CLI is an experimental tool and may not provide a
solution. For assistance, contact "Juniper Support" on page 893.
The example below shows how to design rack types and templates with mixed speeds.
1. Create an L3 Clos rack type with logical devices AOS-7x10-Leaf and AOS-40x10+6x40-1 for two
leaf switches, with 10 GbE and 40 GbE uplinks, respectively, towards spine devices.
2. Create a Rack Based template based on the mixed speed rack type.
3. You can create a Pod Based template based on the above rack based template.
4. As a Day 0 operation you can create a "blueprint" on page 5 with one of the above templates; or as a
Day 2 operation you can select a mixed speed rack type when "adding a rack" on page 156 to an
existing blueprint.
References
IN THIS SECTION
Devices | 1009
Analytics | 1031
Graph | 1207
Feature Matrix
IN THIS SECTION
Miscellaneous | 994
Fabric Roles
Fabric Connectivity
IPv6 Overlay (VTEP): No / No / No / No / No
Device Management
Routing Policies
Miscellaneous
IP Link CT Type
BGP session on SVI (mlag) towards dual-homed generic using secondary IPs (IPv4, default VRF): Yes / Yes / Not possible / Not possible / Not possible
BGP session on SVI (mlag) towards dual-homed generic using secondary IPs (IPv6, default VRF): Yes / Yes / Not possible / Not possible / Not possible
BGP session to generic with dynamic ASN (IPv4): No / No / No / No / No
BGP session to generic with dynamic ASN (IPv6): No / No / No / No / No
BGP Unnumbered session (link-local peering) on SVI (BP has IPv6 app disabled, default VRF only): No / No / No / No / No
BGP session (IPv6 addressed) with IPv4 SAFI (rfc5549) with static ASN (BP has IPv6 app enabled): No / No / No / No / No
BGP session (IPv6 addressed) with IPv4 SAFI (rfc5549) with dynamic ASN (BP has IPv6 app enabled): No / No / No / No / No
BGP session with specific peer IP and dynamic ASN (IPv4): No / No / No / No / No
BGP session with specific peer IP and dynamic ASN (IPv6): No / No / No / No / No
BGP session (IPv6 addressed) with IPv4 SAFI (rfc5549) with static ASN (BP has IPv6 app enabled): No / No / No / No / No
BGP session (IPv6 addressed) with IPv4 SAFI (rfc5549) with dynamic ASN (BP has IPv6 app enabled): No / No / No / No / No
Dynamic prefix peering (IPv6 peering, IPv4 AFI, rfc5549) (BP has IPv6 app enabled): No / No / No / No / No
DCI Features
Devices
IN THIS SECTION
Recommended qualified NOS versions and devices (series) are listed below. Other versions in the same
code train that contain only bug fixes are also expected to work. This is usually indicated by version
numbers that differ only in the last digit; however, the NOS vendors do not strictly guarantee this.
If you plan to use a device or NOS version close to the qualified ones but not listed, we highly
recommend that you review the NOS release notes to ensure no backward incompatible or breaking
changes are listed. We strongly advise testing the new version thoroughly in a staging environment
before deploying it to production.
To request consideration for qualification for a release train not listed, contact your Juniper Apstra Sales
representative.
Examples of version changes expected to work:
• Arista EOS
• 4.25.4M > 4.25.5M (reason: same code train, last-digit change, and M indicates Maintenance
release)
• Cisco NXOS
• 10.2(9)M > 10.2(10)M (reason: same code train, last-digit change, and M indicates Maintenance
release)
Examples of version changes not expected to work:
• Juniper Junos OS
• 20.2R1 > 20.2R2 (reason: R1 > R2 can have new features + bugfixes)
• Arista EOS
• Cisco NXOS
• 10.2(1)F > 10.2(3)F (reason: multiple last-digit change, and F indicates Feature release)
Device Operating System Device Role Qualified NOS Versions Supported Devices
(Series)
Qualified NOS versions: 21.4R3, 22.2R3, 22.4R2
Supported devices: EX4400-48MP, EX4400-24T, EX4400-48T, EX4400-48F, EX4650-48Y
Qualified NOS versions: 21.4R3, 22.2R3, 22.4R2
Supported devices: QFX5210, QFX5120, QFX10002, QFX10008, QFX10016
Qualified NOS versions: 21.4R3-EVO, 22.2R3-EVO, 22.4R2-EVO
Supported devices: ACX7100-32C, ACX7100-48L, ACX7024
• Dell Z9332F-ON
• Dell Z9264F-ON
• Dell Z9100-ON
• Dell N3248TE-ON
• Dell S5296F-ON
• Dell S5248F-ON
• Dell S5232F-ON
• Dell S5212F-ON
• Dell S6000-ON
• Edgecore/Accton AS7816-64X
• Edgecore/Accton AS7726-32X
• Edgecore/Accton AS7712-32X
• Edgecore/Accton AS7326-56X
• Edgecore/Accton AS5712-54X
• Edgecore/Accton AS5835-54T
• Edgecore/Accton AS5835-54X
• C3132QV
• C3164PQ
• C3172PQ
• C36180YC-R
• C9236C
• C9348GC-FXP
• C9364C
• C9364C-GX
• C9372TX
• C9348GC-X
• C93108TC-EX
• C93180LC-EX
• C93180YC-EX
• C93180YC-FX
• C93180YC-FX3S
• C93240YC-FX2
• C9332C
• C9336C-FX2
• C9372PX
• C9396PX
• C9504
• C9508
• CCS-720XP-96ZC2
• DCS-7050CX3M-32S
• DCS-7050QX-32
• DCS-7050QX-32S
• DCS-7050SX2-72Q
• DCS-7050SX3-48YC8
• DCS-7050SX3-48YC12
• DCS-7050SX3-96YC8
• DCS-7050SC064
• DCS-7050SC-128
• DCS-7050T-36
• DCS-7050TX3-48C8
• DCS-7050TX-48
• DCS-7050TX-64
• DCS-7050TX-72Q
• DCS-7060CX2-32S
• DCS-7060CX-32S
• DCS-7150S-24
• DCS-7150S-52
• DCS-7150S-64
• DCS-7160-32CQ
• DCS-7160-48TC6
• DCS-7160-48YC6
• DCS-7250QX-64
• DCS-7260CX3-64
• DCS-7260CX3-64E
• DCS-7260CX-64
• DCS-7280CR2-60
• DCS-7280CR2A-30
• DCS-7280CR3-32P4
• DCS-7280CR3-96
• DCS-7280CR3K-32D4
• DCS-7280QR-C36
• DCS-7280QRA-C36S
• DCS-7280SE-64
• DCS-7280SE-68
• DCS-7280SE-72
• DCS-7280SR2-48YC6
• DCS-7280SR3-48YC8
• DCS-7280SR-48C6
• DCS-7280TR-48C6
• DCS-7504N
• DCS-7504R3
• DCS-7508N
• DCS-7508R3
• DCS-7512R3
You can upgrade a network operating system (NOS) from a recommended NOS release in a previous
Apstra release to a recommended NOS release in a newer Apstra release. Within the same Apstra
release, you can upgrade between NOS releases. See the sections below for supported paths. Prior to
Apstra 4.2.0, upgrading to an unqualified version resulted in an ERROR state. Apstra 4.2.0 doesn't have
this restriction; if you upgrade to an unqualified version, be sure to perform due diligence.
For information about other upgrade paths that may be available, or to request support for a specific
upgrade path, contact "Juniper Support" on page 893.
20.4R3-S3.4: 22.2R3.15
21.2R3-S4.8: 21.2R1-S2.2, 21.4R3-S4.16, 22.4R2.8
21.4R3-S4.16: 21.2R3-S4.8, 22.2R3.15
22.2R3.15: 21.4R3-S4.16
22.4R2.8: 20.4R3-S2.6, 21.2R3-S4.8
21.2R3.10-EVO: 21.4R3.13-EVO, 22.4R2.11-EVO
21.4R3.13-EVO: 20.4R3-S3.5-EVO, 22.2R3.13-EVO
22.4R2.11-EVO: 22.2R3.13-EVO
9.3(3): 9.3(11)
9.3(7): 10.2(5)
9.3(8): 9.3(11)
10.1(2): 10.2(5)
10.2(5): 9.3(7), 9.3(11)
4.23.6M: 4.24.5M, 4.27.6M
4.24.5M: 4.23.6M, 4.27.4M
4.25.3.1M: 4.27.6M
4.27.6M: 4.25.3.1M
3.5.4-GA-adv: 4.0.5-GA-Enterprise-Advanced
Controller Section
[controller]
# <metadb> provides directory service for AOS. It must be configured properly
# for a device to connect to AOS controller.
metadb = tbt://aos-server:29731
# Use <web> to specify AOS web server IP address or name. This is used by
# device to make REST API calls to AOS controller. It is assumed that AOS web
# server is running on the same host as metadb if this option is not specified
web =
# <interface> is used to specify the management interface. This is currently
# being used only on server devices and the AOS agent on the server device will
# not come up unless this is specified.
interface =
metadb
Agent Server Discovery is a client-server model. The Apstra Device agent registers directly to the Apstra
server via the metadb connection. The Apstra server can be discovered from static IP or DNS.
Dynamic DNS - By default, Apstra device agents point to the DNS entry aos-server, relying on DHCP-provided
DNS resolution and hostname resolution. On the Apstra server, if the metadb connection entry
points to a DNS entry, then the Apstra agents must be able to resolve that DNS entry as well. DNS must
be configured so aos-server resolves to an interface on the Apstra server itself, and so the agents are
configured with metadb = tbt://aos-server:29731
Static DNS - Add a static DNS entry pointing directly to the IP of aos-server, either as a static hosts
entry or through a DNS nameserver configuration on the device.
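For example, a static hosts entry on the device (the IP address is illustrative, matching the examples elsewhere in this guide) lets the agent resolve aos-server without relying on DNS infrastructure:

```
# /etc/hosts on the managed device
192.168.57.250   aos-server
```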
web
In a future release, the Apstra REST API will be able to run on a separate server from the Apstra server
itself. This feature is for Apstra internal usage only.
interface
The device agent source interface applies to Linux servers only (Ubuntu, CentOS). This source IP is the
server interface that the device agent uses when registering with Apstra. For example, on a server, to
bind the device agent to eth1 instead of the default eth0, specify interface = eth1.
Service Section
[service]
# AOS device agent by default starts in "telemetry-only" mode. Set following
# variable to 1 if you want AOS agent to manage the configuration of your
# device.
enable_configuration_service = 0
# When managing device configuration AOS agent will restore backup config if it
# fails to connect to AOS controller in <backup_config_restoration_timeout>,
# specified as <hh:mm:ss>. Set it to 00:00:00 to disable backup restoration
backup_config_restoration_timeout = 00:00:00
The service section manages specific agent configuration related to configuration rendering and
telemetry services.
enable_configuration_service
This field specifies the operation mode of the device agent: telemetry only or full control.
enable_configuration_service = 0 - To push telemetry (alerts) only, leave the default value of 0.
Configuration files won't be modified unless a network administrator specifies it.
enable_configuration_service = 1 - Setting this field to 1 allows Apstra to fully manage the device
configuration, including pushing discovery and full intent-based configuration.
backup_config_restoration_timeout
Configuration is not stored on the device. This prevents a device from booting up and immediately
participating in a fabric that may not be properly configured yet. The Apstra device agent is configured
after the discovery phase completes.
backup_config_restoration_timeout = 00:00:00 - This disabled state (default) keeps the Apstra device agent
from replacing the running configuration if it cannot contact the Apstra server. Any previous
configuration state is not restored.
backup_config_restoration_timeout = 00:15:00 - Any value other than the default 00:00:00 enables the
Apstra agent to boot and replace the running configuration with the last known previous state after the
specified period of time (fifteen minutes in this example). Specifically, the files from /.aos/rendered/ are
restored to the system after the configuration restore period expires.
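Putting the two settings together, a [service] stanza for a fully managed agent that restores the last rendered configuration after fifteen minutes without controller connectivity would look like this:

```
[service]
enable_configuration_service = 1
backup_config_restoration_timeout = 00:15:00
```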
Logrotate Section
[logrotate]
# AOS has builtin log rotate functionality. You can disable it by setting
# <enable_log_rotate> to 0 if you want to use linux logrotate utility to manage
# your log files. AOS agent reopens log file on SIGHUP
enable_log_rotate = 1
# Log file will be rotated when its size exceeds <max_file_size>
max_file_size = 1M
# The most recent <max_kept_backups> rotated log files will be saved. Older
# ones will be removed. Specify 0 to not save rotated log files, i.e. the log
# file will be removed as soon as its size exceeds limit.
max_kept_backups = 5
# Interval, specified as <hh:mm:ss>, at which log files are checked for
# rotation.
check_interval = 1:00:00
Apstra logs to the /var/log/aos folder under a series of files. Apstra implements its own method of log
rotation to prevent /var/log/aos from filling up. You can enable (1) or disable (0) log rotation with
enable_log_rotate. Each individual log file is rotated when it exceeds the configured maximum size. By
default, log files are checked for rotation every hour.
[device_info]
# <model> is used to specify the device's hardware model to be reported to AOS
# device manager. This is only used by servers, so can be ignored for non-
# server devices such as switches. By default a server reports "Generic Model"
# which matches a particular HCL entry's selector::model value in AOS. Specify
# another model for the server to be classified as a different HCL entry.
model = Generic Model
model
The device info section is used to modify the default device model of servers as they register to Apstra.
For example, Server 2x10G changes the server to a dual-attached L3 server. The valid options for model
are:
• Generic Model
• Server 2x10G
• Server 1x25G
• Server 1x40G
• Server 4x10G
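For example, to register a server as a dual-attached 10G server (using one of the values listed above), the stanza would be:

```
[device_info]
model = Server 2x10G
```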
Apstra uses CLI to retrieve telemetry from Junos OS and Junos OS Evolved devices.
Service Command
/network-instances/network-instance/mac-table/entries/entry
Arista EOS uses a few techniques from the EOS SDK API to directly subscribe to event notifications
from the switch, for example 'interface down' or 'new route' notifications. When using event-based
notifications, you do not have to continually render 'show' commands every few seconds; the EOS SDK
gives you the information as soon as the switch has the status.
CAUTION: Event-based subscription requires the EOSProxySDK agent. For details, see
"Arista Device Agents" on page 652.
When the Arista API does not provide certain information (for example, LLDP statistics), Apstra runs CLI
commands at a regular interval to derive telemetry expectations.
Service Command
Cisco telemetry is derived from the NX-API with 'show' commands and embedded event manager
applets that provide context data to the device agent while it is running. Most commands are run as
their CLI version wrapped into JSON output.
Service Command
Hostname: hostname
ARP: ip -4 neigh
MLAG: clagctl -j
Analytics
IN THIS SECTION
Ensure that the same metric is not collected twice from the same device.
Goal: Present utilization data for system CPU, system memory, and maximum disk utilization of a
partition on every system present.
Widgets / Probes: Systems with high cpu utilization / Device System Health
Goal: Present sustained service execution anomalies under the device telemetry health probe.
Widgets / Probes:
• Systems with degraded waiting time per service / Device Telemetry Health
• Systems that sustained telemetry timeouts per service / Device Telemetry Health
• Systems that sustained telemetry failures per service / Device Telemetry Health
• Systems that sustained telemetry underruns per service / Device Telemetry Health
Goal: Ensure drained switches are indeed drained of traffic by ensuring total bandwidth is minimal.
Goal: Find issues in physical infrastructure that affect the available throughput, caused by issues
such as imbalanced traffic over a group of L3 (ECMP) or L2 (LAG) links.
Goal: Visualize traffic trends for general insights into fabric usage.
Goal: Find problems in physical or virtual infrastructure that affect workload connectivity.
Widgets / Probes:
• Hypervisor VLANs missing in Fabric / Hypervisor & Fabric VLAN Config Mismatch
• Hypervisor PNIC LAG Status / Hypervisor & Fabric LAG Config Mismatch
• Critical Services affected by VLAN misconfig / VMs Without Fabric Configured VLANs
Goal: Find single points of failure in physical or virtual infrastructure that affect high availability and
available bandwidth for workloads.
Widgets / Probes: Hypervisors without ToR switch redundancy / Hypervisor Redundancy Checks
IN THIS SECTION
Hypervisor and Fabric LAG Config Mismatch Probe (Virtual Infra) | 1075
Hypervisor and Fabric VLAN Config Mismatch Probe (Virtual Infra) | 1076
Apstra software ships with many predefined probes that you can instantiate (Analytics > Probes >
Create Probe > Instantiate Predefined Probe).
The BGP Session Monitoring probe shows BGP session status for all switches and raises anomalies for
flapping BGP sessions. In Freeform blueprints, the probe also monitors and raises anomalies when BGP
sessions are down, missing or unknown (new in Apstra version 4.2.0). (In Datacenter blueprints, BGP
session up and down state is included with built-in telemetry, so it's not required in this probe.)
BGP Session
The BGP Session processor includes the parameters as shown in the screenshot below:
The BGP Session stage shows all BGP sessions for devices.
BGP Session Down is included only in Freeform blueprints. The processor includes the parameters as
shown in the screenshot below:
The BGP Session Down stage determines if the BGP session is not "up" and raises an anomaly
accordingly.
The BGP Session Flapping processor includes the parameters as shown in the screenshot below:
The BGP Session Flapping stage checks if the BGP session has new flaps within the last service interval
period (2 minutes by default).
The Sustained BGP Session Flapping processor includes the parameters as shown in the screenshot
below:
The Sustained BGP Session Flapping stage checks whether the BGP session has new flaps over the specified period of time. For example, assume there are BGP flaps between the leaf1 and spine1 nodes. The fabric BGP session between these nodes generates a new flap whenever the status of the spine1 interface connected to leaf1 changes. Shutting down and bringing up that interface seven times on spine1 creates seven flaps for the fabric BGP sessions between leaf1 and spine1. The seven new flaps are added and two anomalies are raised.
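The flap-counting behavior described above can be pictured as a sliding-window check. The following is an illustrative Python model, not Apstra's implementation; the window length and flap threshold are hypothetical parameters.

```python
from collections import deque

class FlapDetector:
    """Illustrative model of sustained-flap detection: count flaps
    observed within a trailing time window and flag an anomaly when
    the count exceeds a threshold."""

    def __init__(self, window_seconds=120, max_flaps=5):
        self.window_seconds = window_seconds
        self.max_flaps = max_flaps
        self.flap_times = deque()

    def record_flap(self, timestamp):
        self.flap_times.append(timestamp)

    def is_anomalous(self, now):
        # Drop flaps that have aged out of the observation window.
        while self.flap_times and now - self.flap_times[0] > self.window_seconds:
            self.flap_times.popleft()
        return len(self.flap_times) > self.max_flaps
```

In this sketch, seven interface flaps arriving within the window would exceed a threshold of five and flag the session, mirroring the worked example above.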
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see more details specific to the probe.
The bandwidth utilization probe calculates bandwidth utilization. It captures a history of bandwidth utilization trends at different levels of aggregation.
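As a rough sketch of what bandwidth utilization means at the single-interface level, the hedged Python example below converts two byte-counter samples into a utilization percentage. The function name and parameters are illustrative; the actual probe additionally aggregates and retains these values over time.

```python
def utilization_percent(delta_bytes, interval_seconds, link_speed_bps):
    """Convert a byte-counter delta over a sampling interval into a
    percentage of the link's nominal speed (illustrative only)."""
    bits_per_second = (delta_bytes * 8) / interval_seconds
    return 100.0 * bits_per_second / link_speed_bps
```

For example, 3,125,000 bytes in one second on a 100 Mb/s link works out to 25% utilization.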
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The critical services probe monitors critical services identified by user tags and provides trending data for the interfaces hosting the tagged generic systems. You are proactively notified of issues from potential bandwidth contention. Additionally, historical data is persisted for trend analysis, for troubleshooting, or to assist in right-sizing future deployments. By default, the probe displays 1-hour/1-day/30-day average information and alerts if any individual interface with the specified tag reaches the utilization threshold.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The device environmental checks probe monitors critical environmental metrics for managed switches, including power supply, fan, and temperature, providing both real-time values and historical data retention over time. When you instantiate this predefined probe, the instantiation menu displays a list of switch models in the blueprint. PSU count, fan count, and air-flow direction information provide intent for deploying the switches.
If you have multiple blueprints that use the same switch model, you can set one expectation for the
switch in one blueprint and a different expectation for the switch in a different blueprint.
Within one blueprint, all switches of the same model must have the same expectations. For example,
you can’t differentiate between specific QFX5120-48Y switches.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The device system health probe alerts if the system health parameters (CPU, memory and disk usage)
exceed their specified thresholds for the specified duration.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The device telemetry health probe verifies telemetry collector health. It runs analytics on the collection statistics from service executions, and if telemetry collection health degrades, anomalies are raised.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The device traffic probe (previously known as the headroom probe) provides insights about link capacity between two points in the network. It provides multiple interface counters (rx, tx, discard, errors, and so on) for all managed devices. It displays all interface counters available for the system, their utilization on a per-port basis, and aggregated utilization on a per-system basis. If rules are violated, it raises anomalies.
NOTE: You can change probe inputs, but if you change the probe processors then the probe is
not a predefined probe anymore and the traffic layer view is not available in the active topology.
For more information about the traffic layer view, see "Physical Blueprint" on page 46.
Source Processor: Live Interface Counters ("Traffic Monitor" on page 1175)
Purpose: Wires in interface traffic counters every 5 seconds (by default) for all managed devices and keeps historical data based on the retention period specified during probe creation.
Output Stage: Average Interface Counters, a set of interface counter samples for each port of each managed device, based on the specified average time, with historical data.

Additional Processor: System interface counters ("System Utilization" on page 1169)
Purpose: Consumes the 'Average Interface Counters' stage to calculate interface counters per system with historical data. It uses the properties rx_bps_average, rx_utilization_average, tx_bps_average, and tx_utilization_average to compute the system TX and RX utilization and to compute headroom between the specified source and destination systems.
To see traffic between a particular source and destination from the device traffic probe, click System
Interface Counters, check the Show Context check box, then select a source and destination from the
drop-down lists. Roll over different sections to display relevant information. Different colors represent
link capacity, where green means plenty of capacity and red means that the link is running out of capacity.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The drain traffic anomaly probe raises anomalies when excess traffic is on a node that is being drained.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
Purpose: This probe calculates ECMP imbalance on generic system-facing ports. The set of external-facing links (keyed by common system_id) is determined to be imbalanced if the standard deviation of the tx_bytes counter (averaged periodically over the specified period) for the involved interfaces is above "Max Standard Deviation". If such imbalance is observed for more than "Threshold Duration" over the last "Duration" time period, an anomaly is raised. The last "Anomaly History Count" anomaly state changes are stored for observation. If more than "Max Imbalanced Systems" systems are imbalanced, an anomaly is raised. The number of imbalanced systems over the last "System Imbalance History Count" samples is maintained for inspection.
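The core imbalance test reduces to a standard-deviation comparison per system. Below is a minimal Python sketch, assuming the per-interface averaged tx rates are already available; names are illustrative, and it is not stated here whether the probe uses the population or sample standard deviation (population is assumed).

```python
import statistics

def ecmp_imbalanced(tx_averages, max_std_dev):
    """A set of ECMP member links is imbalanced when the standard
    deviation of their averaged tx counters exceeds the configured
    "Max Standard Deviation" (population std dev assumed)."""
    return statistics.pstdev(tx_averages) > max_std_dev
```

Evenly loaded members yield a standard deviation near zero, so only genuinely skewed traffic distributions cross the threshold.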
Source Processor: external interface traffic (Interface Counters)
Purpose: Wires in interface traffic samples (measured in transmitted bytes per second) from each interface connected to the generic systems.

Additional Processor: systems imbalanced count (Match Count)
Purpose: Counts how many systems have the external ECMP imbalance anomaly true at any instant in time.
Input Stage: sustained_ecmp_imbalance
Output Stage: system_tx_imbalance_count, the number of systems with external ECMP imbalance.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
A given set of ECMP links (only calculated on leaf-to-spine links), identified by common system_id, is determined to be imbalanced if the standard deviation of the tx_bytes counter (averaged periodically over the specified period) for the involved leaf interfaces is above "Max Standard Deviation". If such imbalance is observed for more than "Threshold Duration" over the last "Duration" time period, an anomaly is raised. The last "Anomaly History Count" anomaly state changes are stored for observation. The number of imbalanced systems over the last "System Imbalance History Count" samples is maintained for inspection.
Source Processor: leaf fabric interface traffic (Interface Counters)
Purpose: Wires in interface traffic samples (measured in bytes per second) from each spine-facing interface on each leaf.
Output Stage: leaf_fabric_int_traffic, a set of traffic samples (for each spine-facing interface on each leaf). Each set member has the following keys to identify it: label (human-readable name of the leaf), system_id (ID of the leaf system, usually the serial number), and interface (name of the interface).

Additional Processors:

leaf fabric interface traffic avg (Periodic Average)
Purpose: Calculates average traffic during the period specified by the average_period facade parameter. Unit is bytes per second.
Input Stage: leaf_fabric_int_traffic
Output Stage: leaf_fabric_int_tx_avg, a set of traffic average values (for each spine-facing interface on each leaf), with the same identifying keys as leaf_fabric_int_traffic.

leaf fabric interface std-dev (Standard Deviation)
Purpose: Calculates the standard deviation of the set of traffic averages for the spine-facing interfaces on a given leaf. Grouping per leaf is achieved using the 'group_by' property set to 'system_id'.

sustained ecmp imbalance (Time in State)
Purpose: Evaluates whether the standard deviation between spine-facing interfaces on each leaf has been outside the acceptable range (as defined by the 'live ecmp imbalance' processor) for more than 'threshold_duration' seconds during the last 'total_duration' seconds. These two parameters are part of the facade specification.

systems imbalanced count (Match Count)
Purpose: Counts how many systems have the ECMP imbalance anomaly true at any instant in time.
Input Stage: system_imbalance

imbalanced system count out of range (Range)
Purpose: Evaluates whether the number of imbalanced systems is within the acceptable range, which in this instance means less than the 'max_systems_imbalanced' value, a facade parameter.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The ECMP imbalance (spine to superspine interfaces) probe calculates ECMP imbalance on spine-to-superspine ports. A given set of ECMP links (only calculated on spine-to-superspine links), identified by common system_id, is determined to be imbalanced if the standard deviation of the tx_bytes counter (averaged periodically over the specified period) for the involved spine interfaces is above "Max Standard Deviation". If such imbalance is observed for more than "Threshold Duration" over the last "Duration" period, an anomaly is raised. The last "Anomaly History Count" anomaly state changes are stored for observation. If more than "Max Imbalanced Systems" systems are imbalanced, a distinct anomaly is raised. The number of imbalanced systems over the last "System Imbalance History Count" samples is maintained for inspection.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The ESI imbalance probe calculates ESI imbalance. It calculates the standard deviation across links for all ESIs in the network. If any are over the specified threshold in the last specified time period, an anomaly is raised. It also calculates the percentage of ESIs in each rack in this state.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
EVPN host flaps occur when an L2 loop is mistakenly created under the leaf devices by connecting a
hub to two different leaf devices.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The EVPN VXLAN Type-3 route validation probe validates EVPN Type-3 routes on every leaf in the
network. It collects appropriate telemetry data, compares it to the set of Type-3 routes expected to be
present and alerts if expected routes are missing on any device.
• Anomaly Threshold (in %): If routes are missing for a percentage of the Anomaly Time Window that is greater than or equal to this threshold, an anomaly is raised. If the Anomaly Time Window is ATW and the Anomaly Threshold is AT, the probe calculates Z = (ATW * AT)/100 in seconds. For example, if ATW = 20 seconds and AT = 5%, then Z = (20 * 5)/100 = 1 second. When the route is in the Missing state for Z seconds out of the total ATW duration, an anomaly is raised.
• Collection period: All these probes are polling-based so they have a polling period.
• Monitored VNs: Specify the virtual networks to be monitored: either a list of desired VNs, for example "1-3,6,8,10-13", or " * " to monitor all virtual networks.
• Missing: This route is missing on the device when compared to the expected route set.
• Unexpected: There are no expectations rendered (by AOS) for this route.
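The Anomaly Threshold arithmetic above can be written out directly. A hedged Python sketch follows; the function and parameter names are illustrative, not the probe's API.

```python
def missing_route_anomaly(missing_seconds, anomaly_time_window, anomaly_threshold_pct):
    """Z = (ATW * AT) / 100, in seconds. A route that stays in the
    Missing state for at least Z seconds out of the window raises an
    anomaly."""
    z = (anomaly_time_window * anomaly_threshold_pct) / 100.0
    return missing_seconds >= z
```

With ATW = 20 seconds and AT = 5%, Z = 1 second, matching the worked example above.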
This probe is created with an empty Monitored VNs (monitored_vn) list, which means that the probe
does not monitor any virtual networks by default. When you instantiate this probe you must specify a
list of virtual networks (up to ten) for which routes are collected, or you can specify " * " in which case all
virtual networks are monitored.
CAUTION: Specifying " * " in the Monitored VNs field may result in high cpu/memory/
network I/O overhead associated with BGP routing table iteration on the device side.
NOTE: Auto-enabling the EVPN VXLAN Route Summary analytics dashboard enables the EVPN
VXLAN Type-3 Route Validation and EVPN Flood List Validation probes automatically (but not
the EVPN VXLAN Type-5 Route Validation probe). See Configuring Auto-Enabled
Dashboards<configure_dashboard> for information about enabling the dashboard.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The EVPN VXLAN Type-5 route validation probe validates the EVPN Type 5 routes on every leaf. The
collected data is matched against the graph data to ascertain any missing routes on any system.
• Anomaly Threshold (in %): If routes are missing for a percentage of the Anomaly Time Window that is greater than or equal to this threshold, an anomaly is raised. If the Anomaly Time Window is ATW and the Anomaly Threshold is AT, the probe calculates Z = (ATW * AT)/100 in seconds. For example, if ATW = 20 seconds and AT = 5%, then Z = (20 * 5)/100 = 1 second. When the route is in the Missing state for Z seconds out of the total ATW duration, an anomaly is raised.
• Collection period: All these probes are polling-based so they have a polling period.
• Missing: This route is missing on the device when compared to the expected route set.
• Unexpected: There are no expectations rendered (by AOS) for this route.
If this probe is enabled it monitors all virtual networks from all devices. It does not provide the
“monitored VN list” configuration option like the VXLAN Type-3 probe does.
NOTE: Auto-enabling the EVPN VXLAN Route Summary analytics dashboard enables the EVPN
VXLAN Type-3 Route Validation and EVPN Flood List Validation probes automatically (but not
the EVPN VXLAN Type-5 Route Validation probe). See Configuring Auto-Enabled
Dashboards<configure_dashboard> for information about enabling the dashboard.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
Purpose: The External Routes probe automatically activates the collection of received or advertised routes across all BGP sessions established with generic systems into a single stage output table (mixing received, used, and advertised routes). This probe assists with troubleshooting external network connectivity problems.

Parameters: The External Routes probe parameters below can be configured at the time of creation or anytime afterwards.

More-specific prefixes mask (le_mask): Match more-specific prefixes from a parent prefix, up to a prefix length of le_mask.

Less-specific prefixes mask (ge_mask): Match less-specific prefixes from a parent prefix, from ge_mask up to the prefix length of the route.
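One way to read the le_mask semantics is as a prefix-length bound on more-specific matches. The Python sketch below uses the standard ipaddress module; the function name is illustrative, and this is an interpretation of the parameter, not the probe's code.

```python
import ipaddress

def matches_more_specific(route, parent, le_mask):
    """True when `route` is a more-specific prefix of `parent` whose
    prefix length does not exceed le_mask."""
    r = ipaddress.ip_network(route)
    p = ipaddress.ip_network(parent)
    return r.subnet_of(p) and p.prefixlen < r.prefixlen <= le_mask
```

For example, with parent 10.0.0.0/16 and le_mask 24, the route 10.0.1.0/24 matches, while 10.0.1.0/28 does not.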
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
Purpose: This probe determines whether interface counters are hot (too high) or cold (too low). A given interface (considering only leaf fabric interfaces) is considered to be in a hot state if its average counter value is greater than "Max", and in a cold state if its average counter value is less than "Min". If such an undesired state is observed for more than "Threshold Duration" over the last "Duration" period, an anomaly is raised. Distinct anomalies are raised for hot and cold states. If more than "Max Hot Interface Percentage" percent of interfaces on a given device are hot, an anomaly is raised. If more than "Max Cold Interface Percentage" percent of interfaces on a given device are cold, an anomaly is raised. Finally, the last "Anomaly History Count" anomaly state changes are stored for observation.
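The "Time in State" evaluation that the sustained hot and sustained cold processors perform can be sketched generically. This is illustrative Python assuming evenly spaced samples; the parameter names and the 8 MB/s "Max" value are hypothetical.

```python
def time_in_state_anomaly(samples, in_state, threshold_duration, sample_interval):
    """Raise an anomaly when the undesired state holds for more than
    threshold_duration seconds within the sampled window."""
    seconds_in_state = sum(sample_interval for s in samples if in_state(s))
    return seconds_in_state > threshold_duration

def hot(bps, max_bps=8_000_000):
    # Hot when average traffic exceeds the "Max" facade parameter.
    return bps > max_bps
```

For instance, ten consecutive 5-second samples above Max amount to 50 seconds in the hot state, which would exceed a 30-second threshold duration.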
Source Processor: leaf interface traffic (Interface Counters)
Purpose: Wires in interface traffic samples (measured in bytes per second) from each spine-facing interface on each leaf.
Output Stage: leaf_int_traffic, a set of traffic samples (for each spine-facing interface on each leaf). Each set member has the following keys to identify it: system_id (ID of the leaf system, usually the serial number), interface (name of the interface), and role (role of the interface, such as 'fabric').

Additional Processors:

leaf interface tx avg (Periodic Average)
Purpose: Calculates average traffic during the period specified by the average_period facade parameter. Unit is bytes per second.
Input Stage: leaf_int_traffic

interface sum per device (Sum)
Purpose: Sums average traffic for all interfaces under consideration, per device.
Input Stage: leaf_int_tx_avg

interface sum per device per link role (Sum)
Purpose: Sums average traffic for all interfaces under consideration, per device and per interface role.
Input Stage: leaf_int_tx_avg

live leaf interface cold (Range)
Purpose: Evaluates whether the average traffic on spine-facing interfaces on each leaf is within the acceptable range. In this case the acceptable range means larger than the min facade parameter (in bytes per second).

live leaf interface hot (Range)
Purpose: Evaluates whether the average traffic on spine-facing interfaces on each leaf is within the acceptable range. In this case the acceptable range is between 0 and the max facade parameter (in bytes per second).

sustained cold leaf interface (Time in State)
Purpose: Evaluates whether the average traffic on spine-facing interfaces on each leaf has been outside the acceptable range (as defined by the 'live leaf interface cold' processor) for more than 'threshold_duration' seconds during the last 'total_duration' seconds. These two parameters are part of the facade specification.

sustained hot leaf interface (Time in State)
Purpose: Evaluates whether the average traffic on spine-facing interfaces on each leaf has been outside the acceptable range (as defined by the 'live leaf interface hot' processor) for more than 'threshold_duration' seconds during the last 'total_duration' seconds. These two parameters are part of the facade specification.

system percent cold (Match Percentage)
Purpose: Calculates the percentage of interfaces that are cold on any given device under consideration.
Input Stage: cold_leaf_int

system percent hot (Match Percentage)
Purpose: Calculates the percentage of interfaces that are hot on any given device under consideration.
Input Stage: hot_leaf_int

device cold (Range)
Purpose: Evaluates whether the percentage of cold interfaces on a specific device is outside the acceptable range, where the acceptable range in this case means less than 'max_cold_interface_percentage', a facade parameter.

device hot (Range)
Purpose: Evaluates whether the percentage of hot interfaces on a specific device is outside the acceptable range, where the acceptable range in this case means less than 'max_hot_interface_percentage', a facade parameter.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The hot/cold interface counters (specific interfaces) probe determines hot/cold counters for specific interfaces. It determines whether interface counters averaged over "Average Period" are hot (too high) or cold (too low). A given interface (out of the specified list) is considered to be in a hot state if its average counter value is greater than "Max", and in a cold state if its average counter value is less than "Min". If such an undesired state is observed for more than "Threshold Duration" over the last "Duration" time period, an anomaly is raised. Distinct anomalies are raised for hot and cold states. If more than "Max Hot Interface Percentage" percent of interfaces on a given device are hot, an anomaly is raised. If more than "Max Cold Interface Percentage" percent of interfaces on a given device are cold, an anomaly is raised. Finally, the last "Anomaly History Count" anomaly state changes are stored for observation.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The hot/cold interface counters (spine-to-superspine interfaces) probe calculates ECMP imbalance on spine-to-superspine ports. A given set of ECMP links (only calculated on spine-to-superspine links), identified by common system_id, is determined to be imbalanced if the standard deviation of the tx_bytes counter (averaged periodically over the specified period) for the involved spine interfaces is above "Max Standard Deviation". If such an imbalance is observed for more than "Threshold Duration" over the last "Duration" period, an anomaly is raised. The last "Anomaly History Count" anomaly state changes are stored for observation. If more than "Max Imbalanced Systems" systems are imbalanced, a distinct anomaly is raised. The number of imbalanced systems over the last "System Imbalance History Count" samples is maintained for inspection.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
Purpose Detect inconsistent LAG configs between fabric and virtual infra and calculate LAGs
missing on hypervisors and managed leaf devices connected to hypervisors.
Source Processor: Hypervisor NICs with LAG (generic graph collector)
Output Stage: Hypervisor NICs LAG Intent Status (discrete state set) (generated from graph)

Additional Processor: Hypervisor NIC LAG anomalies (state)
Input Stage: Hypervisor NICs LAG Intent Status
Output Stage: Hypervisor NIC LAG Mismatch Anomaly (discrete state set)

Example Usage: vSphere Integration - This probe detects inconsistent LAG configs between fabric LAG dual-leaf devices and ESXi hosts. The probe collects LACP mode information from the fabric LAG dual-leaf devices, and also connects to the vCenter API to collect LAG groups and members per hypervisor.
• LAG member ports on ToR are connected to non-LAG physical ports on ESXi.
• Non-LAG member ports on ToR are connected to LAG physical ports on ESXi.
NSX Integration - Enabling this probe activates continuous LAG validation between NSX-T transport nodes and the data center fabric. It validates that LAGs are properly configured between fabric LAG dual-leaf devices and NSX-T transport nodes. The NSX-T uplink profile defines the network interface configuration facing the fabric in terms of LAG and LACP config. Network interface misconfiguration between the transport node and the ToR switch is validated and detected.
• NSX-T transport nodes are not configured for LAG but ToR has LAG member ports
in the fabric.
• ESXi hosts are dual-attached to ToR leaf devices but corresponding NSX-T transport
nodes are “single-attached” or they are using “NIC-teaming” using active-standby or
load-balanced config.
2. Add NSX-T Manager in the blueprint (External Systems > Virtual Infra Managers).
Suppose that in the NSX-T uplink profile the LAG is deleted, while the fabric still has LAG in the form of ToR leaf devices with LAG member ports. After you enable this probe in the blueprint, LAG mismatch anomalies are raised. Since the LAG on the NSX-T transport nodes has been deleted, there is a mismatch between the physical network adapter (pnic) LAG configuration on the ESXi hosts and the LAG configuration on the ToR leaf devices.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
Source Processor: Fabric configured VLAN configs (generic graph collector)
Output Stage: Fabric VLAN configs (number set) (generated from graph)
2. Click the Fabric VLAN Configs stage to show the VLANs tagged towards NSX-T transport nodes on
fabric ToR leaf devices as shown below:
3. Click the Common in Fabric and Hypervisor stage to show that VLANs in the NSX-T transport nodes
and the fabric match.
If the VLAN defined in the Uplink Transport Zone used for BGP peering is modified in the NSX-T
Manager, then VLAN mismatch anomalies are raised.
• If the configured VLAN NSX-T transport node is in the fabric, but the end VMs or servers are not part
of this virtual network or VLAN.
• If a segment is created in NSX-T for either an overlay or a VLAN-based transport zone, the configured VLAN spanning the logical switch/segment on the transport node could be missing on the fabric.
• If L2 bridging for VMs in different overlay logical segments is broken because one VM exists in one
logical switch/segment and the other VM exists in a separate uplink logical switch/segment.
As an example, a VLAN is missing in NSX-T 3.0 Host Transport node on the Overlay segment connected
to ToR leaf devices and respective VXLAN VN is present in Juniper Apstra Fabric and ports towards
In some scenarios, a VLAN mismatch anomaly can be remediated. If so, the Remediate Anomalies button
appears on the probe details page as shown in the screenshot above. Example scenarios include:
• NSX-T transport nodes use an uplink profile to define the transport VLAN over which the overlay tunnel comes up. The fabric could be missing the rack-local VN for the transport VLAN on hypervisors. One-click remediation can be provided by creating a new rack-local virtual network with the proper VLAN ID in the fabric.
• A rack-local virtual network is defined with VLAN ID Y; however, the connected virtual infra nodes (that is, hypervisors) do not have the VLAN ID in the logical segment/switch. One-click remediation can be provided by removing the endpoint from the affected VLAN ID.
If the Remediate Anomalies button appears under the stage name, you can click it to automatically stage
the changes required to remediate the anomaly. You can see the staged changes on the Uncommitted
tab.
Review the staged configuration, add any necessary resources (such as IP subnet address, virtual gateway IP, and so on), then commit the configuration.
• If the vCenter Distributed Virtual Switch (vDS) port group does not have a corresponding rack-local
VN (VLAN) for VLAN ID X. With one-click remediation, a new rack-local virtual network (VLAN) with
the proper VLAN ID is created.
• If endpoint X in a rack-local VN with VLAN ID Y, does not have a corresponding dVS port group.
With one-click remediation, the endpoint is removed from the affected VLAN ID.
NOTE: vCenter vDS must be used with VLAN-specific ID allocation on the port group for L2 network segmentation at the hypervisor level.
A VLAN-based rack-local virtual network extends each VLAN segment defined on the vDS across servers within the same rack. For example, vDS port group VLAN 10 = rack-local virtual network with VLAN 10.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
Purpose: Detect maximum transmission unit (MTU) value deviations across hypervisor physical network adapters (pnics).

Source Processor: Interface MTU (generic graph collector)
Output Stage: Interface MTU (number set) (generated from graph)

Additional Processor: MTU Mismatch (range)
Input Stage: Hypervisor MTU Deviation (number set)

Example Usage: NSX Integration - Anomalies are raised if validation between the NSX-T nodes and the controller fails because of a mismatch in the minimum configured MTU needed to support Geneve encapsulation, or if the VLANs defined on the NSX-T nodes are not configured on the ToR leaf interfaces connecting an NSX node to the fabric.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
Purpose: Detect virtual infra interfaces with maximum transmission units (MTU) below a specified threshold (default: 1600).

Source Processor: Interface MTU (generic graph collector)
Output Stage: Interface MTU (number set) (generated from graph)

Example Usage: NSX Integration - To carry VXLAN-encapsulated overlay traffic, an MTU greater than 1600 is recommended. NSX-T transport nodes connected to ToR leaf devices that are below the specified threshold are detected.
If any of the hypervisors were below the threshold, the expected value would change
to true and an anomaly would be raised.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
Purpose: Detect virtual infra hosts that are not configured for LLDP. (Formerly known as Virtual Infra
missing LLDP config.)
Source Processor: Hypervisor NIC LLDP Config (generic graph); output stage: Hypervisor NIC LLDP
config (discrete state set) (generated from graph).
Additional Processor(s): LLDP config by switch (match count); input stage: Hypervisor NIC LLDP
config; output stage: LLDP config by switch (number set).
Example Usage: VMware Integration - If LLDP information is missing on the ToR connected to
physical ports on ESXi, an anomaly is raised.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
Source Processors: Hypervisor and connected leaf (generic graph); output stage: hypervisor_and_leaf
(text set) (generated from graph).
Example Usage: NSX-T Integration - an anomaly is raised in cases without redundancy or with a single
point of failure (SPOF) in hypervisor connectivity. Examples include:
• NSX-T transport nodes with a single non-LAG uplink towards ToR leaf devices in the fabric, which
can result in a single point of failure (SPOF) for overlay traffic.
• NSX-T transport nodes with a single LAG uplink with both members going to a single ToR leaf,
which can result in a single point of failure (SPOF).
• Lack of redundancy between fabric LAG dual-leaf devices and ESXi hosts.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
Purpose: This probe determines if fabric interfaces are flapping. A given interface (considering only
fabric interfaces) is considered to be flapping if it transitions state more than "Threshold" times over
the last "Duration". Such flapping causes an anomaly to be raised. If more than "Max Flapping
Interfaces Percentage" percent of interfaces on a given device are flapping, an anomaly is raised for
that device. Finally, the last "Anomaly History Count" anomaly state changes are stored for
observation.
Source Processor: leaf fab int status (Service Data Collector). Purpose: wires in interface status
telemetry for all fabric interfaces on the leaf devices. Output stage: leaf_if_status, a set of operational
states ("up" or "down"). Each set member corresponds to a leaf fabric interface and has the following
keys to identify it: system_id (ID of the leaf system, usually the serial number), interface (name of the
interface).
Additional Processor(s): leaf fabric interface status history (Accumulate). Purpose: create a recent-
history time series for each interface status. In terms of the number of samples, the time series holds
the smaller of 1024 samples or the samples collected during the last 'total_duration' seconds (facade
parameter).
leaf fabric interface flapping (Range). Purpose: count the number of state changes in
leaf_fab_int_status_accumulate ("up" to "down" and "down" to "up"). If the count is higher than the
'threshold' facade parameter, return "true"; otherwise "false".
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The interface flapping (specific interfaces) probe determines if specific interfaces are flapping. A given
interface (considering only those specified) is considered to be flapping if it transitions state more than
"Threshold" times over the last "Duration". Such flapping causes an anomaly to be raised. If more than
"Max Flapping Interfaces Percentage" percent of interfaces on a given device are flapping, an anomaly is
raised for that device. Finally, the last "Anomaly History Count" anomaly state changes are stored for
observation.
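The transition-counting logic described above can be sketched as follows. This is a simplified illustration, not Apstra code; the function names and the sample data are hypothetical, and real probes evaluate telemetry over a sliding time window rather than a fixed list:

```python
def count_transitions(samples):
    """Count state changes ("up" -> "down" or "down" -> "up") in a series."""
    return sum(1 for prev, cur in zip(samples, samples[1:]) if prev != cur)

def flapping_anomalies(series_by_interface, threshold, max_flapping_pct):
    """Return (set of flapping interfaces, device-level anomaly flag)."""
    flapping = {name for name, samples in series_by_interface.items()
                if count_transitions(samples) > threshold}
    pct = 100.0 * len(flapping) / max(len(series_by_interface), 1)
    return flapping, pct > max_flapping_pct

# eth0 transitions 4 times in the window; with threshold 3 it is flapping.
series = {
    "eth0": ["up", "down", "up", "down", "up"],
    "eth1": ["up", "up", "up", "up", "up"],
}
flapping, device_anomaly = flapping_anomalies(series, threshold=3, max_flapping_pct=60)
```

Here only 50% of the device's interfaces flap, which is below the 60% device-level threshold, so no device anomaly is raised.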
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The Interface Policy predefined probe is used to monitor 802.1X supplicants and interface
authentication. You can instantiate this probe to maintain 802.1X networks. The 802.1X hosts probe
gives a fast view of network 802.1X MAC addresses, authorization status, ports, and dynamic VLAN
information.
For more information about interface policies, see Interface Policies.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The LAG imbalance probe calculates LAG imbalance. It calculates the standard deviation across physical
links for all LAGs in the network.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
Monitors leaf devices hosting critical services identified by user tags, provides trending data for
fabric-facing interfaces, and alerts if bandwidth utilization reaches a threshold (80%). Users are
proactively notified of issues from potential bandwidth contention. Additionally, historical data is
persisted for trending analysis, for troubleshooting, or to assist in right-sizing future deployments. By
default, the probe displays the total fabric interface bandwidth as well as the total percentage of
bandwidth used for each tagged leaf device for the past day. An anomaly is raised if the used
bandwidth from the tagged leaf reaches 80% of the total available uplink bandwidth.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The link fault tolerance in leaf and access LAG probe monitors LAG fault tolerance issues from a
capacity viewpoint.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The MLAG Imbalance probe calculates MLAG imbalance. It calculates the standard deviation across
links for all MLAGs in the network; if any are over the specified threshold in the last specified time
period, an anomaly is raised, and the probe calculates the percentage of MLAGs in each rack in this
state. It also calculates the standard deviation across port-channels for all port-channels in all MLAGs
in the network; again, if any are over the specified threshold in the last specified time period, an
anomaly is raised, and the percentage of MLAGs in each rack in this state is calculated. Finally, it
calculates the standard deviation of port-channels across their containing MLAGs. If the standard
deviation for any of these MLAGs is over the specified threshold, an anomaly is raised, and the
percentage of port-channels in each rack in this state is calculated.
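The imbalance measure the probe uses is a standard deviation over member-link traffic rates. A minimal sketch of that calculation (illustrative only; the dictionary layout and `std_max` parameter are hypothetical stand-ins for the probe's facade parameter):

```python
import statistics

def mlag_imbalance(traffic_by_mlag, std_max):
    """For each MLAG, compute the standard deviation of average member-link
    traffic (bytes/sec) and flag MLAGs whose deviation exceeds std_max."""
    imbalance = {mlag: statistics.pstdev(rates)
                 for mlag, rates in traffic_by_mlag.items()}
    anomalies = {mlag for mlag, dev in imbalance.items() if dev > std_max}
    return imbalance, anomalies

traffic = {
    "mlag1": [1000.0, 1000.0],   # balanced: deviation 0
    "mlag2": [9000.0, 1000.0],   # imbalanced: deviation 4000
}
imbalance, anomalies = mlag_imbalance(traffic, std_max=2000.0)
```

With a `std_max` of 2000 bytes/sec, only `mlag2` is flagged.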
Source Processor: mlag interface traffic (Interface Counters). Purpose: wires in interface traffic
samples (measured in bytes per second) for all leaf interfaces that are part of an MLAG. Output stage:
mlag_int_traffic, a set of traffic samples (for each MLAG interface on each leaf). Each set member has
the following keys to identify it: mlag_id, server (label of the server node), leaf (label of the leaf node),
rack (label of the rack), system_id (leaf serial number), interface (name of the interface).
Additional Processor(s): (Standard Deviation). Output stage: mlag_int_traffic_imbalance, a set of
numbers, one for each mlag_id, each indicating the standard deviation of the average traffic on each
interface that is part of this MLAG. Each set member has the following keys to identify it: rack,
mlag_id. Unit is bytes per second.
port-channel total traffic (Sum). Purpose: calculate total traffic per port channel. Unit is bytes per
second. Input stage: mlag_int_traffic_avg.
mlag port-channel traffic std-dev (Standard Deviation). Purpose: calculate the standard deviation
between the traffic averages on both port channels belonging to an MLAG. Unit is bytes per second.
Input stage: mlag_port_channel_total. Output stage: mlag_port_channel_imbalance, a set of numbers,
one for each MLAG identified by a (mlag_id, rack) pair. Each number indicates the standard deviation
of the average traffic on each port channel that is part of this MLAG. Each set member has the
following keys to identify it: rack, mlag_id. Unit is bytes per second.
live port-channel imbalance (Range). Purpose: evaluate whether the port channel imbalance, as
measured by the standard deviation of the average traffic on each member interface, is within an
acceptable range. In this case the acceptable range is between 0 and the std_max facade parameter (in
bytes per second).
mlag port-channel imbalance per rack (Match Percentage). Purpose: calculate the percentage of
MLAGs on a given rack that have a port channel imbalance anomaly. Input stage:
mlag_port_channel_imbalance_out_of_range. Output stage:
mlag_port_channel_imbalance_anomaly_per_rack, a set of numbers, each indicating the percentage of
port channels with imbalance on each rack. Each set member has the following key to identify it.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The multiagent detector probe raises an anomaly if EOS is not running in multiagent mode, indicating
that a reboot is required.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The Optical Transceivers probe monitors optical statistics based on the following telemetry data:
If telemetry data falls outside the specified range for the specified amount of time, a warning or alarm is
raised, as applicable.
Warnings and alarms specify whether the value causing the anomaly was too high or too low.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The packet discard percentage probe provides visibility into issues related to physical interfaces.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
The spine fault tolerance probe monitors spine fault tolerance issues from a capacity viewpoint.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
Purpose: The Total East/West Traffic probe calculates total east/west traffic. This probe takes the sum
of all traffic to leaf devices from their directly-attached servers and subtracts from that the sum of all
traffic to external routers (all traffic values in this calculation are averaged periodically over "Average
Period"). The result is the total east/west traffic. A time series of length "History Sample Count" is
maintained for the sum of server traffic, the sum of external traffic, and the total east/west traffic.
When instantiating this probe, external router tag(s) must be specified (new in version 4.0).
Source Processors: external router south-north link traffic (Interface Counters). Purpose: wires in
interface traffic samples (measured in bytes per second) for traffic sent to external routers. Output
stage: ext_router_interface_traffic.
leaf server traffic counters (Interface Counters). Purpose: wires in interface traffic samples (measured
in bytes per second) for traffic received on leaf devices from the servers. Output stage:
server_traffic_counters, a set of traffic samples (for each server-facing interface on each leaf) in the
receive direction. Each set member has the following keys to identify it: system_id (ID of the leaf
system, usually the serial number), interface (name of the interface).
Additional Processor(s): external router south-north links traffic average (Periodic Average). Purpose:
calculate the average traffic for each interface facing an external router during the period specified by
the average_period facade parameter. Unit is bytes per second.
server traffic average (Periodic Average). Purpose: calculate the average server traffic during the
period specified by the average_period facade parameter. Unit is bytes per second.
south-north traffic (Sum). Purpose: calculate total traffic by summing the average traffic on each
interface facing an external router. Unit is bytes per second.
total server traffic (Sum). Purpose: calculate total server traffic by summing the average traffic on
each interface attached to servers in the receive direction. Unit is bytes per second.
server generated traffic average (Periodic Average). Purpose: calculate the total average server traffic
over average_period seconds, which is a facade parameter. Unit is bytes per second.
east-west traffic (Subtract). Purpose: create a recent-history time series showing how total average
east-west traffic changed over time. In terms of the number of samples, the time series holds
history_sample_count values (facade parameter). Unit is bytes per second.
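The core arithmetic of this probe is a subtraction of two sums of periodic averages. A minimal sketch (illustrative only; the function name and sample rates are hypothetical, and the real probe operates on per-interface time series rather than plain lists):

```python
def east_west_traffic(server_rates, external_rates):
    """Total east/west traffic in bytes/sec: server-generated traffic minus
    the traffic leaving the fabric through external routers.

    server_rates:   periodic averages for each server-facing leaf interface
    external_rates: periodic averages for each external-router-facing interface
    """
    total_server = sum(server_rates)
    total_external = sum(external_rates)
    return total_server - total_external

# Three server-facing interfaces and one external-router uplink:
ew = east_west_traffic([5000.0, 7000.0, 3000.0], [4000.0])
```

Here 15000 bytes/sec enters from servers and 4000 bytes/sec exits north-south, leaving 11000 bytes/sec of east/west traffic.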
Purpose: Calculate VMs missing a VLAN and VMs not backed by VLANs on managed leaf devices
connected to hypervisors.
Source Processors: VMs backed by Fabric VLANs (generic graph collector); output stage: VMs backed
by Fabric VLANs (number set) (generated from graph).
Affected VM Anomalies (range); input stage: VMs not backed by Fabric VLANs; output stage:
Affected VM Anomalies (discrete state set).
Example Usage: NSX-T Integration - VMs participating in a particular network are attached to an NSX
logical switch. In NSX, a transport zone controls the hypervisors (ESXi hosts) that an NSX logical
switch can span. To have VXLAN connectivity, these VMs need to be part of the same transport zone.
This predefined anomaly helps validate that all VLAN backend interfaces defined for NSX-T nodes are
also configured on the ToR interfaces connecting those nodes to the fabric.
The VLAN probe checks for VLAN specification in the case of NSX-T via one of the two methods
below:
Method One: When you have VMs that are connected to the NSX-T overlay, you can configure a
bridge-backed logical switch to provide Layer 2 connectivity with other devices or VMs. If a VLAN is
specified on an NSX-T Layer 2 bridge and the respective VXLAN VN is not present on the fabric, an
anomaly is raised.
Method Two: Edge uplinks go out through VLAN logical switches. If the uplink VLAN logical switch
has a particular VLAN ID and the respective VLAN is not configured on the ToR port connected to
the hypervisor host, the VLAN probe raises anomalies and helps detect the misconfiguration.
There is one VM on each ESXi host that needs a VXLAN VN endpoint on each leaf (that is,
nsxcompute_001_leaf1 and nsxedge_001_leaf1) to communicate on the overlay network. When
VXLAN VNs assigned to ToR leaf devices are deleted, VLAN misconfig anomalies are raised under
Fabric Health in the dashboard.
VMs not backed by Fabric VLANs shows VMs with missing VLANs.
The VXLAN flood list validation probe validates the VXLAN flood list entries on every leaf in the
network. It collects appropriate telemetry data, compares it to the set of flood list forwarding entries
expected to be present and alerts if expected entries are missing on any device.
• Anomaly Threshold (in %): If routes are missing for more than or equal to this percentage of the
Anomaly Time Window, an anomaly is raised. If the Anomaly Time Window is ATW and the Anomaly
Threshold is AT, the probe calculates Z = (ATW * AT)/100 in seconds. For example, if ATW = 20
seconds and AT = 5%, then Z = (20 * 5)/100 = 1 second. When a route is in the Missing state for Z
seconds out of the total ATW duration, an anomaly is raised.
• Collection period: All these probes are polling-based, so they have a polling period.
• Missing: The route is missing on the device when compared to the expected route set.
• Unexpected: There are no expectations rendered (by AOS) for this route.
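The threshold arithmetic above can be expressed directly. This is a small illustrative sketch (the function names are hypothetical, not Apstra API calls):

```python
def anomaly_sustain_seconds(window_s, threshold_pct):
    """Z = (ATW * AT) / 100: how long a route must stay in the Missing state
    within the Anomaly Time Window before an anomaly is raised."""
    return window_s * threshold_pct / 100.0

def route_anomaly(missing_seconds, window_s, threshold_pct):
    """True if the route was Missing long enough to trigger an anomaly."""
    return missing_seconds >= anomaly_sustain_seconds(window_s, threshold_pct)

z = anomaly_sustain_seconds(20, 5)   # the document's example: 1.0 second
raised = route_anomaly(1.5, 20, 5)   # missing for 1.5 s of a 20 s window
```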
NOTE: Auto-enabling the EVPN VXLAN Route Summary analytics dashboard enables the EVPN
VXLAN Type-3 Route Validation and EVPN Flood List Validation probes automatically (but not
the EVPN VXLAN Type-5 Route Validation probe). See Configuring Auto-Enabled
Dashboards for information about enabling the dashboard.
For more information about this probe, from the blueprint, navigate to Analytics > Probes, click Create
Probe, then select Instantiate Predefined Probe from the drop-down list. Select the probe from the
Predefined Probe drop-down list to see details specific to the probe.
Processor: Accumulate
The Accumulate processor used in IBA probes creates one number or discrete-state time series on
output for each input with the same properties; each time the input changes, it takes the input's
timestamp and value and appends them to the corresponding output series. If total duration
(total_duration) is set and the length of the output time series in time is greater than total_duration, it
removes old samples from the time series until this is no longer the case. If max samples
(max_samples) is set and the length of the output time series in terms of number of samples is greater
than max_samples, it removes old samples from the time series until this is no longer the case.
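The trimming behavior described above can be sketched as a small class. This is an illustrative model, not Apstra code; the class and method names are hypothetical:

```python
from collections import deque

class Accumulate:
    """Append (timestamp, value) samples to a series; trim old samples by
    count (max_samples) and/or age in seconds (total_duration)."""

    def __init__(self, max_samples=0, total_duration=0):
        self.max_samples = max_samples
        self.total_duration = total_duration
        self.series = deque()

    def ingest(self, timestamp, value):
        self.series.append((timestamp, value))
        # Trim by sample count.
        if self.max_samples:
            while len(self.series) > self.max_samples:
                self.series.popleft()
        # Trim by age relative to the newest sample.
        if self.total_duration:
            while self.series and timestamp - self.series[0][0] > self.total_duration:
                self.series.popleft()

acc = Accumulate(max_samples=3)
for t, state in [(1, "up"), (2, "down"), (3, "up"), (4, "down")]:
    acc.ingest(t, state)
# Only the 3 newest samples remain in acc.series.
```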
Parameter Description
Total Duration (total_duration) Limits the number of samples by their total duration, in seconds, or an
expression that evaluates to a number of seconds (default: 0).
graph_query: [node("property_set", label="probe_propset", name="ps")]
duration: int(query_result[0]["ps"].values["accumulate_duration"])
Example: Accumulate
Assume a configuration of
max_samples: 3
total_duration: 0
[if_name=eth0] : "up"
[if_name=eth1] : "down"
[if_name=eth3] : "up"
[if_name=eth0] : "down"
[if_name=eth1] : "down"
[if_name=eth3] : "up"
[if_name=eth0] : "up"
[if_name=eth1] : "down"
[if_name=eth3] : "up"
[if_name=eth0] : "down"
[if_name=eth1] : "down"
[if_name=eth3] : "up"
If the expressions are used for max_samples or total_duration, then they are evaluated for each input
item and the corresponding key is added for each output item.
max_samples: context.ref_max_samples * 2
total_duration: context.ref_duration * 2
Sample input:
Output
Processor: Average
The Average processor groups input items as described by Group by, then calculates averages and
outputs one average for each group.
Parameter Description
Group by (group_by) Accepts a list of property names to group input items into output items;
produces only one output group for the empty list. Most processors take input and
produce output. Many of them produce one output per input (for example, if input is a
DSS, output is a DSS of the same size). However, some processors reduce the size of
the output relative to the size of the input. Effectively, they partition the input into
groups, run some calculation on each group that produces a single value per group, and
use that as output. Thus, the size of the output set depends on the grouping scheme.
We call such processors grouping processors, and they all take the Group by
configuration parameter.
In the case of an empty list, the input is considered to be a single group; thus, the
output is of size 1 and either N, DS, or TS. If a list of property names is specified, for
example ["system_id", "iface_role"], or a single property is specified, for example
["system_id"], we divide the input into groups such that for each group, every item in
the group has the same values for the given list of property names. See the "standard
deviation processor" on page 1162 example for how this works.
The output type of a processor depends on the value of the group_by parameter; for an
empty list, a processor produces a single-value result, such as N, DS, or TS, and for
grouping by one or more properties it returns a set result, such as NS, DSS, or TS.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in the
generic protobuf schema.
Example: Average
See "standard deviation" on page 1162 example. It's the same except we calculate average instead of
standard deviation.
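The group-by-then-average behavior can be sketched as follows. This is an illustrative model of the grouping semantics only (the item encoding as key-tuple dictionaries is hypothetical, not Apstra's internal representation):

```python
from collections import defaultdict

def group_average(items, group_by):
    """items: {((prop, value), ...): number}. Partition the items by the
    listed property names and average each group; an empty group_by list
    yields a single group covering every item."""
    groups = defaultdict(list)
    for keys, value in items.items():
        props = dict(keys)
        group = tuple((p, props[p]) for p in group_by)
        groups[group].append(value)
    return {g: sum(v) / len(v) for g, v in groups.items()}

items = {
    (("system_id", "leaf1"), ("interface", "eth0")): 10.0,
    (("system_id", "leaf1"), ("interface", "eth1")): 30.0,
    (("system_id", "leaf2"), ("interface", "eth0")): 50.0,
}
avg = group_average(items, ["system_id"])
# leaf1's two interfaces average to 20.0; leaf2's single interface stays 50.0.
```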
Processor: Comparison
The Comparison processor takes two Table(number) inputs: 'A' and 'B'. It matches corresponding
items from the inputs by their keys, and performs a comparison operation defined by the 'operation'
configuration property. If the inputs have different sets of keys, the 'significant_keys' configuration
property should be set; it is a list of keys used to map items from the inputs. Otherwise, if the key sets
of the inputs differ, no items are matched and an empty result is returned. Also, the inputs and
significant_keys (if specified) must allow only 1:1 item mapping from 'A' to 'B'. If one item from 'A'
matches more than one item from 'B', or vice versa, the probe goes into the error state.
Parameters Description
Comparison Operation Operation for comparing operands. le (less than or equal), ne (not equal), ge (greater
(operation) than or equal), gt (greater than), lt (less than), eq (equal)
Significant Keys List of keys to map items from the inputs for applying the specified operation. It is
(significant_keys) typically used by processors that take multiple inputs and perform operations on
them. When inputs have the same sets of keys it does not need to be specified.
When inputs have different sets of keys, it must be specified and it must allow only
1:1 items mapping from the given inputs, otherwise the probe will go into error
state.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in
the generic protobuf schema.
Example: Comparison
Input A:
[system_id=leaf1,interface=eth0,counter_type=tx_bytes]: 34
[system_id=leaf1,interface=eth1,counter_type=tx_bytes]: 58
Input B:
[system_id=leaf1,interface=eth0,counter_type=rx_bytes]: 15
[system_id=leaf1,interface=eth1,counter_type=rx_bytes]: 73
Output (Discrete-State-Set):
[system_id=leaf1,interface=eth0]: "true"
[system_id=leaf1,interface=eth1]: "false"
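The matching-and-compare semantics of the example above can be sketched as follows. This is an illustrative model (the item encoding as key-tuple dictionaries and the `compare` function are hypothetical); it reproduces the document's example by comparing tx_bytes against rx_bytes with `gt`:

```python
import operator

OPS = {"eq": operator.eq, "ne": operator.ne, "lt": operator.lt,
       "le": operator.le, "gt": operator.gt, "ge": operator.ge}

def compare(a_items, b_items, op, significant_keys):
    """Match items from A and B on significant_keys, apply op(A, B), and
    return a discrete-state set of "true"/"false" values."""
    def sig(keys):
        return tuple((k, v) for k, v in keys if k in significant_keys)
    b_by_sig = {sig(keys): val for keys, val in b_items.items()}
    result = {}
    for keys, a_val in a_items.items():
        s = sig(keys)
        if s in b_by_sig:  # unmatched items produce no output
            result[s] = "true" if OPS[op](a_val, b_by_sig[s]) else "false"
    return result

a = {(("system_id", "leaf1"), ("interface", "eth0"), ("counter_type", "tx_bytes")): 34,
     (("system_id", "leaf1"), ("interface", "eth1"), ("counter_type", "tx_bytes")): 58}
b = {(("system_id", "leaf1"), ("interface", "eth0"), ("counter_type", "rx_bytes")): 15,
     (("system_id", "leaf1"), ("interface", "eth1"), ("counter_type", "rx_bytes")): 73}
out = compare(a, b, "gt", ["system_id", "interface"])
# eth0: 34 > 15 is "true"; eth1: 58 > 73 is "false".
```

Because the two inputs differ in their counter_type key, significant_keys restricts matching to system_id and interface, just as the processor description requires.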
The EVPN Type 3 processor generates a configuration containing expectations of EVPN type 3 routes.
Parameter Description
Monitored VNs The VNs to be monitored. Specify * to monitor all the VNs or list the desired
ones, e.g. "1-3,6,8,10-13".
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean
(enable_streaming) that defaults to False. If set to True, all output stages of this processor are
streamed in the generic protobuf schema.
The EVPN Type 5 processor generates a configuration containing expectations of EVPN type 5 routes.
Parameter Description
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean
(enable_streaming) that defaults to False. If set to True, all output stages of this processor are
streamed in the generic protobuf schema.
The Extensible Service Data Collector processor collects data supplied by a custom service that is not
'lldp', 'bgp' or 'interface'.
Parameter Description
Data Type Type of data the service collects: numbers (ns) (such as device temperature), discrete
states (dss) (such as device status), text or tables
Graph Query One or more queries on graph specified as strings, or a list of such queries. (String will
(graph_query) be deprecated in a future release.) Multiple queries should provide all the named
nodes referenced by the expression fields (including additional_properties). Graph
query is executed on the "operation" graph. Results of the queries can be accessed
using the "query_result" variable with the appropriate index. For example, if querying
property set nodes under name "ps", the result will be available as "query_result[0]
["ps"]".
Ingestion filter New (reserved) key. Ingestion filter determines what metrics from the collector make it
(ingestion_filter) into the probe. We support a degenerate case of ingestion filter, that is, probe specifies
full identities of all metrics that need to be ingested. With this feature, you can ingest
metrics that satisfy a criterion that is expressed using an ingestion filter.
Keys available to express in the filter are same as the metric identity keys.
• No metric identity key can exist directly under "properties". If any metric identity
key is mistakenly specified directly under properties, a validation error is raised.
• Only explicitly specified keys under "ingestion_filter" can be referenced by the rest
of the probe configuration. This is to enhance probe readability and allow better
overall validation.
• Existing reserved key "keys" is now made optional and can be omitted. The key
names should exactly match those specified in the schema of the corresponding
service definition.
Keys (keys) List of keys that are significant for specifying data elements for this service
Query Expansion For every path originally returned by the graph queries, each generator produces a
set of items; for each item, it produces a new path extended by the corresponding
property name, whose value is set to the value of the produced item.
Query Group by List (of strings) of node and relationship names used in the graph query to group query
(query_group_by) results by. Each element in this list represents a named node or relationship matcher in
the graph_query field. It is not an expression, to be consistent with the existing group_by
field in grouping processors; a non-expression is simpler and more intuitive.
When grouping is active (query_group_by is not null), query results are grouped by the
specified list of names, where one output item is created per group. In this case, the
expressions can only access matcher names specified in query_group_by, and the query
results for each group are accessed using a new group_items variable. The group_items
variable is a list of query results, where each result has the named nodes/relationships
not present in query_group_by.
The following list describes the behavior for various values of this field:
• Omitted or provided as JSON null (like None in Python) - No grouping is done. This is
equivalent to the current behavior of extensible_data_collector. Using 'group_items' in
this case is not permitted and results in a probe error state.
• Empty list ([]) - Produces one group containing all the query results.
• One or more matcher names - The query results are grouped by the specified
nodes or relationships. If this list covers all available matchers in the query, the
number of groups is equal to the number of query results.
Query Tag Filter Filters named nodes in the graph queries by assigned tags.
(query_tag_filter)
Value Map A mapping of discrete-state values to human readable strings. A dictionary with all
possible Discrete-State-Set states mapped to human-readable representation;
applicable for Discrete-State-Set data (that is, when data_type is 'dss') only.
{
"0": "unknown",
"1": "down",
"2": "up",
"3": "missing"
}
Additional Keys Each additional key/value pair is used to extend properties of output stages where
value is considered as an expression executed in context of the graph query and its
result is used as a property value with respective key. The value of this property is
evaluated for each item to associate items with metrics provided by a corresponding
collector service. The association is done by keys because each collector reports a set
of metrics where each metric is identified by a key in a format that is specific for each
collector.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in the
generic protobuf schema.
The Generic Graph Collector processor imports data from the graph into the output stage, depending on
the configuration (a graph query).
'graph query' and 'additional properties' behave as in other source processors. Importantly, the
expression in the 'value' field yields a value per each item. Thus, unique to this source processor, values
come from the graph rather than from device telemetry.
Parameter Description
Data Type Type of data the service collects: numbers (ns) (such as device temperature), discrete
states (dss) (such as device status), text or tables
Graph Query One or more queries on graph specified as strings, or a list of such queries. (String will
(graph_query) be deprecated in a future release.) Multiple queries should provide all the named nodes
referenced by the expression fields (including additional_properties). Graph query is
executed on the "operation" graph. Results of the queries can be accessed using the
"query_result" variable with the appropriate index. For example, if querying property
set nodes under name "ps", the result will be available as "query_result[0]["ps"]".
Query Expansion For every path originally returned by the graph queries, each generator produces a
set of items; for each item, it produces a new path extended by the corresponding
property name, whose value is set to the value of the produced item.
Query Group by List (of strings) of node and relationship names used in the graph query to group query
(query_group_by) results by. Each element in this list represents a named node or relationship matcher in
the graph_query field. Unlike the group_by field in grouping processors, it is not an
expression; a plain list of names is simpler and more intuitive.
When grouping is active (query_group_by is not null), query results are grouped by the
specified list of names, and one output item is created per group. In this case, the
expressions can only access matcher names specified in query_group_by, and the query
results for each group are accessed using a new group_items variable. The group_items
variable is a list of query results, where each result contains the named nodes/relationships
not present in query_group_by.
The following list describes the behavior for various values of this field:
• Omitted or provided as JSON null (None in Python) - No grouping is done. This is
equivalent to the current behavior of extensible_data_collector. Using 'group_items' in
this case is not permitted and results in probe error state.
• Empty list ([]) - Produces one group containing all the query results.
• One or more matcher names - The query results are grouped by the specified nodes
or relationships. If this list covers all available matchers in the query, the number of
groups is equal to the number of query results.
Query Tag Filter Filters named nodes in the graph queries by assigned tags.
(query_tag_filter)
Value Map A mapping of discrete-state values to human readable strings. A dictionary with all
possible Discrete-State-Set states mapped to human-readable representation;
applicable for Discrete-State-Set data (that is, when data_type is 'dss') only.
{
"0": "unknown",
"1": "down",
"2": "up",
"3": "missing"
}
Value (value) Expression evaluated per query result to collect value. (integer for NS and string for TS/
DSS)
Additional Keys Each additional key/value pair is used to extend properties of output stages where
value is considered as an expression executed in context of the graph query and its
result is used as a property value with respective key. The value of this property is
evaluated for each item to associate items with metrics provided by a corresponding
collector service. The association is done by keys because each collector reports a set
of metrics where each metric is identified by a key in a format that is specific for each
collector.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that defaults
(enable_streaming) to False. If set to True, all output stages of this processor are streamed in the generic
protobuf schema.
[system_id=leaf1,interface=eth0]: "ip"
[system_id=leaf1,interface=eth1]: "ip"
Processor: Generic Service Data Collector
The Generic Service Data Collector processor collects data supplied by a custom service that is not 'lldp',
'bgp' or 'interface'. The service name is specified as 'service_name', the service-specific key is specified as
'key', 'data_type' specifies whether the collected data is numbers or discrete state values, and a
'value_map' for the specific data can be specified as well.
Parameter Description
Data Type Type of data the service collects: numbers (ns) (such as device temperature), discrete
states (dss) (such as device status), text or tables
Graph Query One or more queries on graph specified as strings, or a list of such queries. (String will
(graph_query) be deprecated in a future release.) Multiple queries should provide all the named
nodes referenced by the expression fields (including additional_properties). Graph
query is executed on the "operation" graph. Results of the queries can be accessed
using the "query_result" variable with the appropriate index. For example, if querying
property set nodes under name "ps", the result will be available as "query_result[0]
["ps"]".
Query Expansion For every path originally returned by the graph queries and passed to a generator, the
generator produces a set of items; for each item, it produces a new path extended by the
corresponding property name, whose value is set to the value of the produced item.
Query Group by List (of strings) of node and relationship names used in the graph query to group query
(query_group_by) results by. Each element in this list represents a named node or relationship matcher
in the graph_query field. Unlike the group_by field in grouping processors, it is not an
expression; a plain list of names is simpler and more intuitive.
When grouping is active (query_group_by is not null), query results are grouped by the
specified list of names, and one output item is created per group. In this case, the
expressions can only access matcher names specified in query_group_by, and the
query results for each group are accessed using a new group_items variable. The
group_items variable is a list of query results, where each result contains the named
nodes/relationships not present in query_group_by.
The following list describes the behavior for various values of this field:
• Omitted or provided as JSON null (None in Python) - No grouping is done. This is
equivalent to the current behavior of extensible_data_collector. Using 'group_items' in
this case is not permitted and results in probe error state.
• Empty list ([]) - Produces one group containing all the query results.
• One or more matcher names - The query results are grouped by the specified
nodes or relationships. If this list covers all available matchers in the query, the
number of groups is equal to the number of query results.
Query Tag Filter Filters named nodes in the graph queries by assigned tags.
(query_tag_filter)
Value Map A mapping of discrete-state values to human readable strings. A dictionary with all
possible Discrete-State-Set states mapped to human-readable representation;
applicable for Discrete-State-Set data (that is, when data_type is 'dss') only.
{
"0": "unknown",
"1": "down",
"2": "up",
"3": "missing"
}
Key (key) Expression mapping from graph query to whatever key is necessary for the service.
Additional Keys Each additional key/value pair is used to extend properties of output stages where
value is considered as an expression executed in context of the graph query and its
result is used as a property value with respective key. The value of this property is
evaluated for each item to associate items with metrics provided by a corresponding
collector service. The association is done by keys because each collector reports a set
of metrics where each metric is identified by a key in a format that is specific for each
collector.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in the
generic protobuf schema.
Processor: Interface Counters
The Interface Counters processor selects interfaces according to the configuration and outputs counter
stats of the specified types (such as 'tx_bytes').
Parameter Description
Graph Query One or more queries on graph specified as strings, or a list of such queries. (String will
(graph_query) be deprecated in a future release.) Multiple queries should provide all the named nodes
referenced by the expression fields (including additional_properties). Graph query is
executed on the "operation" graph. Results of the queries can be accessed using the
"query_result" variable with the appropriate index. For example, if querying property
set nodes under name "ps", the result will be available as "query_result[0]["ps"]".
Query Expansion For every path originally returned by the graph queries and passed to a generator, the
generator produces a set of items; for each item, it produces a new path extended by the
corresponding property name, whose value is set to the value of the produced item.
Query Group by List (of strings) of node and relationship names used in the graph query to group query
(query_group_by) results by. Each element in this list represents a named node or relationship matcher in
the graph_query field. Unlike the group_by field in grouping processors, it is not an
expression; a plain list of names is simpler and more intuitive.
When grouping is active (query_group_by is not null), query results are grouped by the
specified list of names, and one output item is created per group. In this case, the
expressions can only access matcher names specified in query_group_by, and the query
results for each group are accessed using a new group_items variable. The group_items
variable is a list of query results, where each result contains the named nodes/relationships
not present in query_group_by.
The following list describes the behavior for various values of this field:
• Omitted or provided as JSON null (None in Python) - No grouping is done. This is
equivalent to the current behavior of extensible_data_collector. Using 'group_items' in
this case is not permitted and results in probe error state.
• Empty list ([]) - Produces one group containing all the query results.
• One or more matcher names - The query results are grouped by the specified
nodes or relationships. If this list covers all available matchers in the query, the
number of groups is equal to the number of query results.
Query Tag Filter Filters named nodes in the graph queries by assigned tags.
(query_tag_filter)
Interface (interface) Expression mapping from graph query to interface name, e.g. "iface.if_name" if "iface"
is a name in the graph query.
Additional Keys Each additional key/value pair is used to extend properties of output stages where
value is considered as an expression executed in context of the graph query and its
result is used as a property value with respective key. The value of this property is
evaluated for each item to associate items with metrics provided by a corresponding
collector service. The association is done by keys because each collector reports a set
of metrics where each metric is identified by a key in a format that is specific for each
collector.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that defaults
(enable_streaming) to False. If set to True, all output stages of this processor are streamed in the generic
protobuf schema.
In this example, we create an NSS that has an entry for rx_bytes (per second) for every interface in the
system. Each entry is implicitly tagged by "system_id" and "interface". Furthermore, as we have specified
an additional property, each entry is also tagged by the role of the system.
[system_id=spine1,role=spine,key=eth0]: 10
[system_id=spine2,role=spine,key=eth1]: 11
[system_id=leaf0,role=leaf, key=swp1]: 12
(New in version 4.0) The Logical Operator processor calculates the logical operation of inputs. It takes
two or more inputs that represent boolean values.
The property 'operation' specifies the logical operation. The property 'input_columns' specifies column
names that input items should be taken from.
Parameter Description
Input Types Tables that contain discrete_state type column according to the 'input_columns'
property or Table (discrete_state) if the 'input_columns' is not specified.
Operation Logical operation type that is used for processing the input data
Significant Keys List of keys to map items from the inputs for applying the specified operation. It is
(significant_keys) typically used by processors that take multiple inputs and perform operations on
them. When inputs have the same sets of keys it does not need to be specified.
When inputs have different sets of keys, it must be specified and it must allow only
1:1 items mapping from the given inputs, otherwise the probe will go into error state.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in the
generic protobuf schema.
Processor: Match Count
For each input group, the Match Count processor creates a single output that is the number of items in
the input group that are equal to the reference. The 'total_count' key is added into output item keys
where the value is a number of items in an input group.
Parameter Description
Output Types NS
Group by (group_by) Accepts a list of property names to group input items into output items; produces
only one output group for the empty list. Most processors take input and produce
output. Many of them produce one output per input (for example, if input is a DSS,
output is a DSS of the same size). However, some processors reduce the size of the
output relative to the size of the input. Effectively, they partition the input into
groups, run some calculation on each group that produces a single value per
group, and use that as output. Clearly, the size of the output set depends on
the grouping scheme. We call such processors grouping processors, and they all take
the Group by configuration parameter.
In the case of an empty list, the input is considered to be a single group; thus, the
output is of size 1 and either N, DS, or T. If a list of property names is specified, for
example ["system_id", "iface_role"], or a single property is specified, for example
["system_id"], we divide the input into groups such that for each group, every item in
the group has the same values for the given list of property names. See the "standard
deviation processor" on page 1162 example for how this works.
The output type of a processor depends on the value of the group_by parameter; for
an empty list, a processor produces a single-value result, such as N, DS, or T, and for
grouping by one or more properties it returns a set result, such as NS, DSS, or TS.
Reference State DS or TS value which is used as a reference state to match input samples (a
(reference_state) discrete-state value).
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in the
generic protobuf schema.
reference_state: "false"
group_by: []
Sample Input:
[if_name=eth0] : "true"
[if_name=eth1] : "true"
[if_name=eth3] : "false"
Sample Output:
[] : 1
In the above example, we have 1 as the output because 1 element of the input group matches the
reference value of "false".
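The grouping-and-counting behavior can be modeled with a short Python sketch (a hypothetical illustration of the semantics, not Apstra code):

```python
# Hypothetical model of Match Count: partition input items by the group_by
# properties, then count items equal to reference_state. An empty group_by
# produces a single group, keyed here by the empty tuple ().
from collections import defaultdict

def match_count(items, group_by, reference_state):
    counts = defaultdict(int)
    for props, value in items:
        group = tuple((k, props[k]) for k in group_by)
        counts[group] += (value == reference_state)
    return dict(counts)

items = [({"if_name": "eth0"}, "true"),
         ({"if_name": "eth1"}, "true"),
         ({"if_name": "eth3"}, "false")]
print(match_count(items, [], "false"))  # {(): 1}, matching the sample above
```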
Processor: Match Percentage
For each input group, the Match Percentage processor creates a single output that is the percentage of
items in the input group that are equal to the reference.
Parameter Description
Group by (group_by) Accepts a list of property names to group input items into output items; produces
only one output group for the empty list. Most processors take input and produce
output. Many of them produce one output per input (for example, if input is a DSS,
output is a DSS of the same size). However, some processors reduce the size of the
output relative to the size of the input. Effectively, they partition the input into
groups, run some calculation on each group that produces a single value per
group, and use that as output. Clearly, the size of the output set depends on
the grouping scheme. We call such processors grouping processors, and they all take
the Group by configuration parameter.
In the case of an empty list, the input is considered to be a single group; thus, the
output is of size 1 and either N, DS, or T. If a list of property names is specified, for
example ["system_id", "iface_role"], or a single property is specified, for example
["system_id"], we divide the input into groups such that for each group, every item in
the group has the same values for the given list of property names. See the "standard
deviation processor" on page 1162 example for how this works.
The output type of a processor depends on the value of the group_by parameter; for
an empty list, a processor produces a single-value result, such as N, DS, or T, and for
grouping by one or more properties it returns a set result, such as NS, DSS, or TS.
Reference State DS or TS value which is used as a reference state to match input samples.
(reference_state)
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in the
generic protobuf schema.
reference_state: "false"
group_by: []
Sample Input:
[if_name=eth0] : "true"
[if_name=eth1] : "true"
[if_name=eth3] : "false"
Sample Output:
[] : 33
In the above example, we have 33% as the output because 33% of the input group match the reference
value of "false".
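The percentage calculation can be modeled in Python (a hypothetical sketch, not Apstra code); integer division reproduces the truncated 33 seen in the sample:

```python
# Hypothetical model of Match Percentage: partition input items by the
# group_by properties, then report the percentage of items per group that
# equal reference_state (truncated to an integer).
from collections import defaultdict

def match_percentage(items, group_by, reference_state):
    totals, matches = defaultdict(int), defaultdict(int)
    for props, value in items:
        group = tuple((k, props[k]) for k in group_by)
        totals[group] += 1
        matches[group] += (value == reference_state)
    return {g: 100 * matches[g] // totals[g] for g in totals}

items = [({"if_name": "eth0"}, "true"),
         ({"if_name": "eth1"}, "true"),
         ({"if_name": "eth3"}, "false")]
print(match_percentage(items, [], "false"))  # {(): 33}
```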
Processor: Match String
The Match String processor checks that a string matches a regular expression. It accepts a text series on
input; for each series it configures a check that verifies whether the input value matches the configured
regular expression. Regular expression syntax is PCRE-compatible. Note that regexp matching is done in
partial mode, so if a full match is needed, the regular expression must be anchored accordingly. The
output series contains anomaly values, 'true' and 'false'.
Parameter Description
Graph Query One or more queries on graph specified as strings, or a list of such queries. (String
(graph_query) will be deprecated in a future release.) Multiple queries should provide all the
named nodes referenced by the expression fields (including additional_properties).
Graph query is executed on the "operation" graph. Results of the queries can be
accessed using the "query_result" variable with the appropriate index. For example,
if querying property set nodes under name "ps", the result will be available as
"query_result[0]["ps"]".
Anomaly MetricLog Retain anomaly metric data in MetricDb for specified duration in seconds
Retention Duration
Anomaly MetricLog Maximum allowed size, in bytes, of anomaly metric data to store in MetricDB
Retention Size
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in
the generic protobuf schema.
Raise Anomaly Outputs “true” and “false” values, “true” meaning an appropriate item is anomalous,
(raise_anomaly) and "false" meaning the item is not anomalous. When Raise Anomaly is set to True,
an actual anomaly is generated in addition to a sample in the output.
regexp: "os_version_pattern"
[device=leaf1,os_version_pattern=^4.[7-9].[0-9]+$] : 4.1
[device=leaf2,os_version_pattern=^4.[7-9].[0-9]+$] : 4.7
[device=leaf1,os_version_pattern=^4.[7-9].[0-9]+$,regex=^4.[7-9].[0-9]+$] : "true"
[device=leaf2,os_version_pattern=^4.[7-9].[0-9]+$,regex=^4.[7-9].[0-9]+$] : "false"
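Python's re.search() behaves like the partial-mode matching described above, which makes the anchoring requirement easy to demonstrate (the pattern and version strings below are illustrative only):

```python
# Partial-mode matching: the pattern may match anywhere in the string
# unless it is explicitly anchored with ^ and $.
import re

pattern = r"4\.[7-9]\.[0-9]+"

print(bool(re.search(pattern, "build-4.7.3-x86")))              # True: partial hit
print(bool(re.search("^" + pattern + "$", "build-4.7.3-x86")))  # False: anchored
print(bool(re.search("^" + pattern + "$", "4.7.3")))            # True: full match
```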
Processor: Max
The Max processor groups as described by Group by, then finds the maximum value and outputs it for
each group.
Parameter Description
Group by (group_by) Accepts a list of property names to group input items into output items; produces only
one output group for the empty list. Most processors take input and produce output.
Many of them produce one output per input (for example, if input is a DSS, output is a
DSS of the same size). However, some processors reduce the size of the output relative to
the size of the input. Effectively, they partition the input into groups, run some
calculation on each group that produces a single value per group, and use
that as output. Clearly, the size of the output set depends on the grouping scheme.
We call such processors grouping processors, and they all take the Group by
configuration parameter.
In the case of an empty list, the input is considered to be a single group; thus, the
output is of size 1 and either N, DS, or T. If a list of property names is specified, for
example ["system_id", "iface_role"], or a single property is specified, for example
["system_id"], we divide the input into groups such that for each group, every item in
the group has the same values for the given list of property names. See the "standard
deviation processor" on page 1162 example for how this works.
The output type of a processor depends on the value of the group_by parameter; for an
empty list, a processor produces a single-value result, such as N, DS, or T, and for
grouping by one or more properties it returns a set result, such as NS, DSS, or TS.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in the
generic protobuf schema.
Example: Max
group_by: ["system_id"]
Sample Input:
[system_id=leaf0,if_name=swp40] : 10
[system_id=leaf0,if_name=swp41] : 11
[system_id=leaf0,if_name=swp42] : 15
[system_id=spine0,if_name=eth15] : 32
[system_id=spine0,if_name=eth16] : 30
[system_id=spine0,if_name=eth17] : 36
Output "out":
[system_id=leaf0] : 15
[system_id=spine0] : 36
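The grouping in this example can be modeled in Python (a hypothetical sketch, not Apstra code); Min behaves identically with min():

```python
# Hypothetical model of the Max processor: partition items by the group_by
# properties and emit the maximum value per group.

def group_max(items, group_by):
    out = {}
    for props, value in items:
        group = tuple((k, props[k]) for k in group_by)
        out[group] = max(out.get(group, value), value)
    return out

items = [({"system_id": "leaf0", "if_name": "swp40"}, 10),
         ({"system_id": "leaf0", "if_name": "swp41"}, 11),
         ({"system_id": "leaf0", "if_name": "swp42"}, 15),
         ({"system_id": "spine0", "if_name": "eth15"}, 32),
         ({"system_id": "spine0", "if_name": "eth16"}, 30),
         ({"system_id": "spine0", "if_name": "eth17"}, 36)]
print(group_max(items, ["system_id"]))  # leaf0 -> 15, spine0 -> 36
```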
Processor: Min
The Min processor groups as described in Group by, then finds the minimum value and outputs it for
each group.
Parameter Description
Group by (group_by) Accepts a list of property names to group input items into output items; produces only
one output group for the empty list. Most processors take input and produce output.
Many of them produce one output per input (for example, if input is a DSS, output is a
DSS of the same size). However, some processors reduce the size of the output relative to
the size of the input. Effectively, they partition the input into groups, run some
calculation on each group that produces a single value per group, and use
that as output. Clearly, the size of the output set depends on the grouping scheme.
We call such processors grouping processors, and they all take the Group by
configuration parameter.
In the case of an empty list, the input is considered to be a single group; thus, the
output is of size 1 and either N, DS, or T. If a list of property names is specified, for
example ["system_id", "iface_role"], or a single property is specified, for example
["system_id"], we divide the input into groups such that for each group, every item in
the group has the same values for the given list of property names. See the "standard
deviation processor" on page 1162 example for how this works.
The output type of a processor depends on the value of the group_by parameter; for an
empty list, a processor produces a single-value result, such as N, DS, or T, and for
grouping by one or more properties it returns a set result, such as NS, DSS, or TS.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in the
generic protobuf schema.
Example: Min
group_by: ["system_id"]
Sample Input:
[system_id=leaf0,if_name=swp40] : 10
[system_id=leaf0,if_name=swp41] : 11
[system_id=leaf0,if_name=swp42] : 15
[system_id=spine0,if_name=eth15] : 32
[system_id=spine0,if_name=eth16] : 30
[system_id=spine0,if_name=eth17] : 36
Output "out":
[system_id=leaf0] : 10
[system_id=spine0] : 30
Processor: Periodic Average
One number is created on output for each input. Each <period>, the output is set to the average of the
input over the last <period>. This is not a weighted average.
Parameter Description
Period Size of the averaging period. (time in seconds, integer, or an expression that evaluates
to time in seconds integer value)
Graph Query One or more queries on graph specified as strings, or a list of such queries. (String will
(graph_query) be deprecated in a future release.) Multiple queries should provide all the named
nodes referenced by the expression fields (including additional_properties). Graph
query is executed on the "operation" graph. Results of the queries can be accessed
using the "query_result" variable with the appropriate index. For example, if querying
property set nodes under name "ps", the result will be available as "query_result[0]
["ps"]".
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in the
generic protobuf schema.
period: 2
[if_name=eth0] : 10
[if_name=eth1] : 20
[if_name=eth3] : 30
[if_name=eth0] : 20
[if_name=eth1] : 30
[if_name=eth3] : 40
[if_name=eth0] : 40
[if_name=eth1] : 50
[if_name=eth3] : 60
[if_name=eth0] : 15
[if_name=eth1] : 25
[if_name=eth3] : 35
This output is the average over the last discrete period of 2 seconds (time=0 to time=2). Notice that the
average is not weighted by time; frequently occurring, closely-spaced samples will bias the average.
The next time the output would be updated would be at time t=4, in which case it would contain the
average of the input over the range [t=2, t=4], a period of the configured two seconds.
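The unweighted window average can be sketched in Python (a hypothetical illustration, not Apstra code; the window-boundary handling is illustrative):

```python
# Hypothetical model of the periodic average: assign each sample to the
# window [k*period, (k+1)*period) and emit the plain mean per window,
# regardless of how the samples are spaced inside it.

def periodic_average(samples, period):
    windows = {}
    for t, value in samples:
        window_end = (int(t) // period + 1) * period
        windows.setdefault(window_end, []).append(value)
    return {end: sum(vs) / len(vs) for end, vs in windows.items()}

# 10 and 20 arrive during the first 2-second window, so its average is 15,
# just as eth0's samples of 10 and 20 average to 15 in the example above:
print(periodic_average([(0.5, 10), (1.5, 20), (2.5, 40)], 2))  # {2: 15.0, 4: 40.0}
```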
Processor: Range
The Range processor checks that a value is in a range. According to the specified range, it configures a
check for the input series. This check returns an anomaly value if a series aggregation value (such as the
last value, sum, or avg) is in the range. The aggregation type is configured by the 'property' attribute,
which defaults to 'value' if not specified. The output series contains anomaly values, 'true' and
'false'. (Previously called 'not_in_range' and 'range_check'.) The Range processor generates an output of
'true' when the input matches the specified criteria.
Parameter Description
Property A property of input items which is used to check against the range. Enum of either
value, sample_count, sum, avg
Anomalous Range (range) Numeric range, either min or max is optional. Float type is acceptable only with
property "std_dev", other property values require integers. Min and max can be
expressions evaluated into numeric values.
Graph Query One or more queries on graph specified as strings, or a list of such queries. (String
(graph_query) will be deprecated in a future release.) Multiple queries should provide all the named
nodes referenced by the expression fields (including additional_properties). Graph
query is executed on the "operation" graph. Results of the queries can be accessed
using the "query_result" variable with the appropriate index. For example, if
querying property set nodes under name "ps", the result will be available as
"query_result[0]["ps"]".
Anomaly MetricLog Retain anomaly metric data in MetricDb for specified duration in seconds
Retention Duration
Anomaly MetricLog Maximum allowed size, in bytes, of anomaly metric data to store in MetricDB
Retention Size
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in
the generic protobuf schema.
Raise Anomaly Outputs “true” and “false” values, “true” meaning an appropriate item is anomalous,
(raise_anomaly) and "false" meaning the item is not anomalous. When Raise Anomaly is set to True,
an actual anomaly is generated in addition to a sample in the output.
Example: Range
[if_name=eth0] : 23
[if_name=eth1] : 55
[if_name=eth3] : 37
[if_name=eth0] : "false"
[if_name=eth1] : "false"
[if_name=eth3] : "true"
If expressions are used for the min or max fields of the range property, they are evaluated for each
input item, which results in item-specific thresholds. Properties of the respective output item are
extended with range_min or range_max properties containing the calculated values.
[if_name=eth0,speed=10000000000] : 800000000
[if_name=eth1,speed=1000000000] : 800000000
[if_name=eth0,speed=10000000000,range_max=7000000000] : "false"
[if_name=eth1,speed=1000000000,range_max=700000000] : "true"
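The per-item threshold expansion can be modeled in Python (a hypothetical sketch, not Apstra code). The expression "speed * 7 // 10" is an assumed stand-in chosen to reproduce the range_max values in the sample above:

```python
# Hypothetical model of expression-based thresholds: evaluate the max
# expression against each item's properties, extend the output item with
# the computed range_max, and flag values above it as anomalous.

def range_check(items, max_expr):
    out = []
    for props, value in items:
        range_max = eval(max_expr, {}, dict(props))  # per-item threshold (illustrative)
        anomalous = "true" if value > range_max else "false"
        out.append(({**props, "range_max": range_max}, anomalous))
    return out

items = [({"if_name": "eth0", "speed": 10_000_000_000}, 800_000_000),
         ({"if_name": "eth1", "speed": 1_000_000_000}, 800_000_000)]
for props, verdict in range_check(items, "speed * 7 // 10"):
    print(props, verdict)
```

Interface eth0 gets range_max=7000000000 and stays below it ("false"); eth1 gets range_max=700000000 and exceeds it ("true"), as in the sample.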
Processor: Ratio
The Ratio processor calculates the ratio of its inputs. It takes two inputs: numerator and denominator.
The denominator input is optional; it can instead be supplied via the 'denominator' configuration
property, either an integer or an expression that evaluates to an integer. It must not be 0.
When 'denominator' is specified as an input, 'numerator' and 'denominator' input items must allow only
1:1 mapping. If that is not the case, the 'significant_keys' configuration property should be specified to
list keys that allow such a mapping.
The processor also supports a 'multiplier' configuration property, an integer value greater than one by
which the numerator is multiplied before calculating the ratio. This helps overcome the limitations of
integer arithmetic. The default value is 100.
Parameter Description
Significant Keys List of keys to map items from the inputs for applying the specified operation. It is
(significant_keys) typically used by processors that take multiple inputs and perform operations on
them. When inputs have the same sets of keys it does not need to be specified.
When inputs have different sets of keys, it must be specified and it must allow only
1:1 items mapping from the given inputs, otherwise the probe will go into error state.
Multiplier Multiply numerator by a given value before calculating ratio. Optional. Default is
100.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in the
generic protobuf schema.
denominator: 100
multiplier: 1
Input 'numerator':
[system_id=spine1,role=spine,interface=eth0]: 300
[system_id=spine2,role=spine,interface=eth1]: 500
Output:
[system_id=spine1,role=spine,interface=eth0]: 3
[system_id=spine2,role=spine,interface=eth1]: 5
Configuration where numerator and denominator come from inputs, and the 'multiplier' value is the
default 100:
Input 'numerator':
[system_id=spine1,role=spine,interface=eth0]: 300
[system_id=spine2,role=spine,interface=eth1]: 750
Input 'denominator':
[system_id=spine1,role=spine,interface=eth0]: 150
[system_id=spine2,role=spine,interface=eth1]: 250
Output:
[system_id=spine1,interface=eth0]: 200
[system_id=spine2,interface=eth1]: 300
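The computation in this example can be modeled in Python (a hypothetical sketch, not Apstra code):

```python
# Hypothetical model of the Ratio computation: the numerator is scaled by
# `multiplier` before integer division, so precision is not lost to
# truncation. Items are matched 1:1 by key across the two inputs.

def ratio(numerator, denominator, multiplier=100):
    return {k: numerator[k] * multiplier // denominator[k] for k in numerator}

num = {("spine1", "eth0"): 300, ("spine2", "eth1"): 750}
den = {("spine1", "eth0"): 150, ("spine2", "eth1"): 250}
print(ratio(num, den))  # {('spine1', 'eth0'): 200, ('spine2', 'eth1'): 300}
```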
Processor: Service Data Collector
The Service Data Collector processor collects data from the specified service. For example, the 'bgp'
service reports the status of BGP sessions. Objects to be monitored are configured via the graph query
and key. In the BGP example, key should evaluate to localIp, localAs, remoteIp, or remoteAs. For
interface-based services such as 'interface' and 'lldp', key is an interface name.
Parameter Description
Graph Query One or more queries on graph specified as strings, or a list of such queries. (String will
(graph_query) be deprecated in a future release.) Multiple queries should provide all the named nodes
referenced by the expression fields (including additional_properties). Graph query is
executed on the "operation" graph. Results of the queries can be accessed using the
"query_result" variable with the appropriate index. For example, if querying property
set nodes under name "ps", the result will be available as "query_result[0]["ps"]".
Keys List of property names whose values will be used as key parameters for the service.
Expression mapping from graph query to whatever key is necessary for the service. For
lldp it is a string with the interface name. For bgp it is a tuple like (src_addr, src_asn,
dst_addr, dst_asn, vrf_name, addr_family), where addr_family should be one of ipv4,
ipv6, or evpn. For interface it is a string with the interface name.
(Continued)
Parameter Description
Query Expansion For every path originally returned by the graph queries and passed to a generator, the
generator produces a set of items; for each item it produces a new path extended by the
corresponding property name, whose value is set to the value of the produced item.
Query Group by List (of strings) of node and relationship names used in the graph query to group query
(query_group_by) results by. Each element in this list represents a named node or relationship matcher in
the graph_query field. It is not an expression, for consistency with the existing group_by field
in grouping processors; a non-expression is simpler and more intuitive.
When grouping is active (query_group_by is not null), query results are grouped by the specified
list of names, with one output item created per group. In this case, the
expressions can only access matcher names specified in query_group_by and the query
results for each group are accessed using a new group_items variable. The group_items
variable is a list of query results, where each result has named nodes/relationships, not
present in query_group_by.
The following list describes the behavior for various values of this field:
• Omitted or provided as json null (like None in Python) - No grouping is done. This is
equivalent to current behavior of extensible_data_collector. Using ‘group_items’ in
this case is not permitted and results in probe error state.
• Empty list ([]) - Produces one group containing all the query results.
• One or more matcher names - The query results are grouped by the specified nodes
or relationships. If this list covers all available matchers in the query, the number of
groups is equal to the number of query results.
Query Tag Filter Filters named nodes in the graph queries by assigned tags.
(query_tag_filter)
(Continued)
Parameter Description
Additional Keys Each additional key/value pair is used to extend properties of output stages where
value is considered as an expression executed in context of the graph query and its
result is used as a property value with respective key. The value of this property is
evaluated for each item to associate items with metrics provided by a corresponding
collector service. The association is done by keys because each collector reports a set
of metrics where each metric is identified by a key in a format that is specific for each
collector.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that defaults
(enable_streaming) to False. If set to True, all output stages of this processor are streamed in the generic
protobuf schema.
node("system", name="system").out("hosted_interfaces").
node("interface", name="iface").out("link").
node("link", role="spine_leaf")"
system_id: "system.system_id"
key: "interface.if_name"
role: "system.role"
In this example, we create a DSS that has an entry for every fabric interface in the system. Each entry is
implicitly tagged by "system_id" and "key" (where key happens to be the interface name for the interface
service). Furthermore, as we have specified an additional property "role", each entry is also tagged by
system role.
[system_id=spine1,role=spine,key=eth0]: "up"
[system_id=spine2,role=spine,key=eth1]: "down"
[system_id=leaf0,role=leaf,key=swp1]: "up"
IN THIS SECTION
Accepts two DS or NS inputs, called "A" and "B". There are three outputs: a stage "A - B" that contains
the items that are only in stage "A," a stage "B - A" that contains the items that are only in stage "B," and
a stage "A & B" that contains the items that are in both stage "A" and stage "B."
When conducting the above operations, we first normalize all items in each stage by dropping all the
keys that are not in "significant_keys." It is an error if a key in "significant_keys" is not present in either
stage "A" or "B."
Furthermore, only the keys of each normalized item are considered; values are preserved (and kept from
stage "A" in the intersection output), but not considered in the comparison operations.
Results are undefined if, when normalizing items in either stage_A or stage_B, there is more than one
item with a given set of key-value pairs.
Parameter Description
Significant Keys List of keys to map items from the inputs for applying the specified operation. It is
(significant_keys) typically used by processors that take multiple inputs and perform operations on
them. When inputs have the same sets of keys it does not need to be specified.
When inputs have different sets of keys, it must be specified and it must allow only
1:1 items mapping from the given inputs, otherwise the probe will go into error state.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in the
generic protobuf schema.
Input A:
[system_id=leaf1]: 45
[system_id=leaf2]: 52
[system_id=leaf3]: 61
Input B:
[system_id=leaf2]: 52
[system_id=leaf4]: 64
A - B:
[system_id=leaf1]: 45
[system_id=leaf3]: 61
B - A:
[system_id=leaf4]: 64
A & B:
[system_id=leaf2]: 52
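The three comparison outputs can be sketched in Python as follows, assuming each stage is a list of (properties, value) pairs. This is an illustration of the semantics described above, not the Apstra implementation.

```python
def compare_stages(stage_a, stage_b, significant_keys):
    """Normalize each item to its significant keys, then compute the three
    outputs "A - B", "B - A", and "A & B" (values in the intersection are
    kept from stage A)."""
    def normalize(stage):
        return {tuple(props[k] for k in significant_keys): value
                for props, value in stage}
    a, b = normalize(stage_a), normalize(stage_b)
    a_minus_b = {k: v for k, v in a.items() if k not in b}
    b_minus_a = {k: v for k, v in b.items() if k not in a}
    a_and_b = {k: v for k, v in a.items() if k in b}
    return a_minus_b, b_minus_a, a_and_b

a = [({"system_id": "leaf1"}, 45), ({"system_id": "leaf2"}, 52),
     ({"system_id": "leaf3"}, 61)]
b = [({"system_id": "leaf2"}, 52), ({"system_id": "leaf4"}, 64)]
only_a, only_b, both = compare_stages(a, b, ["system_id"])
```

Run against the inputs from the example, only_a holds leaf1 and leaf3, only_b holds leaf4, and both holds leaf2, matching the three outputs shown above.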
IN THIS SECTION
The Set Count processor groups as described in Group by, then calculates the number of items in each
group.
Parameter Description
Group by (group_by) Accepts a list of property names to group input items into output items; it produces
only one output group for the empty list. Most processors take input and produce
output. Many of them produce one output per input (for example, if input is a DSS,
output is a DSS of same size). However, some processors reduce the size of the
output relative to the size of the input. Effectively, they partition the input into
groups, run some calculation on each of the groups that produce a single value per
each group, and use that as output. Clearly, the size of the output set depends on the
grouping scheme. We call such processors grouping processors and they all take the
Group by configuration parameter.
In the case of an empty list, the input is considered to be a single group; thus, the
output is of size 1 and either N, DS, or T. If a list of property names is specified, for
example ["system_id", "iface_role"], or a single property is specified, for example
["system_id"], we divide the input into groups such that for each group, every item in
the group has the same values for the given list of property names. See the "standard
deviation processor" on page 1162 example for how this works.
The output type of a processor depends on a value of the group_by parameter; for an
empty list, a processor produces a single value result, such as N, DS, or T, and for
grouping by one or more properties it returns a set result, such as NS, DSS, or TS.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in the
generic protobuf schema.
See the "standard deviation" on page 1162 example. It's the same except that we calculate the number of
stage items.
IN THIS SECTION
The Standard Deviation processor groups as described by Group by, calculates the standard deviation,
then outputs one standard deviation per group.
Parameter Description
Group by (group_by) Accepts a list of property names to group input items into output items; it produces only
one output group for the empty list. Most processors take input and produce output.
Many of them produce one output per input (for example, if input is a DSS, output is a
DSS of same size). However, some processors reduce the size of the output relative to
the size of the input. Effectively, they partition the input into groups, run some
calculation on each of the groups that produce a single value per each group, and use
that as output. Clearly, the size of the output set depends on the grouping scheme.
We call such processors grouping processors and they all take the Group by
configuration parameter.
In the case of an empty list, the input is considered to be a single group; thus, the
output is of size 1 and either N, DS, or T. If a list of property names is specified, for
example ["system_id", "iface_role"], or a single property is specified, for example
["system_id"], we divide the input into groups such that for each group, every item in
the group has the same values for the given list of property names. See the "standard
deviation processor" on page 1162 example for how this works.
The output type of a processor depends on a value of the group_by parameter; for an
empty list, a processor produces a single value result, such as N, DS, or T, and for
grouping by one or more properties it returns a set result, such as NS, DSS, or TS.
DDoF (ddof) Delta Degrees of Freedom, a standard deviation correction value used to correct the
divisor (N - DDoF) in calculations; e.g., DDoF=0 gives the uncorrected sample standard
deviation and DDoF=1 gives the corrected sample standard deviation.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in the
generic protobuf schema.
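The grouping behavior combined with the DDoF parameter can be sketched in Python as follows, assuming items arrive as (properties, value) pairs. This is an illustration under those assumptions, not the Apstra implementation.

```python
from collections import defaultdict
from statistics import pstdev, stdev

def group_std_dev(items, group_by, ddof=0):
    """Group (properties, value) items by the listed property names, then
    output one standard deviation per group. ddof=0 uses the uncorrected
    divisor N (pstdev); ddof=1 uses the corrected divisor N-1 (stdev)."""
    groups = defaultdict(list)
    for props, value in items:
        groups[tuple(props[k] for k in group_by)].append(value)
    calc = pstdev if ddof == 0 else stdev
    return {key: calc(values) for key, values in groups.items()}

# One value per (system, interface); grouping by system_id yields one
# standard deviation per system.
items = [
    ({"system_id": "leaf1", "if_name": "eth0"}, 10),
    ({"system_id": "leaf1", "if_name": "eth1"}, 14),
    ({"system_id": "leaf2", "if_name": "eth0"}, 7),
]
per_system = group_std_dev(items, ["system_id"])
```

Grouping with an empty list collects every item into the single group (), matching the size-1 output described above.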
Processor: State
IN THIS SECTION
The State processor checks whether a value is one of the specified anomalous states (previously called
'state_check' and 'in_state'). It outputs a DSS with anomaly values: 'true' if the value is in any of
the specified states, and 'false' otherwise. The State processor supports multiple reference
states.
Parameter Description
(Continued)
Parameter Description
Graph Query One or more queries on graph specified as strings, or a list of such queries. (String
(graph_query) will be deprecated in a future release.) Multiple queries should provide all the named
nodes referenced by the expression fields (including additional_properties). Graph
query is executed on the "operation" graph. Results of the queries can be accessed
using the "query_result" variable with the appropriate index. For example, if querying
property set nodes under name "ps", the result will be available as "query_result[0]
["ps"]".
(Continued)
Parameter Description
Anomalous States Expression that evaluates to DS value or list of DS values which is used for the
check. For example, it can be: "'true'" (expression evaluating to a string) or "['missing',
'unknown', 'down']" (expression evaluating to a list of strings).
Anomaly MetricLog Retain anomaly metric data in MetricDB for the specified duration in seconds
Retention Duration
Anomaly MetricLog Maximum allowed size, in bytes, of anomaly metric data to store in MetricDB
Retention Size
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in
the generic protobuf schema.
Raise Anomaly Outputs “true” and “false” values, “true” meaning an appropriate item is anomalous,
(raise_anomaly) and "false" meaning the item is not anomalous. When Raise Anomaly is set to True,
an actual anomaly is generated in addition to a sample in the output.
Example: State
state: '"up"'
[if_name=eth0] : "up"
[if_name=eth1] : "down"
[if_name=eth3] : "up"
[if_name=eth0] : "false"
[if_name=eth1] : "true"
[if_name=eth3] : "false"
If an expression is used for the state field, it's evaluated for each input item, resulting in an item-specific
state value. Properties of the respective output item are extended by the state property with the
value of the evaluated expression.
state: expected_if_state
[if_name=eth0,expected_if_state=up] : "up"
[if_name=eth1,expected_if_state=down] : "down"
[if_name=eth3,expected_if_state=up] : "down"
[if_name=eth0,state=up] : "false"
[if_name=eth1,state=down] : "false"
[if_name=eth3,state=up] : "true"
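The anomalous-states form of the check can be sketched in Python as follows. Representing items as a dict keyed by interface name is an assumption for illustration; note that the expected-state form in the examples above flags a mismatch, whereas the Anomalous States parameter flags a match.

```python
def in_state(items, anomalous_states):
    """Output 'true' for items whose value is in any of the specified
    anomalous states, and 'false' otherwise."""
    if isinstance(anomalous_states, str):
        anomalous_states = [anomalous_states]
    return {key: ("true" if value in anomalous_states else "false")
            for key, value in items.items()}

result = in_state({"eth0": "up", "eth1": "down", "eth3": "up"},
                  ["missing", "unknown", "down"])
```

Here only eth1 is in one of the anomalous states, so only its output is 'true'.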
Processor: Subtract
One number is created on output for each number with the same properties in both inputs; the output
value is the first input's value minus the second's. For each input item the processor keeps only the
significant keys and drops the others. If there is no common set of properties between the inputs, the
output is the empty set.
Parameter Description
(Continued)
Parameter Description
Significant Keys List of keys to map items from the inputs for applying the specified operation. It is
(significant_keys) typically used by processors that take multiple inputs and perform operations on
them. When inputs have the same sets of keys it does not need to be specified.
When inputs have different sets of keys, it must be specified and it must allow only
1:1 items mapping from the given inputs, otherwise the probe will go into error
state.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in
the generic protobuf schema.
Processor: Sum
IN THIS SECTION
The Sum processor groups as described by Group by property, then calculates sum and outputs one for
each group.
Parameter Description
(Continued)
Parameter Description
Group by (group_by) Accepts a list of property names to group input items into output items; it produces only
one output group for the empty list. Most processors take input and produce output.
Many of them produce one output per input (for example, if input is a DSS, output is a
DSS of same size). However, some processors reduce the size of the output relative to
the size of the input. Effectively, they partition the input into groups, run some
calculation on each of the groups that produce a single value per each group, and use
that as output. Clearly, the size of the output set depends on the grouping scheme.
We call such processors grouping processors and they all take the Group by
configuration parameter.
In the case of an empty list, the input is considered to be a single group; thus, the
output is of size 1 and either N, DS, or T. If a list of property names is specified, for
example ["system_id", "iface_role"], or a single property is specified, for example
["system_id"], we divide the input into groups such that for each group, every item in
the group has the same values for the given list of property names. See the "standard
deviation processor" on page 1162 example for how this works.
The output type of a processor depends on a value of the group_by parameter; for an
empty list, a processor produces a single value result, such as N, DS, or T, and for
grouping by one or more properties it returns a set result, such as NS, DSS, or TS.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in the
generic protobuf schema.
See the "standard deviation" on page 1162 example. It's the same except that we calculate the sum instead
of the standard deviation.
The Interface Counters Utilization Per System processor groups detailed interface counter data by system
ID, then calculates aggregate TX and RX bits and their aggregate utilization, and identifies the highest
TX and RX utilizations among the interfaces.
Parameter Description
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in
the generic protobuf schema.
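The aggregation can be sketched in Python as follows. The item shape (dicts with system_id, tx_bps, rx_bps, and speed_bps fields) is an assumption for illustration, not the processor's actual input format.

```python
from collections import defaultdict

def per_system_utilization(interfaces):
    """Group per-interface counters by system_id, sum TX/RX bits per second,
    compute aggregate utilization against the summed port speed, and track
    the highest single-interface utilization."""
    systems = defaultdict(lambda: {"tx_bps": 0, "rx_bps": 0, "speed_bps": 0,
                                   "max_tx_util": 0.0, "max_rx_util": 0.0})
    for iface in interfaces:
        s = systems[iface["system_id"]]
        s["tx_bps"] += iface["tx_bps"]
        s["rx_bps"] += iface["rx_bps"]
        s["speed_bps"] += iface["speed_bps"]
        s["max_tx_util"] = max(s["max_tx_util"],
                               100.0 * iface["tx_bps"] / iface["speed_bps"])
        s["max_rx_util"] = max(s["max_rx_util"],
                               100.0 * iface["rx_bps"] / iface["speed_bps"])
    for s in systems.values():
        s["tx_util"] = 100.0 * s["tx_bps"] / s["speed_bps"]
        s["rx_util"] = 100.0 * s["rx_bps"] / s["speed_bps"]
    return dict(systems)

interfaces = [
    {"system_id": "leaf1", "tx_bps": 5_000_000_000,
     "rx_bps": 2_000_000_000, "speed_bps": 10_000_000_000},
    {"system_id": "leaf1", "tx_bps": 1_000_000_000,
     "rx_bps": 4_000_000_000, "speed_bps": 10_000_000_000},
]
stats = per_system_utilization(interfaces)
```

Two 10G interfaces carrying 5G and 1G of TX traffic give an aggregate TX utilization of 30% with a highest per-interface TX utilization of 50%.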
IN THIS SECTION
The Time in State processor measures how long a value has been in a given state. Each input DS is monitored
over the last time_window seconds. If at any moment, for a state in state_range, the amount of time
spent in that state over the last time_window seconds falls into the range specified in the
corresponding state_range entry, the corresponding output DS is set to 'true'. Otherwise, the output
DS for a given input DS is nominally 'false'. (previously called 'time_in_state_check')
Parameter Description
Time Window How long to monitor state. (seconds or an expression that evaluates to integer)
(time_window)
State Range (state_range) Maps a state value to its allowed time range in seconds: a dict mapping from a single
possible state to a single range of time during the most recent time_window seconds
that the value from input state is allowed to be in that state. At least one of the
range object's two fields must be specified. The omitted field is regarded as "infinity".
The fields are numbers (integers or floats) or expressions evaluated into numbers.
State is a string or an expression that evaluates to string.
(Continued)
Parameter Description
Graph Query One or more queries on graph specified as strings, or a list of such queries. (String
(graph_query) will be deprecated in a future release.) Multiple queries should provide all the named
nodes referenced by the expression fields (including additional_properties). Graph
query is executed on the "operation" graph. Results of the queries can be accessed
using the "query_result" variable with the appropriate index. For example, if querying
property set nodes under name "ps", the result will be available as "query_result[0]
["ps"]".
(Continued)
Parameter Description
Anomaly MetricLog Retain anomaly metric data in MetricDB for the specified time period
Retention Duration
Anomaly MetricLog Maximum allowed size, in bytes, of anomaly metric data to store in MetricDB
Retention Size
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in the
generic protobuf schema.
Raise Anomaly Outputs “true” and “false” values, “true” meaning an appropriate item is anomalous,
(raise_anomaly) and "false" meaning the item is not anomalous. When Raise Anomaly is set to True,
an actual anomaly is generated in addition to a sample in the output.
time_window : 2 seconds
state_range: { "down" : [{"max": 1},] }
The above configuration means that for the input DS, we will set output to True and optionally raise an
anomaly if the input is in the "down" state for more than one second out of the last two seconds.
In the sample below, certain values are capitalized to indicate what has changed from the previous time.
[if_name=eth0] : "up"
[if_name=eth1] : "up"
[if_name=eth3] : "up"
[if_name=eth0] : "false"
[if_name=eth1] : "false"
[if_name=eth3] : "false"
[if_name=eth0] : "up"
[if_name=eth1] : "down"
[if_name=eth3] : "up"
[if_name=eth0] : "false"
[if_name=eth1] : "false"
[if_name=eth3] : "false"
[if_name=eth0] : "up"
[if_name=eth1] : "down"
[if_name=eth3] : "up"
[if_name=eth0] : "false"
[if_name=eth1] : "true"
[if_name=eth3] : "false"
[if_name=eth0] : "up"
[if_name=eth1] : "up"
[if_name=eth3] : "up"
[if_name=eth0] : "false"
[if_name=eth1] : "True"
[if_name=eth3] : "false"
[if_name=eth0] : "up"
[if_name=eth1] : "up"
[if_name=eth3] : "up"
[if_name=eth0] : "false"
[if_name=eth1] : "false"
[if_name=eth3] : "false"
If expressions are used for the min or max fields of states specified in the state property, they are
evaluated for each input item, resulting in item-specific thresholds. Properties of the respective
output items are extended by range_min or range_max keys with the calculated values.
If the state key is an expression, output items are extended with a state key. The same applies to the
time_window property.
Configuration:
time_window : int(100/context.severity)
state_range: { context.ref_state : [{"max": "int(20*(context.severity/5.0))"}] }
[if_name=eth0,severity=1,ref_state=down] : "down"
[if_name=eth1,severity=2,ref_state=down] : "down"
[if_name=eth0,range_max=4,time_window=100,state=down] : "true"
[if_name=eth1,range_max=8,time_window=50,state=down] : "false"
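The sliding-window bookkeeping can be sketched in Python as follows. The assumptions: samples arrive as (timestamp, value) pairs, each sample's state holds until the next sample, and only the "max" bound of a state_range entry is modeled. This illustrates the mechanism, not the Apstra implementation.

```python
from collections import deque

class TimeInState:
    """Output 'true' when the cumulative time spent in `state` over the
    last `time_window` seconds exceeds `max_allowed` (the {"max": ...}
    form of a state_range entry)."""

    def __init__(self, state, max_allowed, time_window):
        self.state = state
        self.max_allowed = max_allowed
        self.time_window = time_window
        self.samples = deque()  # (timestamp, observed value)

    def update(self, now, value):
        self.samples.append((now, value))
        # Drop samples that have fallen out of the window.
        while self.samples[0][0] < now - self.time_window:
            self.samples.popleft()
        # Each sample's state is assumed to hold until the next sample.
        pairs = list(self.samples)
        in_state = sum(t1 - t0
                       for (t0, s), (t1, _) in zip(pairs, pairs[1:])
                       if s == self.state)
        return "true" if in_state > self.max_allowed else "false"

checker = TimeInState(state="down", max_allowed=1, time_window=2)
```

Feeding the checker one sample per second, the output flips to 'true' only once the value has been "down" for more than one of the last two seconds.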
The Traffic Monitor processor selects interfaces according to the configuration and outputs all available
interface-related counters (e.g., tx_bits, rx_bits) and interface utilization.
Parameter Description
(Continued)
Parameter Description
Graph Query One or more queries on graph specified as strings, or a list of such queries. (String will be
(graph_query) deprecated in a future release.) Multiple queries should provide all the named nodes
referenced by the expression fields (including additional_properties). Graph query is
executed on the "operation" graph. Results of the queries can be accessed using the
"query_result" variable with the appropriate index. For example, if querying property set
nodes under name "ps", the result will be available as "query_result[0]["ps"]".
Query Expansion For every path originally returned by the graph queries and passed to a generator, the
generator produces a set of items; for each item it produces a new path extended by the
corresponding property name, whose value is set to the value of the produced item.
(Continued)
Parameter Description
Query Group by List (of strings) of node and relationship names used in the graph query to group query
(query_group_by) results by. Each element in this list represents a named node or relationship matcher in
the graph_query field. It is not an expression, for consistency with the existing group_by field in
grouping processors; a non-expression is simpler and more intuitive.
When grouping is active (query_group_by is not null), query results are grouped by the specified
list of names, with one output item created per group. In this case, the
expressions can only access matcher names specified in query_group_by and the query
results for each group are accessed using a new group_items variable. The group_items
variable is a list of query results, where each result has named nodes/relationships, not
present in query_group_by.
The following list describes the behavior for various values of this field:
• Omitted or provided as json null (like None in Python) - No grouping is done. This is
equivalent to current behavior of extensible_data_collector. Using ‘group_items’ in
this case is not permitted and results in probe error state.
• Empty list ([]) - Produces one group containing all the query results.
• One or more matcher names - The query results are grouped by the specified nodes
or relationships. If this list covers all available matchers in the query, the number of
groups is equal to the number of query results.
Query Tag Filter Filters named nodes in the graph queries by assigned tags.
(query_tag_filter)
Interface Expression mapping from graph query to interface name, e.g. “iface.if_name” if “iface” is
a name in the graph query.
Port Speed Expression mapping from graph query to link speed in bits per second, e.g.
"functions.speed_to_bits(link.speed)" if "link" is a name in the graph query.
System ID Expression mapping from graph query to a system_id, e.g. "system.system_id" if "system"
is a name in the graph query.
(Continued)
Parameter Description
Additional Keys Each additional key/value pair is used to extend properties of output stages where value
is considered as an expression executed in context of the graph query and its result is
used as a property value with respective key. The value of this property is evaluated for
each item to associate items with metrics provided by a corresponding collector service.
The association is done by keys because each collector reports a set of metrics where
each metric is identified by a key in a format that is specific for each collector.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that defaults
(enable_streaming) to False. If set to True, all output stages of this processor are streamed in the generic
protobuf schema.
Processor: Union
IN THIS SECTION
The Union processor merges all input items into one set of items. For each input item the processor
keeps only the significant keys, drops the others, and outputs the result.
Parameter Description
Significant Keys List of keys to map items from the inputs for applying the specified operation. It is
(significant_keys) typically used by processors that take multiple inputs and perform operations on
them. When inputs have the same sets of keys it does not need to be specified.
When inputs have different sets of keys, it must be specified and it must allow only
1:1 items mapping from the given inputs, otherwise the probe will go into error
state.
(Continued)
Parameter Description
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean that
(enable_streaming) defaults to False. If set to True, all output stages of this processor are streamed in
the generic protobuf schema.
Example: Union
significant_keys: ["system_id"]
Input "in_1":
[system_id=leaf1,interface=eth1]: 45
[system_id=leaf2,interface=eth0]: 52
[system_id=leaf3,interface=eth0]: 61
Input "in_2":
[system_id=leaf4,interface=eth2]: 52
[system_id=leaf5,interface=eth3]: 64
Input "in_3":
[system_id=leaf6,interface=eth3]: 41
Output "out":
[system_id=leaf1]: 45
[system_id=leaf2]: 52
[system_id=leaf3]: 61
[system_id=leaf4]: 52
[system_id=leaf5]: 64
[system_id=leaf6]: 41
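The merge can be sketched in Python as follows, assuming each input stage is a list of (properties, value) pairs. Note that the real processor requires a 1:1 item mapping and goes into error state on collisions; this illustrative sketch simply overwrites.

```python
def union(inputs, significant_keys):
    """Merge all input stages into one set of items, keeping only the
    significant keys of each item."""
    merged = {}
    for stage in inputs:
        for props, value in stage:
            merged[tuple(props[k] for k in significant_keys)] = value
    return merged

in_1 = [({"system_id": "leaf1", "interface": "eth1"}, 45),
        ({"system_id": "leaf2", "interface": "eth0"}, 52),
        ({"system_id": "leaf3", "interface": "eth0"}, 61)]
in_2 = [({"system_id": "leaf4", "interface": "eth2"}, 52),
        ({"system_id": "leaf5", "interface": "eth3"}, 64)]
in_3 = [({"system_id": "leaf6", "interface": "eth3"}, 41)]
out = union([in_1, in_2, in_3], ["system_id"])
```

Because only system_id is significant, the interface property is dropped from every output item, matching the example output above.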
The VXLAN Floodlist processor generates a configuration containing expectations for VXLAN floodlist
routes.
Parameter Description
Service input (service_input) Data to pass to telemetry collectors, if any. Can be an expression.
Enable Streaming Makes samples of output stages streamed if enabled. An optional boolean
(enable_streaming) that defaults to False. If set to True, all output stages of this processor are
streamed in the generic protobuf schema.
IN THIS SECTION
Juniper Junos Configlet Example on 4.0.2: MTU (section Interface-Level: Delete) | 1182
Juniper Junos Configlet Example on 4.0.2 Example: SNMP (multiple sections) | 1182
Juniper Junos Configlet Example on 4.0.1 and 4.0.0: NTP (section SYSTEM) | 1183
When you're creating an interface-level configlet during the design phase, you won't know interface
names. It's not until you're working in the blueprint that you'll have that information. Interface-level
configlets for Junos are designed for you to enter details without including the set interface command.
For example, to change the Junos interface "gigether-options", you can use an interface-level hierarchical or
set configlet.
gigether-options no-auto-negotiation
gigether-options fec none
gigether-options {
no-auto-negotiation;
fec none;
}
When you import the configlet into your blueprint, you'll specify interfaces such as xe-0/0/0. For a Junos
Interface-Level set configlet, Apstra software will prepend the set commands:
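Illustratively (assuming interface xe-0/0/0 and the gigether-options configlet text above; the actual rendered output may differ), the prepended result would look like:

```
set interfaces xe-0/0/0 gigether-options no-auto-negotiation
set interfaces xe-0/0/0 gigether-options fec none
```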
For a Junos Interface-Level hierarchical configlet Apstra software will load Junos structured
configuration:
interfaces {
xe-0/0/0 {
gigether-options {
no-auto-negotiation;
fec none;
}
}
}
If you want to use a Junos interface-level configlet to remove an existing configuration, you can use an
interface level delete configlet. Like the interface level set configlet, when you are creating the configlet
during the design phase, you won't know interface names. It's not until you're working in the blueprint
that you'll have that information. Interface-level delete configlets for Junos are designed for you to enter
details without including the delete interface command. For example, to remove the Junos interface
"mtu" configuration.
mtu
When you import the configlet into your blueprint, you'll specify interfaces such as xe-0/0/0. For a Junos
Interface-Level delete configlet, Apstra software will prepend the delete commands:
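Illustratively (assuming interface xe-0/0/0 and the mtu configlet text above; the actual rendered output may differ), the prepended result would look like:

```
delete interfaces xe-0/0/0 mtu
```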
You can create a configlet with a generator at the Top-Level to enable SNMP. To avoid SNMP alarms on
server-facing interfaces, for example, you can create a second generator at the Interface-Level to set up
no-traps.
Top-Level template text is validated to begin with ‘set’ or ‘delete’. See below for example text.
Interface-Level template text is not validated because it's not a complete CLI command. See below for
example text.
no-traps
When you import the configlet into your blueprint, you'll specify interfaces such as ex-0/0/0 and Apstra
software will prepend the set command.
Juniper Junos Configlet Example on 4.0.1 and 4.0.0: NTP (section SYSTEM)
Sample text for configuring NTP servers on Junos devices. (On Apstra version 4.0.2 SYSTEM is called
Top-Level/Hierarchical.)
system {
ntp {
boot-server 10.1.4.1;
server 10.1.4.2;
}
}
Sample text for configuring NTP servers on EOS devices. This configlet uses property sets for the NTP
server IP addresses.
Sample text for applying 'speed auto' to an interface. (You specify devices and interfaces when you
import the configlet into a blueprint.)
speed auto
no speed auto
Sample text for using the config command to set up an NTP server to use mgmt VRF on SONiC devices.
Sample text for using the config command to set up an SNMP snmptrap to use mgmt VRF on SONiC
devices.
Sample text for using the config command to set the Syslog server for SONiC devices.
Sample text for using the sonic-cli command to set up the delay-restore option for SONiC mclag. You must
use sudo -u admin at the beginning, surround terms that contain spaces with single quotes in each sonic-cli
command, and append < /dev/console at the end.
sudo -u admin sonic-cli -c config -c 'mclag domain 1' -c 'delay-restore 600' < /dev/console
sudo -u admin sonic-cli -c config -c 'mclag domain 1' -c 'no delay-restore' < /dev/console
change-device-password
To comply with security requirements and best practices you may need to change root passwords and
local user passwords on device system agents on a regular basis. Prior to Apstra version 4.2.0 you had to
run the command repeatedly, once for every device that needed to be updated. The process has been
streamlined starting with Apstra version 4.2.0. You can now change passwords on all devices in a
blueprint by running a single command. Instead of entering a specific system ID you would enter all.
• Commit blueprint
• Commit blueprint
With the config-syntax-check command, you can verify configuration syntax on your Juniper devices
before committing your blueprint. This check is useful when working with configlets in Datacenter
blueprints and when working with config templates in Freeform blueprints.
This command works only with hierarchical configuration to verify whether configuration syntax is
correct. It doesn't work for set commands.
RELATED DOCUMENTATION
IN THIS SECTION
Limitations | 1189
When deploying EVPN on Apstra-supported devices and NOSs, be aware of several caveats and
limitations. Even though EVPN is a standard, vendors implement protocols in very different manners.
Also, different ASICs support varying feature sets that impact EVPN BGP VXLAN implementations
(Routing In and Out of Tunnels (RIOT) for example). The following sections describe supported EVPN
deployment implementations.
• Cisco Cloudscale
• Mellanox Spectrum A1
• Juniper Q5
Arista Trident2 (Arista DCS-7050): Can use as Spine, Leaf, or Border Leaf. Must set up EOS
Recirculation interface(s) to use as a Layer 3 Leaf (see Arista VXLAN documentation for more
information).
Cisco Cloudscale (Cisco 93180YC-EX): Can use as Spine, Leaf, or Border Leaf.
Cisco Trident2 with ALE (Cisco 9396PX, 9372PX, 9332PQ, 9504): Can use as Spine, Leaf, or Border
Leaf (see TCAM Carving in NXOS section).
Juniper Trident2+ (Juniper QFX5110): Can use as Spine, Leaf, or Border Leaf.
Juniper Trident3 (Juniper QFX5120): Can use as Spine, Leaf, or Border Leaf.
For recommended NOS versions, refer to Qualified Devices and NOS <device_support>.
Limitations
IN THIS SECTION
• VxLAN (inter-rack) virtual networks can't be part of the default routing zone.
• Generic systems with BGP peering to non-default routing zones must connect to leaf devices.
• Generic systems with BGP peering only to the default routing zone can connect to leaf devices, spine
devices, or superspine devices.
• Multi-zone security segmentation supports up to 16 routing zones (VRFs) on Arista (hardware
limitation).
• Inter-routing zone (VRF) routing must be handled on a generic system (EVPN Type 5 route leaking).
• All BGP sessions and loopback addresses are part of the default routing zone.
Before installing the device agent, we recommend that you apply TCAM Carving during device
management setup or during Cisco Power-on Auto Provisioning (POAP). TCAM Carving requires a
device reboot.
Alternatively, you can apply TCAM Carving with configlets when you deploy the blueprint. You must
manually reboot devices.
Use show hardware access-list tcam region to show and verify TCAM allocation on Cisco NX-OS.
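As a sketch only, a TCAM carving configlet for these platforms reallocates space from unused regions to the regions VXLAN needs. The region names and sizes below are illustrative examples, not a prescription; verify the valid regions and sizes against the NX-OS documentation for your specific platform:

```
hardware access-list tcam region vacl 0
hardware access-list tcam region racl 0
hardware access-list tcam region arp-ether 256
```

After changing region sizes, save the configuration and reboot the device for the carving to take effect.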
VxLAN Routing for Trident2 devices (for example, 7050QX-32) is supported but requires assigning EOS
recirculation interfaces to unused physical interfaces on the device. You can use configlets to deploy this
to all devices that require this configuration.
To apply the recirculation configuration:

interface Recirc-Channel501
   switchport recirculation features vxlan
interface Ethernet35
   traffic-loopback source system device mac
   channel-group recirculation 501
interface Ethernet36
   traffic-loopback source system device mac
   channel-group recirculation 501

To remove the recirculation configuration:

interface Ethernet35
   no traffic-loopback source system device mac
   no channel-group recirculation 501
interface Ethernet36
   no traffic-loopback source system device mac
   no channel-group recirculation 501
no interface Recirc-Channel501
When using VxLAN routing on Jericho devices (for example, the 7280SR-48C6), we recommend that you
assign the EOS VxLAN Routing System Profile on the device.
Before installing the device agent, we recommend that you apply the Arista TCAM system profile during
the device management setup or during Arista Zero-Touch Provisioning (ZTP). TCAM system profile
requires a device reboot.
Alternatively, you can use configlets to deploy this to all devices requiring this configuration and
manually reboot the devices.
To apply the profile:

hardware tcam
   system profile vxlan-routing

To remove the profile:

hardware tcam
   no system profile vxlan-routing
When using VxLAN routing on Arista Arad devices (for example, the 7280SE platform), we recommend
that you assign the EOS VxLAN Routing Profile on the device.
Before installing the device agent, we recommend that you apply the Arista TCAM system profile during
the device management setup or during Arista Zero-Touch Provisioning (ZTP). TCAM system profile
requires a device reboot.
Alternatively, you can use configlets to deploy this to all devices requiring this configuration and
manually reboot the devices.
hardware tcam
   profile vxlan-routing
Unicast VTEPs

Apstra IP Allocation

MLAG Configuration

First MLAG peer:
interface loopback1
  ip address 10.0.0.1/32
  ip address 10.0.0.3/32 secondary
interface nve1
  source-interface loopback1

Second MLAG peer:
interface loopback1
  ip address 10.0.0.2/32
  ip address 10.0.0.3/32 secondary
interface nve1
  source-interface loopback1

Singleton switch:
interface loopback1
  ip address 10.0.0.1/32
interface nve1
  source-interface loopback1
Logical VTEPs

Apstra IP Allocation

Logical VTEP configured as the primary IP on the loopback1 interface for both MLAG and singleton switches.

MLAG Configuration

First MLAG peer:
interface loopback1
  ip address 10.0.0.1/32
  ip address 10.0.0.4/32 secondary
interface vxlan1
  vxlan source-interface loopback1

Second MLAG peer:
interface loopback1
  ip address 10.0.0.1/32
  ip address 10.0.0.4/32 secondary
interface vxlan1
  vxlan source-interface loopback1

Singleton switch:
interface loopback1
  ip address 10.0.0.5/32
  ip address 10.0.0.4/32 secondary
interface vxlan1
  vxlan source-interface loopback1
Anycast VTEP

Apstra IP Allocation

One anycast VTEP for the entire blueprint, shared between all Arista leaf devices.

MLAG Configuration

interface loopback1
  ip address 10.0.0.1/32

interface loopback1
  ip address 10.0.0.1/32
  ip address 10.0.0.5/32 secondary
interface vxlan1
  vxlan source-interface loopback1

interface loopback1
  ip address 10.0.0.5/32
  ip address 10.0.0.4/32 secondary
interface vxlan1
  vxlan source-interface loopback1
IN THIS SECTION
Controller | 1197
Security | 1197
Authentication | 1202
Statistics | 1204
Enterprise | 1204
Syslog | 1204
/etc/aos/aos.conf
Controller
# Role for the controller. Set the option to "slave" in order to setup AOS as a
# slave AOS. The options "metadb" and "node_id" should be also set while
# setting "role" to "slave"
role = controller
# Id of the slave node. Empty in case the server is the controller. The ID is
# generated by the controller.
node_id =
Security
[security]
Log Rotate
[logrotate]
# AOS has builtin log rotate functionality. You can disable it by setting
# <enable_log_rotate> to 0 if you want to use linux logrotate utility to manage
# your log files. AOS agent reopens log file on SIGHUP
enable_log_rotate = 1
# Log file will be rotated when its size exceeds <max_file_size>
max_file_size = 1M
# The most recent <max_kept_backups> rotated log files will be saved. Older
# ones will be removed. Specify 0 to not save rotated log files, i.e. the log
# file will be removed as soon as its size exceeds limit.
max_kept_backups = 5
# Interval, specified as <hh:mm:ss>, at which log files are checked for
# rotation.
check_interval = 1:00:00
# Maximum number of recent invalid persistence group kept
max_kept_invalid_persistence_groups = 3
[auth_sysdb_log_rotator]
# AOS has builtin auth sysdb persistence file rotation functionality. Default
# value is 1 which means sysdb retention policy is enabled. You can disable it
# by setting it to 0 and you also can enable it again by setting it to 1. All
# retention policy parameters will be reloaded by restarting AOS service, or
# sending SIGHUP signal to SysdbResourceManager agent via "sudo kill -s 1
# $(pgrep -f SysdbResourceManager)"
enable_auth_sysdb_rotate = 1
# Maximum number of backup copies of valid auth sysdb persistence file groups
# in /var/lib/aos/db. AOS will remove all the older groups. Default value is 5,
# which means AOS will keep the latest 5 groups. Min value is 3. It should be
# specified as a positive number or empty. Leaving it empty means no groups
# number limitation. It will be set to default value if it is configured in
# invalid format. It will be set to minimum value if it is configured to a
# smaller value.
The following four parameters configure the main graph datastore retention policy.
[main_sysdb_log_rotator]
# AOS has builtin main sysdb persistence file rotation functionality. Default
# value is 1 which means sysdb retention policy is enabled. You can disable it
# by setting it to 0 and you also can enable it again by setting it to 1. All
# retention policy parameters will be reloaded by restarting AOS service, or
# sending SIGHUP signal to SysdbResourceManager agent via "sudo kill -s 1
# $(pgrep -f SysdbResourceManager)"
enable_main_sysdb_rotate = 1
# Maximum number of backup copies of valid main sysdb persistence file groups
# in /var/lib/aos/db. AOS will remove all the older groups. Default value is 5,
# which means AOS will keep the latest 5 groups. Min value is 3. It should be
# specified as a positive number or empty. Leaving it empty means no groups
# number limitation. It will be set to default value if it is configured in
# invalid format. It will be set to minimum value if it is configured to a
# smaller value.
max_kept_backups = 5
# Maximum total size of valid main sysdb persistence file groups in
# /var/lib/aos/db. Default value is empty, which means no size limitation. It
# will be set to default value if it is configured in invalid format.
• Set to 1 to enable the retention policy (default). If you enable the policy after it has been disabled,
you must restart the Apstra server for it to be enabled again.
• Set to 0 to disable the retention policy and keep all backups. AOS VM file disk utilization issues may
occur. The policy will be disabled during the next retention check (check_interval). There is no need to
restart the Apstra server unless you want to disable the policy immediately.
• Setting to a number smaller than 3 (the minimum) results in the minimum value of 3.
The effect of max_kept_backups and max_total_files_size is cumulative. For security, Apstra keeps a minimum
of three groups of valid Main Graph Datastore persistence files.
check_interval = 1:00:00 sets the time between retention checks and parameter updates (if the file has
been updated) (format: <hh:mm:ss>).
• Setting it to a value smaller than 00:01:00 (the minimum) results in the minimum value of 00:01:00.
[anomaly_sysdb_log_rotator]
[device_image_management]
enable_version_check = 1
# Enable AOS device agent image auto upgrade. By default auto image upgrade is
# disabled. With this option enabled a device can download an image from the
# controller and upgrade itself if needed.
enable_auto_upgrade = 0
# A device will retry in specified timeout (in seconds) if it fails version
# compatibility check or to download/install new image.
retry_timeout = 600
Authentication
[authentication]
[device_config_management]
Telemetry Init
[telemetry_init]
[telemetry_global_config]
Task API
[task_api]
# Default maximum time in seconds a task can stay in its current state.
default_timeout = 600.0
# Time in seconds a blueprint.create task can stay in its current state.Format:
# "timeout_<task_type>"
timeout_blueprint.create = 360.0
# Time in seconds a blueprint.deploy task can stay in its current state.Format:
# "timeout_<task_type>"
timeout_blueprint.deploy = 300.0
# Time in seconds blueprint.facade.* tasks can stay in their current state.
# Specific facade task overrides prevail over this one.Format:
# "timeout_<task_type>"
timeout_blueprint.facade = 600.0
# Maximum number of tasks, which allowed in the queue. When number of tasks
# becomes higher this value, task rotation will be started.
max_tasks_in_queue = 100
# Maximum number of Bytes in data field which does not require compression. If
# data size is greater than threshold data will be compressed before storing it
# in sysdb.
max_uncompressed_data_size = 1000
Statistics
[statistics]
# Enable or disable full validation for pod statistics. Disable if Racks and/or
# Pods tabs load times are excessive
pod_full_validation = enabled
Enterprise
[enterprise]
Syslog
[syslog]
[builtin_telemetry_disable]
# Disable telemetry service lldp for the specified set of system IDs. System
# IDs can be provided as a comma seperated list(eg: a, b, c, d). In order to
# disable the service for all devices, specify the value "all".
lldp_disable_devices =
# Disable telemetry service arp for the specified set of system IDs. System IDs
# can be provided as a comma seperated list(eg: a, b, c, d). In order to
# disable the service for all devices, specify the value "all".
arp_disable_devices =
# Disable telemetry service hostname for the specified set of system IDs.
# System IDs can be provided as a comma seperated list(eg: a, b, c, d). In
# order to disable the service for all devices, specify the value "all".
hostname_disable_devices =
# Disable telemetry service mac for the specified set of system IDs. System IDs
# can be provided as a comma seperated list(eg: a, b, c, d). In order to
# disable the service for all devices, specify the value "all".
mac_disable_devices =
# Disable telemetry service xcvr for the specified set of system IDs. System
# IDs can be provided as a comma seperated list(eg: a, b, c, d). In order to
# disable the service for all devices, specify the value "all".
xcvr_disable_devices =
# Disable telemetry service interface for the specified set of system IDs.
# System IDs can be provided as a comma seperated list(eg: a, b, c, d). In
# order to disable the service for all devices, specify the value "all".
interface_disable_devices =
# Disable telemetry service interface_counters for the specified set of system
# IDs. System IDs can be provided as a comma seperated list(eg: a, b, c, d). In
# order to disable the service for all devices, specify the value "all".
interface_counters_disable_devices =
# Disable telemetry service bgp for the specified set of system IDs. System IDs
# can be provided as a comma seperated list(eg: a, b, c, d). In order to
# disable the service for all devices, specify the value "all".
bgp_disable_devices =
# Disable telemetry service mlag for the specified set of system IDs. System
# IDs can be provided as a comma seperated list(eg: a, b, c, d). In order to
# disable the service for all devices, specify the value "all".
mlag_disable_devices =
# Disable telemetry service route for the specified set of system IDs. System
# IDs can be provided as a comma seperated list(eg: a, b, c, d). In order to
# disable the service for all devices, specify the value "all".
route_disable_devices =
# Disable telemetry service lag for the specified set of system IDs. System IDs
# can be provided as a comma seperated list(eg: a, b, c, d). In order to
# disable the service for all devices, specify the value "all".
lag_disable_devices =
Agent Management
[agent_management]
Show Tech
[show_tech]
[system_operation_filesystem_thresholds]
[system_operation_memory_thresholds]
Graph
IN THIS SECTION
Graph Overview
Apstra uses a graph model as a single source of truth for infrastructure, policies, constraints,
and so on. This model is subject to constant change, and you can query it for various reasons.
All information about the network is modeled as nodes and the relationships between them.
Every object in a graph has a unique ID. Nodes have a type (a string) and a set of additional properties
based on that type. For example, all switches in the system are represented by nodes of type
system and can have a role property that determines the role the switch is assigned in the network (spine/
leaf/server). Physical and logical switch ports are represented by interface nodes, which also have a
property called if_type.
Relationships between nodes are represented as directed graph edges, which we call relationships:
each relationship has a source node and a target node. Relationships also have a type, which determines
which additional properties a particular relationship can have. For example, system nodes have
relationships of type hosted_interfaces towards interface nodes.
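As a self-contained illustration (plain Python, not Apstra code), the node-and-relationship model described above can be sketched like this; the node IDs and helper function are hypothetical:

```python
# Minimal sketch of the graph model described above (not Apstra code):
# nodes have a type plus type-specific properties; relationships are
# directed, typed edges from a source node to a target node.
nodes = {
    "n1": {"type": "system", "role": "spine"},
    "n2": {"type": "interface", "if_type": "ip"},
}
relationships = [
    {"type": "hosted_interfaces", "source": "n1", "target": "n2"},
]

def out(node_id, rel_type):
    """Return the target node ids of outgoing relationships of a given type."""
    return [r["target"] for r in relationships
            if r["source"] == node_id and r["type"] == rel_type]

print(out("n1", "hosted_interfaces"))  # -> ['n2']
```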
A set of possible node and relationship types is determined by a graph schema. The schema defines
which properties nodes and relationships of particular type can have along with types of those
properties (string/integer/boolean/etc) and constraints. We use and maintain an open source schema
library, Lollipop, that allows flexible customization of value types.
Going back to the graph representing a single source of truth, one of the most challenging aspects was
how to reason about it in the presence of change, coming from both the operator and the managed
system. In order to support this we developed what we call Live Query mechanism which has three
essential components:
• Query Specification
• Change Notification
• Notification Processing
Having modeled our domain model as a graph, you can run searches on the graph specified by graph
queries to find particular patterns (subgraphs) in a graph. The language to express the query is
conceptually based on Gremlin, an open source graph traversal language. We also have parsers for
queries expressed in another language - Cypher, which is a query language used by popular graph
database neo4j.
1209
Query Specification
You start with a node() and then keep chaining method calls, alternating between matching relationships
and nodes:
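The original example query is not reproduced here; based on the English translation that follows, it would look something like this (a reconstruction using the DSL described in this section):

```
node('system', role='spine') \
    .out().node('interface') \
    .out().node('link')
```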
Translated into English, the query above reads something like: starting from a node of type system,
traverse any outgoing relationship that reaches a node of type interface, and from that node traverse all
outgoing relationships that lead to a node of type link.
Notice the role='spine' argument; it selects only system nodes that have the role property set to spine.
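The next example query is also not reproduced here; a reconstruction consistent with the description that follows, using the ne() and is_in() property matchers documented later in this section, would be:

```
node('system', role=is_in(['spine', 'leaf'])) \
    .out().node('interface', if_type=ne('ip'))
```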
That query selects all system nodes that have a role of either spine or leaf, and interface nodes that have
an if_type of anything but ip (ne means not equal).
You can also add cross-object conditions which can be arbitrary Python functions:
node('system', name='system') \
    .out().node('interface', name='if1') \
    .out().node('link') \
    .in_().node('interface', name='if2') \
    .in_().node('system', name='remote_system') \
    .where(lambda if1, if2: if1.if_type != if2.if_type)
Name objects to refer to them, and use those names as argument names for your constraint function
(you can override that, but it makes a convenient default behavior). So, in the example above, the query
takes the two interface nodes named if1 and if2, passes them into the given where function, and filters
out the paths for which the function returns False. Don't worry about where you place your constraint:
it is applied during the search as soon as all objects referenced by the constraint are available.
Now, you have a single path, you can use it to do searches. However, sometimes you might want to have
a query more complex than a single path. To support that, query DSL allows you to define multiple paths
in the same query, separated by comma(s):
match(
    node('a').out().node('b', name='b').out().node('c'),
    node(name='b').out().node('d'),
)
This match() function creates a grouping of paths. All objects that share the same name in different paths
will actually be referring to the same object. Also, match() allows adding more constraints on objects with
where(). You can do a distinct search on particular objects and it will ensure that each combination of
values is seen only once in results:
match(
    node('a', name='a').out().node('b').out().node('c', name='c')
).distinct(['a', 'c'])
This matches a chain of a -> b -> c nodes. If two nodes a and c are connected through more than one
node of type b, the result will still contain only one (a, c) pair.
There is another convenient pattern to use when writing queries: you separate your structure from your
criteria:
match(
    node('a', name='a').out().node('b').out().node('c', name='c'),
    node('a', foo='bar'),
    node('c', bar=123),
)

This is equivalent to the combined form:

match(
    node('a', name='a', foo='bar')
    .out().node('b')
    .out().node('c', name='c', bar=123)
)
Change Notification
Ok, now you have a graph query defined. What does a notification result look like? Each result is a
dictionary mapping each name that you defined for a query object to the object found. For example, for
the following query
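The query itself is not shown here; one consistent with the result described next, naming a and c but leaving b unnamed, might be:

```
node('a', name='a').out().node('b').out().node('c', name='c')
```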
results will look like {'a': <node type='a'>, 'c': <node type='c'>}. Notice that only named objects are
present: there is no <node type='b'> in the results because, although that node is present in the query, it
does not have a name.
You register a query to be monitored and a callback to execute when something changes. Later, when
someone modifies the monitored graph, the system detects that the graph updates caused new
query results to appear, or old results to disappear or update, and it executes the callback associated
with the query. The callback receives the whole path from the query as a response, along with the
specific action (added/updated/removed).
Notification Processing
When the result is passed to the processing (callback) function, from there you can specify reasoning
logic. This could really be anything, from generating logs, errors, to rendering configurations, or running
semantic validations. You could also modify the graph itself, using graph APIs and some other piece of
logic may react to changes you made. This way, you can enforce the graph as a single source of truth
while it also serves as a logical communication channel between pieces of your application logic. The
Graph API consists of three parts:
The graph management APIs are self-explanatory: add_node() creates a new node, set_node() updates the
properties of an existing node, and del_node() deletes a node.
commit() signals that all updates to the graph are complete and can be propagated to all
listeners.
The observable interface allows you to add and remove observers: objects that implement a notification
callback interface. The notification callback consists of three methods:
The Query API is the heart of our graph API and is what powers all searching. Both get_nodes() and
get_relationships() allow you to search for corresponding objects in a graph. Arguments to those
functions are constraints on searched objects.
For example, get_nodes() returns all nodes in a graph, get_nodes(type='system') returns all system nodes,
and get_nodes(type='system', role='spine') constrains the returned nodes to those having particular
property values. The value for each argument can be either a plain value or a special property matcher
object. If the value is a plain value, the corresponding result object must have its property equal to the
given plain value. Property matchers allow you to express more complex criteria, such as not equal, less
than, or one of a set of given values:
NOTE: The example below is for directly using Graph python. For demonstration purposes, you
can replace graph.get_nodes with node in the Graph explorer. This specific example will not work
on the Apstra GUI.
graph.get_nodes(
    type='system',
    role=is_in(['spine', 'leaf']),
    system_id=not_none(),
)
In your graph schema, you can define custom indexes for particular node and relationship types;
get_nodes() and get_relationships() pick the best index for each particular combination of constraints
passed, to minimize search time.
The results of get_nodes()/get_relationships() are special iterator objects. You can iterate over them, and
they yield all found graph objects. You can also use the APIs that those iterators provide to navigate the
result sets. For example, get_nodes() returns a NodeIterator object, which has the methods out() and
in_(). You can use those to get an iterator over all outgoing or incoming relationships from each node in
the original result set. Then, you can use those to get the nodes on the other end of those relationships
and continue from them. You can also pass property constraints to those methods, the same way you
can for get_nodes() and get_relationships().
graph.get_nodes('system', role='spine') \
    .out('interface').node('interface', if_type='loopback')
The code in the example above finds all nodes with type system and role spine and then finds all their
loopback interfaces.
@rule(match(
    node('system', name='spine_device', role='spine')
    .out('hosted_interfaces')
    .node('interface', name='spine_if')
    .out('link')
    .node('link', name='link')
    .in_('link')
    .node('interface', name='leaf_if')
    .in_('hosted_interfaces')
    .node('system', name='leaf_device', role='leaf')
))
def process_spine_leaf_link(self, path, action):
    """
    Process link between spine and leaf
    """
    spine = path['spine_device']
    leaf = path['leaf_device']
    if action in ['added', 'updated']:
        # do something with added/updated link
        pass
    else:
        # do something about removed link
        pass
Convenience Functions
To avoid creating complex where() clauses when building a graph query, use convenience functions,
available from the Apstra GUI.
1. From the blueprint navigate to the Staged view or Active view, then click the GraphQL API Explorer
button (top-right >_). The graph explorer opens in a new tab.
4. Click the Execute Query button (looks like a play button) to see results.
Functions
match(*path_queries)
This function returns a QueryBuilder object containing each result of a matched query. This is generally a
useful shortcut for grouping multiple match queries together.
These two queries do not form a single path (there is no relationship intended between them); notice
the comma separating the arguments. This query returns all of the leaf devices and spine devices together.
match(
    node('system', name='leaf', role='leaf'),
    node('system', name='spine', role='spine'),
)
• Parameters
• name (str or None) - Sets the name of the property matcher in the results
• properties (dict or None) - Any additional keyword arguments or additional property matcher
convenience functions to be used
While also a function, this is an alias for the PathQueryBuilder node() described below.
iterate()
• Returns - generator
Iterate gives you a generator function that you can use to iterate on individual path queries as if it were
a list. For example:
def find_router_facing_systems_and_intfs(graph):
    return q.iterate(graph, q.match(
        q.node('link', role='to_external_router')
        .in_('link')
        .node('interface', name='interface')
        .in_('hosted_interfaces')
        .node('system', name='system')
    ))
PathQueryBuilder Nodes
This function describes a specific graph node, but is also a shortcut for beginning a path query from a
specific node. A node() call returns a path query object. When querying a path, you usually
want to specify a node type: for example, node('system') returns a system node.
• Parameters
• name (str or None) - Sets the name of the property matcher in the results
• properties (dict or None) - Any additional keyword arguments or additional property matcher
convenience functions to be used
If you want to use the node in your query results, you need to name it: node('system', name='device').
Furthermore, if you want to match specific kwarg properties, you can directly specify the match
requirements, for example node('system', name='device', role='leaf').
Traverses a relationship in the 'out' direction according to a graph schema. Acceptable parameters are
the type of relationship (for example, interfaces), the specific name of a relationship, the id of a
relationship, or other property matches that must match exactly given as keyword arguments.
• Parameters
• type (str or None) - Type of node relationship to search for
For example:
node('system', name='system') \
    .out('hosted_interfaces')
Traverses a relationship in the 'in' direction. Sets current node to relationship source node. Acceptable
parameters are the type of relationship (for example, interfaces), the specific name of a relationship, the
id of a relationship, or other property matches that must match exactly given as keyword arguments.
• Parameters
• type (str or None) - Type of node relationship to search for
node('interface', name='interface') \
    .in_('hosted_interfaces')
where(predicate, names=None)
Allows you to specify a callback function against the graph results as a filter or constraint. The predicate
is a callback (usually a lambda function) run against the entire query result. where() can be used directly
on a path query result.
• Parameters
• names (str or None) - If names are given they are passed to callback function for match
node('system', name='system') \
    .where(lambda system: system.role in ('leaf', 'spine'))
ensure_different(*names)
Allows you to ensure that two differently named nodes in the graph are not actually the same node. This
is helpful for relationships that may be bidirectional and could match on their own source nodes.
Consider the query:
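The original example query is not shown here; a sketch consistent with the description, in which a bidirectional link relationship could otherwise match the same system node on both ends, might be:

```
match(
    node('system', name='system1')
    .out('hosted_interfaces').node('interface')
    .out('link').node('link')
    .in_('link').node('interface')
    .in_('hosted_interfaces').node('system', name='system2')
).ensure_different('system1', 'system2')
```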
• Parameters
• names (tuple or list) - A list of names to ensure return different nodes or relationships from the
graph
The last line is functionally equivalent to using the where() function with a lambda callback function.
Property matchers can be run on graph query objects directly, and are usually used within a node()
function. The following matcher functions are available.
eq(value)
Ensures the property value of the node exactly matches the value passed to eq().
• Parameters
• value - Property to match for equality
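The example query that produced the output below is not shown; a reconstruction consistent with it, selecting system nodes whose role equals leaf, would be:

```
node('system', name='system', role=eq('leaf'))
```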
Returns:
{
"count": 4,
"items": [
{
"system": {
"tags": null,
"hostname": "l2-virtual-mlag-2-leaf1",
"label": "l2_virtual_mlag_2_leaf1",
"system_id": "000C29EE8EBE",
"system_type": "switch",
"deploy_mode": "deploy",
"position": null,
"role": "leaf",
"type": "system",
"id": "391598de-c2c7-4cd7-acdd-7611cb097b5e"
}
},
{
"system": {
"tags": null,
"hostname": "l2-virtual-mlag-2-leaf2",
"label": "l2_virtual_mlag_2_leaf2",
"system_id": "000C29D62A69",
"system_type": "switch",
"deploy_mode": "deploy",
"position": null,
"role": "leaf",
"type": "system",
"id": "7f286634-fbd1-43b3-9aed-159f1e0e6abb"
}
},
{
"system": {
"tags": null,
"hostname": "l2-virtual-mlag-1-leaf2",
"label": "l2_virtual_mlag_1_leaf2",
"system_id": "000C29CFDEAF",
"system_type": "switch",
"deploy_mode": "deploy",
"position": null,
"role": "leaf",
"type": "system",
"id": "b9ad6921-6ce3-4d05-a5c7-c31d96785045"
}
},
{
"system": {
"tags": null,
"hostname": "l2-virtual-mlag-1-leaf1",
"label": "l2_virtual_mlag_1_leaf1",
"system_id": "000C297823FD",
"system_type": "switch",
"deploy_mode": "deploy",
"position": null,
"role": "leaf",
"type": "system",
"id": "71bbd11c-ed0f-4a38-842f-341781c01c24"
}
}
]
}
ne(value)
Not-equals. Ensures the property value of the node does NOT match the value passed to ne().
• Parameters
Similar to:
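The original comparison snippet is not shown here; as a sketch, ne('spine') on a role property behaves like a where() filter:

```
node('system', name='system', role=ne('spine'))

# behaves like:
node('system', name='system') \
    .where(lambda system: system.role != 'spine')
```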
gt(value)
Greater-than. Ensures the property of the node is greater than the results of gt(value) function.
• Parameters
ge(value)
Greater-than or Equal To. Ensures the property of the node is greater than or equal to results of ge().
• Parameters: value - Ensure property function is greater than or equal to this value
lt(value)
Less-than. Ensures the property of the node is less than the results of lt(value).
• Parameters
Similar to:
le(value)
Less-than or Equal to. Ensures the property is less than, or equal to the results of le(value) function.
• Parameters
Similar to:
is_in(value)
Is in (list). Check if the property is in a given list or set containing items is_in(value).
• Parameters
Similar to:
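The original comparison snippet is not shown here; as a sketch, is_in(['spine', 'leaf']) on a role property behaves like a where() filter:

```
node('system', name='system', role=is_in(['spine', 'leaf']))

# behaves like:
node('system', name='system') \
    .where(lambda system: system.role in ['spine', 'leaf'])
```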
not_in(value)
Is not in (list). Check if the property is NOT in a given list or set containing items not_in(value).
• Parameters
Similar to:
is_none()
A query that expects is_none expects this particular attribute to be specifically None.
Similar to:
not_none()
Similar to:
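The original comparison snippet is not shown here; as a sketch, not_none() on a system_id property behaves like a where() filter:

```
node('system', name='system', system_id=not_none())

# behaves like:
node('system', name='system') \
    .where(lambda system: system.system_id is not None)
```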
Valid graph datastore persistence file groups contain four files: log, log-valid, checkpoint, and
checkpoint-valid. The -valid files are the effective indicators for the log and checkpoint files. The name
of each persistence file has three parts: basename, id, and extension.
• id - a unix timestamp obtained from gettimeofday. Seconds and microseconds in the timestamp are
separated by a "-". A persistence file group can be identified by id. The timestamp can also help to
determine the generated time sequence of persistence file groups.
Tech Previews give you the ability to test functionality and provide feedback during the development
process of innovations that are not final production features. The goal of a Tech Preview is for the
feature to gain wider exposure and potential full support in a future release. Customers are encouraged
to provide feedback and functionality suggestions for a Technology Preview feature before it becomes
fully supported.
Tech Previews may not be functionally complete, may have functional alterations in future releases, or
may get dropped under changing markets or unexpected conditions, at Juniper’s sole discretion. Juniper
recommends that you use Tech Preview features in non-production environments only.
Juniper considers feedback to add and improve future iterations of the general availability of the
innovations. Your feedback does not assert any intellectual property claim, and Juniper may implement
your feedback without violating your or any other party's rights.
These features are provided "as is" and their use is voluntary. Juniper Support will attempt to resolve any
issues that customers experience when using these features and will create bug reports on behalf of
support cases. However, Juniper may not provide comprehensive support services for Tech Preview
features. Certain features may have reduced or modified security, accessibility, availability, and reliability
standards relative to General Availability software. Tech Preview features are not supported under
existing service agreements, SLAs, or support services.
For additional details, please contact "Juniper Support" on page 893 or your local account team.
Juniper Networks, the Juniper Networks logo, Juniper, and Junos are registered trademarks of Juniper
Networks, Inc. in the United States and other countries. All other trademarks, service marks, registered
marks, or registered service marks are the property of their respective owners. Juniper Networks assumes
no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change,
modify, transfer, or otherwise revise this publication without notice. Copyright © 2023 Juniper Networks,
Inc. All rights reserved.