ACI Introduction


Introduction to ACI

Agenda

• Fabric Basics
• Policy Model
• Architectural Deployments
• Day 2 and beyond
• Conclusion

4
Fabric Basics
ACI: One Network, any location

One fabric spanning virtual networks, physical switches, cloud, and containers, at speeds from 100M to 400G (100M/1/10/25/40/50/100/400G).
#CiscoLive BRKDCN-1601 © 2023 Cisco and/or its affiliates. All rights reserved. Cisco Public 4
What is “Application Centric”?
• Traditional networks use ACLs to classify traffic
  • Usually based on L3 or L2 addresses
  • They make security decisions (permit, deny, log, etc.)
  • They make forwarding decisions (policy-based routing)
• ACI can classify traffic based on its EPG
• Traffic inherits the forwarding and security policy of the EPG
(Diagram: Host1 → EPG1, Host2 → EPG2, Host3 → EPG3, App → EPG4)

5
ACI Anywhere

Edge / Remote: Remote Leaf | Core Data Centers: Single-Pod, Multi-Pod (over IP WAN) | Hybrid Cloud & Multicloud: Multisite, Cloud ACI
The easiest Data Center and Cloud interconnect solution in the market. Try it today!

6
App Centric Design
It's about continuous improvement
• Zero Trust is a journey
• Different environments have different requirements
• Few customers go all-in from Day 1
• Any level of improved security is beneficial
• It's never too late to start!

Intelligent Fabric Security Visibility Automation & Beyond

7
The DC network before: classic modular switching
• Single chassis (e.g. Nexus 7000), up to 18 RUs, scale-up
• Supervisors (1 or 2), fabric modules (3-6), linecards (copper, fiber, 1/10G)

The DC network NOW: ACI
• APICs (3 or more), spines (1 to 6), leaves (1 to 400 or more*); scale as you need
• Zero-touch VXLAN, no STP, a single VXLAN network**
• An evolution from the Nexus 5000 and Nexus 7000

* 500+ leaves with Multi-Pod/Multi-Site
** Other topologies available (e.g. 3-tier)

8
Application Centric Infrastructure Building Blocks
Built on the Nexus 9000
• Centralized policy model and network automation
• A single open API for the entire system (Terraform, Ansible, Python, etc.)
• Flexible spine options, modular and fixed; non-blocking 40/100/400G CLOS fabric
• Integrated overlay with distributed gateway (industry leading: price, performance, port density, programmability, power efficiency)
• Integrated security: built-in distributed stateless firewall, multi-tenant security
• Physical, virtual and container workloads (VMware, Hyper-V, Hadoop, AIX, K8s, etc.)
• IP storage (iSCSI, NFS, NVMe-oF, etc.), WAN interconnect, service appliances (F5, ASA/FTD, etc.)

9
Before ACI, all nodes are managed and operated independently, and the actual topology dictates much of the configuration:
• Device basics: AAA, syslog, SNMP, PoAP, hash seed, default routing protocol bandwidth, …
• Interfaces and/or interface pairs: UDLD, BFD, MTU, interface route metric, channel hashing, queuing, LACP, …
• Fabric- and hardware-specific design: HW tables, …
• Switch pair/group: HSRP/VRRP, VLANs, vPC, STP, HSRP sync with vPC, routing peering, routing policies, …
• Application specific: ACLs, PBR, static routes, QoS, …
• Fabric wide: MST, VRF, VLAN, queuing, CAM/MAC & ARP timers, CoPP, routing protocol defaults
10
ACI: How difficult was it to bring up?
What tasks & configuration did ACI just save me from doing manually on every switch?

BEFORE: SSH to every switch, assign an IP address, enable Telnet/SSH, add users on every switch, create ACLs (optional); times X switches & Y VNIs.

NOW, ACI automates (from HOURS to seconds!):
• Switch management & best practices
• Underlay routed network (IS-IS)
• Overlay network (VXLAN)
• Multicast (BD GIPo addressing)
• External-to-internal route redistribution & control plane (MP-BGP, QoS, etc.)
13
ACI Policy Model
Simplified

The ACI Policy Model
• Tenant ≈ VDC
• VRF ≈ VRF
• Bridge Domain ≈ Subnet/SVI/Default GW
• End Point Group ≈ Broadcast Domain/VLAN/Private VLAN
• Contracts ≈ Access Lists (an Any-Any contract between EPG1 and EPG2 replicates a traditional switch*)
• L2 External EPG ≈ 802.1q Trunk
• L3 External EPG ≈ L3 Routed Link
15
The ACI Policy Model – Migrating into ACI

Tenant
  VRF
    VLAN 10 BD (10.10.10.1/24), VLAN 30 BD (10.10.30.1/24)
    VLAN 30 EPG
    Any-Any Contracts* between EPGs
16
The ACI Policy Model – Migrating into ACI

Tenant
  Global VRF/routing table and protocol
    VLAN 10 BD (10.10.10.1/24), VLAN 20 BD (10.10.20.1/24), VLAN 30 BD (10.10.30.1/24)
    VLAN 10 EPG, VLAN 20 EPG, VLAN 30 EPG
    L2 External (802.1q trunk) connecting to an external switch; L3 External (routed interface)
    Any-Any Contracts* between EPGs

* A Preferred Group or vzAny achieves the same outcome

17
The ACI Policy Model – Extending the configuration
Endpoint Groups

Tenant
  Global VRF/routing table and protocol
    VLAN 10 BD (10.10.10.1/24) now hosting multiple EPGs: AD_SVR, Prod_SQL, Print Svc, XenApp (alongside the VLAN 10 EPG)
    L2 External (802.1q trunk) connecting to an external switch; L3 External (routed interface)
    Any-Any Contracts* between groups

* A Preferred Group or vzAny achieves the same outcome

18
The ACI Policy Model – Extending the configuration
Endpoint Security Groups – ACI 5.0 and greater

Tenant
  Global VRF/routing table and protocol
    VLAN 10 BD (10.10.10.1/24), VLAN 20 BD (10.10.20.1/24), VLAN 30 BD (10.10.30.1/24) with their EPGs
    AD ESG and Print Svc ESG spanning the BDs
    L2 External (802.1q trunk) connecting to an external switch; L3 External (routed interface)

EPG-to-ESG communication requires vzAny or a Preferred Group
19
Advancing the ACI Configuration

App 1 - App Tier ↔ App 1 - Web Tier ↔ L2/L3 External
• To the DB: only SQL; to the App tier: only tcp/2048; to the Web tier: only HTTPS
• Copy to IPS + load balancer insertion; firewall + load balancer insertion
• Policy-Based Redirect with Service Graphs
20
ACI Deployment
Options
ACI MultiPod
The evolution of a stretched fabric

Site A and Site B connected by an Inter-Pod IP Network* with PIM BiDir support

* ACI 5.2(3) adds support for 2 pods in a back-to-back spine configuration

• Active-active datacenters • Virtual metro clusters • Stretch VRF, EPG, BD across pods with VXLAN • Up to 50 ms latency / 500 switches
23
ACI: Physical Remote Leaf
Extend ACI to satellite data centers

Site A connected to a remote location over an IP network (WAN core: IPv4, MPLS, SR, etc.), port speeds 1-400G

• Zero-touch auto discovery of remote leaves • Two switches per site, up to 200 remote leaf switches (ACI 6.0) • Stretch EPG, BD, VRF, tenant, contract • DC migration / OTV replacement, up to 300 ms
24
ACI Multi-Site
Nexus Dashboard Orchestrator: consistent policy across sites (A, B, D)

• Policy consistency • Single point of orchestration • Availability / fault isolation • Scale
25
ACI Multi-Site with Cloud Integration
Nexus Dashboard Orchestrator: consistent policy across sites (A, B, C, D), including cloud sites

• Policy consistency • Single point of orchestration • Availability / fault isolation • Scale
26
ACI Policy in the Cloud
Nexus Dashboard Orchestrator

On-premises DC: EPG Web ↔ contract ↔ EPG App ↔ contract ↔ EPG DB
AWS region (Cloud Network Controller): SG Web ↔ SG rule ↔ SG App ↔ SG rule ↔ SG DB, over the IP network
Azure region (Cloud Network Controller): ASG Web / ASG App / ASG DB inside NSGs, over the IP network

• Consistent policy enforcement on-premises & in the public cloud • Automated interconnect provisioning • Simplified operations with end-to-end visibility
27
The network-admin challenge
Provisioning and monitoring complexity = risk

| NX-OS (VXLAN) | ACI | AWS | Azure | GCP |
|---|---|---|---|---|
| Separate infrastructure | Tenant | Account | Subscription/Resource Group | Account/Project |
| Data center | Site/Pod | Region | Region | Region |
| VRF | VRF | VPC | VNet | VPC |
| VLAN | Bridge Domain/Subnet | CIDR/Subnet | Subnet | Subnet |
| VLAN tag | Endpoint Groups / Endpoint Security Groups | Security Groups | Application/Network Security Groups | Firewall |
| Access-list (ACL) | Contracts & Filters | Security Group Rules | Security Rules | Firewall Rules |
28
For your info & reference

Policy Mapping - AWS (AWS construct → ACI construct)
• User Account → Tenant
• Virtual Private Cloud → VRF
• VPC subnet → BD subnet
• Tag / Label → EP-to-EPG mapping
• Security Group → EPG
• Network Access List → Taboo
• Security Group Rule → Contracts, Filters
  • Outbound rule → consumed contracts (source/destination: subnet, IP, any, or 'Internet'; protocol; port)
  • Inbound rule → provided contracts
• EC2 instance network adapter → endpoint (fvCEp)
29
For your info & reference

Policy Mapping - Azure (Azure construct → ACI construct)
• Resource Group → Tenant
• Virtual Network → VRF
• Subnet → BD subnet
• Application Security Group (ASG) → EPG
• Network Security Group (NSG) → Filters
  • Outbound rule → consumed contracts (source/destination: ASG, subnet, IP, any, or 'Internet'; protocol; port)
  • Inbound rule → provided contracts
• Virtual machine network adapter → endpoint
30
ACI Day 2 and Beyond
Making ACI Hum
Cloud Networking: Challenges
• Connectivity and management: workloads are increasingly distributed and diverse; it is complex to connect workloads across multiple public cloud providers, data centers and edge locations.
• Visibility and automation: troubleshooting is harder in decentralized architectures that span different environments.
• Zero trust and security: workload migration and user mobility impose significant challenges for enforcing the right security policies across different environments.
(Environments: cloud, container, hypervisor, data center, colocation, private cloud, IoT edge)

The need: a homogeneous experience across heterogeneous cloud environments
32
Powering automation
Cisco Nexus Dashboard: a unified, agile platform
• Consume all services in one place: Insights, Orchestrator, Fabric Discovery, Fabric Controller, Data Broker, SAN Controller
• Simple to automate, simple to consume
• Private cloud, public cloud, and third-party connectors
33
SPAN and Tap Aggregation with Data Broker

Production networks (Cisco NX-OS fabrics, Cisco ACI fabrics, Cisco enterprise networks) → packet broker network → tools (IDS, analytics, packet sniffer)

1. Capture traffic using SPAN/TAPs
2. Aggregate, filter, load balance
3. Provide sanitized data to monitoring tools

Benefits: cost effective (a Nexus switch functions as the packet broker); turnkey automation with the NDDB controller; supports tap aggregation and inline redirection

34
Intelligent operations powered by telemetry

Software and hardware telemetry from switches and the APIC feeds data enrichment, event correlation, and artificial intelligence / machine learning in Cisco Nexus Dashboard Insights
35
Cisco Nexus Dashboard Insights
Use cases and benefits

Identify, locate, root-cause, remediate:
• Error detection: latency, packet drops, control-plane issues
• Automated alerts, availability insights
• End-to-end workflows, guided remediation, TAC assist, Explorer, topology checker

Mitigate and prevent outages:
• Upgrade impact advisories, hardening checks
• Software/hardware recommendations, pre-change analysis*
• PSIRT notices, compliance alerts, EoS/EoL notices
36
Key Takeaways
• Consistent, SDN-enabled network policy across all the switches within a fabric

• The Multi-Site architecture allows the same network policy to be applied across multiple sites, including the cloud

• Nexus Dashboard Insights enables proactive day-2 operations for ACI, giving a better understanding of how applications interact with the network

37
Application Centric Design
App Centric Design
It's about continuous improvement
• Zero Trust is a journey
• Different environments have different requirements
• Few customers go all-in from Day 1
• Any level of improved security is beneficial
• It's never too late to start!

Intelligent Fabric Security Visibility Automation & Beyond

39
Importance of Application Segmentation

• Perimeter security is not enough
• If the perimeter is breached, lateral movement can allow attackers to compromise additional assets
• Segmentation improves security inside the DC
• Micro-segmentation can minimize the size of the segments and reduce exposure to lateral attacks
40
ACI’s Path to Improved Security

41
Application Security Enforcement Points

Host-based: centrally manage host-based firewalls
• Pros: distributed, network independent, very granular policies possible, process-level visibility and correlation
• Cons: guest-OS dependent, agent-based

Network-based: centrally manage access rules at the network edge (virtual switch, physical switch, or both)
• Pros: distributed, guest independent, agentless, group-based policies for best scale, endpoint-level visibility and correlation
• Cons: requires network hardware resources (memory, TCAM, etc.) for policy
42
Review: Logical Policy in ACI

Tenant 1
  VRF1: routing domain (IP forwarding)
    BD1 (Subnet 1), BD2 (Subnet 2): switching domains (MAC forwarding)
      EPG1, EPG2, EPG3, EPG4: security domains, each holding endpoints (EP1-1, EP1-2, …)
    L3Out: core router connectivity to the WAN
43
Do your ACI Policies look like this?
Subnet Aligned
CL2023_tn

Prod_vrf

Prod_ap

1.1.10.0_bd 1.1.11.0_bd 1.1.12.0_bd 1.1.13.0_bd 1.1.14.0_bd 1.1.15.0_bd

1.1.10.0_epg 1.1.11.0_epg 1.1.12.0_epg 1.1.13.0_epg 1.1.14.0_epg 1.1.15.0_epg

any:any

extEPG

44
Or this?
VLAN Aligned
CL2023_tn

Prod_vrf

Prod_ap

vlan10_bd vlan11_bd vlan12_bd vlan13_bd vlan14_bd vlan15_bd

vlan10_epg vlan11_epg vlan12_epg vlan13_epg vlan14_epg vlan15_epg

any:any

extEPG

45
Or perhaps like this?
VLAN Aligned
CL2023_tn

Prod_vrf

Prod_ap

vlan10_bd vlan11_bd vlan12_bd vlan13_bd vlan14_bd vlan15_bd

vlan10_epg vlan11_epg vlan12_epg vlan13_epg vlan14_epg vlan15_epg

any:any

vzAny extEPG

46
Different Approaches to EPG Design in ACI

EPG/BD = VLAN/Subnet
• An EPG and BD for each VLAN/subnet
• Most commonly deployed
• Eases legacy migration; limited segmentation
• VLANs/subnets define the security groupings

EPG = App Tier
• An EPG per application tier, sharing a common BD
• Ideal for well-understood apps and/or flat network deployments
• Works well with automation tools
• Most flexible & granular security
• Increases operational complexity

Hybrid
• A combination approach
• Supports both legacy & new apps on the same fabric
• Introduces a path to an improved security model
• Limited increase in operational complexity
47
ACI Security Features Toolbox

What's in our ACI toolbox:
• Endpoint Groups and Endpoint Security Groups (collectively "ExGs")
• Contracts, Filters, and Contract Inheritance
• vzAny and Preferred Groups
• Intra-EPG Isolation and Intra-EPG Contracts
• L4-7 Service Graphs
Endpoint Group

Endpoint Group (EPG)
• A collection of endpoints, such as VMs, hosts, servers, and physical devices
• Internally represented by a pcTag
• Uses contracts to communicate with other EPGs
• Can represent: a subnet/VLAN, a VMware port-group, an application tier, a security zone

Contracts ≈ Access Lists (EPG1 ↔ contract ↔ EPG2)
51
Endpoint Group Classification

Static attachment (EPG)
• Physical domain (port/VLAN instance)
• VMM domain (i.e. port-group/VM network)

Dynamic attachment (uEPG)
• Physical domain (IP/MAC)
• VMM domain (IP/MAC/VM attribute)
52
Contracts
Contracts Review

• Traditional access lists are built between subnets, hosts, VLANs, MACs, and applied
to interfaces in a particular direction.
• ACI applies security to Endpoint Groups (EPGs) or Endpoint Security Groups (ESGs)
• Contracts use a Provider/Consumer model
• ACI is a whitelist model by default. That is, only communication which is explicitly
defined will be allowed.
• Any endpoint (EP) in an ExG can communicate by default with any other endpoint
inside the same ExG.
• When an EP needs to communicate to something outside of its ExG, a contract is
required

54
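The provider/consumer whitelist model above can be sketched in a few lines of Python. This is an illustrative model only, not the APIC API; the group and contract names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    name: str
    providers: frozenset  # groups (EPGs/ESGs) that provide the contract
    consumers: frozenset  # groups that consume it

def allowed(src: str, dst: str, contracts: list) -> bool:
    """Whitelist model: traffic is permitted only if some contract pairs
    the two groups as consumer/provider, or they are the same group
    (intra-group traffic is allowed by default)."""
    if src == dst:
        return True
    for c in contracts:
        if (src in c.consumers and dst in c.providers) or \
           (dst in c.consumers and src in c.providers):
            return True
    return False

contracts = [Contract("web-to-app",
                      providers=frozenset({"app_epg"}),
                      consumers=frozenset({"web_epg"}))]
print(allowed("web_epg", "app_epg", contracts))  # True: contract exists
print(allowed("web_epg", "db_epg", contracts))   # False: no contract, denied
```

Note the symmetry check: a contract permits the defined flows between consumer and provider in both directions of the relationship, which mirrors how ACI programs rules for provided and consumed contracts.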
Contract Structure

Contract
  Subject 1 (L4-7 service insertion / QoS)
    Filter 1
      Filter Entry 1, Filter Entry 2: the lowest-level ACL (port & protocol)
55
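In the APIC object model this hierarchy maps, to the best of my knowledge, to vzBrCP (contract), vzSubj (subject), vzFilter with vzEntry children (filter entries), and a vzRsSubjFiltAtt relation from subject to filter. The sketch below builds the approximate JSON payload shape; the names ("web-contract", "https-filter") are hypothetical, so verify class and attribute names against your APIC's API inspector before use.

```python
def filter_entry(name: str, proto: str, port: int) -> dict:
    # vzEntry: the lowest-level "ACL line" (protocol & destination port)
    return {"vzEntry": {"attributes": {
        "name": name, "etherT": "ip", "prot": proto,
        "dFromPort": str(port), "dToPort": str(port)}}}

# Filter with its entries (lives under the tenant in the real object tree)
filt = {"vzFilter": {"attributes": {"name": "https-filter"},
                     "children": [filter_entry("https", "tcp", 443)]}}

# Contract -> Subject -> relation to the filter by name
contract = {"vzBrCP": {
    "attributes": {"name": "web-contract", "scope": "context"},  # VRF scope
    "children": [{"vzSubj": {
        "attributes": {"name": "web-subj"},
        "children": [{"vzRsSubjFiltAtt": {
            "attributes": {"tnVzFilterName": "https-filter"}}}]}}]}}
```

Subjects reference filters by name rather than containing them, which lets one filter be reused by many contracts.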
Contract Scope

• Global: provider/consumer relationships apply across all tenants (required for cross-tenant communication)

• Tenant: relationships restricted within the tenant

• VRF: relationships restricted to specific VRFs of tenants

• Application Profile: relationships restricted to a specific AP within a tenant
56
vzAny
Any ExG in a VRF = vzAny

• vzAny represents the collection of EPGs/ESGs that belong to the same VRF, including L3 externals
• Instead of associating contracts with each individual ExG, you can configure a contract on vzAny
• With cross-VRF contracts, vzAny can be a consumer, but not a provider
• Can also be used with Service Graphs
(Diagram: VRF1 with BD1/BD2; EPG1-EPG4 and ESG1 all covered by vzAny)
58
vzAny Example - Simplicity and TCAM Savings

Five EPGs (EPG1-EPG5) each contracting with a shared-services EPG consume five TCAM entries*; applying the same contract through vzAny consumes one*.

* assuming a single filter
59
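The saving above is simple arithmetic. This back-of-the-envelope sketch assumes, like the slide, one entry per consumer-EPG/filter pair; real TCAM accounting also depends on rule direction and bidirectionality, so treat it as a first-order estimate only.

```python
def tcam_entries(consumers: int, filters: int, use_vzany: bool) -> int:
    """Rough policy-TCAM cost of granting N consumer EPGs access to a
    shared-services EPG: per-EPG contracts scale with the consumer count,
    while a vzAny-consumed contract is programmed once for the whole VRF."""
    return filters if use_vzany else consumers * filters

print(tcam_entries(5, 1, use_vzany=False))  # 5: one entry per consumer EPG
print(tcam_entries(5, 1, use_vzany=True))   # 1: single entry for the VRF
```

The gap widens with more filters: 50 consumer EPGs with 4 filter entries is 200 entries per-EPG versus 4 with vzAny.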
Preferred Groups

• Allow multiple different ExGs to freely communicate without the need for contracts
• Include: ExGs in the Preferred Group can talk to other member ExGs without requiring a contract
• Exclude: ExGs outside the Preferred Group require contracts to talk to any other ExG (whether in or out of the group)
(Diagram: one VRF; ESG 1, EPG 2, EPG 3 included; EPG 4, EPG 5, EPG 6 excluded)
61
Preferred Group - Config

1. Enable the Preferred Group under the VRF
2. Include any EPG/ESG as a Preferred Group member
62
Intra-Group Isolation (ESG/EPG)

Intra-ExG Isolation & Intra-ExG Contract
• Isolation: communication between EPs within an EPG/ESG is not permitted (e.g. Web VMs in a backup-client EPG)
• Contract: only the flows allowed in the contract are permitted between EPs in the EPG/ESG (e.g. a DB-cluster EPG allowing cluster sync only)
64
Intra-ExG Isolation & Intra-ExG Contracts

• Considerations:
• Requires Gen2+ HW & proxy-arp
• Supported on:
• Physical Domains (Baremetal Endpoints)
• VMware VMM vDS
• Microsoft Hyper-V VMM
• For VMM, PVLANs are leveraged
• Same applies for baremetal with intermediate switch
(External Switch App can automate this if using UCSM)

65
uSeg EPG
(Micro EPG / Microsegment EPG)
Understanding Micro EPGs
• A Micro EPG (uEPG) is equivalent to a regular EPG for all purposes, but classification is based on endpoint attributes (and is dynamic in nature)
• Endpoints are assigned to the uEPG regardless of the encapsulation/port
• The endpoint must first be known to a regular EPG, called the "base EPG", which classifies on port and encapsulation (P,V)
(Base EPG vlan100_epg: BM-01 10.10.100.11 f4:5c:89:b2:bf:cb; BM-02 10.10.100.12 f4:5c:89:b2:ab:cd; VM-MyApp1 10.10.100.13, OS: WIN2016)
67
Understanding Micro EPGs (cont.)
• From the base EPG vlan100_epg, define a uEPG based on a network attribute: uEPG MyDB (e.g. matching BM-01, 10.10.100.11 / f4:5c:89:b2:bf:cb)
68
Understanding Micro EPGs (cont.)
• A further uEPG can be defined based on a VM attribute: uEPG Quarantine (e.g. matching VM-MyApp1 on OS: WIN2016), alongside uEPG MyDB holding BM-01 (10.10.100.11, f4:5c:89:b2:bf:cb), while BM-02 remains in the base EPG vlan100_epg
69
Attributes for Micro-Segmentation

• Network-based attributes apply to both baremetal and VM workloads: IP, MAC
• VM-based attributes apply to VM workloads only and require VMM integration: VMM domain, operating system, hypervisor identifier, datacenter, VM identifier, VM name, VM folder / folder path, vNIC DN, custom attribute, tag
70
Logical Operators

• The logical operators OR/AND enable multiple rules to match various attributes
• Rules can be combined into blocks; blocks are sequentially matched using logical operators
• Match ALL: RULE 1 AND RULE 2 AND RULE 3
• Match ANY: (RULE 1 AND RULE 2 AND RULE 3) OR (RULE 1 AND RULE 2 AND RULE 3) OR …
71
Logical Operators - Example

• Match ANY: IP equals 1.1.1.0/24; IP equals 2.2.2.0/24 (any endpoint within either subnet is matched)
• OR, Match ALL: VM name starts with 'Prod' AND VMM domain = ACI-VDS (VMs within the VMM domain called ACI-VDS whose name is prefixed with 'Prod')
72
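The example above can be modeled directly: an endpoint matches if it satisfies ANY rule of the subnet block OR ALL rules of the VM block. This is a hypothetical stand-in for the APIC's matching engine, with endpoints represented as plain dicts.

```python
import ipaddress

def ip_in(ep: dict, subnet: str) -> bool:
    return ipaddress.ip_address(ep["ip"]) in ipaddress.ip_network(subnet)

# Match-ANY block: endpoint in either subnet
block_subnets = [lambda ep: ip_in(ep, "1.1.1.0/24"),
                 lambda ep: ip_in(ep, "2.2.2.0/24")]
# Match-ALL block: 'Prod'-prefixed VM name AND VMM domain ACI-VDS
block_prod_vms = [lambda ep: ep.get("vm_name", "").startswith("Prod"),
                  lambda ep: ep.get("vmm_domain") == "ACI-VDS"]

def matches(ep: dict) -> bool:
    # Blocks combined with OR, mirroring the slide's rule layout
    return any(rule(ep) for rule in block_subnets) or \
           all(rule(ep) for rule in block_prod_vms)

print(matches({"ip": "1.1.1.10"}))                      # True: in 1.1.1.0/24
print(matches({"ip": "9.9.9.9", "vm_name": "Prod-web1",
               "vmm_domain": "ACI-VDS"}))               # True: VM block
print(matches({"ip": "9.9.9.9", "vm_name": "Dev-web1",
               "vmm_domain": "ACI-VDS"}))               # False: fails both
```

Swapping `any` and `all` per block is all it takes to express the other Match ALL / Match ANY combinations from the previous slide.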
Attribute Precedence

Attribute precedence (1 = highest): 1 IP sets, 2 MAC sets, 3 vNIC (DN), 4 VM (ID), 5 VM name, 6 hypervisor, 7 domain (DVS), 8 datacenter, 9 custom attribute, 10 guest OS, 11 tag

Operator precedence: 1 Equals, 2 Contains, 3 Starts With, 4 Ends With

• These precedence rules can be overridden using the EPG Match Precedence attribute in the uEPG; the higher order wins
73
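When an endpoint matches several uEPGs, the attribute table above decides the winner: the lowest precedence number takes priority. A minimal sketch, with hypothetical uEPG names, ignoring the EPG Match Precedence override:

```python
# Precedence table from the slide: 1 is the strongest attribute type
ATTRIBUTE_PRECEDENCE = {
    "ip": 1, "mac": 2, "vnic_dn": 3, "vm_id": 4, "vm_name": 5,
    "hypervisor": 6, "domain": 7, "datacenter": 8,
    "custom_attribute": 9, "guest_os": 10, "tag": 11,
}

def winning_uepg(matches: list) -> str:
    """matches: (uepg_name, attribute_type) pairs for one endpoint.
    The match made via the highest-precedence attribute type wins."""
    return min(matches, key=lambda m: ATTRIBUTE_PRECEDENCE[m[1]])[0]

# An endpoint matched both by an IP set and by its guest OS lands in
# the IP-based uEPG, since IP sets outrank guest OS:
print(winning_uepg([("quarantine_uepg", "guest_os"), ("mydb_uepg", "ip")]))
# mydb_uepg
```

On a real fabric, setting a uEPG's EPG Match Precedence overrides this table, with the higher configured value winning.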
Endpoint Security Group

What is an ESG (Endpoint Security Group)?
• Introduced with ACI version 5.0
• An ESG is a security group across BDs (an EPG spans VLANs, but stays within one BD)
• Uses "EP selectors" to classify endpoints into each ESG
(Example: VRF A with BD 1-4 / EPG 1-4 holding WebServer1/2 and AppServer1/2; policies needed: 6)
75
EPG vs. ESG
• An ESG is a security-group construct that can span BDs
(Example: the same VRF A with BD 1-4 / EPG 1-4; ESG "web" groups WebServer1/2 and ESG "app" groups AppServer1/2; policies needed: 1)
76
ESG Matching

Endpoints can be classified into ESGs using a variety of attributes:


• IPv4/v6 Address or Subnets
• EPG Selector
• Policy Tags (MACs, VM tags, VM Names, Static Endpoint)

77
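The selector types above can be sketched as a tiny classifier. This is a hypothetical data model (not APIC classes): each selector names a target ESG plus a matching kind (IP subnet, policy tag, or EPG).

```python
import ipaddress

# Hypothetical ESG selectors: (esg_name, selector_kind, value)
selectors = [
    ("web_esg", "ip",  "192.168.101.0/24"),
    ("web_esg", "ip",  "192.168.102.0/24"),
    ("app_esg", "tag", "tier=app"),
    ("app_esg", "epg", "vlan104_epg"),
]

def classify(ep: dict):
    """Return the ESG for an endpoint; first matching selector wins
    in this sketch (APIC resolves overlaps by its own rules)."""
    for esg, kind, value in selectors:
        if kind == "ip" and ipaddress.ip_address(ep["ip"]) in ipaddress.ip_network(value):
            return esg
        if kind == "tag" and value in ep.get("tags", ()):
            return esg
        if kind == "epg" and ep.get("epg") == value:
            return esg
    return None  # unclassified: stays subject to its EPG policy only

print(classify({"ip": "192.168.101.11", "epg": "vlan101_epg"}))  # web_esg
print(classify({"ip": "192.168.103.11", "tags": ["tier=app"]}))  # app_esg
print(classify({"ip": "192.168.104.11", "epg": "vlan104_epg"}))  # app_esg
```

Note how endpoints from different BDs/subnets land in the same ESG, which is exactly the cross-BD grouping the previous slides describe.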
Example Design using ESGs

• Network centric: a security group in 1 subnet (one BD 10.0.0.254/24 with one EPG, VLAN 10). What if you need more granular security groups?
• Hybrid: multiple security groups in 1 subnet (one BD with EPGs VLAN 11/12/13). What if multiple subnets need to share the same security rules?
• Application centric: security groups across subnets (a BD with 10.0.0.254/24 and 20.0.0.254/24; EPGs VLAN 11/12). Sharing a broadcast domain brings another security concern.
• Security Group (ESG): security groups across bridge domains (BDs 10.0.0.254/24 and 20.0.0.254/24 with their EPGs; ESGs span them). Flexible security grouping.
78
ESG Considerations

• Security can SPAN BDs (within VRF)


➢ Simpler than EPGs (i.e. per BD)
➢ Great for Network Centric Deployments

• EPG is still used to bind VLANs and interfaces


➢ No changes in VRF/BD/EPG from network perspective

• ESG contracts and BD subnets are deployed on all nodes where the VRF is deployed
• No automatic route leaking based on contracts
➢ No more subnets under a provider EPG
➢ Manual but simple route leaking config

79
ESG Considerations cont'd
• Only the IP selector in 6.0 (/32, /128, or an LPM prefix such as /24)
  ➢ ESGs can be applied only to routed traffic
  ➢ To prevent L2 traffic from bypassing ESG security, Allow Micro-Segmentation, intra-EPG isolation with proxy-ARP, or an intra-EPG contract needs to be enabled on each EPG the endpoints originally belonged to

• No ESG ↔ EPG contract/communication (this includes ESG ↔ uSeg EPGs)

• vzAny or a Preferred Group can be used for ESG-to-EPG communication

• ESG ↔ L3Out EPG contracts are supported
80
ESG Contract Support Summary

Supported:
• Contracts between: ESG ⇔ ESG, ESG ⇔ L3Out EPG, ESG ⇔ inband EPG, ESG ⇔ vzAny, ESG ⇔ service EPG (internally created shadow EPG)
• Taboo contracts, Preferred Groups, intra-ESG contracts, contract inheritance

Not supported:
• Contracts between: ESG ⇔ EPG, ESG ⇔ uSeg EPG, ESG ⇔ Cloud EPG (ESGs are not yet supported in NDO)

Note: any contract feature that is supported with uSeg EPGs is supported with ESGs unless it is explicitly listed above as not supported.
81
Application Segmentation &
Putting it all together
Segmentation vs. Micro-Segmentation

• Segmentation: segments 1-4, where a segment = broadcast domain / VLAN / subnet
• Micro-segmentation: each segment subdivided into micro-segments, where a micro-segment = an endpoint or group of endpoints

84
Zone Micro Segmentation
Segmentation Level: Low

172.16.10.11 172.16.10.12 172.16.10.13 172.16.10.14 172.16.10.15 172.16.10.16

VM1 VM2 VM3 VM4 VM5 VM6

172.16.10.0/24

Prod QA Dev

85
Application Segmentation
Segmentation Level: Medium

172.16.10.11 172.16.10.12 172.16.10.13 172.16.10.14 172.16.10.15 172.16.10.16

VM1 VM2 VM3 VM4 VM5 VM6

172.16.10.0/24

App1 App2 App3

86
Application Tier Segmentation
Segmentation Level: High

172.16.10.11 172.16.10.12 172.16.10.13 172.16.10.14 172.16.10.15 172.16.10.16

VM1 VM2 VM3 VM4 VM5 VM6

172.16.10.0/24

Web App DB

87
Balancing App Segmentation vs. Complexity

The Goldilocks zone: complexity rises together with granularity/security along the spectrum
vzAny → Preferred Groups → Endpoint Security Groups → Regular Contracts → uSeg EPGs → Intra-EPG Contracts
88
Sample Case
Study
Greenfield
Greenfield Case Study - Acme Inc.

• Acme Inc. is the industry-leading seller of anvils
• They are planning to deploy a net-new application for their e-commerce site and wish to do so using an application-centric approach
• The application tiers are well understood, as are the communication requirements between the tiers
• The CIO has requested a maximum focus on segmentation & security
• New IPs/subnets will be allocated for the new application endpoints, which will be a mix of baremetal & virtual endpoints
91
Acme Inc. e-Com Application - EPG Deployment

(Diagram: EPGs for web, auth-service, e-com_frontend, cart, mongodb, load_balancer, apache1, apache2, net_services, mysql, nfs_server; an L3EPG; and a Service Graph)
92
Brownfield
Brownfield Case Study - Acme Inc.

• Acme Inc. is the industry-leading seller of anvils
• They have deployed ACI in a network-centric manner and wish to apply better security, starting with their e-commerce application
• The application tiers are well understood, but the specific communication rules between the tiers are not
• ACME's ops team has limited cycles and wishes to limit any increased complexity that design changes may involve
• They must not impact any existing applications

94
Acme Inc. e-Com Application Summary

(Application components: web, auth-service, e-com_frontend, cart, mongodb, load_balancer, apache1, apache2, net_services, mysql, nfs_server)
95
Acme Inc. e-Com Application Summary (VLAN placement)

(vlan10: web, auth-service, e-com_frontend, cart, mongodb; vlan101: load_balancers; vlan102: apache1, apache2; vlan103: nfs1; vlan201: net_services, mysql)
96
Acme Inc. e-Com App on ACI

ACME_tn → Prod_vrf → Prod_ap, with vlan10/101/102/103/201 BDs and matching EPGs
(Endpoints spread across the VLAN EPGs: auth-serv1/2, e-com_front, mongodb1, lb1/2, apache1/2, mysql, nfs1, net_svc, with unknown "?" contract requirements between them)
97
Acme Inc. e-Com App on ACI

ACME_tn → Prod_vrf → Prod_ap, with vlan10/101/102/103/201 BDs and matching EPGs
E-Com_ap: Application Security Group "E-Com" spanning auth-serv1/2, e-com_front, mongodb1, lb1/2, apache1/2, mysql, nfs1 and net_svc across the existing VLAN EPGs
99
vCenter View
Tag & Category Assignment

100
ACI View 1 of 5
VMM Domain – Tag Collection

101
ACI View 2 of 5
Base EPGs VMM Domain Binding

102
ACI View 3 of 5
ESG – Tag Selector

103
ACI View 4 of 5
Base EPG – Learned Endpoints

104
ACI View 5 of 5
ESG – Matched Endpoints

105
Brownfield Migration of Net-Centric Apps to ESGs

1. Create a new application-specific App Profile
2. Create an ESG named for the app and bind it to the appropriate VRF
3. Apply a contract between the ESG and the L3Out (for external connectivity)
4. Create selectors for the ESG
   • For VMs, you can use VM tags, VM names, VM folders, etc.
   • For baremetal & VMs, you can use MAC or IP (LPM) selectors
5. Enable "Allow for uSeg" on the base EPG's VMM domain binding
106
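Steps 2 and 4 above could look roughly like the payload below when driven via the APIC REST API. The class and attribute names (fvESg, fvRsScope, fvTagSelector) reflect my understanding of the 5.x+ object model and the tenant/VRF/tag values are hypothetical; check them against your APIC's API inspector before posting.

```python
import json

# Sketch: an ESG bound to a VRF, classifying endpoints by a VM/policy tag
esg = {"fvESg": {
    "attributes": {"name": "e-com_esg"},
    "children": [
        # bind the ESG to its VRF (step 2)
        {"fvRsScope": {"attributes": {"tnFvCtxName": "Prod_vrf"}}},
        # tag selector pulling in endpoints tagged app=e-com (step 4)
        {"fvTagSelector": {"attributes": {
            "matchKey": "app", "matchValue": "e-com"}}},
    ],
}}

# Would be POSTed to something like:
#   /api/mo/uni/tn-ACME_tn/ap-E-Com_ap/esg-e-com_esg.json
print(json.dumps(esg, indent=2))
```

Building the payload separately from the POST keeps it easy to diff and review, which fits the pre-change-validation theme of the next section.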
Result
• Application-Level Health Visibility
• Application Segmentation – Increased Security
• No changes to legacy EPG mappings/VM Port Groups
• Optimized Policy TCAM
• Potential reduction of load on external FWs
• Ability to further segment Application into ‘tiers’

107
Nexus-as-Code
Kickstart your automation with ACI

Agenda
• Infrastructure as Code
• Introduction to Nexus-as-Code
• Pre-Change Validation
• Automated Testing
• CI/CD Integration
Infrastructure as Code
Infrastructure as code (IaC) is the process of managing and provisioning computer data
centers through machine-readable definition files, rather than physical hardware
configuration or interactive configuration tools.

Infrastructure as Code (IaC) is the management of infrastructure in a descriptive model, using the same versioning as the DevOps team uses for source code.

Infrastructure as Code (IaC) is the managing and provisioning of infrastructure through


code instead of through manual processes.

Practicing infrastructure as code means applying the same rigor of application code
development to infrastructure provisioning. All configurations should be defined in a
declarative way and stored in a source control system.

Infrastructure as Code is a process, not a single tool or application


IaC Strategy
Nexus-as-Code

Data sources (YAML, JSON, CSV, Excel, etc.) feed either:
• Terraform HCL → Terraform modules (L3Out, Service Graph, system settings, …) → ACI / NDO / NDFC providers
• Ansible YAML → Ansible roles (L3Out, Service Graph, system settings, …) → ACI / NDO / NDFC / ND collections
Nexus-as-Code
• Nexus-as-Code aims to reduce time to value by lowering the barrier of entry to network orchestration through simplification, abstraction, and curated examples.
• It allows users to instantiate network fabrics in minutes using an easy-to-use, opinionated data model. It takes away the complexity of having to deal with references, dependencies or loops.
• Users can focus on describing the intended configuration while using a set of maintained and tested Terraform modules, without needing to understand the low-level ACI object model.

Example data model:

apic:
  tenants:
    - name: CiscoLive
      vrfs:
        - name: VRF1
        - name: VRF2
113
Comparison

Native Terraform:

resource "aci_tenant" "tenant_CiscoLive" {
  name = "CiscoLive"
}

variable "vrfs" {
  default = {
    VRF1 = {
      name = "VRF1"
    },
    VRF2 = {
      name = "VRF2"
    }
  }
}

resource "aci_vrf" "vrfs" {
  for_each  = var.vrfs
  tenant_dn = aci_tenant.tenant_CiscoLive.id
  name      = each.value.name
}

Nexus-as-Code:

apic:
  tenants:
    - name: CiscoLive
      vrfs:
        - name: VRF1
        - name: VRF2
114
Node Policies
• The data model is organized in a way that configurations are grouped around where the actual configuration (policy) is applied.
• All the configurations that are applied at the node level can be found under: apic → node_policies → nodes
• This includes configurations typically found in different places in the ACI object tree, for example the OOB node management address, which is configured under the mgmt tenant.
• Consolidating all node-level configurations in a single place eases maintenance, as for example we only have to update this single section when adding a new node.

apic:
  node_policies:
    nodes:
      - id: 101
        pod: 2
        role: leaf
        serial_number: FDO13026BEN
        name: leaf-101
        oob_address: 10.103.5.101/24
        oob_gateway: 10.103.5.254
        update_group: group-1
        fabric_policy_group: all-leafs
        access_policy_group: all-leafs
      - id: 1
        pod: 2
        role: apic
        oob_address: 10.103.5.1/24
        oob_gateway: 10.103.5.254

116
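The consolidation idea above can be sketched in a few lines of Python. This is a hypothetical helper, not part of Nexus-as-Code: it shows how node-level data kept in one section can be fanned out to the subsystems that consume it, e.g. collecting the OOB management addresses that ACI normally stores under the mgmt tenant.

```python
# Hypothetical helper (not part of Nexus-as-Code): read the single
# node_policies section and extract the OOB management address per node.
def oob_addresses(node_policies):
    return {
        n["id"]: (n["oob_address"], n["oob_gateway"])
        for n in node_policies["nodes"]
        if "oob_address" in n
    }

node_policies = {"nodes": [
    {"id": 101, "pod": 2, "role": "leaf",
     "oob_address": "10.103.5.101/24", "oob_gateway": "10.103.5.254"},
    {"id": 1, "pod": 2, "role": "apic",
     "oob_address": "10.103.5.1/24", "oob_gateway": "10.103.5.254"},
]}

addresses = oob_addresses(node_policies)
```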
Access Policies
• A number of profiles and selectors can be auto-generated by providing a naming convention.
• There is no need to worry about any of the profiles and selectors, as they will be added/deleted automatically according to the node and interface configuration.
• As nodes are added under apic → node_policies → nodes, the corresponding profiles will be created automatically.
• Once interface configurations are added under apic → interface_policies → nodes → interfaces, the corresponding interface selectors will be created.

apic:
  auto_generate_switch_pod_profiles: true

  interface_policies:
    nodes:
      - id: 101
        interfaces:
          - port: 1
            description: Linux Server 1
            policy_group: linux-servers
          - port: 2
            description: Linux Server 2
            policy_group: linux-servers
          - port: 47
            description: N7K Core
            policy_group: n7000-a
          - port: 48
            description: N7K Core
            policy_group: n7000-b
117
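The auto-generation idea can be illustrated with a small Python sketch. The selector naming convention used here is invented for illustration and is not Nexus-as-Code's actual convention: it simply derives one interface selector per configured port from the interface list.

```python
# Illustration only: derive one selector per configured port. The
# "node-<id>-eth1-<port>" naming scheme is a made-up convention.
def generate_selectors(node):
    return {
        f"node-{node['id']}-eth1-{i['port']}": i["policy_group"]
        for i in node["interfaces"]
    }

node = {"id": 101, "interfaces": [
    {"port": 1, "policy_group": "linux-servers"},
    {"port": 2, "policy_group": "linux-servers"},
    {"port": 47, "policy_group": "n7000-a"},
    {"port": 48, "policy_group": "n7000-b"},
]}

selectors = generate_selectors(node)
```

Adding or removing an entry in the interface list changes the generated selectors accordingly, which is the behavior the slide describes.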
Separate Data from Code
In order to ease maintenance we separate data (variable definition) from logic (infrastructure declaration), where one can be updated independently from the other.

apic.yaml:

apic:
  tenants:
    - name: CiscoLive
      vrfs:
        - name: CiscoLive
      bridge_domains:
        - name: vlan-100
          vrf: CiscoLive
      application_profiles:
        - name: dev
          endpoint_groups:
            - name: vlan-100
              bridge_domain: vlan-100
              physical_domains: ["l2"]

main.tf:

module "aci" {
  source  = "netascode/nac-aci/aci"
  version = "0.7.0"

  yaml_directories = ["data"]

  manage_access_policies    = true
  manage_fabric_policies    = true
  manage_pod_policies       = true
  manage_node_policies      = true
  manage_interface_policies = true
  manage_tenants            = true
}

119
ACI Terraform Provider
• Nexus-as-Code heavily relies on the generic aci_rest_managed resource of the ACI Terraform provider.
• This fully featured resource is able to manage any ACI object.
• The resource is not only capable of pushing a configuration, but also of reading its state and reconciling configuration drift.

resource "aci_rest_managed" "fvTenant" {
  dn         = "uni/tn-EXAMPLE_TENANT"
  class_name = "fvTenant"

  content = {
    name  = "EXAMPLE_TENANT"
    descr = "Example description"
  }

  child {
    rn         = "ctx-VRF1"
    class_name = "fvCtx"
    content = {
      name = "VRF1"
    }
  }
}

120
ACI Modules
• Terraform Modules allow us to introduce a level of abstraction similar to functions in programming languages.
• Where a Terraform resource typically represents a single ACI object, a Terraform module can represent a branch in the object tree.

[Diagram: an ACI object tree with Tenant at the root, branching into Bridge Domain (with DHCP Label, Subnet, and DHCP Option children) and App Profile → EPG (with L3out, Contract, Static Path, and Domain children); an ACI Module covers each such branch.]

121
ACI Module Example
• Modules allow us to break a configuration into more manageable pieces which can be developed and tested independently.
• Modules can be versioned and released independently.
• Modules enable easier shareability and cut down on duplicate work, as they can be shared with the wider community (Terraform Registry).
• Terraform recently introduced a testing experiment, which enables writing integration tests for modules directly in Terraform.

module "aci_endpoint_group" {
  source  = "netascode/endpoint-group/aci"
  version = ">= 0.1.0"

  tenant              = "ABC"
  application_profile = "AP1"
  name                = "EPG1"
  bridge_domain       = "BD1"
  contract_consumers  = ["CON1"]
  physical_domains    = ["PHY1"]
  vmware_vmm_domains = [{
    name = "VMW1"
  }]
  static_ports = [{
    node_id = 101
    vlan    = 123
    port    = 10
  }]
}

122
Nexus-as-Code Module
• Fabric Policies: configurations applied at the fabric level (e.g., fabric BGP route reflectors)
• Access Policies: configurations applied to external-facing (downlink) interfaces (e.g., VLAN pools)
• Pod Policies: configurations applied at the pod level (e.g., TEP pool addresses)
• Node Policies: configurations applied at the node level (e.g., OOB node management address)
• Interface Policies: configurations applied at the interface level (e.g., assigning interface policy groups to ports)
• Tenants: configurations applied at the tenant level (e.g., VRFs and Bridge Domains)

[Diagram: the Nexus-as-Code configuration areas mapped onto the ACI object tree (Tenant, Bridge Domain, App Profile, EPG, and related objects), with ACI Modules covering individual branches.]

123
YAML Layout
• As different teams might be responsible for different parts of the infrastructure, it is of paramount importance to allow enough flexibility when defining and maintaining the ACI configuration.
• The configuration can be split into multiple YAML files, each for example covering a specific logical section of the configuration.
• Nexus-as-Code does not dictate a specific schema, but instead allows for full flexibility to divide the configuration as needed.

$ tree -L 2
.
├── data
│   ├── apic.yaml
│   ├── access_policies.yaml
│   ├── fabric_policies.yaml
│   ├── node_policies.yaml
│   ├── pod_policies.yaml
│   ├── node_1001.yaml
│   ├── node_101.yaml
│   ├── node_102.yaml
│   ├── tenant_PROD.yaml
│   └── defaults.yaml
└── main.tf

124
Deep Merge YAML Content
YAML files can be split at arbitrary points, meaning the Nexus-as-Code module will combine and deep merge the contents of the YAML files, where data of two elements with the same keys will be combined. This for example enables splitting the configuration of a single tenant into two YAML files.

Management Service:

apic:
  tenants:
    - name: PROD
      vrfs:
        - name: MANAGEMENT
      bridge_domains:
        - name: VLAN100
          vrf: MANAGEMENT

HR Service:

apic:
  tenants:
    - name: PROD
      vrfs:
        - name: HR
      bridge_domains:
        - name: VLAN200
          vrf: HR
125
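The merge behavior described above can be sketched in Python. This is an illustrative simplification, not the actual Nexus-as-Code implementation: dicts merge recursively, and list items sharing the same "name" key are combined.

```python
# Simplified deep merge: recurse into dicts, merge list items by "name",
# and let the later file win for scalar values.
def deep_merge(a, b):
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for k, v in b.items():
            out[k] = deep_merge(out[k], v) if k in out else v
        return out
    if isinstance(a, list) and isinstance(b, list):
        out = list(a)
        for item in b:
            match = next((i for i, e in enumerate(out)
                          if isinstance(e, dict) and isinstance(item, dict)
                          and e.get("name") == item.get("name")), None)
            if match is None:
                out.append(item)
            else:
                out[match] = deep_merge(out[match], item)
        return out
    return b  # scalar: the later file wins

mgmt = {"apic": {"tenants": [{"name": "PROD",
        "vrfs": [{"name": "MANAGEMENT"}],
        "bridge_domains": [{"name": "VLAN100", "vrf": "MANAGEMENT"}]}]}}
hr = {"apic": {"tenants": [{"name": "PROD",
       "vrfs": [{"name": "HR"}],
       "bridge_domains": [{"name": "VLAN200", "vrf": "HR"}]}]}}

merged = deep_merge(mgmt, hr)
```

Merging the two fragments yields a single PROD tenant containing both VRFs and both bridge domains, as in the slide.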
Sensitive Information
The configuration might contain sensitive information that should not be stored in cleartext in the configuration. One common approach to handling secrets in the context of CI/CD platforms is injecting sensitive values as environment variables during runtime.

Stored configuration:

apic:
  access_policies:
    mcp:
      key: !env MCP_KEY

CI/CD platform runtime environment (secrets storage holds MCP_KEY: Cisco123Cisco123), resolved configuration:

apic:
  access_policies:
    mcp:
      key: Cisco123Cisco123

126
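A minimal sketch of this resolution step, under simplifying assumptions: in real YAML, !env is a tag handled by the loader, but here it is treated as a plain string prefix so the idea can be shown without a YAML library. This is not the Nexus-as-Code implementation.

```python
import os

# Walk a parsed configuration and replace "!env VAR" string values with the
# value of the environment variable injected by the CI/CD platform.
def resolve_env(node):
    if isinstance(node, dict):
        return {k: resolve_env(v) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_env(v) for v in node]
    if isinstance(node, str) and node.startswith("!env "):
        return os.environ[node.split(None, 1)[1]]
    return node

os.environ["MCP_KEY"] = "Cisco123Cisco123"  # injected at runtime in CI/CD
config = {"apic": {"access_policies": {"mcp": {"key": "!env MCP_KEY"}}}}
resolved = resolve_env(config)
```

The secret never appears in the stored files; only the resolved copy built at runtime contains the cleartext value.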
Data Model Documentation

127
Default Values
• Nexus-as-Code comes with pre-defined default values based on common best practices.
• In some cases, those default values might not be the best choice for a particular deployment and can be overwritten if needed.
• Appending suffixes to object names is a common practice that introduces room for human error. Using default values, such suffixes can be defined once and then consistently appended to all objects of a specific type, including its references.

defaults.yaml:

defaults:
  apic:
    tenants:
      bridge_domains:
        name_suffix: _bd
        unicast_routing: false

tenants.yaml:

apic:
  tenants:
    - name: CiscoLive
      bridge_domains:
        - name: vlan_101
        - name: vlan_102
        - name: vlan_103
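The defaulting behavior can be sketched as follows. This is a simplified assumption of how per-type defaults and name_suffix interact, not the real module logic: defaults fill in missing keys, explicit values win, and the suffix is appended to every object name.

```python
# Simplified defaulting: merge per-type defaults under each object (explicit
# values win) and append the configured name_suffix to every name.
def apply_defaults(objects, defaults):
    suffix = defaults.get("name_suffix", "")
    out = []
    for obj in objects:
        merged = {k: v for k, v in defaults.items() if k != "name_suffix"}
        merged.update(obj)          # explicit per-object values override
        merged["name"] = merged["name"] + suffix
        out.append(merged)
    return out

bd_defaults = {"name_suffix": "_bd", "unicast_routing": False}
bds = apply_defaults(
    [{"name": "vlan_101"}, {"name": "vlan_102", "unicast_routing": True}],
    bd_defaults,
)
```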
Unmanaged Parent Objects
In some cases you might only want to manage objects within a container. The managed flag indicates whether an object should be created/modified/deleted, or is assumed to exist already and just acts as a container for other objects.

Infrastructure team manages tenants:

apic:
  tenants:
    - name: Dev
    - name: Stage
    - name: Prod

Developers manage tenant objects:

apic:
  tenants:
    - name: Dev
      managed: false
      vrfs:
        - name: VRF1
        - name: VRF2
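The managed-flag semantics can be sketched like this — a simplified assumption, not the exact module logic: objects marked managed: false are treated as pre-existing containers, so only their child objects are deployed.

```python
# Simplified managed-flag semantics: skip deployment of any tenant marked
# managed: false, but still deploy the objects it contains.
def objects_to_deploy(tenants):
    deploy = []
    for t in tenants:
        if t.get("managed", True):
            deploy.append(("tenant", t["name"]))
        for vrf in t.get("vrfs", []):
            deploy.append(("vrf", t["name"], vrf["name"]))
    return deploy

tenants = [
    {"name": "Dev", "managed": False,
     "vrfs": [{"name": "VRF1"}, {"name": "VRF2"}]},
]

deploy = objects_to_deploy(tenants)
```

With the developers' file above, only the two VRFs are deployed; the Dev tenant itself is left to the infrastructure team.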
Pre-Change Validation iac-validate

A CLI tool to perform format, syntactic, semantic and compliance validation of Nexus-as-Code
YAML files.

$ iac-validate -h
Usage: iac-validate [OPTIONS] [PATHS]...

A CLI tool to perform syntactic and semantic validation of YAML files.

Options:
--version Show the version and exit.
-v, --verbosity LVL Either CRITICAL, ERROR, WARNING, INFO or DEBUG
-s, --schema FILE Path to schema file. (optional, default:
'.schema.yaml', env: IAC_VALIDATE_SCHEMA)
-r, --rules DIRECTORY Path to semantic rules. (optional, default:
'.rules/', env: IAC_VALIDATE_RULES)
-o, --output FILE Write merged content from YAML files to a new YAML
file. (optional, env: IAC_VALIDATE_OUTPUT)
-h, --help Show this message and exit.
Semantic Validation iac-validate

Semantic validation is about verifying specific data-model-related constraints like referential integrity. It can be implemented using a rule-based model, as is commonly done with linting tools. Examples are:
• Check uniqueness of key values (e.g., node IDs)
• Check references/relationships between objects (e.g., an Interface Policy Group referencing a CDP Policy)

Rule 101: Verify unique keys ['apic.node_policies.nodes.id – 102']


Rule 201: Verify references ['apic.node_policies.nodes.update_group – GROUP1']
Rule 205: Verify Access Spine Interface Policy Group references
['apic.interface_policies.nodes.interfaces.policy_group – SERVER1']
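Two such rules can be sketched in Python. The data layout below is simplified for illustration (the update_groups list is a made-up placement, not iac-validate's actual schema): one rule checks node-ID uniqueness, the other checks that update_group references an existing group.

```python
# Rule sketch: report duplicate node IDs.
def verify_unique_node_ids(data):
    seen, dupes = set(), []
    for node in data.get("node_policies", {}).get("nodes", []):
        if node["id"] in seen:
            dupes.append(node["id"])
        seen.add(node["id"])
    return dupes

# Rule sketch: report update_group references that point to no defined group.
def verify_update_group_refs(data):
    groups = {g["name"]
              for g in data.get("node_policies", {}).get("update_groups", [])}
    return [n["update_group"]
            for n in data.get("node_policies", {}).get("nodes", [])
            if "update_group" in n and n["update_group"] not in groups]

data = {"node_policies": {
    "update_groups": [{"name": "group-1"}],
    "nodes": [{"id": 101, "update_group": "group-1"},
              {"id": 102},
              {"id": 102, "update_group": "GROUP1"}]}}
```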
Compliance Validation
NDI Pre-Change Analysis
Nexus Dashboard Insights (NDI) continuously pulls the entire policy, every configuration, and the network-wide state, along with the operator intent, and builds from these comprehensive and mathematically accurate models of network behavior. It combines this with codified Cisco domain knowledge to generate "smart events" that pinpoint deviations from intent and offer remediation recommendations.
The Pre-Change Analysis feature can be used to assess the impact of a particular change before applying it to the infrastructure. This is done by applying the planned changes to the model and then analyzing the impact.
NDI Pre-Change Validation nexus-pcv

A CLI tool to perform a pre-change analysis on Nexus Dashboard Insights or Network Assurance
Engine. It can either work with provided JSON file(s) or a terraform plan output from a Nexus-as-
Code project. It waits for the analysis to complete and evaluates the results.

$ nexus-pcv -h
Usage: nexus-pcv [OPTIONS]

A CLI tool to perform a pre-change validation on Nexus Dashboard Insights or


Network Assurance Engine.

Options:
-i, --hostname-ip TEXT NAE/ND hostname or IP (required, env:
PCV_HOSTNAME_IP).
-u, --username TEXT NAE/ND username (required, env: PCV_USERNAME).
-p, --password TEXT NAE/ND password (required, env: PCV_PASSWORD).
-d, --domain TEXT NAE/ND login domain (optional, default: 'Local',
env: PCV_DOMAIN).
-g, --group TEXT NAE assurance group name or NDI insights group
name (required, env: PCV_GROUP).
-s, --site TEXT NDI site or fabric name (optional, only required
for NDI, env: PCV_SITE).
Testing
There are certain aspects we can only verify after deployment, for example operational state. Various testing frameworks can be used for this; one example is Robot Framework. Robot's language-agnostic syntax, with libraries like Requests and JSONLibrary, can be used to write tests against REST APIs.
In combination with templating languages like Jinja, we can render test cases dynamically based on the desired state.
Tests can typically be categorized into three groups:
• Configuration Tests: verify that the desired configuration is in place
• Health Tests: leverage the built-in APIC fault correlation to retrieve faults and health scores and compare them against thresholds and/or previous state
• Operational Tests: verify operational state according to input data, e.g., BGP peering state
Testing iac-test

A CLI tool to render and execute Robot Framework tests using Jinja templating.

$ iac-test -h
Usage: iac-test [OPTIONS]

A CLI tool to render and execute Robot Framework tests using Jinja
templating.

Options:
-d, --data PATH Path to data YAML files. (env: IAC_TEST_DATA)
[required]
-t, --templates DIRECTORY Path to test templates. (env: IAC_TEST_TEMPLATES)
[required]
-f, --filters DIRECTORY Path to Jinja filters. (env: IAC_TEST_FILTERS)
--tests DIRECTORY Path to Jinja tests. (env: IAC_TEST_TESTS)
-o, --output DIRECTORY Path to output directory. (env: IAC_TEST_OUTPUT)
[required]
-i, --include TEXT Selects the test cases by tag (include). (env:
IAC_TEST_INCLUDE)
-e, --exclude TEXT Selects the test cases by tag (exclude). (env:
IAC_TEST_EXCLUDE)
--render-only Only render tests without executing them. (env:
IAC_TEST_RENDER_ONLY)
Robot/Jinja Example iac-test

*** Settings ***


Documentation Verify Tenant Health
Suite Setup Login APIC
Default Tags apic day2 health tenants non-critical
Resource ../../apic_common.resource

*** Test Cases ***


{% for tenant in apic.tenants | default([]) %}
Verify Tenant {{ tenant.name }} Faults
${r}= GET On Session apic /api/mo/uni/tn-{{ tenant.name }}/fltCnts.json
${critical}= Get Value From Json ${r.json()} $..faultCountsWithDetails.attributes.crit
Run Keyword If ${critical} > 0 Run Keyword And Continue On Failure
... Fail "{{ tenant.name }} has ${critical} critical faults"

Verify Tenant {{ tenant.name }} Health


${r}= GET On Session apic /api/mo/uni/tn-{{ tenant.name }}/health.json
${health}= Get Value From Json ${r.json()} $..healthInst.attributes.cur
Run Keyword If ${health} < 100 Run Keyword And Continue On Failure
... Fail "{{ tenant.name }} health score: ${health}"

{% endfor %}
CI/CD Workflow Example
1. Dev branch: operator pushes to feature branch → workflow triggered → syntax and semantic validation → Webex notification
2. Pull request: operator opens GitHub pull request → workflow triggered → Terraform plan → NDI Pre-Change Analysis → Webex notification
3. Main branch: merge into main branch → workflow triggered → deployment → testing → run NDI Delta Analysis → Webex notification
Scalability
Adding more and more objects to your configuration can cause a few problems:
• The Terraform state file becomes bigger, and making changes with Terraform takes much longer.
• A single shared state file is a risk. Making a change in a development tenant could have implications for a production tenant.
• There is no ability to run changes in parallel. Only one concurrent plan may run at any given time, as the state file is locked during the operation.
• With Nexus-as-Code, state can be split into multiple workspaces while retaining a single set of YAML files.

$ tree -L 2
.
├── data
│   ├── apic.yaml
│   ├── access_policies.yaml
│   ├── fabric_policies.yaml
│   ├── node_policies.yaml
│   ├── pod_policies.yaml
│   ├── node_1001.yaml
│   ├── node_101.yaml
│   ├── node_102.yaml
│   ├── tenant_PROD.yaml
│   ├── tenant_DEV.yaml
│   └── defaults.yaml
└── workspaces
    ├── tenant_PROD
    │   └── main.tf
    └── tenant_DEV
        └── main.tf
