Vectra-DC-Design-Guide
Using the information in this guide, you will be able to accomplish the following planning and deployment tasks:
NOTE: The Vectra solution supports physical Sensors and virtual Sensors (vSensors). In text that refers specifically to vSensors, the term
“vSensors” is used. In text that applies to both types of Sensors, the term “Sensor” is used. In text that applies specifically to physical
Sensors, this is stated in the text.
Additional Information
See the resources listed in Table 1 for additional information or to give feedback.
Information Access
Detection Models Navigate to https://<your_Vectra_Brain’s_Hostname>/resources/
Then click “download” for the Understanding Vectra Detections PDF.
• Brain: The Brain is the central controller, which is paired with distributed Sensors.
• Sensors: Sensors capture network traffic flowing through the physical and virtual network, then distill the captured traffic into metadata
about each traffic session. A session consists of the standard 5-tuple: source and destination IP addresses, Layer 4 protocol (TCP or
UDP), and source and destination protocol ports.
Each Sensor sends its distilled metadata to the Brain. The Brain then analyzes and correlates the metadata to detect suspicious activity or
potential threats. The Sensors are a set of eyes distributed throughout the network that provide the Brain with a relatively complete view
of the hosts and compute machines in the environment. Figure 1 shows a high-level, logical view of how the Vectra System components
relate to each other, and how they receive packet copies from the monitored network.
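As an illustration (not Vectra's internal data format), the 5-tuple session key described above can be sketched as a small structure:

```python
from dataclasses import dataclass

# Illustrative sketch of the standard 5-tuple that identifies a session:
# source/destination IPs, Layer 4 protocol, and source/destination ports.
@dataclass(frozen=True)
class SessionKey:
    src_ip: str
    dst_ip: str
    protocol: str  # "TCP" or "UDP" (Layer 4)
    src_port: int
    dst_port: int

key = SessionKey("10.1.2.3", "10.4.5.6", "TCP", 49152, 443)
print(key.protocol)  # -> TCP
```

Because the structure is frozen (immutable), it can serve as a dictionary key when grouping packets into sessions.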
Each Sensor ingests network traffic when (one of) its capture interface(s) connects to a switch port configured as a Switch Port Analyzer
(SPAN), to a Test Access Port (TAP), or to a similar construct in a virtual environment such as a VMware port group configured in
Promiscuous Mode.
Each Sensor also has a second interface, its management interface, over which it talks to the Brain.
In order to properly analyze network connections for suspicious behavior, each Sensor must see bidirectional traffic flows of the
hosts/compute nodes for which monitoring is desired.
When Sensor functionality is enabled, physical Ethernet ports that are dedicated to packet capture functions are plugged into SPANs
or TAPs.
The following types of Sensor appliances are available: physical Sensors and virtual Sensors (vSensors). Physical Sensor functions can be
performed by all X-series platforms when configured in Sensor-only mode, and by the S-Series dedicated Sensor appliances.
Figure 2 depicts a Vectra Data Center Solution that uses the following X-series components:
While this example shows model X80, any X-series platform model can serve as a Brain. Likewise, in addition to the S2 platform, any
X-series platform model can also serve as a physical Sensor.
Figure 2 also shows the Vectra cloud services to which a Brain will need to connect, as well as a VMware vCenter server to which the Brain
will make Application Programming Interface (API) calls.
Every Vectra System component, Brain and Sensor, has a dedicated management port called “mgt1”. Table 3 describes how the Brain and
Sensors use the mgt1 port.
Sensor • Management by Brain (liveness detection, updates, pairing, and so on over SSH)
• Metadata and PCAP delivery to Brain
• CLI for bootstrap configuration, pairing and provisioning (over SSH)
• DNS and ping for connectivity verification
Understanding these message types helps during planning for Vectra System insertion into the data center network, as described in the
Select Network Insertion section.
NOTE: When the term “host” is used in this document, it refers to a node on the network that originates and terminates connections, such
as end points and servers. Under this definition, “hosts” may be either physical or virtual machines.
When the term “physical host” is used, it refers to server hardware on which virtual machines reside.
• DC host to DC host
• Physical node to virtual node
Consider where in your network each of these interactions occurs. The next section will explore the physical placement of the Brain device
and Sensors, in order to capture an appropriate amount and appropriate cross-section of traffic.
These are the interfaces that need to be inserted into the network topology:
Figure 3 shows an example of a DC topology. Each potential insertion point for Vectra components is labeled (1-9). Each of these insertion
points is summarized in Table 5 and described in more depth in the following subsections.
The following subsections provide more information about each of the “insertion points” listed above.
Management Network
Each Vectra component—the Brain and every Sensor and vSensor—has a management port, labeled “mgt1”. The mgt1 interfaces are
layer 3, IP interfaces. Each mgt1 interface will require a unique unicast IP address.
The IP addresses of the mgt1 ports are not required to be on the same Layer 2 Ethernet segment, in the same virtual LAN (VLAN), or in
the same IP subnet. They are designed to reach one another over routed networks.
If East-West traffic between VMs on the same physical host travels only through that host’s virtual switch and does not traverse a link
outside of the physical host, using a vSensor installed on the host hypervisor is the only way to capture the traffic between those two VMs.
If the East-West traffic is forwarded through a router that is outside the VM, a Sensor inserted in the traffic path between the router and the
VM can capture the traffic. However, the only way to ensure that 100% of the traffic that passes through a virtual switch is captured is to
install a vSensor on the VM’s hypervisor.
If a vSensor is used, it must be installed on the hypervisor using an Open Virtualization Format (OVA) file downloaded from the Brain that
will manage the vSensor. (Typically, the network will have a single Brain. The OVA files for all vSensors deployed within that network must
come from this Brain.)
The vSensor has a management port (mgt1), and one or more capture ports. Place mgt1 in the port group used by the virtual switch for
management traffic.
Place the capture port in a port group that has Promiscuous Mode enabled. Promiscuous Mode enables the capture port to copy all
packets traversing the virtual switch to the port group. Any ports attached to the port group also will receive copies of the captured traffic.
For environments that make use of multiple virtual switches, up to three virtual switches can be monitored using a single vSensor.
An example is depicted in Figure 4. The vSensor’s mgt1 interface’s vNIC (above in red) will attach to the management port group,
“Mgmt_Servers”, while the capture interface of the vSensor’s vNIC (above in green) will connect to the port group dedicated for monitoring,
“Monitor_US.SJ.07”.
Additionally, to capture all traffic, assign the capture port group to all VLANs. If using VMware Virtual Standard Switch (VSS), define the port
group’s VLAN setting as “All (4095)”. If using Virtual Distributed Switch (VDS), as depicted in Figure 4 above, configure the port group’s
VLAN setting as “0-4094”. If the port group is assigned to only a subset of VLANs, traffic for only those VLANs will be captured.
In some environments, two VMs on the same hypervisor communicate with each other, but their respective ports are on different VLANs
or subnets. In such a configuration, traffic between the two VMs' ports must first leave the physical host's NIC to be routed to the
destination subnet, only to come right back to the destination VM on the same physical host, on a different VLAN or subnet.
If there are few or no intra-hypervisor flows, or if the network topology normally routes packets outside the physical host before returning
to a different VM on the same hypervisor, then vSensors may not be necessary. In this case, physical Sensors may adequately monitor the
VMs' traffic.
In such a case, Vectra recommends inserting the physical Sensors’ capture ports on the internal side of the DC network, behind the DC
edge firewall and proxy, and as close to the compute machines’ physical access switches as possible.
It is advantageous to place the physical Sensor as close as possible to the physical host to maximize visibility of all relevant traffic.
The next best option is to place the Sensor in front of a spine switch. In this case, make sure that the switch fabric can send any traffic from
any switch comprising the spine to the SPAN port connected to the Sensor. The entire spine must be visible in a leaf-spine architecture.
Also make sure that the SPAN configuration delivers bidirectional traffic, both the sent and received packets for any (optimally, all)
network connections.
Bandwidth Capacity
Bandwidth capacity involves the following:
• Traffic Volume: The switch where the Sensor is placed must be able to copy the volume of traffic desired at the SPAN port without
slowing down the forwarding function fundamental to the switch’s operation. Make sure to not oversubscribe a SPAN port when
spanning VLANs.
• Traffic Spikes: The Sensor must be rated to handle the volume of traffic that may be emitted from a second- or third-tier switch. This
includes capacity to handle traffic spikes that may be emitted from the switch.
Traffic volumes greater than the supported level will not cause harm to the Sensor. They also will not disrupt the surrounding network.
However, the excess packets will be dropped, causing a black-out period for some sessions. The Brain will not have complete traffic flow
information to analyze for threats.
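As a rough planning aid, the oversubscription check described above can be sketched as follows; the helper name and the traffic figures are invented for illustration:

```python
# Hypothetical helper: check whether the aggregate traffic of spanned VLANs
# would oversubscribe a SPAN port of a given capacity. All numbers are
# made up for the example.
def span_oversubscribed(vlan_mbps, span_capacity_mbps):
    """Return (total, oversubscribed) for a set of spanned VLAN peak rates."""
    total = sum(vlan_mbps.values())
    return total, total > span_capacity_mbps

# Three VLANs spanned to one 10 GbE SPAN port:
total, over = span_oversubscribed(
    {"vlan10": 4200, "vlan20": 3100, "vlan30": 3800}, 10_000)
print(total, over)  # -> 11100 True: this SPAN port would drop packets
```

A real assessment would use measured peak rates in both directions, since a SPAN port copies sent and received packets.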
In this case, the traffic can be aggregated from anywhere in the multi-switch fabric, and pumped through a specific port. Proximity of the
Sensor is no longer a consideration. Instead, the TAP aggregator must have the capacity to handle the traffic.
The RAC assigns IP addresses from a pool to the VPN NICs of the remote devices. The IP addresses belong to the internal network.
Usually, the address pool is in a dedicated subnet for the RAC, and the IPs are re-used regularly as the remote devices spin up and tear
down VPNs to the RAC.
Placing a Sensor between the inside interface of the RAC and the rest of the DC allows the Vectra System to see all the remote host traffic
into the network, which is important because remote hosts are a common attack vector.
This Sensor placement also allows the Vectra System to view the authentication activity from end-point devices and use the information to
clearly identify the devices, even if a device is using an IP address that was used recently by several other devices. This visibility is highly
recommended, as the ability to monitor authentication activity will benefit a variety of Vectra detection models.
Place the Sensor on the inside of the DC, where the Sensor can capture the VPN traffic after it is decapsulated by the RAC.
The visibility provided by the authentication traffic significantly aids the Vectra System’s mapping of sessions to specific machines/devices,
and their owners. This may be accomplished with either vSensors or physical Sensors.
Some data centers have only one Internet ingress/egress point that internal systems’ communications must cross, while others have many.
Place Sensors' capture ports just inside any such Internet ingress/egress points, such that 100% of North-South traffic traveling through the
DC firewall perimeters is inspected by the Vectra System.
This Sensor placement ensures the ability to observe any unauthorized or suspicious connections involving outside attackers, regardless of
where other Sensors are placed internally.
Usually, a physical Sensor is placed at the internal side of a perimeter firewall, but a vSensor can be used, depending on your environment.
In the case of a perimeter employing NAT, place the capture function on the internal segment. Because most DCs use private IP addresses
(as described in RFC 1918), perimeter firewalls typically use network address translation (NAT) to rewrite the Layer 3 (IP) and Layer 4
(TCP/UDP) information so that communication can exit to the Internet. In order to identify compromised systems inside your network, insert
capture functionality where the Sensors can see the internal, private addresses of the machines that make outbound connections. This
enables you to pinpoint the exact compromised machine when a detection is made.
A Sensor placed outside the firewall would see all detections from the same IP address(es), the firewall’s external and routable IP
interface(s), and would have no idea which internal machine actually was involved in a connection.
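A toy sketch of the effect described above; the addresses and the `nat_rewrite` helper are hypothetical:

```python
# Toy illustration of why capture must sit inside the NAT boundary: after
# translation, every outbound flow carries the firewall's public address,
# so the internal host's identity is lost to an outside Sensor.
def nat_rewrite(flow, public_ip):
    return {**flow, "src_ip": public_ip}

flows = [{"src_ip": "10.0.0.7", "dst_ip": "198.51.100.1"},
         {"src_ip": "10.0.0.9", "dst_ip": "198.51.100.1"}]
outside = [nat_rewrite(f, "203.0.113.2") for f in flows]
print({f["src_ip"] for f in outside})  # -> {'203.0.113.2'}: two hosts, one address
```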
Fluctuations in unacceptable connection attempts can indicate compromised machines or misconfigurations. A Sensor placed outside the
firewall sees only those internally initiated connections that succeed past the firewall. Moreover, a Sensor placed outside the firewall would
see all ingress, Internet-side initiated connections, including those that are dropped by the firewall. These ingress connections are likely to
be very large in number due to the constant onslaught of attackers against high-value DC assets, and will likely create far more noise in the
Vectra System than your operations are prepared to wade through.
For this reason, even in routed, non-NAT environments, Vectra recommends placing the Sensor on the network segment inside the firewall.
Tight and highly monitored access controls are placed on connections to/from machines in the DMZ to machines inside the DC.
For the same reasons as noted for Sensor placement at the internal side of perimeter firewalls, Vectra recommends placing Sensors on the
DC side of DMZ borders, where the internal DC connects to any DMZ security zones.
Capturing data from the management subnets allows the Vectra system to watch administrative connections for suspicious activity and
report anomalies in management patterns.
When selecting Sensor insertion points, make sure that each Sensor's capture port(s) are placed where they can see bidirectional traffic
(both the ingress and egress legs of a connection flow), so that the Sensor receives both legs of any flow. Figure 5 shows an example of a
fully redundant, active-active perimeter with asymmetric forwarding.
The Sensor in this example, an X-series platform model X24 in Sensor-only mode, is placed for full, bidirectional flow capture.
This Sensor receives SPAN traffic from both switch 1 (S1) and switch 2 (S2) on its two 10 GE optical ports, eth5 and eth6. The SPAN rules
on the switches are set to copy and forward packets from the same VLANs.
When the Server’s flow exits the network through S1, firewall 1 (FW1), and router 1 (R1), the X24 Sensor sees the beginning of the flow on
its port eth5.
When the response from the vendor’s market data feed service arrives on router 2 (R2), firewall 2 (FW2), and then S2, the X24 Sensor sees
the response on its port eth6, and can construct the full flow.
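The reason capture on both eth5 and eth6 lets the full flow be reconstructed can be illustrated with a small sketch (not Vectra's implementation): normalizing the 5-tuple makes both legs of an asymmetric flow map to the same key.

```python
# Normalize a 5-tuple so that A->B and B->A produce the same flow key,
# allowing the two legs of an asymmetric flow to be matched.
def canonical_flow_key(src_ip, src_port, dst_ip, dst_port, proto):
    a = (src_ip, src_port)
    b = (dst_ip, dst_port)
    # Order the endpoints deterministically so both directions collide.
    return (proto,) + (a + b if a <= b else b + a)

outbound = canonical_flow_key("10.0.0.5", 49200, "203.0.113.9", 443, "TCP")  # leg seen on eth5
inbound = canonical_flow_key("203.0.113.9", 443, "10.0.0.5", 49200, "TCP")   # leg seen on eth6
print(outbound == inbound)  # -> True: both legs reassemble into one flow
```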
More detail on this topic is found in the Vectra System Design Guide, in the section “Best Practices for Configuring SPAN”.
For example, session duplication occurs when a VM is talking to an Internet-dwelling system, and Sensors are capturing packets at the
virtual switch, at a spine switch, and also at the DC firewall egress. In this example, the Vectra System will observe the same flow at three
different locations.
You do not need to worry about session duplication when planning for Vectra insertion. The Vectra System is designed to identify and
handle these cases automatically.
Within the Vectra System, the following types of Sensors are available. The best Sensor type to deploy in a given DC network depends
upon the environment and on scale requirements.
Throughput is the primary resource to keep in mind when determining the type of Sensor to deploy.
Some of the available Sensor types are dedicated Sensors, while the others are X-series platforms that can be run in Brain-only,
Sensor-only, or Mixed mode.
• vSensors integrate with VMware vSphere, are deployed on each hypervisor, and have a full view of the virtualized infrastructure.
• Physical Sensor deployment requires more in-depth planning. This is due to the more distributed nature of the physical infrastructure
(core, distribution, and access) when compared to a virtual infrastructure.
If you have not already, please review the above section Select Network Insertion Points for Vectra Components to determine which of the
insertion points described you want to mimic in your environment. Once this determination is made, analyze the traffic and determine what
size and model of physical Sensor needs to be deployed.
X24 in Sensor Mode: Although the X24 is typically deployed as a Brain only or in Mixed mode, it can also be deployed as a stand-alone Sensor.
Deployed in Sensor Mode, the X24 supports a total of 8 Gbps of sustained throughput.
Interfaces:
• One management port
• Six capture ports:
• Four 10/100/1000 Mbps copper ports
• Two optical 10 Gbps SFP+ ports
X80 in Sensor Mode: The X80 typically is deployed in Brain-only or Mixed mode, but also can be deployed as a stand-alone Sensor.
Deployed in Sensor Mode, the X80 supports a total of 20 Gbps of sustained throughput.
Interfaces:
• One management port
• Four optical 10 Gbps SFP+ capture ports
The benefit of this deployment model is that it removes the need for physical device racking and stacking in order to expand detection scope.
• Partial visibility: Only traffic traversing the physical switch will be seen. Any intra-hypervisor traffic on this or other hypervisors will not
be captured. Also, any traffic flows where the path between the source and destination passes switches, topologically excluding the
spanning switch, will not be captured.
• Total bandwidth consumable from the physical SPAN / TAP is the performance limit of the vSensor. Currently, the limit is 2 Gbps with an
8-vCore vSensor.
Deployment details for this use of a vSensor can be found in the Sensor Installation Guide.
This section describes the system and environment requirements for inserting vSensors into your DC network.
With a DNS entry, each Sensor may be configured to reach the Brain using the Brain’s hostname, for example: vectra01.company.com.
This allows the Brain’s IP address to be changed at will, and, as long as the DNS record is updated with the new IP address, the
Sensors will continue to connect to the Brain without issue. Otherwise, if the Sensors’ configurations use the Brain’s IP address instead
of the hostname, and the Brain’s IP changes, each Sensor would then need to be accessed via SSH to manually change its Brain
configuration setting.
NOTE: For the network identities of devices, Vectra recommends using hostnames instead of IP addresses. If the IP address of the Brain
ever needs to be changed, you will not need to manually log in and change all of the vSensors’ Brain settings.
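A minimal check of this recommendation can be scripted; `vectra01.company.com` is the placeholder hostname from the text, and `resolve_brain` is a hypothetical helper:

```python
import socket

# Verify that the Brain hostname the Sensors will be configured with
# actually resolves in DNS before deployment.
def resolve_brain(hostname):
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

ip = resolve_brain("vectra01.company.com")  # placeholder hostname from the text
if ip is None:
    print("DNS record missing: Sensors configured by name will fail to pair")
else:
    print("Brain resolves to", ip)
```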
Table 7 lists the resources required on each VM host machine where a vSensor will be installed.
Firewall/Access Control
Access control mechanisms may be in effect in your network environment that, left unchanged, would prevent the Vectra System from
operating correctly once the components are deployed.
NOTE: Use hostnames instead of IP addresses if your access control systems/firewalls support it. This allows the IP address of the
interface to be renumbered, the DNS entry to be updated accordingly, and the Vectra System’s access control to work automatically
through such a change, without reconfiguration of the access control system.
Table 8: Access control connections used by X-series (sourced from Brain mgt1 hostname)
TCP or UDP 9970, initiated from Brain to Vectra Cloud Service: Allows authenticated and authorized Vectra staff access to the Brain, and
from the Brain to the vSensors. Note: UDP will operate faster than TCP.*
TCP 22 (SSH) to the vSensors' mgt1 hostnames, for troubleshooting: For connection from the Brain CLI to the CLI on a vSensor. May be
used by customer support, or during the product Beta cycle.
* Such access may be needed for technical support, professional services, or Beta programs.
Table 8 lists the connections used by the Brain. Table 9 lists the connections used by Sensors.
For optimal coverage, all VLANs carrying application or management traffic should be captured.
Exceptions
VLANs dedicated to network-based I/O traffic, such as iSCSI or vMotion, do not need to be monitored. If possible, such VLANs should
NOT be captured, for the following reasons:
• The Vectra System does not detect threats at an I/O level, so there is no benefit in consuming such packets.
• The volume of these connections is orders of magnitude larger than that of application and management traffic. Excluding these
connections therefore will improve the chances that the capture volume arriving at the vSensor’s capture port will be within the
supported volume.
NOTE: In the case of a VMware VSS switch, VLANs cannot be isolated from the promiscuous setting – the configuration is either one
VLAN or all VLANs (4095). For VSS, the I/O-related VLANs will need to be taken into account when assessing the bandwidth requirements.
To determine the network utilization on a hypervisor, you can use either of the following:
1. Log in to vSphere.
2. Navigate to Hosts and Clusters.
3. On the Hosts and Clusters page, select the host to be analyzed in the left pane. In the right pane, select the Monitor tab.
4. Once the main pane updates, select Performance.
5. Select Overview and scroll down until the network statistics are visible. (For a more detailed view of the traffic, select Advanced.)
Vectra UI
If the Vectra appliance is configured and is synchronized with vSphere, the same information can be gathered from the Vectra UI. (See
Table 10 in Brain Initial Configuration Settings for Data Center Deployment.)
• VSS: This is the basic Virtual Switch that comes with a VMware installation.
• VDS: More robust networking capabilities come along with the ability to define one Virtual Switch with port groups and networking
objects that will then stretch across multiple physical hosts. VDS is available with VMware’s Enterprise Plus feature pack.
NOTE: A vSensor is a virtual appliance. Though its VM hardware resources (including processor, memory, and drive) can be
reconfigured (increased or decreased) through vSphere like other VMs, such changes will not be recognized by the vSensor’s OS and
potentially will have an adverse effect on the vSensor. DO NOT CHANGE THE vSENSOR’s HARDWARE RESOURCES from those set
by the OVA at installation.
The following instructions apply to vSphere 6.0.0. While the exact windows and options may differ slightly in your version of vSphere, the
concepts will be very similar.
1. From within the vSphere Web Client, select the host that needs to be configured from the navigator pane on the left.
2. Once the details for this host have populated in the primary pane on the right, select the Manage tab.
3. Within the Manage view, select Networking > Virtual Switches and select the virtual switch that you want to monitor from within the
Virtual Switch table.
4. Click on the Add networking icon.
6. Next, assign the port group to the appropriate virtual switch. Assuming the appropriate virtual switch was selected in step 3 above,
the correct virtual switch should already be defined.
If you accidentally select an incorrect virtual switch, click Browse and select the appropriate switch.
Click Next.
7. In Connection Settings, enter a Network label of your choice to serve as the name for this port group (example: “Capture” or
“Monitor”), and set the VLAN ID to “All (4095)”.
Click Next.
8. Finally, review the configuration and click Finish to create the Capture port group.
Exception: As mentioned in Exceptions, if any VLANs are dedicated to I/O traffic or vMotion traffic, exclude those VLANs from the port
group. In this case, in the port group configuration, a list of VLANs or VLAN ranges (example: “200-500”) can be provided that will allow for
selective interaction between the port group and the VLANs.
1. From within the vSphere Web Client, select the host that needs to be configured from the navigator pane on the left.
2. Once the details for this host have been populated into the primary pane on the right, select the Manage tab.
3. In the Networking tab, select the distributed virtual switch that the port group will be assigned to. This should be the VDS that you
want to monitor.
4. Select Actions > Distributed Port Group > New Distributed Port Group. This will launch a window to walk you through the
configuration of the port group.
5. Assign a name to the Capture port group and click OK.
6. Configure the port group. In the VLAN section, select a type of VLAN Trunking and provide a list of VLANs to be monitored. VLANs can
be entered as single VLAN IDs or as ranges. Separate the VLAN IDs or ranges with commas (example: 10,12,15-25,27-4094).
To include all possible VLANs (both now and anything that may be added in the future), use 0-4094.
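A small helper (hypothetical, for illustration) shows how such a trunk-range string expands into individual VLAN IDs:

```python
# Expand a VDS trunk-range string such as "10,12,15-25,27-4094"
# into the set of VLAN IDs it covers.
def expand_vlan_ranges(spec):
    vlans = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-")
            vlans.update(range(int(lo), int(hi) + 1))
        else:
            vlans.add(int(part))
    return vlans

v = expand_vlan_ranges("10,12,15-25,27-4094")
print(len(v), 11 in v, 4094 in v)  # -> 4081 False True
```

Such a helper is useful for double-checking that a trunk specification does not accidentally omit a VLAN (here, VLAN 11 and 26 are excluded by design).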
Though Vectra provides CLI on the X-series for rapid creation and provisioning of vSensors, Vectra recommends that you create and
deploy the first three vSensors through the vSphere client interface, one at a time, so that you understand the constructs and settings
at play.
After deploying and verifying the first three vSensors manually, you may want to use the vSensor deployment CLI to quickly create and
provision the additional vSensor virtual appliances required for your monitoring design. Details about the vSensor deployment CLI are
located in the Sensor Installation Guide.
Sheet 2 on the preparation worksheet (see Appendix: Data Center Deployment Worksheet Hard Copy) contains spaces to fill in the above
list of configuration settings for each additional vSensor to be deployed, so that many vSensors’ details can easily be added, copied, and
moved. Gathering and noting this information before you begin deployment will significantly ease and hasten the deployment.
Capture port group: Virtual switch port group that will be used to copy the packets from the virtual switch to the Vectra vSensor port
connected to the Capture port group (the vSensor capture port).
If you already have a port group configured on this virtual switch for monitoring, you can use that same port group. If not, you will
need to create a new port group.
Set the port group to Promiscuous Mode, and set the port group’s VLANs to match the VLANs to be monitored.
VMware VSS allows either a single VLAN ID or all VLANs (VLAN ID 4095).
NOTE: The deployment planning spreadsheet already has “4095” as the default for VSS. Vectra recommends this setting, unless
there is a reason why only one VLAN should be monitored. See Deploy vSensors for a more detailed discussion and instructions on
VLAN configuration for the monitor port group.
After determining which VLANs to monitor, enter them into the deployment planning spreadsheet.
vSensors' management interfaces port group: Name of the port group to use for the vSensors' management interface (mgt1).
Make a note of the VLAN associated with the port group. If using VMware VDS, this port group will be the same for all hypervisors.
Otherwise, if using VMware VSS, you will need to create the port group on each hypervisor’s virtual switch. Nonetheless, Vectra
recommends using the same port group object name across all relevant hypervisors.
Ensure this port group’s VLAN is configured on the corresponding ports of all upstream switches to which the target physical hosts’
physical uplink ports are connected. See Enable Sensor Management VLAN on Upstream Switches for more information.
Virtual switch on hypervisor: Name of the virtual switch on the hypervisor where the vSensor is installed.
If using VMware VDS, this will be the same for all hypervisors. If using VMware VSS, the name may be different for each hypervisor.
Datastore: Name of the datastore on the physical host's VM disk. This should be determined by your VMware administrator.
Management IP address: IP address of the management interface (not the data collection interface) on the vSensor.
Management default gateway: IP address of the default gateway through which traffic from the vSensor's management interface will travel
to reach other Layer 3 IP subnets.
DNS servers: IP address of each local DNS server to be used by the vSensor for resolution of DNS hostnames into their corresponding
IP addresses.
Pin vSensor: VMware affinity rule that "pins" the vSensor to its physical host.
If VMware Distributed Resource Scheduler (DRS) and vMotion are used to dynamically rebalance VM load across hypervisors, the
vSensor should be pinned to the hypervisor on which you install it, to prevent the vSensor from automatically being migrated to
another hypervisor.
Have the VMware team create an affinity rule that pins the vSensor to its physical host, and ask them to not add it to vMotion.
Only move the vSensor if you no longer want monitoring of the physical host. In this case, Vectra recommends destroying the vSensor
and creating another one from scratch on the new hypervisor.
Each physical host on which a vSensor resides will have many guest VMs, many port groups, and their many corresponding VLANs. These
many VLANs likely will pass through the physical Ethernet ports (NICs, VM NICs) to upstream switch(es) with a trunk configuration.
Be sure to enable the vSensor’s management interface (mgt1) port group’s VLAN on the upstream switch(es) port trunk connected to
the physical host where the vSensor will reside. Without this VLAN enabled, the newly created vSensor will fail to connect and pair to
the Brain.
NOTE: For each host where a vSensor is placed, make sure the ports on the upstream switches connected to the target host’s physical
uplink ports are configured to pass the VLAN of the vSensor’s mgt1 subnet.
For example, assume three vSensors are to be placed on each of three physical hosts named “ussjcesx010”, “ussjcesx011” and
“ussjcesx012”. For each of those physical hosts, the port associated with the vSensors’ mgt1 interface, Network adapter 1 in vCenter, will
be placed into a port group called “Infrastructure_Mgmt”.
Making this switch configuration before the vSensors are created helps deliver a smooth vSensor deployment and pairing process.
Sensors > Password: When Sensors first come online, they have a default user name (vectra) and default password (youshouldchangethis)
for their CLI access over SSH. It is strongly recommended that you change the password as soon as possible. This feature allows you to
change the password for all Sensors at once. After the new password is specified here, the Brain will change the vectra user password for
CLI access on all attached Sensors, and all subsequently paired Sensors.
This is a security versus convenience trade-off regarding CLI access on Sensors. Specify this password configuration if you prefer the
convenience of a single password for all your Vectra Sensors. Leave this empty if you prefer the more secure option: manually log in and
set a unique password for the vectra user on each Sensor. Note: If you do neither, the vectra user password will remain at its factory
default setting.
VMware
Description: The Vectra System will query the VMware vCenter for device information, using the vCenter API. Enabling this feature provides a read-only view into the vSphere state that can otherwise be obtained only by logging into vSphere itself. Using the VMware option helps with vSensor deployment planning by indicating where vSensor coverage currently exists and where it does not (that is, where monitoring currently is not occurring). This option also provides information about available resources on physical VMware hosts, and indicates configuration errors that might be affecting packet capture.
Recommendation: Enable. Strongly recommended. This helps the Vectra operator know more precisely what they need as they engage VMware operational teams. (See Prepare the Network and Virtual Environment for vSensor Insertion.)
Enabling vCenter API connectivity helps with vSensor deployment planning by identifying the physical hosts, clusters, and data centers that currently have vSensor coverage, and those that do not.
Enabling the vCenter connection also shows available resources on physical VMware hosts, and exposes any configuration errors that
might be affecting packet capture. This view, seen in the Vectra UI Manage > Physical Hosts page, helps the Vectra System operator
identify the exact requirements that need to be conveyed to VMware operational teams.
Once this setting is enabled, the Manage > Physical Hosts page appears in the Vectra UI. Among other things, this page helps the operator notice when a:
• New physical server where a vSensor may be needed is added to the network
• vSensor has been moved or powered down
• VM is moved from a host that is monitored by a Sensor to a host that is not monitored by a Sensor
NOTE: Vectra strongly recommends enabling the VMware integration setting, as a best practice. More about this integration is covered in
Deploy vSensors.
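The coverage-gap reasoning described above can be sketched in plain Python. The inventory data below is hypothetical example data, not output from any real vCenter (in a real deployment the Brain gathers this over the read-only vCenter API connection); the sketch only illustrates the kind of check the Manage > Physical Hosts page performs.

```python
# Sketch: identify physical hosts that currently have no vSensor coverage.
# The inventory list is made-up example data; the Brain collects the real
# host/cluster inventory via the vCenter API.

def coverage_gaps(hosts):
    """Return the names of hosts that no vSensor currently monitors."""
    return sorted(h["name"] for h in hosts if not h["has_vsensor"])

inventory = [
    {"name": "ussjcesx010", "cluster": "DC-A", "has_vsensor": True},
    {"name": "ussjcesx011", "cluster": "DC-A", "has_vsensor": True},
    {"name": "ussjcesx012", "cluster": "DC-B", "has_vsensor": False},
]

print(coverage_gaps(inventory))  # ['ussjcesx012']
```

Hosts returned by a check like this are candidates for new vSensor placement when engaging the VMware operational teams.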
To enable VMware integration, enable the VMware option on the Brain, and prepare a vSphere account to use for read-only access by the Brain.
To ensure that the vSphere user/group the Brain will use has global, read-only access, use the following steps in the vSphere UI:
1. From the vSphere Administration page select Access > Global Permissions.
2. Click the plus sign to display the global permissions dialog.
3. At the bottom of the left pane, click Add.
4. Ensure the domain is set to the proper domain, select the users or groups you intend to use in Vectra’s configuration to connect to
vCenter’s API, and click OK.
5. In the Assign Role section, select Read-Only from the drop-down list.
6. Make sure the Propagate to children checkbox is selected, and click OK.
Deploy vSensors
This section describes setup tasks that are specific to deploying vSensors in a DC network.
NOTE: This section assumes that the vSensor VMs have been created and that their management and capture interfaces have been attached, based on the instructions in the Sensor Installation Guide. Continue with this section only after the vSensor VMs are installed and attached.
NOTE: This section does not cover insertion of Physical Sensors. Instead, see the Sensor Installation Guide.
To automate pairing, log onto the Brain and enable the Settings > System > Sensors > Autopairing option. Auto pairing allows the
Brain and vSensor to pair automatically, without any user intervention.
To instead manually pair an individual vSensor to the Brain, log in to vCenter and use the vSensor CLI to perform the pairing.
1. Log in to the Vectra Brain and navigate to the Settings > System page.
2. In the Sensors pane, select the Edit icon.
3. The Sensors pane will expand and give the option to enable auto pairing. Enable auto pairing and click Save.
NOTE: It is recommended to disable the auto pairing setting once the deployed vSensors have been successfully paired to the Brain.
4. The Sensors, upon bootup, will try to connect with the Brain and pair. With this setting enabled, pairing occurs automatically.
1. Launch the vSensor from vCenter and gain access to the CLI through the remote console or via SSH. The default username is vectra.
The default password is youshouldchangethis.
2. Once logged in, set the Brain hostname (or IP address, if a hostname has not been set up) from the vSensor CLI.
NOTE: The Brain's IP address will already be configured, because it is set when the OVA is created by the Brain. In fact, this vSensor cannot be paired with any Brain other than the one from which the OVA was downloaded, because that OVA includes keying material specific to the source Brain.
Pin the vSensor to its Hypervisor
VMware can automatically migrate a VM from one physical host to another, for example via vMotion. To prevent this from happening to a vSensor, create affinity rules that pin the vSensor VM to the hypervisor on which it was deployed, so that the vSensor does not move between physical hosts automatically.
Changing vSensor CLI Password
1. From the Vectra UI of the Vectra Brain, navigate to Settings > System.
2. Click the Edit button in the Sensor pane.
3. In the Sensor pane, type the new password in the Sensor Password field and click Save. This propagates the new password to all of the vSensors and physical Sensors paired to this Brain.
Any future vSensor or physical Sensor that is created or deployed and paired to this Brain will inherit the same password.
To instead set a unique password on an individual Sensor, log in to its CLI and run:
$ set password
Brain Settings
To plan for Brain insertion, fill out this worksheet (Table 13).
General network settings for the Brain.
IP address and mask; default gateway
Description: Unicast IP address and network mask to assign to the Brain, and the IP address of the gateway router for reaching other subnets. Enter these and other IP addresses in NetworkIP/CIDR format (example: 10.10.10.9/24).
Record:
  IP address:
  Network mask:
  Default gateway IP address:

Domain Name System (DNS) servers
Description: IP addresses of one or more DNS servers that the Brain will use for resolving hostnames into IP addresses, in order to send traffic to the devices with those hostnames.
Record:
  DNS server 1 IP:
  DNS server 2 IP:
  DNS server 3 IP:

Network Time Protocol (NTP) servers
Description: IP addresses of one or more servers to use as the source of the Brain's system date and time. Enter these and other IP addresses in NetworkIP/CIDR format (example: 20.1.0.0/16).
Record:
  NTP server 1 IP:
  NTP server 2 IP:
  NTP server 3 IP:
  NTP server 4 IP:

Remote access with Vectra Networks
Description: Sharing of network metadata with Vectra Networks (On or Off). Set via Settings > Vectra Cloud > Share Metadata with Vectra.
• On: Brain sends metadata from Sensors and vSensors to Vectra Networks.
• Off: Brain does not send metadata to Vectra Networks.
Record: (On or Off)
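Since the worksheet asks for addresses in NetworkIP/CIDR format (for example, 10.10.10.9/24), a quick sanity check of the entries can be done with Python's standard ipaddress module. This is a generic validation sketch for filling out the worksheet, not part of the Vectra product.

```python
import ipaddress

def check_cidr(entry):
    """Return True if the worksheet entry is valid NetworkIP/CIDR notation."""
    try:
        ipaddress.ip_interface(entry)
        return True
    except ValueError:
        return False

print(check_cidr("10.10.10.9/24"))  # True
print(check_cidr("20.1.0.0/16"))    # True
print(check_cidr("10.10.10.9/33"))  # False (prefix length out of range)
```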
Settings required for the Brain to be able to learn device information from vSphere.
Port number TCP port to which the Brain should send API requests
User ID Username for the Brain to use when logging into vSphere
Password Password for the Brain to use when logging into vSphere
vSensor Settings
To plan for vSensor insertion, fill out these worksheets (Table 15).
General settings for each ESX/ESXi host, virtual switch, and port group where a vSensor will be installed.
Resource requirements on host
Description: Minimum system requirements for the ESX/ESXi host:
• CPU: 4 vCores
• Memory: 8 GB vRAM
• Drive: 150 GB
• VMware vSphere 5.0 or later
• Virtual switch type: VSS or VDS

vSwitch name
Description: Name of the virtual switch on the hypervisor where the vSensor will be installed. If using VMware VDS, this will be the same for all hypervisors. If using VMware VSS, the name may be different for each hypervisor.

Mgt port group
Description: Name and VLAN of the port group to which the vSensor's management interface (Mgt1) will be attached.
Record:
  Mgt port group name:
  Mgt VLAN number:

Capture port group
Description: Name and VLANs of the port group to which the vSensor's capture interface will be attached.
• VSS: 4095 (all VLANs)
• VDS: 0-4094 (for all VLANs); if capturing only a subset of VLANs, enter the VLAN numbers or ranges
Record:
  Capture port group name:
  Capture VLAN numbers:

Datastore (on physical host for VM disk usage)
Description: Datastore that will be used for the vSensor VM on the physical host.

vSensor mgt interface IP address and default gateway
Description: Unicast IP address and mask to assign to the vSensor, and the default gateway to be used to reach other subnets. (If DHCP is enabled, leave blank.)
Record:
  Mgt interface IP:
  Default gateway IP:

Domain Name System (DNS) server IP addresses
Description: IP addresses of one or more DNS servers that the vSensor will use for resolving hostnames into IP addresses, in order to send traffic to the devices with those hostnames. (If DHCP is enabled, leave blank.)
Record:
  DNS server 1 IP:
  DNS server 2 IP:
  DNS server 3 IP:

Pin vSensor
Description: Pinning the vSensor to the hypervisor prevents VMware from moving the vSensor to another hypervisor, such as when running vMotion.
Record: Yes. Pin the vSensor to the hypervisor.

vSensor name
Record: vSensor name:
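The VSS/VDS capture-VLAN rule in the worksheet above can be expressed as a small helper. This is an illustrative sketch of the rule only; the function name and its output format are invented for this example, and it is not a Vectra or VMware API.

```python
def capture_vlan_spec(switch_type, vlans=None):
    """Return the VLAN setting for the vSensor capture port group.

    Per the worksheet: a VSS capture port group uses VLAN 4095 (all
    VLANs); a VDS capture port group trunks 0-4094 for all VLANs, or an
    explicit list of VLAN numbers/ranges to capture only a subset.
    """
    if switch_type == "VSS":
        return "4095"
    if switch_type == "VDS":
        return ",".join(vlans) if vlans else "0-4094"
    raise ValueError("switch_type must be 'VSS' or 'VDS'")

print(capture_vlan_spec("VSS"))                   # 4095
print(capture_vlan_spec("VDS"))                   # 0-4094
print(capture_vlan_spec("VDS", ["10", "20-30"]))  # 10,20-30
```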