SIEM_ADMIN
VERSION 4.10.0
FORTINET DOCUMENT LIBRARY
http://docs.fortinet.com
FORTINET BLOG
https://blog.fortinet.com
FORTIGATE COOKBOOK
http://cookbook.fortinet.com
http://cookbook.fortinet.com/how-to-work-with-fortinet-support/
FORTIGUARD CENTER
http://www.fortiguard.com
FORTICAST
http://forticast.fortinet.com
FEEDBACK
Email: techdocs@fortinet.com
October 31, 2017
Revision 2
TABLE OF CONTENTS
Change Log
Installing FortiSIEM
System Performance Estimates and Recommendations for Large Scale Deployments
Browser Support and Hardware Requirements
Information Prerequisites for All FortiSIEM Installations
Hypervisor Installations
Installing in Amazon Web Services (AWS)
Installing in Linux KVM
Installing in Microsoft Hyper-V
Installing in VMware ESX
ISO-Installation
Installing a Collector on Bare Metal Hardware
General Installation
Configuring Worker Settings
Registering the Supervisor
Registering the Worker
Registering the Collector to the Supervisor
Using NFS Storage with FortiSIEM
Configuring NFS Storage for VMware ESX Server
Using NFS Storage with Amazon Web Services
Moving CMDB to a separate Database Host
Freshly Installed Supervisor
FortiSIEM Windows Agent and Agent Manager Install
FortiSIEM Windows Agent Pre-installation Notes
Installing FortiSIEM Windows Agent Manager
Installing FortiSIEM Windows Agent
Upgrading FortiSIEM
Upgrade notes
Upgrade process
Migrating from 3.7.x versions to 4.2.1
Migrating Before Upgrading to 4.3.x
Migrating VMware ESX-based Deployments
Migrating AWS EC2 Deployments
Migrating KVM-based deployments
Migrating Collectors
Migrating the SVN Repository to a Separate Partition on a Local Disk
Special pre-upgrade instruction for 4.3.3
Change Log
2017-10-31 Revision 2 with updated section: 'FortiSIEM Windows Agent Pre-installation Notes'.
Installing FortiSIEM
The topics in this section are intended to guide you through the basic process of setting up and configuring your
FortiSIEM deployment. This includes downloading and installing the FortiSIEM OVA image, using your
hypervisor virtual machine manager to configure the hardware settings for your FortiSIEM node, setting up basic
configurations on your Supervisor node, and registering your Supervisor and other nodes. Setting up IT
infrastructure monitoring, including device discovery, monitoring configuration, and setting up business services,
is covered under the section Configuring Your FortiSIEM Platform.
- Import the FortiSIEM virtual appliance into a hypervisor or Amazon Web Services environment
- Edit the virtual appliance hardware settings
- Start and configure the virtual appliance from the hypervisor console
- Register the virtual appliance
Topics in this section will take you through the specific installation and configuration instructions for the most
popular hypervisors and deployment configurations.
System Performance Estimates and Recommendations for Large Scale Deployments
This topic includes estimates and recommendations for storage capacity, disk performance, and network
throughput for optimum performance of FortiSIEM deployments processing over 10,000 EPS.
In general, event ingestion at high EPS requires lower storage IOPS than for queries simply because queries
need to scan higher volumes of data that has accumulated over time. For example, at 20,000 EPS, you have
86,400 times more data in a day than in one second, so a query such as 'Top Event types by count for the past 1
day' will need to scan 20,000 x 86,400 = ~ 1.72 billion events. Therefore, it is important to size your FortiSIEM
cluster to handle your query and report requirements first, which will also handle event ingestion very well. These
are the top 3 things to do for acceptable FortiSIEM query performance:
1. Add more worker nodes, higher than what is required for event ingestion alone
2. 10Gbps network on NFS server is a must, and if feasible on Supervisor and Worker nodes as well
3. SSD Caching on NFS server - The size of the SSD should be as close as possible to the size required to cache hot
data. In typical customer scenarios, the last month of data can be considered hot data because monthly reports are
quite commonly run.
Schedule frequently run reports into the dashboard
If you have frequently run ranking reports that have group-by criteria (as opposed to raw message based reports),
you can add such reports to a custom dashboard so that FortiSIEM schedules these reports to run in inline
mode. Such reports compute their results in a streaming manner as event data is processed in real time, and they
do not put any burden on the storage IOPS because they read very little data from the EventDB. Note that
raw message reports (no group-by) are always computed directly from the EventDB.
System Performance Estimates and Recommendations

- Event Storage Capacity: Storage capacity estimates are based on an average event size of 64 compressed bytes
x EPS (events per second). Browser Support and Hardware Requirements includes a table with storage capacity
requirements for up to 10,000 EPS.
- CMDB Disk IOPS: 1000 IOPS or more. Lab testing for EC2 scalability used 2000 IOPS.
- EventDB IOPS for Event Ingestion: 1000 IOPS for 100K EPS (minimum).
- Network Throughput: A 10Gbps network between Supervisor, Workers, and NFS server is recommended. Use the
VMXNet3 adapter for VMware.
- Number of Workers: 6000 EPS per worker for event ingestion; add more worker nodes for query performance.
See the example below.
Example:
An MSP customer has 12,000 EPS across all their customers. Each event takes up 64 bytes on average in
compressed form in the EventDB.
1 Year Storage for 12,000 EPS = 12000 * 86400 * 365 * 64 bytes = 23TB
1 month Storage = ~ 2TB (SSD cache on NFS)
Run time for 'Top Event types by count for last 1 month' (@ 1 million QEPS using 1
super + 2 workers) = 31536 seconds = 8.75 hours
Example run time for above query using 1 super + 20 workers = 1.25 hours*
* Assuming that read IOPS are not limited due to SSD cache for 1 month data
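As a quick sanity check, the arithmetic behind these figures can be reproduced in a short shell calculation. The 64-byte compressed event size and the aggregate query rate of 1 million events per second measured with 3 nodes are taken from the example above; the assumption that the query rate scales linearly with node count is only an illustration used to reproduce the 20-worker estimate.

# Back-of-the-envelope check of the sizing example above
EPS=12000                # events per second across all customers
BYTES_PER_EVENT=64       # average compressed event size in EventDB
BASE_QEPS=1000000        # aggregate query rate measured with 3 nodes (1 super + 2 workers)

EVENTS_PER_YEAR=$((EPS * 86400 * 365))
echo "1-year storage: $(echo "scale=1; $EVENTS_PER_YEAR * $BYTES_PER_EVENT / 10^12" | bc) TB"
# prints ~24.2 TB decimal (about 22 TiB), which the guide rounds to 23 TB

EVENTS_PER_MONTH=$((EVENTS_PER_YEAR / 12))
for NODES in 3 21; do    # 1 super + 2 workers, then 1 super + 20 workers
    QEPS=$((BASE_QEPS * NODES / 3))            # assumed linear scaling of query rate
    SECS=$((EVENTS_PER_MONTH / QEPS))
    echo "Top-N query over 1 month with $NODES nodes: $(echo "scale=2; $SECS / 3600" | bc) hours"
done
# prints roughly 8.76 hours and 1.25 hours, matching the example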
These calculations are just extrapolations based on a test on EC2. Actual results may vary because of
differences in hardware, event data, and types of queries. Therefore, it is recommended that customers do a pilot
evaluation using production data, either on-premise or on AWS, before arriving at an exact number of worker nodes.
The storage requirement shown in the Event Data Storage column is only for the eventdb data, but the
/data partition also includes CMDB backups and queries. You should set the /data partition to a larger amount
of storage to accommodate this.
You can control the exact ciphers used for communications between virtual appliances by editing
the SSLCipherSuite section in the file /etc/httpd/conf.d/ssl.conf on FortiSIEM Supervisors and
Workers. You can test the ciphersuite for your Supervisor or Worker using the following nmap command:
nmap --script ssl-cert,ssl-enum-ciphers -p 443 <super_or_worker_fqdn>
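For example, a restricted SSLCipherSuite line in /etc/httpd/conf.d/ssl.conf might look like the following. The cipher string shown is only an illustration, not a FortiSIEM default; adjust it to your own security policy and re-run the nmap check after restarting httpd.

# Illustrative hardened TLS settings in /etc/httpd/conf.d/ssl.conf (not a FortiSIEM default)
#   SSLProtocol all -SSLv2 -SSLv3
#   SSLCipherSuite HIGH:!aNULL:!MD5:!3DES

# Restart Apache and verify which ciphers are actually offered
service httpd restart
nmap --script ssl-cert,ssl-enum-ciphers -p 443 <super_or_worker_fqdn>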
Calculating Events per Second (EPS) and Exceeding the License Limit
FortiSIEM calculates the EPS for your system using a counter that records the total number of received events in
a three-minute time interval. Every second, a thread wakes up and checks the counter value. If the counter is less
than 110% of the license limit (using the calculation 1.1 x EPS License x 180), then FortiSIEM will continue to
collect events. If you exceed 110% of your licensed EPS, events are dropped for the remainder of the three-minute
window, and an email notification is triggered. At the end of the three-minute window the counter resets and
event collection resumes.
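As a quick illustration of the formula, the per-window threshold for a hypothetical 5,000 EPS license works out as follows:

# Events allowed per three-minute window = 1.1 x licensed EPS x 180 seconds
LICENSED_EPS=5000    # example value only
echo "1.1 * $LICENSED_EPS * 180" | bc
# prints 990000.0 -- events beyond this point in the window are dropped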
Hardware requirements by event rate (event data storage shown is the overall storage for 1 year):

- 1,500 EPS, 1 node: ESXi (4.0 or later preferred); 4 Core 3 GHz, 64 bit; 16 GB memory (24 GB for 4.5.1+);
200GB OS/App and CMDB storage (80GB OS/App, 60GB CMDB, 60GB SVN); 3 TB event data storage
- 4,500 EPS, 1 node: ESXi (4.0 or later preferred); 4 Core 3 GHz, 64 bit; 16 GB memory (24 GB for 4.5.1+);
200GB OS/App and CMDB storage (80GB OS/App, 60GB CMDB, 60GB SVN); 8 TB event data storage
- 7,500 EPS, 1 Super + 1 Worker: ESXi (4.0 or later preferred); Super: 8 Core 3 GHz, 64 bit, 24 GB memory,
200GB storage (80GB OS/App, 60GB CMDB, 60GB SVN); Worker: 4 Core 3 GHz, 64 bit, 16 GB memory,
200GB storage (80GB OS/App); 12 TB event data storage
- 10,000 EPS, 1 Super + 1 Worker: same node specifications as the 7,500 EPS row; 17 TB event data storage
- 20,000 EPS, 1 Super + 3 Workers: same node specifications as the 7,500 EPS row; 34 TB event data storage
- 30,000 EPS, 1 Super + 5 Workers: same node specifications as the 7,500 EPS row; 50 TB event data storage
- Higher than 30,000 EPS: consult FortiSIEM

Collector hardware requirements:

- Collector, quantity 1: Native Linux; 2 Core, 64 bit; 4GB memory; 40 GB OS/App storage.
Suggested Platform: Dell PowerEdge R210 Rack Server
You should have this information ready before you begin installing the FortiSIEM virtual appliance on ESX:
1. The static IP address and subnet mask for your FortiSIEM virtual appliance.
2. The IP address of the NFS mount point and the NFS share name, if using NFS storage. See the topics Configuring NFS
Storage for VMware ESX Server and Setting Up NFS Storage in AWS for more information.
3. The FortiSIEM host name within your local DNS server.
4. The VMware ESX datastore location where the virtual appliance image will be stored, if using ESX storage.
Hypervisor Installations
Topics in this section cover the instructions for importing the FortiSIEM disk image into specific hypervisors and
configuring the FortiSIEM virtual appliance. See the topics under General Installation for information on
installation tasks that are common to all hypervisors.
Installing in Amazon Web Services (AWS)
You must set up a Virtual Private Cloud (VPC) in Amazon Web Services for FortiSIEM deployment rather than
classic-EC2. FortiSIEM does not support installation in classic-EC2. See the Amazon VPC documentation for
more information on setting up and configuring a VPC. See Creating VPC-based Elastic IPs for Supervisor and
Worker Nodes in AWS for information on how to prevent the public IPs of your instances from changing when they
are stopped and started.
Note: SVN password reset issue after system reboot for FortiSIEM 3.7.6 customers in AWS Virtual Private Cloud
(VPC).
FortiSIEM uses SVN to store monitored device configurations. In AWS VPC setup, we have noticed that
FortiSIEM SVN password gets changed if the system reboots - this prevents FortiSIEM from storing new
configuration changes and viewing old configurations. The following procedure can be used to reset the SVN
password to FortiSIEM factory default so that FortiSIEM can continue working correctly.
Here's an example of how to calculate storage requirements: At 5000 EPS, you can calculate daily storage
requirements to be about 22-30GB (300k events take roughly 15-20MB on average in compressed format stored
in eventDB). So, in order to have 6 months of data available for querying, you need to have 4 - 6TB of storage.
If you only need one FortiSIEM node, your storage requirements are lower than 1TB, and they are not expected to
ever grow beyond this limit, you can avoid setting up an NFS server and use a local EBS volume for EventDB. For
this option, see the topic Configuring Local Storage in AWS for EventDB.
1. Log in to AWS.
2. In the EC2 dashboard, click Volumes.
3. Click Create Volume.
4. Set Size to 100 GB to 1 TB (depending on storage requirement).
5. Select the same Availability Zone region as the FortiSIEM Supervisor instance.
6. Click Create.
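If you prefer the AWS CLI to the console, the equivalent call looks roughly like the following. The size, volume type, IOPS, Availability Zone, and region are placeholders for this example, and the CLI must already be configured with credentials for your account.

# Create a 1 TB Provisioned IOPS volume in the Supervisor's Availability Zone
aws ec2 create-volume \
    --size 1000 \
    --volume-type io1 \
    --iops 2000 \
    --availability-zone us-west-2a \
    --region us-west-2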
You must set up a Virtual Private Cloud (VPC) in Amazon Web Services for FortiSIEM deployment rather than
classic-EC2. FortiSIEM does not support installation in classic-EC2. See the Amazon VPC documentation for
more information on setting up and configuring a VPC. See Creating VPC-based Elastic IPs for Supervisor and
Worker Nodes in AWS for information on how to prevent the public IPs of your instances from changing when they
are stopped and started.
If the aggregate EPS for your FortiSIEM installation requires a cluster (FortiSIEM virtual appliance + worker
nodes), then you must set up an NFS server. If your storage requirements for the EventDB are more than 1TB, it
is strongly recommended that you use an NFS server where you can configure LVM+RAID0. For more
information, see Setting Up NFS Storage in AWS.
1. Log in to your AWS account and navigate to the EC2 dashboard.
2. Click Launch Instance.
3. Click Community AMIs and search for the AMI ID associated with your version of FortiSIEM. The latest AMI IDs
are on the image server where you download the other hypervisor images.
4. Click Select.
5. Click Compute Optimized.
Using C3 Instances
You should select one of the C3 instances with a Network Performance rating of High, or 10Gb performance.
The current generation of C3 instances run on the latest Intel Xeons that AWS provides. If you are running
these machines in production, it is significantly cheaper to use EC2 Reserved Instances (1 or 3 year) as
opposed to on-demand instances.
6. Click Next: Configure Instance Details.
7. Review these configuration options:
Network and Subnet: Select the VPC you set up for your instance.
If you are using local storage for EventDB, click Add New Volume to create a new EBS volume, and
set these options:
Device: /dev/xvdi
IOPS: 2000
Limiting IP Access
Make sure you have limited the IP addresses that can access your VPC, or that you have set up VPN
access to it. VPN will block all inbound Internet traffic.
14. Click Review and Launch.
15. Review all your instance configuration information, and then click Launch.
16. Select an existing or create a new Key Pair to connect to these instances via SSH.
If you use an existing key pair, make sure you have access to it. If you are creating a new key pair, download the
private key and store it in a secure location accessible from the machine from where you usually connect to these
AWS instances.
17. Click Launch Instances.
18. When the EC2 Dashboard reloads, check that all your instances are up and running.
19. All your instances will be tagged with the Name you assigned in Step 11. Select an instance to rename
it according to its role in your deployment.
20. For all types of instances, follow the instructions to SSH into the instances as described in Configuring the
Supervisor and Worker Nodes in AWS, and then run the script phstatus.sh to check the health of the
instances.
Creating VPC-based Elastic IPs for Supervisor and Worker Nodes in AWS
You need to create VPC-based Elastic IPs and attach them to your nodes so the public IPs don't change when you
stop and start instances.
1. Log in to the Amazon VPC Console.
2. In the navigation pane, click Elastic IPs.
3. Click Allocate New Address.
4. In the Allocate New Address dialog box, in the Network platform list, select EC2-VPC, and then click Yes,
Allocate.
5. Select the Elastic IP address from the list, and then click Associate Address.
6. In the Associate Address dialog box, select the network interface for the NAT instance. Select the address to
associate the EIP with from the Private IP address list, and then click Yes, Associate.
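The same allocation and association can be scripted with the AWS CLI; the instance and allocation IDs below are placeholders.

# Allocate a VPC Elastic IP and note the AllocationId (eipalloc-...) it returns
aws ec2 allocate-address --domain vpc

# Associate the Elastic IP with the Supervisor or Worker instance
aws ec2 associate-address \
    --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0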
1. From the EC2 dashboard, select the instance, and then click Connect.
2. Select Connect with a standalone SSH client, and follow the instructions for connecting with an SSH client.
For the connection command, follow the example provided in the connection dialog, but substitute the FortiSIEM
root user name for ec2-user@xxxxxx. The ec2-user name is used only for the Amazon Linux NFS server.
3. SSH to the Supervisor.
4. Run cd /opt/phoenix/deployment/jumpbox/aws.
5. Run the script pre-deployment.sh to configure the host name and NFS mount point.
6. Accept the License Agreements.
7. Enter the Host Name.
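Put together, the connection and configuration steps look something like the sketch below; the key file name and IP address are placeholders for your own values.

# Connect as root using the key pair selected at launch time
ssh -i ~/.ssh/fortisiem-keypair.pem root@<supervisor_public_ip>

# Run the AWS pre-deployment script to set the host name and NFS mount point
cd /opt/phoenix/deployment/jumpbox/aws
./pre-deployment.sh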
Local Storage: /dev/xvdi
5. Click Save.
The Collector will restart automatically after registration succeeds.
Installing in Linux KVM
In these instructions, br0 is the initial bridge network, em1 is connected as a management network, and em4 is
connected to your local area network.
1. In the KVM host, go to the directory /etc/sysconfig/network-scripts/.
2. Create a bridge network config file ifcfg-br0.
DEVICE=br0
BOOTPROTO=none
NM_CONTROLLED=yes
ONBOOT=yes
TYPE=Bridge
NAME="System br0"
3. Edit the interface config file ifcfg-em4 so that the physical interface is attached to the bridge:
DEVICE=em4
BOOTPROTO=shared
NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
UUID="24078f8d-67f1-41d5-8eea-xxxxxxxxxxxx"
IPV6INIT=no
USERCTL=no
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
NAME="System em4"
HWADDR=F0:4D:00:00:00:00
BRIDGE=br0
Related Links
Supported Versions
Installing in Microsoft Hyper-V
Before you install the FortiSIEM virtual appliance in Hyper-V, you should decide whether you plan to use NFS storage
or local storage to store event information in EventDB. If you decide to use a local disk, you can add a data disk of
appropriate size. Typically, this will be named /dev/sdd if it is the 4th disk. When using a local disk, choose the
'Dynamically expanding' (VHDX) format so that you are able to resize the disk if your EventDB grows beyond the
initial capacity.
If you are going to use NFS storage for EventDB, follow the instructions in the topic Configuring NFS Storage for
VMware ESX Server.
FortiSIEM virtual appliances in Hyper-V use dynamically expanding VHD disks for the root and CMDB partitions,
and a dynamically expanding VHDX disk for EventDB. Dynamically expanding disks are used to keep the exported
Hyper-V image within reasonable limits. See the Microsoft documentation topic Performance Tuning Guidelines
for Windows Server 2012 (or R2) for more information.
1. Download and uncompress the FortiSIEM.7z package (using the 7-Zip tool) from FortiSIEM image server to the
location where you want to install the image.
2. Start Hyper-V Manager.
3. In the Action menu, select Import Virtual Machine.
The Import Virtual Machine Wizard will launch.
4. Click Next.
5. Browse to the folder containing Hyper-V VM, and then click Next.
6. Select the FortiSIEM image, and then click Next.
7. For Import Type, select Copy the virtual machine, and then click Next.
8. Select the storage folders for your virtual machine files, and then click Next.
9. Select the storage folder for your virtual machine's hard disks, and then click Next.
10. Verify the installation configuration, and then click Finish.
11. In Hyper-V Manager, connect to the FortiSIEM virtual appliance and power it on.
12. Follow the instructions in Configuring the Supervisor, Worker, or Collector from the VM Console to complete the
installation.
Related Links
Publicly Accessible NTP Server: If you don't have an internal NTP server, you can access a publicly
available one at http://tf.nist.gov/tf-cgi/servers.cgi
- Importing the Supervisor, Collector, or Worker Image into the ESX Server
- Editing the Supervisor, Collector, or Worker Hardware Settings
- Setting Local Storage for the Supervisor
- Troubleshooting Tips for Supervisor Installations
When you're finished with the specific hypervisor setup process, you need to complete your installation by
following the steps described under General Installation.
Importing the Supervisor, Collector, or Worker Image into the ESX Server
1. Download and uncompress the FortiSIEM OVA package from the FortiSIEM image server to the location where
you want to install the image.
2. Log in to the VMware vSphere Client.
3. In the File menu, select Deploy OVF Template.
4. Browse to the .ova file (example: FortiSIEM-VA-4.3.1.1145.ova) and select it.
On the OVF Details page you will see the product and file size information.
5. Click Next.
6. Click Accept to accept the "End User Licensing Agreement," and then click Next.
7. Enter a Name for the Supervisor or Worker, and then click Next.
8. Select a Storage location for the installed file, and then click Next.
9. Select a Disk Format, and then click Next.
Disk Format Recommendation: FortiSIEM recommends using Thick Provision Lazy Zeroed.
Before you start the Supervisor, Worker, or Collector for the first time you need to make some changes to its
hardware settings.
1. In the VMware vSphere client, select the imported Supervisor, Worker, or Collector.
2. Right-click on the node to open the Virtual Appliance Options menu, and then select Edit Settings... .
3. Select the Hardware tab, and check that Memory is set to at least 16 GB and CPUs is set to 8.
For large deployments you should allocate at least 24GB of memory. See the topic Hardware
Requirements for more information.
You can install the Supervisor using either native ESX storage or NFS storage. These instructions are for creating
native ESX storage. See Configuring NFS Storage for VMware ESX Server for more information. If you are using
NFS storage, you will set the IP address of the NFS server during Step 15 of the Configuring the Supervisor,
Worker, or Collector from the VM Console process.
1. On Hardware tab, click Add.
2. In the Add Hardware dialog, select Hard Disk, and then click Next.
3. Select Create a new virtual disk, and then click Next.
4. Check that these selections are made in the Create a Disk dialog:
Disk Provisioning: Thick Provision Lazy Zeroed
5. In the Advanced Options dialog, make sure that the Independent option for Mode is not selected.
6. Check all the options for creating the virtual disk, and then click Finish.
7. In the Virtual Machine Properties dialog, click OK.
The Reconfigure virtual machine task will launch.
Use SSH to connect to the Supervisor and check that the cmdb, data, query, querywkr,
and svn permissions match those in this table:
Check that the /data , /cmdb, and /svn directory level permissions match those in this table:
Use SSH to connect to the supervisor and run phstatus to see if the system status metrics match those in this
table:
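Since the reference tables are release-specific, a minimal way to gather the same information on the Supervisor is shown below; compare the output against the tables for your version.

# Directory-level ownership and permissions of the main partitions
ls -ld /data /cmdb /svn

# FortiSIEM process and system status metrics
phstatus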
Login: root
Password: ProspectHills
NFS storage: <NFS_Server_IP_Address>:/<Directory_Path>
After you set the mount point, the Supervisor will automatically reboot, and in 15 to 25 minutes the Supervisor will
be successfully configured.
ISO-Installation
This topic covers installation of FortiSIEM from an ISO under a native file system such as Linux, also known as
installing "on bare metal."
General Installation
If you are not using FortiSIEM clustered deployment, you will not have any Worker nodes. In that case, enter the
IP address of the Supervisor for the Worker Address, and your Collectors will upload their information directly to
the Supervisor.
1. Log in to your Supervisor node.
2. Go to Admin > General Settings > System.
3. For Worker Address, enter a comma-separated list of IP addresses or host names for the Workers.
The Collector will attempt to upload information to the listed Workers, starting with the first Worker address and
proceeding until it finds an available Worker.
You may also enter the Host Name or IP Address of a load balancer for the Worker Address, in
which case the load balancer needs to be configured to send information to the Workers.
4. Click Save.
User ID: admin
Password: admin*1
Cust/Org ID: super
5. Go to Admin > Cloud Health and check that the Supervisor Health is Normal.
4. Go to Setup Wizard > Event Collector and add the Collector information.
Guaranteed EPS: The number of Events per Second (EPS) that this Collector will be provisioned for.
When you install FortiSIEM, you have the option to use either local storage or NFS storage. For cluster
deployments using Workers, the use of an NFS Server is required for the Supervisor and Workers to communicate
with each other. These topics describe how to set up and configure NFS servers for use with FortiSIEM.
Supported Versions
FortiSIEM only supports NFS Version 3.
4. Check NFS service status and make sure the nfsd service is running.
5. Create a new directory in the large volume to share with the FortiSIEM Supervisor and Worker nodes, and change
the access permissions to provide FortiSIEM with access to the directory.
mkdir /FortiSIEM
chmod -R 777 /FortiSIEM
6. Edit the /etc/exports file to share the /FortiSIEM directory with the FortiSIEM Supervisor and Worker nodes.
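A minimal /etc/exports entry for this share might look like the following; the IP addresses are placeholders and the export options are only an example, so adjust them to your environment before applying.

# /etc/exports -- export /FortiSIEM to the Supervisor and Worker nodes
/FortiSIEM <Supervisor_IP_Address>(rw,sync,no_root_squash) <Worker1_IP_Address>(rw,sync,no_root_squash)

# Apply the updated export table
exportfs -a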
7. Verify the export by running showmount -e localhost.
Example:
Export list for localhost:
/FortiSIEM <Supervisor_IP_Address>,<Worker1_IP_Address>,<Worker2_IP_Address>
Related Links
Several architecture and partner options for setting up NFS storage that is highly available across availability zone
failures are presented by an AWS Solutions Architect in this talk (40 min) and the accompanying slides.
These instructions cover setting up EBS volumes for NFS storage. EBS volumes have a durability guarantee that
is 10 times higher than traditional disk drives, because data in EBS volumes is replicated within an
availability zone to handle component failures (the equivalent of RAID), so adding another layer of RAID does not
provide higher durability guarantees. EBS has an annual failure rate (AFR) of 0.1 to 0.5%. In order to have higher
durability guarantees, it is necessary to take periodic snapshots of the volumes. Snapshots are stored in AWS S3,
which has 99.999999999% durability (via synchronous replication of data across multiple data centers) and
99.99% availability. See the topic Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS
for more information.
If you are running these machines in production, it is significantly cheaper to use EC2 Reserved Instances (1 or 3
year) as opposed to on-demand instances.
1. Log in to your AWS account and navigate to the EC2 dashboard.
2. Click Launch Instance.
3. Select HVM Amazon Linux AMI (HVM) 64-bit, and then click Select.
HVM vs. PV
The reason to choose the HVM image over the default Paravirtualized (PV) image is that the HVM image
automatically includes drivers to support enhanced networking, which uses SR-IOV and
provides higher performance (packets per second), lower latency, and lower jitter.
5. Click Compute Optimized.
Using C3 Instances
You should select one of the C3 instances with a Network Performance rating of High, or 10Gb
performance. The current generation of C3 instances run on the latest Intel Xeons that AWS provides. If
you are running these machines in production, it is significantly cheaper to use EC2 Reserved Instances
(1 or 3 year) as opposed to on-demand instances.
Network and Subnet: Select the VPC you set up for your instance.
At 5000 EPS, you can calculate daily storage requirements to be roughly 22-30GB (300k events
take 15-20MB on average in compressed format stored in EventDB). In order to have 6 months of data
available for querying, you need to have 4-6TB of storage. On AWS, the maximum size of an EBS volume
is 1TB. In order to have larger disks, you need to create software RAID-0 volumes. You can attach at
most 8 volumes to an instance, which results in 8TB with RAID-0. There is no advantage in using a
RAID configuration other than RAID-0, because additional redundancy does not increase the durability
guarantees. In order to ensure much better durability guarantees, plan on performing regular snapshots,
which store the data in S3, as described in Setting Up Snapshots of EBS Volumes that Host EventDB and
CMDB in AWS. Since RAID-0 stripes data across these volumes, the aggregate IOPS you get will be the sum of
the IOPS on individual volumes.
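A sketch of how such a RAID-0 array could be assembled from the attached EBS volumes is shown below; the device names and volume count are placeholders, and mdadm plus the filesystem tools must be installed on the NFS server.

# Stripe four attached EBS volumes into a single RAID-0 device
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi

# Create a filesystem and mount it as the NFS export directory
mkfs.ext4 /dev/md0
mkdir -p /FortiSIEM
mount /dev/md0 /FortiSIEM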
Limiting IP Access
Make sure you have limited the IP addresses that can access your VPC, or that you have set up VPN
access to it. VPN will block all inbound Internet traffic.
16. Select an existing or create a new Key Pair to connect to these instances via SSH.
If you use an existing key pair, make sure you have access to it. If you are creating a new key pair, download the
private key and store it in a secure location accessible from the machine from where you usually connect to these
AWS instances.
17. Click Launch Instances.
18. When the EC2 Dashboard reloads, check that all your instances are up and running.
19. Select the NFS server instance and click Connect.
20. Follow the instructions to SSH into the instance as described in Configuring the Supervisor and Worker Nodes in
AWS.
21. Configure the NFS mount point access to give the FortiSIEM internal IP full access.
Setting Up Snapshots of EBS Volumes that Host EventDB and CMDB in AWS
In order to have high durability guarantees for FortiSIEM data, you should periodically create EBS snapshots on
an hourly, daily, or weekly basis and store them in S3. The EventDB is typically hosted as a RAID-0 volume of
several EBS volumes, as described in Setting Up NFS Storage in AWS. In order to reliably snapshot these EBS
volumes together, you can use a script, ec2-consistent-snapshot, to briefly freeze the volumes and
create a snapshot. You can then use a second script, ec2-expire-snapshots, to schedule cron jobs to delete
old snapshots that are no longer needed. CMDB is hosted on a much smaller EBS volume, and you can also use
the same scripts to take snapshots of it.
You can find details of how to download these scripts and set up periodic snapshots and expiration in this blog post:
http://twigmon.blogspot.com/2013/09/installing-ec2-consistent-snapshot.html
You can download the scripts from these GitHub projects:
- https://github.com/alestic/ec2-consistent-snapshot
- https://github.com/alestic/ec2-expire-snapshots
Moving CMDB to a separate Database Host
It is desirable to move the CMDB (postgres) database to a separate host for the following reasons:
1. In larger deployments, it reduces the database server load on the Supervisor node, leaving more resources
for the application server and other backend modules.
2. Whenever high availability for CMDB data is desired, it is easier and cleaner to set up separate hosts with
postgres replication that are managed separately than to do this on the embedded postgres on the Supervisor.
This is especially true in an AWS environment, where the AWS PostgreSQL Relational Database Service (RDS)
needs only a few clicks to set up a DB instance that replicates across availability zones and automatically fails over.
Edit phoenix_config.txt
vi /opt/phoenix/config/phoenix_config.txt
# change system_services in the phMonitorSupervisor section to remove psql. The
# original line is below, which should be modified to the next line.
#system_services=<system type="phMonitorSupervisor"><service><name>httpd</name><method>phshell 0 head /etc/httpd/run/httpd.pid</method></service><service><name>glassfish</name><method>ps -ef | grep java | grep glassfish | grep -v pid | gawk '{print $2}'</method></service><service><name>pgsql DB</name><method>ps -ef | grep pgsql | grep postgres | grep -v pid | gawk '{print $2}'</method></service></system>
#system_services=<system type="phMonitorSupervisor"><service><name>httpd</name><method>phshell 0 head /etc/httpd/run/httpd.pid</method></service><service><name>glassfish</name><method>ps -ef | grep java | grep glassfish | grep -v pid | gawk '{print $2}'</method></service></system>
# Add db_server_host and db_server_pwd
# db_server_host=phoenixdb.XXXXXX.us-west-2.rds.amazonaws.com
# db_server_pwd=YYYYYYYY
Make sure you have enough additional storage to dump the existing DB.
1. Install and have the external postgres ready, as described at the beginning of the previous section.
2. Take point-in-time snapshots of the Supervisor so you can revert if you hit any issue.
3. Stop crond on the Supervisor, and wait for phwatchdog to stop.
4. Stop Apache on the Supervisor and all Workers so that Collectors start buffering events (see the command sketch
after this list).
5. Shut down the Worker nodes while you move CMDB out.
6. Follow the instructions from "Freshly Installed Supervisor" to complete the steps.
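The same service commands used elsewhere in this guide apply here; a minimal sketch of steps 3 and 4 is:

# On the Supervisor: stop cron so phwatchdog winds down
service crond stop

# On the Supervisor and all Workers: stop Apache so Collectors start buffering events
service httpd stop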
Related articles
FortiSIEM Windows Agent and Agent Manager Install
FortiSIEM can discover and collect performance metrics and logs from Windows servers in an agentless fashion
via WMI. However, agents are needed when you want to collect richer data, such as file integrity monitoring, or to
collect from a large number of servers.
This section describes how to set up FortiSIEM Windows Agent and Agent Manager as part of the FortiSIEM
infrastructure.
Licensing
When you purchase the Windows Agent Manager, you also purchase a set number of licenses that can be applied
to the Windows devices you are monitoring. After you have set up and configured Windows Agent Manager, you
can see the number of both Basic and Advanced licenses that are available and in use in your deployment by
logging into your Supervisor node and going to Admin > License Management, where you will see an entry for
Basic Windows Licenses Allowed/Used and Advanced Windows Licenses Allowed/Used. You can see
how these licenses have been applied by going to Admin > Windows Agent Health. When you are logged into
the Windows Agent Manager you can also see the number of available and assigned licenses on the Assign
Licenses to Users page.
There are two types of licenses that you can associate with your Windows Agent.
None: An agent has been installed on the device, but no license is associated with it. This device will not be
monitored until a license is applied to it.
Basic: The agent is licensed to monitor logs on the device.
Advanced: The agent is licensed to monitor all activity on the device, including logs, installed software changes,
and file/folder changes.
When applying licenses to agents, keep in mind that Advanced includes Basic, so if you have purchased a
number of Advanced licenses, you could use all those licenses for the Basic purpose of monitoring logs. For
example, if you have purchased a total of 10 licenses, five of which are Advanced and five of which are Basic,
you could apply all 10 licenses to your devices as Basic.
Windows Agents
RAM: 1 GB for XP; 2+ GB for Windows Vista and above / Windows Server
If you are using a firewall, make sure to add ports 80 and 443 as exceptions to the inbound firewall rules.
The primary role of the Windows Agent Manager is to configure Windows Agents with the right security monitoring
policies and to monitor Windows Agent health.
Regarding Windows Agent log forwarding, agents can be configured in two ways:
a. Send logs to an available Collector from a list of Collectors (preferred because of high availability).
In this scenario, each Windows Agent Manager can handle up to 10,000 agents since it is only configuring the
agents.
b. Send logs to the Windows Agent Manager, which then aggregates the logs from each Agent and sends them to a
Collector or Supervisor.
In this scenario, each Windows Agent Manager can handle up to 1,000 agents at an aggregate of 7.5k events/sec.
Desktop OS: Windows 7/8 (performance issues may occur due to limitations of the desktop OS)
Supported versions
Windows Agent
- Windows 7
- Windows 8
- Windows XP SP3 or above
- Windows Server 2003 (Use Windows Agent 2.0, since 2003 does not support TLS 1.2.)
- Windows Server 2008
- Windows Server 2008 R2
- Windows Server 2012
- Windows Server 2012 R2
- Windows Server 2016
- Windows Server 2016 R2
Prerequisites
1. Make sure that the ports needed for communication between Windows Agent and Agent Manager are open and
the two systems can communicate.
2. For versions 1.1 and higher, Agent and Agent Manager communicate via HTTPS. For this reason, there is a
special prerequisite: get your Common Name / Subject Name from IIS.
1. Log on to the Windows Agent Manager.
2. Open IIS by going to Run, typing inetmgr, and pressing Enter.
3. Go to Default Web Site in the left pane.
4. Right-click Default Web Site and select Edit Bindings.
5. In the Site Bindings dialog, check whether there is an https entry in the Type column.
6. If https is available, then:
a. Select the row corresponding to https and click Edit.
b. In the Edit Site Binding dialog, under the SSL certificate section, click the View... button.
c. In the Certificate dialog, under the General tab, note the value of Issued to. This is your Common
Name / Subject Name.
7. If https is not available, then you need to bind the default web site with https.
1. Import a New certificate. This can be done in one of two ways
a. Either create a Self Signed Certificate as follows
i. Open IIS by going to Run, typing inetmgr and pressing enter
ii. In the left pane, select computer name
iii. In the right pane, double click on Server Certificates
iv. In the Server Certificate section, click on Create Self-Signed Certificate... from
the right pane
v. In Create Self-Signed Certificate dialog, specify a friendly name for the certificate
and click OK
vi. You will see your new certificate in the Server Certificates list
b. Or, Import a third party certificate from a certification authority.
a. Buy the certificate (.pfx or .cer file)
b. Install the certificate file in your server
c. Import the certificate in IIS
d. Go to IIS. Select Computer name and in the right pane select Server Certificates
e. If certificate is PFX File
i. In Server Certificates section, click on Import... in right pane
ii. In the Import Certificate dialog, browse to pfx file and put it in Certificate
file(.pfx) box
iii. Give your pfx password and click Ok. Your certificate gets imported to IIS
f. If certificate is CER File
i. In Server Certificates section, click on Complete Certificate Request...
in right pane
ii. In the Complete Certificate Request dialog, browse to CER file and put it
Procedure
1. On the machine where you want to install the manager, launch either the FortiSIEMServer-x86.MSI (for 32-bit
Windows) or FortiSIEMServer-x64.MSI (for 64-bit Windows) installer.
2. In the Welcome dialog, click Next.
3. In the EULA dialog, agree to the Terms and Conditions, and then click Next.
4. Specify the destination path for the installation, and then click Next.
By default the Windows Agent Manager will be installed at C:\Program Files\AccelOps\Server.
5. Specify the destination path to install the client agent installation files, and then click Next.
By default these files will be installed at C:\AccelOps\Agent. The default location will be on the drive that has
the most free storage space. This path will automatically become a shared location that you will access from the
agent devices to install the agent software on them.
6. In the Database Settings dialog,
a. Select the database instance where metrics and logs from the Windows devices will be stored.
b. Select whether you want to use Windows authentication; otherwise, provide the login credentials that are
needed to access the SQL Server instance where the database is located.
c. Enter the path where FortiSIEM Agent Manager database will be stored. By default it is
C:\AccelOps\Data
7. Provide the path to the FortiSIEM Supervisor, Worker, or Collector that will receive information about your
Windows devices. Click Next.
8. In the Administrator Settings dialog, enter username and password credentials that you will use to log in to the
Windows Agent Manager.
Both your username and password should be at least six characters long.
9. (New in Release 1.1 for HTTPS communication between Agent and Agent Manager) Enter the common name/
subject name of the SSL certificate created in pre-requisite step 2
10. Click Install.
11. When the installation completes, click Finish.
12. You can now exit the installation process, or click Close Set Up and Run FortiSIEM to log into your FortiSIEM
virtual appliance.
Prerequisites
1. Windows Agent and Agent Manager need to be able to communicate - agents need to access a path on the Agent
Manager machine to install the agent software.
2. Starting with Version 1.1, there is a special requirement if you want user information appended to file/directory
change events. Typically file/directory change events do not have information about the user who made the
change. To get this information, you have to perform the following steps. Without these steps, file monitoring
events will not have user information.
1. In Workgroup Environment:
a. Go to Control Panel
b. Open Administrative Tools
c. Double click on Local Security Policy
d. Expand Advanced Audit Policy configuration in the left-pane
e. Under Advanced Audit Policy, expand System Audit Policies – Local Group Policy
Object
f. Under System Audit Policies – Local Group Policy Object, select Object Access
g. Double-click on Audit File System in the right-pane
h. Audit File System Properties dialog opens. In this dialog, under Policy tab, select
Configure the following audit events. Under this select both Success and Failure check
boxes
i. Click Apply and then OK
2. In Active Directory Domain Environment: FortiSIEM Administrator can use Group Policies to propagate
the above settings to the agent computers as follows:
a. Go to Control Panel
b. Open Administrative Tools
c. Click on Group Policy Management
d. In Group Policy Management dialog, expand Forest:<domain_name> in the left-pane
e. Under Forest:<domain_name>, expand Domains
f. Under Domains, expand <domain_name>
g. Right-click on <domain_name> and click on 'Create a GPO in this domain, and link it
here...'
h. New GPO dialog appears. Enter a new name (e.g., MyGPO) in Name text box. Press OK.
i. MyGPO appears under the expanded <domain_name> in left-pane. Click on MyGPO and click
on the Scope tab in the right-pane.
j. Under Scope tab, click on Add in Security filtering section
k. Select User, Computer or Group dialog opens. In this dialog click the Object Types button.
l. Object Types dialog appears, uncheck all options and check the Computers option. Click
OK.
m. Back in the Select User, Computer or Group dialog, enter the FortiSIEM Windows Agent
computer names under Enter the object name to select area. You can choose computer
names by clicking the Advanced button and then, in the Advanced dialog, clicking the Find Now
button.
n. Once the required computer name is specified, click OK and you will find the selected
computer name under Security Filtering.
o. Repeat steps (k) through (n) for all the required computers running FortiSIEM Windows Agent.
p. Right click on MyGPO in the left-pane and click on Edit.
q. Group Policy Management Editor opens. In this dialog, expand Policies under Computer
Configuration.
r. Go to Policies > Windows Settings > Security Settings > Advanced Audit Policy
Configuration > Audit Policies > Object Access > Audit File System.
s. In the Audit File System Properties dialog, under Policy tab select Configure the
following audit events. Under this, select both Success and Failure check boxes.
1. Log into the machine where you want to install the agent software as an administrator.
2. Navigate to the shared location on the Windows Agent Manager machine where you installed the agent
installation files in Step 5 of Installing FortiSIEM Windows Agent Manager.
The default path is C:\AccelOps\Agent.
3. In the shared location, double-click on the appropriate .MSI file to begin installation.
FortiSIEMAgent-x64.MSI is for the 64-bit Agent, while FortiSIEMAgent-x86.MSI is for the 32-bit Agent
4. When the installation completes, go to Start > Administrative Tools > Services and make sure that the
FortiSIEM Agent Service has a status of Started.
Multiple agents can be installed via GPO if all the computers are on the same domain.
1. Log on to Domain Controller
2. Create a separate Organizational Unit to contain all computers where the FortiSIEM Windows Agent has to be
installed.
a. Go to Start > Administrative Tools > Active Directory Users and Computers
b. Right click on the root Domain on the left side tree. Click New > Organizational Unit
c. Provide a Name for the newly created Organizational Unit and click OK.
d. Verify that the Organizational Unit has been created.
3. Assign computers to the new Organizational Unit.
a. Click Computers under the domain. The list of computers will be displayed on the right pane
b. Select a computer on the right pane. Right click and select Move and then select the new Organizational
Unit.
c. Click OK.
4. Create a new GPO
a. Go to Start > Administrative Tools > Group Policy Management
b. Under Domains, select the newly created Organizational Unit.
c. Right-click on the Organizational Unit and select Create and Link a GPO here...
d. Enter a Name for the new GPO and click OK.
e. Verify that the new GPO is created under the chosen Organizational Unit
f. Right click on the new GPO and click Edit. Left tree now shows Computer Configuration and User
Configuration
g. Under Computer Configuration, expand Software Settings.
h. Click New > Package. Then go to the AOWinAgt folder on the network share. Select the Agent MSI you
need, 32 bit or 64 bit, and click OK.
i. The selected MSI shows in the right pane under Group Policy Editor window
j. For Deploy Software, select Assigned and click OK.
5. Update the GPO on Domain Controller
a. Open a command prompt
b. Run gpupdate /force
6. Update GPO on Agents
a. Log on to the computer
b. Open a command prompt
c. Run gpupdate
d. Restart the computer
e. You will see FortiSIEM Windows Agent installed after restart
If you have a mix of 32-bit and 64-bit computers, you need to have two separate Organizational Units, one for 32-bit
and one for 64-bit, and then assign the corresponding MSI to each.
Upgrading FortiSIEM
- Upgrade Notes
- Upgrade Process
- Migrating from 3.7.x versions to 4.2.1
- Migrating the SVN Repository to a Separate Partition on a Local Disk
- Special pre-upgrade instruction for 4.3.3
- Special pre-upgrade instruction for 4.6.1
- Enabling TLS 1.2 Patch On Old Collectors
- Upgrading to 4.6.3 for TLS 1.2
- Setting Up the Image Server for Collector Upgrades
- Upgrading a FortiSIEM Single Node Deployment
- Upgrading a FortiSIEM Cluster Deployment
- Upgrading FortiSIEM Windows Agent and Agent Manager
- Automatic OS Upgrades during Reboot
Upgrade notes
During the upgrade process, FortiSIEM checks the existing phoenix_config.txt file on the system and
compares it with the system provided phoenix_config.txt file for that version and asks users whether to
keep the existing or new settings.
The following table states the correct user responses for standard upgrades for phoenix_config.txt
changes.
- The first four fields in the table are IP addresses, and the user should retain the old entries.
- For all other fields in the table, the user should choose the new entries introduced by FortiSIEM.
- If you modified any phoenix_config.txt parameter to work efficiently in your environment, you need to
select 1 (Old) to keep your change from being overwritten.
rest_cache_process_list (phParser, phDiscover, phPerfMonitor, phAgentManager, phDataManager, phQueryWorker,
phQueryMaster, phRuleWorker, phRuleMaster, phReportWorker, phReportMaster, phIpIdentityWorker,
phIpIdentityMaster, phReportLoader)
Applies to: Super, Worker
Recommended response: Select 2 (New)
Recovery
If the phoenix_config.txt file is damaged after the upgrade for any reason, repeat the merge process as
follows:
1. Recover your phoenix_config.txt by copying it from the backup location
/tmp/backup/phoenix_config.txt to /opt/phoenix/config/phoenix_config.txt
2. Change phoenix_config.txt permissions by running:
a. chmod 644 /opt/phoenix/config/phoenix_config.txt
b. chown admin.admin /opt/phoenix/config/phoenix_config.txt
3. Repeat the merge process by running the command
/opt/phoenix/deployment/jumpbox/phmergeconfig and follow the recommended upgrade response
from FortiSIEM Configuration file merge.
4. Reboot the system at Linux level (Super or Worker).
Upgrade process
UPGRADE REQUIREMENT
Starting with 4.5, the Supervisor requires 24 GB RAM, because the Supervisor node caches device monitoring
status for faster performance.
Starting with 4.6.1, the Linux swap space is increased to match the physical memory size, as recommended by Linux
best practices for optimal system performance. This size is automatically increased during the 4.6.1 upgrade, which
may cause the upgrade to take a little longer than normal.
If you enabled SNMP on FortiSIEM nodes (Collectors, Workers, Supervisors), it is recommended that you keep any
special configuration in the snmpd.local.conf file. Do not modify the snmpd.conf file, since a FortiSIEM
upgrade will wipe out the changes in snmpd.conf. To prevent changes from being lost, copy them to the
snmpd.local.conf file and then upgrade.
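For example, assuming the standard net-snmp file layout under /etc/snmp, the custom settings could be preserved along these lines before starting the upgrade; review the copy so that only your own additions remain in snmpd.local.conf.

# Preserve custom SNMP settings before upgrading (paths assume the standard net-snmp layout)
cp /etc/snmp/snmpd.conf /etc/snmp/snmpd.local.conf

# Trim the copy down to your local additions, then restart the daemon
vi /etc/snmp/snmpd.local.conf
service snmpd restart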
The 4.2 version of FortiSIEM uses a new version of CentOS, and so upgrading to version 4.2 from previous
versions involves a migration from those versions to 4.2.x, rather than a typical upgrade. This process involves
two steps:
1. You have to migrate the 3.7.6 CMDB to a 4.2.1 CMDB on a 3.7.6 based system.
2. The migrated 4.2.1 CMDB has to be imported into a 4.2.1 system.
Topics in this section cover the migration process for supported hypervisors, both in-place and using
staging systems. Using a staging system requires more hardware, but minimizes downtime and CMDB migration
risk compared to the in-place method. If you decide to use the in-place method, we strongly recommend that you
take snapshots for recovery.
Internet access is needed for migration to succeed. A third party library needs to access the schema website.
<faces-config xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:cdk="http://jboss.org/schema/richfaces/cdk/extensions"
version="2.0" metadata-complete="false"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
http://java.sun.com/xml/ns/javaee/web-facesconfig_2_0.xsd">
- Prerequisites
- Upgrading the 3.7.x CMDB to 4.2.1 CMDB
- Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance
- Assigning the 3.7.x Supervisor's IP Address to the 4.2.1 Supervisor
- Registering Workers to the Supervisor
Prerequisites
Install the 4.2.1 virtual appliance on the same host as the 3.7.x version with a local disk that is larger than the
original 3.7.x version. You will need the extra disk space for copying operations during the migration.
Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated
successfully.
- Overview
- Prerequisites
- Copy the 3.7.x CMDB to a 4.2.1 Virtual Appliance Using rsync
- Upgrading the 3.7.x CMDB to 4.2.1 CMDB
- Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance
- Assigning the 3.7.x Supervisor's IP Address to the 4.2.1 Supervisor
- Registering Workers to the Supervisor
Overview
This migration process is for a FortiSIEM deployment with a single virtual appliance and the CMDB data stored on
a local VMware disk, and where you intend to run the 4.2.1 version on a different physical machine than the 3.7.x
version.
Prerequisites
Installing rsync
Before you can copy CMDB, you need to have rsync installed on the 3.7.x virtual appliance where you will be
making the copy.
1. Log in to the 3.7.x Supervisor as root over SSH.
2. Copy CentOS-Base.repo to /etc/yum.repos.d.
cp /etc/yum.repos.d.orig/CentOS-Base.repo /etc/yum.repos.d
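With the CentOS base repository in place, rsync can then be installed with yum (a minimal sketch using the stock package name):

yum -y install rsync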
Procedure
1. Log in to the 4.2.1 virtual appliance as root.
2. Check the disk size in the remote system to make sure that there is enough space for the database to be copied
over.
3. Copy the directory /data from the 3.7.x virtual appliance to the 4.2.1 virtual appliance using the rsync tool.
rsync Command Syntax
Make sure that the trailing / is used in the final two arguments in the rsync command
rsync --progress -av root@<3.7.x_VA_ip_address>:/data/ /data/
4. After copying is complete, make sure that the size of the event database is identical to the 3.7.x system.
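One way to compare the two copies is to check the aggregate size of the event database directory on both appliances; the /data/eventdb path is the usual location, so substitute your own path if your layout differs.

# Run on both the 3.7.x and 4.2.1 appliances and compare the totals
du -sh /data/eventdb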
7. Run the archive script to create an archive version of the CMDB, and specify the directory where it should be
created.
./ao-db-migration-archiver.sh /tmp/376_archive/
8. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully
created in the destination directory.
9. Copy the opt-migration-*.tar file to /root.
This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.
10. Run the migration script on the 3.7.x CMDB archive you created in step 7.
The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the
migrated CMDB file will be kept.
/root/ao-db-migration.sh /tmp/376_archive/cmdb-migration-xyz /tmp/376_migration
4. When the migration script completes the virtual appliance will reboot.
Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated
successfully.
- Overview
- Prerequisites
- Upgrading the 3.7.x CMDB to 4.2.1 CMDB
- Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance
- Mounting the NFS Storage on Supervisors and Workers
- Assigning the 3.7.x Supervisor's IP Address to the 4.2.1 Supervisor
- Registering Workers to the Supervisor
Overview
In this migration method, the production FortiSIEM systems are upgraded in-place, meaning that the production
3.7.x virtual appliance is stopped and used for migrating the CMDB to the 4.2.1 virtual appliance. The advantage
of this approach is that no extra hardware is needed, while the disadvantage is extended downtime during
the CMDB archive and upgrade process. During this downtime events are not lost but are buffered at the
collector. However, incidents are not triggered while events are buffered. Prior to the CMDB upgrade process, you
might want to take a snapshot of CMDB to use as a backup if needed.
Prerequisites
7. Run the archive script to create an archive version of the CMDB, and specify the directory where it should be
created.
./ao-db-migration-archiver.sh /tmp/376_archive/
8. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully
created in the destination directory.
4. When the migration script completes the virtual appliance will reboot.
Follow this process for each Supervisor and Worker in your deployment.
1. Log in to your virtual appliance as root over SSH.
2. Run the mount command to check the mount location.
3. Stop all FortiSIEM processes.
service crond stop
phtools --stop all
killall -9 phMonitor
su - admin
/opt/glassfish/bin/asadmin stop-domain
exit
service postgresql-9.1 stop
service httpd stop
7. Change to the 3.7.x mount path location in the /etc/fstab file on the Supervisor or Workers.
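As an illustration, the resulting NFS entry in /etc/fstab on each node would point at the same share as before, along these lines; the server IP address and export path are placeholders.

# /etc/fstab -- mount the existing NFS event store on /data
<NFS_Server_IP_Address>:/FortiSIEM   /data   nfs   defaults   0 0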
Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated
successfully.
- Overview
- Prerequisites
- Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance
- Mounting the NFS Storage on Supervisors and Workers
- Assigning the 3.7.x Supervisor's IP Address to the 4.2.1 Supervisor
- Registering Workers to the Supervisor
Overview
In this migration method, the production 3.7.x FortiSIEM systems are left untouched. A separate mirror image
3.7.x system is first created, and then upgraded to 4.2.1. The NFS storage is mounted to the upgraded 4.2.1
system, and the collectors are redirected to the upgraded 4.2.1 system. The upgraded 4.2.1 system now
becomes the production system, while the old 3.7.6 system can be decommissioned. The collectors can then be
upgraded one by one. The advantages of this method are minimal downtime in which incidents aren't triggered,
and no upgrade risk. If for some reason the upgrade fails, it can be aborted without any risk to your production
CMDB data. The disadvantages of this method are the requirement for hardware to set up the 3.7.x mirror
system, and the longer time to complete the upgrade because of the time needed to set up the mirror system.
Prerequisites
7. Check that the archived files were successfully created in the destination directory.
You should see two files: cmdb-migration-*.tar, which will be used to migrate the 3.7.x CMDB, and opt-
migration-*.tar, which contains files stored outside of CMDB that will be needed to restore the upgraded
CMDB to your new 4.2.1 virtual appliance.
8. Copy the cmdb-migration-*.tar file to the 3.7.x staging Supervisor, using the same directory name you
used in Step 6.
9. Copy the opt-migration-*.tar file to the /root directory of the 4.2.1 Supervisor.
4. When the migration script completes the virtual appliance will reboot.
Follow this process for each Supervisor and Worker in your deployment.
1. Log in to your virtual appliance as root over SSH.
2. Run the mount command to check the mount location.
3. Stop all FortiSIEM processes.
service crond stop
phtools --stop all
killall -9 phMonitor
su - admin
/opt/glassfish/bin/asadmin stop-domain
exit
service postgresql-9.1 stop
service httpd stop
7. In the /etc/fstab file on the Supervisor or Workers, change the mount path to the 3.7.x mount path location.
Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated
successfully.
Very broadly, the 3.7.6 CMDB must first be migrated to a 4.2.1 CMDB on a 3.7.6-based system, and then the
migrated 4.2.1 CMDB must be imported into a 4.2.1 system.
If the in-place method is used, taking a snapshot of the CMDB first is highly recommended for recovery purposes.
Note: Internet access is needed for migration to succeed. A third-party library needs to access the
schema website referenced in the faces-config fragment below.
<faces-config xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:cdk="http://jboss.org/schema/richfaces/cdk/extensions"
version="2.0" metadata-complete="false"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
http://java.sun.com/xml/ns/javaee/web-facesconfig_2_0.xsd">
Overview
This migration process is for a FortiSIEM deployment with a single virtual appliance and the CMDB data stored on
a local AWS volume, where you intend to run the 4.2.x version on the same physical machine as the 3.7.x
version, but as a new virtual machine.
Prerequisites
7. Run the archive script to create an archive version of the CMDB, and specify the directory where it should be
created.
./ao-db-migration-archiver.sh /tmp/376_archive/
8. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully
created in the destination directory.
9. Copy the opt-migration-*.tar file to /root.
This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.
10. Run the migration script on the 3.7.x CMDB archive you created in step 7.
The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the
migrated CMDB file will be kept.
Log in to the AWS EC2 dashboard and stop your 3.7.x virtual appliance.
4. When the migration script completes, the virtual appliance will reboot.
1. Log in to the AWS EC2 dashboard and power off your 4.2.1 virtual appliance.
2. In the Volumes table, find your production 3.7.x volume and tag it so you can identify it later, while also making a
note of its ID.
For instance, 3.7.x_data_volume.
3. Detach the volume.
4. In the Volumes tab, find your 4.2.1 volume, and Detach it.
5. Attach your 3.7.x volume to your 4.2.1 virtual appliance.
4.2.1 Volume Device Name
Make sure the Device name for your 4.2.1 volume is /dev/xvdf.
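If you prefer the command line, the same detach/attach sequence can be done with the AWS CLI; a minimal sketch in which the volume and instance IDs are placeholders for the IDs you noted in the steps above:
# detach the 3.7.x data volume and the 4.2.1 volume (IDs are placeholders)
aws ec2 detach-volume --volume-id vol-37xdata
aws ec2 detach-volume --volume-id vol-421data
# attach the 3.7.x volume to the 4.2.1 instance as /dev/xvdf
aws ec2 attach-volume --volume-id vol-37xdata --instance-id i-421super --device /dev/xvdf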
Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated
successfully.
l Overview
l Prerequisites
l Upgrading the 3.7.x CMDB to 4.2.1 CMDB
l Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance
l Mounting the NFS Storage on Supervisors and Workers
l Change the SVN URL and Server IP Address
l Change the IP Addresses Associated with Your Virtual Appliances
l Registering Workers to the Supervisor
l Setting the 4.2.1 SVN Password to the 3.7.x Password
Overview
In this migration method, the production FortiSIEM systems are upgraded in-place, meaning that the production
3.7.x virtual appliance is stopped and used for migrating the CMDB to the 4.2.1 virtual appliance. The advantage
of this approach is that no extra hardware is needed, while the disadvantage is extended downtime during
the CMDB archive and upgrade process. During this downtime events are not lost but are buffered at the
collector. However, incidents are not triggered while events are buffered. Prior to the CMDB upgrade process, you
might want to take a snapshot of the CMDB to use as a backup if needed.
Prerequisites
7. Run the archive script to create an archive version of the CMDB, and specify the directory where it should be
created.
./ao-db-migration-archiver.sh /tmp/376_archive/
8. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully
created in the destination directory.
9. Copy the opt-migration-*.tar file to /root.
This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.
10. Run the migration script on the 3.7.x CMDB archive you created in step 7.
The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the
migrated CMDB file will be kept.
/root/ao-db-migration.sh /tmp/376_archive/cmdb-migration-xyz /tmp/376_
migration
4. When the migration script completes, the virtual appliance will reboot.
Follow this process for each Supervisor and Worker in your deployment.
1. Log in to your virtual appliance as root over SSH.
2. Run the mount command to check the mount location.
3. Stop all FortiSIEM processes.
service crond stop
phtools --stop all
killall -9 phMonitor
su - admin
/opt/glassfish/bin/asadmin stop-domain
exit
service postgresql-9.1 stop
service httpd stop
7. In the /etc/fstab file on the Supervisor or Workers, change the mount path to the 3.7.x mount path location.
4. In Elastic IPs, select the IP address associated with your 3.7.x virtual appliance.
5. Click Disassociate Address, and then Yes, Disassociate.
6. In Elastic IPs, select the production public IP of your 3.7.x virtual appliance, and click Associate Address to
associate it with your 4.2.1 virtual appliance.
The virtual appliance will reboot automatically after the IP address is changed.
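The same re-association can also be scripted with the AWS CLI; a minimal sketch in which the association, allocation, and instance IDs are placeholders:
# move the Elastic IP from the 3.7.x instance to the 4.2.1 instance (IDs are placeholders)
aws ec2 disassociate-address --association-id eipassoc-37xsuper
aws ec2 associate-address --allocation-id eipalloc-prod --instance-id i-421super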
Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated
successfully.
Overview
In this migration method, the production 3.7.x FortiSIEM systems are left untouched. A separate mirror image
3.7.x system is first created, and then upgraded to 4.2.1. The NFS storage is mounted to the upgraded 4.2.1
system, and the collectors are redirected to the upgraded 4.2.1 system. The upgraded 4.2.1 system now
becomes the production system, while the old 3.7.6 system can be decommissioned. The collectors can then be
upgraded one by one. The advantages of this method are minimal downtime during which incidents aren't triggered,
and no upgrade risk. If for some reason the upgrade fails, it can be aborted without any risk to your production
CMDB data. The disadvantages of this method are the extra hardware required to set up the mirror 3.7.x
system, and the longer time needed to complete the upgrade because of the time it takes to set up the mirror system.
Prerequisites
7. Check that the archived files were successfully created in the destination directory.
You should see two files, cmdb-migration-*.tar, which will be used to migrate the 3.7.x CMDB, and opt-
migration-*.tar, which contains files stored outside of the CMDB that will be needed to restore the upgraded
CMDB to your new 4.2.1 virtual appliance.
8. Copy the cmdb-migration-*.tar file to the 3.7.x staging Supervisor, using the same directory name you
used in Step 6.
9. Copy the opt-migration-*.tar file to the /root directory of the 4.2.1 Supervisor.
4. When the migration script completes, the virtual appliance will reboot.
Follow this process for each Supervisor and Worker in your deployment.
1. Log in to your virtual appliance as root over SSH.
2. Run the mount command to check the mount location.
3. Stop all FortiSIEM processes.
service crond stop
phtools --stop all
killall -9 phMonitor
su - admin
/opt/glassfish/bin/asadmin stop-domain
exit
service postgresql-9.1 stop
service httpd stop
7. In the /etc/fstab file on the Supervisor or Workers, change the mount path to the 3.7.x mount path location.
Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated
successfully.
Very broadly, the 3.7.6 CMDB must first be migrated to a 4.2.1 CMDB on a 3.7.6-based system, and then the
migrated 4.2.1 CMDB must be imported into a 4.2.1 system.
l The staging approach takes more hardware but minimizes downtime and CMDB migration risk compared to the in-place approach
l The rsync method takes longer to finish because the event database has to be copied
If the in-place method is used, taking a snapshot of the CMDB first is highly recommended for recovery purposes.
Note: Internet access is needed for migration to succeed. A third-party library needs to access the
schema website referenced in the faces-config fragment below.
<faces-config xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:cdk="http://jboss.org/schema/richfaces/cdk/extensions"
version="2.0" metadata-complete="false"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
http://java.sun.com/xml/ns/javaee/web-facesconfig_2_0.xsd">
l Overview
l Prerequisites
l Upgrading the 3.7.x CMDB to 4.2.1 CMDB
l Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance
l Assigning the 3.7.x Supervisor's IP Address to the 4.2.1 Supervisor
l Registering Workers to the Supervisor
Overview
This migration process is for a FortiSIEM deployment with a single virtual appliance and the CMDB data stored on
a local VMware disk, where you intend to run the 4.2.x version on the same physical machine as the 3.7.x
version, but as a new virtual machine.
Prerequisites
Install the 4.2.1 virtual appliance on the same host as the 3.7.x version with a local disk that is larger than the
original 3.7.x version. You will need the extra disk space for copying operations during the migration.
7. Run the archive script to create an archive version of the CMDB, and specify the directory where it should be
created.
./ao-db-migration-archiver.sh /tmp/376_archive/
8. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully
created in the destination directory.
4. When the migration script completes, the virtual appliance will reboot.
Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated
successfully.
l Overview
l Prerequisites
l Copy the 3.7.x CMDB to a 4.2.1 Virtual Appliance Using rsync
l Upgrading the 3.7.x CMDB to 4.2.1 CMDB
l Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance
l Assigning the 3.7.x Supervisor's IP Address to the 4.2.1 Supervisor
l Registering Workers to the Supervisor
Overview
This migration process is for a FortiSIEM deployment with a single virtual appliance and the CMDB data stored on
a local VMware disk, where you intend to run the 4.2.1 version on a different physical machine than the 3.7.x
version. This process requires these steps:
Prerequisites
Installing rsync
Before you can copy CMDB, you need to have rsync installed on the 3.7.x virtual appliance where you will be
making the copy.
1. Log in to the 3.7.x Supervisor as root over SSH.
2. Copy CentOS-Base.repo to /etc/yum.repos.d .
cp /etc/yum.repos.d.orig/CentOS-Base.repo /etc/yum.repos.d
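The repo file only makes the CentOS mirrors available; a minimal sketch of the install step itself, assuming the appliance has internet access to those mirrors:
# install rsync from the CentOS mirrors
yum install -y rsync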
Procedure
1. Log in to the 4.2.1 virtual appliance as root.
2. Check the disk size in the remote system to make sure that there is enough space for the database to be copied
over.
3. Copy the /data directory from the 3.7.x virtual appliance to the 4.2.1 virtual appliance using the rsync tool.
Make sure that the trailing / is used on the final two arguments of the rsync command.
rsync --progress -av root@<3.7.x_VA_ip_address>:/data/ /data/
4. After copying is complete, make sure that the size of the event database is identical to the 3.7.x system.
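One way to compare the sizes is to run the same du command on both appliances and check that the totals match (a minimal sketch; it assumes the event database lives under /data on both systems):
# run on both the 3.7.x and 4.2.1 appliances and compare the totals
du -sh /data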
7. Run the archive script to create an archive version of the CMDB, and specify the directory where it should be
created.
./ao-db-migration-archiver.sh /tmp/376_archive/
8. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully
created in the destination directory.
9. Copy the opt-migration-*.tar file to /root.
This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.
10. Run the migration script on the 3.7.x CMDB archive you created in step 7.
The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the
migrated CMDB file will be kept.
/root/ao-db-migration.sh /tmp/376_archive/cmdb-migration-xyz /tmp/376_
migration
4. When the migration script completes, the virtual appliance will reboot.
Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated
successfully.
l Overview
l Prerequisites
l Upgrading the 3.7.x CMDB to 4.2.1 CMDB
l Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance
l Mounting the NFS Storage on Supervisors and Workers
l Registering Workers to the Supervisor
Overview
In this migration method, the production FortiSIEM systems are upgraded in-place, meaning that the production
3.7.x virtual appliance is stopped and used for migrating the CMDB to the 4.2.1 virtual appliance. The advantage
of this approach is that no extra hardware is needed, while the disadvantage is extended downtime during
the CMDB archive and upgrade process. During this downtime events are not lost but are buffered at the
collector. However, incidents are not triggered while events are buffered. Prior to the CMDB upgrade process, you
might want to take a snapshot of the CMDB to use as a backup if needed.
Prerequisites
7. Run the archive script to create an archive version of the CMDB, and specify the directory where it should be
created.
./ao-db-migration-archiver.sh /tmp/376_archive/
8. Check that the archive files phoenixdb_migration_* and opt-migration-*.tar were successfully
created in the destination directory.
9. Copy the opt-migration-*.tar file to /root.
This contains various data files outside of CMDB that will be needed to restore the upgraded CMDB.
10. Run the migration script on the 3.7.x CMDB archive you created in step 7.
The first argument is the location of the archived 3.7.x CMDB, and the second argument is the location where the
migrated CMDB file will be kept.
/root/ao-db-migration.sh /tmp/376_archive/cmdb-migration-xyz /tmp/376_
migration
4. When the migration script completes, the virtual appliance will reboot.
Follow this process for each Supervisor and Worker in your deployment.
1. Log in to your virtual appliance as root over SSH.
2. Run the mount command to check the mount location.
3. Stop all FortiSIEM processes.
service crond stop
phtools --stop all
killall -9 phMonitor
su - admin
/opt/glassfish/bin/asadmin stop-domain
exit
service postgresql-9.1 stop
service httpd stop
7. In the /etc/fstab file on the Supervisor or Workers, change the mount path to the 3.7.x mount path location.
8. Reboot the Supervisor or Worker.
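For step 7, the change is typically a one-line edit to the /data entry in /etc/fstab, pointing it at the storage location the 3.7.x system was using; a hypothetical sketch with placeholder NFS server paths:
# /etc/fstab entry for /data, before and after the change (server paths are placeholders)
# before: <new-nfs-server>:/FortiSIEM  /data  nfs  defaults  0 0
# after:  <3.7.x-nfs-server>:/FortiSIEM  /data  nfs  defaults  0 0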
Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated
successfully.
l Overview
l Prerequisites
l Restoring the Upgraded CMDB in a 4.2.1 Virtual Appliance
l Mounting the NFS Storage on Supervisors and Workers
l Registering Workers to the Supervisor
l Setting the 4.2.1 SVN Password to the 3.7.x Password
Overview
In this migration method, the production 3.7.x FortiSIEM systems are left untouched. A separate mirror image
3.7.x system is first created, and then upgraded to 4.2.1. The NFS storage is mounted to the upgraded 4.2.1
system, and the collectors are redirected to the upgraded 4.2.1 system. The upgraded 4.2.1 system now
becomes the production system, while the old 3.7.6 system can be decommissioned. The collectors can then be
upgraded one by one. The advantages of this method are minimal downtime during which incidents aren't triggered,
and no upgrade risk. If for some reason the upgrade fails, it can be aborted without any risk to your production
CMDB data. The disadvantages of this method are the extra hardware required to set up the mirror 3.7.x
system, and the longer time needed to complete the upgrade because of the time it takes to set up the mirror system.
Prerequisites
7. Check that the archived files were successfully created in the destination directory.
You should see two files, cmdb-migration-*.tar, which will be used to migrate the 3.7.x CMDB, and opt-
migration-*.tar, which contains files stored outside of the CMDB that will be needed to restore the upgraded
CMDB to your new 4.2.1 virtual appliance.
8. Copy the cmdb-migration-*.tar file to the 3.7.x staging Supervisor, using the same directory name you
used in Step 6.
9. Copy the opt-migration-*.tar file to the /root directory of the 4.2.1 Supervisor.
4. When the migration script completes, the virtual appliance will reboot.
Follow this process for each Supervisor and Worker in your deployment.
1. Log in to your virtual appliance as root over SSH.
2. Run the mount command to check the mount location.
3. Stop all FortiSIEM processes.
service crond stop
phtools --stop all
killall -9 phMonitor
su - admin
/opt/glassfish/bin/asadmin stop-domain
exit
service postgresql-9.1 stop
service httpd stop
7. In the /etc/fstab file on the Supervisor or Workers, change the mount path to the 3.7.x mount path location.
Migration is now complete. Make sure all devices, user-created rules, reports, and dashboards were migrated
successfully.
Migrating Collectors
1. After migrating all your Supervisors and Workers to 4.2.1, install the 4.2.1 Collectors.
2. SSH to the 3.7.x Collector as root.
3. Change the directory to /opt/phoenix/cache/parser/events.
4. Copy the files from this directory to the same directory on the 4.2.1 system.
5. Change the directory to /opt/phoenix/cache/parser/upload/svn.
6. Copy the files from this directory to the same directory on the 4.2.1 system.
7. Power off the 3.7.x Collector.
8. SSH to the 4.2.1 Collector and change its IP address to the same as the 3.7.x Collector by running the
vami_config_net script.
/opt/vmware/share/vami/vami_config_net
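The copies in steps 4 and 6 above can be done over SSH; a minimal sketch run from the 3.7.x Collector, assuming root SSH access to the 4.2.1 Collector (its IP address is a placeholder):
# copy buffered events and the SVN upload cache to the 4.2.1 Collector
scp -r /opt/phoenix/cache/parser/events/* root@<4.2.1_collector_ip>:/opt/phoenix/cache/parser/events/
scp -r /opt/phoenix/cache/parser/upload/svn/* root@<4.2.1_collector_ip>:/opt/phoenix/cache/parser/upload/svn/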
If you are using NFS storage, your SVN repository will be migrated to a local disk to improve performance and
reliability. If you are using local storage only, the SVN repository will be moved out of the /data partition and
into an /svn partition.
2. Run df -h to confirm that an /svn partition does NOT already exist. The migration script is going to create that partition.
(Screenshot: an expected typical partition structure.)
10. When the script executes, you will be asked to confirm that you have 60GB of local storage available for the
migration. When the script completes, you will see the message Upgrade Completed. SVN disk
migration done.
11. Run df -h to confirm that the /svn partition was created.
Older FortiSIEM Collectors (4.5.2 or earlier, running JDK 1.7) do not have TLS 1.2 enabled. To enable them to
communicate with FortiSIEM 4.6.3, follow these steps:
1. SSH to the Collector and edit /opt/phoenix/bin/runJavaAgent.sh.
2. Enable the TLS v1.2 option.
exec ${JAVA_HOME}/bin/java $initialJobXML \
-Djava.library.path=/opt/phoenix/lib64 \
-Dhttps.protocols=SSLv3,TLSv1,TLSv1.1,TLSv1.2 \
-classpath ${MY_CLASS_PATH} -Xmx2048M com.ph.phoenix.agent.AgentMain "$@"
killall -9 phAgentManager
Enforcing TLS 1.2 requires that the following steps be followed in strict order for upgrade to succeed.
Additional steps for TLS 1.2 compatibility are marked in bold.
1. Remove /etc/yum.repos.d/accelops* and run "yum update" on Collectors, Worker(s), and Supervisor to get
all TLS 1.2 related libraries up to date. Follow this yum update order: Collectors → Worker(s) → Supervisor.
2. If your environment has a Collector and it is running FortiSIEM 4.5.2 or earlier (with JDK 1.7), then first patch the
Collector for TLS 1.2 compatibility (see here). This step is not required for Collectors running FortiSIEM 4.6.1 or
later.
3. Pre-upgrade step for upgrading the Supervisor: stop FortiSIEM processes on all Workers by
running "phtools --stop ALL". Collectors can be up and running. This is to avoid a build-up of report files.
4. Upgrade the Supervisor following the usual steps.
5. If your environment has Worker nodes, upgrade the Workers following the usual steps.
6. If your environment has FortiSIEM Windows Agents, then upgrade Windows Agent Manager from 1.1 to 2.0. Note
that there are special pre-upgrade steps to enable TLS 1.2 (see here).
7. If your environment has Collectors, upgrade the Collectors following the usual steps.
If you want to upgrade a multi-tenant deployment that includes Collectors, you must set up and then specify an
image server that will be used as a repository for the Collector upgrade files. You can use a standard HTTP server
for this purpose, but there is a preferred directory structure for the server. These instructions describe how to set up
that structure, and then add a reference to the image server in your Supervisor node.
6. Make sure a directory tree structure like this is created in the images directory before proceeding:
/images/collector/upgrade/CO-x.x.x.xxxx/
    FortiSIEM-collector-x.x.x.xxxx.rpm
    RPM-GPG-KEY
    repodata/
        filelists.xml.gz
        other.xml.gz
        primary.xml.gz
        repomd.xml
7. Create a link from the image directories to the webserver html pages.
/bin/ln -sf /images/collector/upgrade/latest /var/www/html/vms/collector/upgrade/latest
8. Test the image server locations by entering one of the following addresses into a browser:
l http://images.myserver.net/vms/collector/upgrade/latest/
l https://images.myserver.net/vms/collector/upgrade/latest/
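A minimal sketch of building the tree from step 6 on the image server; it assumes the createrepo tool (which generates the repodata files listed above) is available, and CO-x.x.x.xxxx stands in for the actual Collector build number:
# create the upgrade directory and stage the Collector RPM and GPG key
mkdir -p /images/collector/upgrade/CO-x.x.x.xxxx
cp FortiSIEM-collector-x.x.x.xxxx.rpm RPM-GPG-KEY /images/collector/upgrade/CO-x.x.x.xxxx/
# generate repodata/ (filelists.xml.gz, other.xml.gz, primary.xml.gz, repomd.xml)
createrepo /images/collector/upgrade/CO-x.x.x.xxxx
# point "latest" at this build, then link it into the web root as in step 7
ln -sfn /images/collector/upgrade/CO-x.x.x.xxxx /images/collector/upgrade/latest
ln -sf /images/collector/upgrade/latest /var/www/html/vms/collector/upgrade/latest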
These instructions cover the upgrade process for FortiSIEM Enterprise deployment with a single Supervisor.
1. Using SSH, log in to the FortiSIEM virtual appliance as the root user.
2. Change to the pbin directory:
cd /pbin
3. Run the command to download the image:
./phdownloadimage <userID> <password> <downloadUrl>
Note: You can enter any values in the User id and password fields. These values are not linked to any account.
l Overview
l Upgrading Supervisors and Workers
l Upgrading Collectors
Overview
Follow these steps when upgrading a VA cluster:
1. Shut down all Workers. Collectors can be up and running.
2. Upgrade the Super first (while all Workers are shut down).
3. After the Super is up and running, upgrade the Workers one by one.
4. Upgrade the Collectors.
Step #1 prevents the accumulation of report files while the Super is unavailable during the upgrade (#2). If these steps
are not followed, the Supervisor may not be able to come up after the upgrade because of excessive unprocessed report
file accumulation.
Note: Both Super and Worker MUST be on the same FortiSIEM version, else various software modules may not
work properly. However, Collectors can be on older versions - they will work, except that they may not have the
latest discovery and performance monitoring features of the Super/Worker versions. So FortiSIEM recommends
that you also upgrade Collectors within a short period of time. If you have Collectors in your deployment, make
sure you have configured an image server to use as a repository for the Collector upgrade files.
Note: You can enter any values in the User id and Password fields. These are not linked to any account.
Do Not Stop the Upgrade Process: The system upgrade takes 10 - 30 minutes depending on the
size of your databases and system resources. Do not stop the upgrade process manually.
Your console will display the progress of the upgrade process. When the upgrade process is complete,
your FortiSIEM virtual appliance will reboot.
6. Log in to your virtual appliance, and in the Admin > Cloud Health page, check that you are running the
upgraded version of FortiSIEM.
Upgrading Collectors
The process for upgrading Collectors is similar to the process for Supervisors and Workers, but you must initiate
the Collector process from the Supervisor.
1. Log in to the Supervisor node as an administrator.
2. Go to Admin > General Settings
3. Under Image Server Settings, enter the download path to the upgrade image, and the Username and
Password associated with your license.
4. Go to Admin > Collector Health.
5. Click Download Image, and then click Yes to confirm the download.
As the download progresses you can click Refresh to check its status.
6. When Finished appears in the Download Status column of the Collector Health page, click Install Image.
The upgrade process will begin, and when it completes, your virtual appliance will reboot. The amount of time it
takes for the upgrade to complete depends on the network speed between your Supervisor node and the
Collectors.
7. When the upgrade is complete, make sure that your Collector is running the upgraded version of FortiSIEM.
Note: 1.0 Agents and Agent Managers communicate only over HTTP, while 1.1 Agents and Agent Managers
communicate only over HTTPS. Consequently, 1.1 Agents and Agent Managers are not backward compatible
with 1.0 Agents and Agent Managers. You have to completely upgrade the entire system of Agents and Agent
Managers.
1. Uninstall V1.0 Agents
2. Close V1.0 Agent Manager Application.
3. Uninstall V1.0 Agent Manager
4. Bind the Default Website with HTTPS as described in the prerequisites in Installing FortiSIEM Windows Agent Manager.
5. Install V1.1 Agent Manager following Installing FortiSIEM Windows Agent Manager.
a. In Database Settings dialog, enter the V1.0 database path as the "FortiSIEM Windows Agent Manager"
SQL Server database path (Procedures Step 6 in Installing FortiSIEM Windows Agent Manager).
b. Enter the same Administrator username and password (as the previous installation) in the Agent
Manager Administrator account creation dialog
6. Install V1.1 Agents
7. Assign licenses again. Use the Export and Import feature.
"HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL
\Protocols\TLS 1.2\Client" /v DisabledByDefault /t REG_DWORD /d
00000000 REG ADD
"HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL
\Protocols\TLS 1.2\Server" /v DisabledByDefault /t REG_DWORD /d
00000000
c. Restart computer
2. Uninstall Agent Manager 1.1
3. Install the SQL Server 2012 SP1 Feature Pack on the Agent Manager machine, available at
https://www.microsoft.com/en-in/download/details.aspx?id=35580.
a. Select the language of your choice and mark the following two MSIs (choose x86 or x64 depending on
your platform) for download:
i. SQLSysClrTypes.msi
ii. SharedManagementObjects.msi
b. Click the Download button to download those two MSIs. Then double-click each MSI to install them
one by one.
4. Install Agent Manager 2.0
a. In Database Settings dialog, set the old database path as FortiSIEMCAC database path.
b. Enter the same Administrator username and password (as in the previous installation) in the new Agent
Manager Administrator account creation dialog.
5. Run the database migration utility to convert from 1.1 to 2.0:
a. Open a Command Prompt window.
b. Go to the installation directory (for example, C:\Program Files\AccelOps\Server).
c. Run AOUpdateManager.exe with script.zip as the command line parameter. You will find script.zip
alongside the MSI.
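A minimal command-line sketch of step 5, assuming the example installation directory shown above:
cd "C:\Program Files\AccelOps\Server"
AOUpdateManager.exe script.zip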
Windows Agent
1. Uninstall V1.0 Agents
2. Install Agents
Uninstalling Agents
Single Agent
l Simply uninstall like a regular Windows service
In order to patch CentOS and system packages for security updates and bug fixes, and to bring the system
on par with a freshly installed FortiSIEM node, the following script is made available. Internet connectivity to
CentOS mirrors must be working for the script to succeed; otherwise the script will print
an error and exit. This script is available on all nodes starting from 4.6.3: Supervisor, Workers, Collectors, and
Report Server.
/opt/phoenix/phscripts/bin/phUpdateSystem.sh
The above script is also invoked during system boot, from the following script:
/etc/init.d/phProvision.sh
This ensures that the node is up to date right after an upgrade and system reboot. If you are running a node that
was first installed in an older release and upgraded to 4.6.3, then there are many OS/system packages that will
be downloaded and installed the first time, so the upgrade takes longer than usual. On subsequent
upgrades and reboots, the updates will be small.
Nodes that are deployed in bandwidth-constrained environments can disable this by commenting out the
phUpdateSystem.sh line in phProvision.sh above. However, it is strongly recommended to keep this in place to
ensure that your node has security fixes from CentOS and to minimize the risk of an exploit. Alternatively, in
bandwidth-constrained environments, you can deploy a freshly installed Collector to ensure that security fixes are
up to date.
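If you do decide to disable it, the change is a one-line comment; a hypothetical sketch of the relevant line inside /etc/init.d/phProvision.sh (the surrounding contents of that script are not shown in this guide):
# disable automatic OS/package updates at boot (not recommended)
# /opt/phoenix/phscripts/bin/phUpdateSystem.sh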