
Now You Know Splunk!

Tom Kopchak // tom@hurricanelabs.com //


@tomkopchak // #tomjoke
Who The Heck Is This Guy?

Director of Technical Operations @ Hurricane Labs (Cleveland, OH)


• Splunk Partner
• 4+ years Splunk experience
• Splunk Consultant II and Enterprise Security Implementation Accredited
Overview and Agenda
Why are we here?
What are we looking to accomplish?
What to expect during this class
• Lots of material
• Save slides for future reference
• Hands-on activities where possible

What you should leave the class knowing


• Why Splunk is important for you
• Basic Splunk terminology
• Basic Splunk use and administration
Overview and Agenda
• Not an official Splunk class
• You won’t get a Splunk certification from this
• Some of the material is based on official Splunk classes
• Splunk Fundamentals
• Searching and Reporting
• Some relevant Splunk administration topics
Most of the topics covered will be relevant to
someone interested in playing around with Splunk
or creating a small-scale environment
Certifications

If you like this material, get a


Splunk certification for free:
• https://www.splunk.com/en_us/training/certification-track/splunk-certified-users.html
• Splunk Fundamentals 1 -
eLearning (Free)
• Splunk Certified User exam
What you will need for this course

Splunk account
• Sign up for free at splunk.com
Linux machine
• Virtual machine or cloud instance
will work
• SSH access required
• Examples will be based on Ubuntu
Let’s Get Started >
What is Splunk?
• Software for searching, monitoring, and
analyzing machine-generated big data, via a
web-style interface
• Splunk (the product) captures, indexes and
correlates real-time data in a searchable
repository from which it can generate graphs,
reports, alerts, dashboards and visualizations
• E-learning course (free to take, so do it!)
• http://www.splunk.com/view/SP-CAAAH9U
Why Use Splunk?

• Security
• Collecting logs and finding “bad stuff”
• Operations
• Streamline operations
• Compliance
• PCI/SOX/HIPAA -> log review and retention
requirements
Who Uses Splunk?
What Are Logs?

• Logs are machine data - and they’re


everywhere
• Nearly every device can and does
produce logs
• Logs tell the story of what’s going on in
your environment
• Not always an exciting story, but a boring,
nerdy one
• Logs are important and useful
• Especially for us! (Well, at least for me)
Why Would Anyone Want to Search a Log?

Figure out a problem


• What’s going on here?
• Is something being blocked?
• Why did this stop working?

Understand a security incident


• What is happening?
• Is there an active attack?
• How did this incident start and progress?
What Are We Waiting For?
Let’s Fire Up Some Splunk!
Lab 1: Let’s Install Some Splunk!

This lab will be the foundation for the rest of the class
• Time to build a Splunk instance you can use for the examples

This won’t be representative of a large Splunk deployment


(we’ll talk about those more later)
• Single, standalone system
• Use cases:
• Splunk POC, getting started
• Sometimes, you want to build a quick system for testing data inputs or
apps (I do this often myself)

Let’s do it!
Your Lab System

Your instructor has provisioned a cloud-hosted Ubuntu VM


for each student
• Use this or your own Linux system (if you want to keep using it later)
• Test connectivity using IP/username/password given to you
• Username: hurricane, password: TEhurricaneMP
• Create /opt/splunk directory (mkdir /opt/splunk)
• Note - this is frequently a separate partition

Download Splunk installer


• www.splunk.com, click on “TRY SPLUNK”
• Free download of Splunk Enterprise
• Download Linux .tgz (if you don’t have a Splunk account, create one)
wget Download

• Use this to download Splunk on your system (cd to /tmp first)
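
For example, from a shell on the lab VM (a sketch only - copy the exact URL from the "Download via wget" link on the splunk.com download page; the 7.0.3 URL below is just illustrative):

cd /tmp
# URL taken from the splunk.com download page (version and hash will differ for you)
wget -O splunk-7.0.3-fa31da744b51-Linux-x86_64.tgz 'https://download.splunk.com/products/splunk/releases/7.0.3/linux/splunk-7.0.3-fa31da744b51-Linux-x86_64.tgz'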
Extract Splunk

• root@hostname:/opt# cd /opt
• root@hostname:/opt# tar -zxvf /tmp/splunk-7.0.3-fa31da744b51-Linux-x86_64.tgz
• <lots of stuff extracting>
• When you’re done, it’ll look something like this:
Start Splunk!
• Simply run /opt/splunk/bin/splunk start --accept-license
• In Splunk 7.1 and later - set admin password
Try Out Your New Splunk install!
• Go to http://<ip>:8000 in a web browser
Your First Search
• There’s no data in Splunk yet, so it’s not all that useful currently - but you
have internal logs you can search
And Audit Logs, Too!
Enable Splunk boot start

Because having Splunk start automatically on boot is nice


• /opt/splunk/bin/splunk enable boot-start
Splunk Terminology
(Splexicon)
Searching/Splunk UI

• Event
• Search
• Report
• Dashboard
• SPL
• Sourcetype
• Index
• Field Extractions
• Lookup tables
Event

A single piece of data in Splunk software, similar to a record in


a log file or other data input. When data is indexed, it is
divided into individual events.
• Each event is given a timestamp, host, source, and source type.
• Often, a single event corresponds to a single line in your inputs, but some
inputs (for example, XML logs) have multiline events, and some inputs have
multiple events on a single line.
• When you run a successful search, you get back events.
Similar events can be categorized together with event types.
Events
Search

The primary way users navigate data in Splunk Enterprise.


• You can write a search to retrieve events from an index, use statistical
commands to calculate metrics and generate reports, search for specific
conditions within a rolling time range window, identify patterns in your
data, predict future trends, and so on.
• You can save searches as reports and use them to power
dashboard panels.
Search

Search Time Range


Report

A report is a saved search


• When you create a search or pivot that you want to use later, save it as a
report. You can run the report again by finding the report on the Reports
listing page and clicking its name.
You can schedule reports to run on a regular interval. You can
configure these scheduled reports to generate alerts when the
results of their runs meet particular conditions.
Example search
Saved Report
Dashboard

User interface associated with an app.


• Dashboards have one or more visualization panels.

• A dashboard that includes user inputs is known as a form.


• You can build dashboards when saving a search or visualization. You can
also create dashboards using the dashboard editor.
• Dashboards use Simple XML source code. Edit this source code to
customize a dashboard or form. You can also convert dashboard source
code to HTML.
SPL

SPL is the Splunk search language.


• SPL is the abbreviation for Search Processing Language.
• SPL is designed by Splunk for use with Splunk software.
• SPL encompasses all the search commands and their functions,
arguments, and clauses.
• Its syntax was originally based on the Unix pipeline and SQL.
• The scope of SPL includes data searching, filtering, modification,
manipulation, insertion, and deletion.
Sourcetype

A default field that identifies the data structure of an event.


• A source type determines how Splunk Enterprise formats the data during the indexing process.
• Example source types include access_combined, opsec, WinEventLog:Security and cisco_syslog.
Splunk Enterprise comes with a large set of predefined source types, and it assigns a source
type to your data.
• You can override this assignment by assigning an existing source type or creating a custom
source type.
The indexer identifies and adds the source type field when it indexes the data. As a result, each
indexed event has a sourcetype field.
Use the sourcetype field in searches to find all data of a certain type (as opposed to all data
from a certain source).
Sourcetype
Index

• An index is a “container” of data.


• The repository for data in Splunk Enterprise.
• When Splunk Enterprise indexes raw event data, it transforms the data
into searchable events.
• Indexes reside in flat files on the Splunk Enterprise instance known as the
indexer.
• Different indexes are used to physically segregate data.
• Access control and permissions.
Knowledge Objects

• Splunk knowledge objects give you different ways to interpret, classify,


enrich, and normalize (organize) your events.
• Add value to your data.

Examples of Knowledge Objects


• Saved searches
• Tags
• Event types
• Views (Dashboards)
Fields/Field Extractions

• A searchable name/value pair in Splunk Enterprise event data.


• Splunk Enterprise extracts specific default fields from your data, including
host, source, and sourcetype.
• You can also set up Splunk Enterprise to create search time or index time
field extractions, for example, using the field extractor or the rex
command.
• Use tags or aliases to change the name of a field or to group similar fields
together.
• Field names are case-sensitive.
Selected Fields/Interesting Fields
Full list of fields in an event
Lookup

A knowledge object that provides data enrichment by mapping a select value in


an event to a field in another data source, and appending the matched results
to the original event.
• For example, you can use a lookup to match an HTTP status code and return a
new field containing a detailed description of the status.
• The data sources for lookup content include search results, .csv files,
geospatial .kmz files, KVStore collections, and script-facilitated external database
connections.
Lookups are incorporated into dashboards and forms to provide content in a
human readable format, allowing users to interact with event data without
knowing obscure or cryptic event fields.
Sample Lookup Table

nessus_severities.csv (from Splunk_TA_nessus)

severity_id    severity
0              informational
1              low
2              medium
3              high
4              critical
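
As a hedged example of using this lookup in a search (the lookup name nessus_severities is an assumption about how the TA defines it in transforms.conf):

... | lookup nessus_severities severity_id OUTPUT severity

This appends a human-readable severity field to every event that has a matching severity_id.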
Enough of Me Talking
Let’s Play With Some Data
Lab 2: Importing Data

Let’s take a log file and get it into Splunk.


• We’ll use logs on our Splunk instance for this exercise.
• We’ll treat this like data onboarding from any production system, with
some adjustments to match the constraints of the class.

Goal - have data in Splunk we can use for the next


activities.
What the Heck is This?

• It’s not uncommon to be asked to work with a log type you’ve never seen
before - don’t be scared
• As long as it’s in a text format somewhere that can be read or accessed
by Splunk, we generally can work with it
• Some data types or log sources are much easier to work with than others
• Don’t re-invent the wheel - re-use existing work and techniques whenever
possible
Data Onboarding Approach

When you want to onboard a new data type, research to see what
information we already have about that log type
• Have we done this before?
• Is there a Splunk app available?
• https://splunkbase.splunk.com
• Is there any documentation available for working with this source?
• It’s generally helpful to ask the following questions:
• What index should store this data?
• What is the desired retention period?
• Who should have access to this data?
• Is there a sample log to review?
Experimenting With Data

• A test Splunk instance (like the one we’re


playing with today) is a great way to test
onboarding data before doing it in
production
• Experiment with unfamiliar apps in the lab
to learn how they work
• If you break something, no customer
impact or data loss
Let’s Onboard Some Data

• Our Splunk VMs have system logs we can index


• On Ubuntu, /var/log/auth.log
• Where do we start?
What Do We Know?

• Well, Linux authentication logs should be pretty common


• Is there any documentation?
• https://answers.splunk.com/topics/auth.log.html
• https://danielmiessler.com/blog/monitoring-ssh-bruteforce-attempts-using-splunk/
• Is there a Splunk app?
• https://splunkbase.splunk.com/app/833/ - Getting data in
• https://splunkbase.splunk.com/app/273/ - Using/visualizing the data
• This is a great example for monitoring/indexing a log file
Start by Creating an Index

• For the lab, this can be done in the WebUI
• In a distributed environment, this is typically managed in an app
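
A minimal indexes.conf sketch of what such an app might contain (the index name "os" matches the later labs; the paths and retention below are assumptions, not requirements):

# indexes.conf, deployed to the indexer(s)
[os]
homePath   = $SPLUNK_DB/os/db
coldPath   = $SPLUNK_DB/os/colddb
thawedPath = $SPLUNK_DB/os/thaweddb
# roughly 90 days of retention - adjust to your requirements
frozenTimePeriodInSecs = 7776000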
Creating an Index
Import Data
• Let’s tell Splunk to monitor data on our system

Import Data From File
• Select “Files & Directories”, and locate the file
Set Sourcetype
• Set source type to linux_secure
Set Host and Index
• On a production system, we would typically configure this in an inputs app (I’ll show you this in a bit)
Yay! You’re Almost Done
• Click submit to finish, then start searching!
Your First Search (of this data)
• What do you see?
Let’s Install Some Apps
• Can we install the Unix/Linux Add-on from the GUI?
Yes! Install App and Restart Splunk
• Log in with your Splunkbase account to download and install the app
• Restart Splunk when done
• Majority of app installs will require a Splunk restart
Let’s Run That Search Again
That Was Fun

But what if there isn’t an app?


• There are ways to extract fields without using an app if necessary
• Goal - try not to duplicate work or make things more complicated
• Sometimes custom data extractions are inevitable, especially for log
sources unique to a client/custom apps
Under the Hood

• Everything in Splunk is a file


• In this example, our monitor stanza is in inputs.conf within the search app
(this may be different on yours)

• Protip - if you can’t find the config file location, try btool:
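
For reference, the stanza the wizard creates looks roughly like this (paths and values assume the earlier lab steps), and btool will show you where it actually lives:

# $SPLUNK_HOME/etc/apps/search/local/inputs.conf (location may differ on yours)
[monitor:///var/log/auth.log]
sourcetype = linux_secure
index = os

# locate the stanza and the file it came from:
/opt/splunk/bin/splunk btool inputs list --debug | grep auth.log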
Splunk Infrastructure
Distributed Splunk Environments
Splunk Infrastructure/Servers

• Indexer
• Search Head
• Forwarder
• Universal Forwarder
• Syslog Receiver
• Deployment Server
• License master
• Splunk Clustering (Indexer, Search Head,
Multisite)
Core Splunk Infrastructure
Indexer

• A Splunk Enterprise instance that indexes data, transforming


raw data into events and placing the results into an index. It also
searches the indexed data in response to search requests.
• Indexers store the data and supply it in response to search
requests from the search heads.

Indexers are very CPU and disk I/O intensive


• Minimum recommended: 12 cores / 12 GB RAM
• 800-1200 IOPS (disk performance)
Search Head

• In all but the smallest Splunk deployments, a specialized


Splunk Enterprise instance, called a search head,
handles search management and coordinates searches
across multiple indexers.
• Search heads are how users primarily interact with
Splunk
• This is where Dashboards/Visualizations/Reports/etc
live and run
• Environments with Splunk ES typically have at least two
search heads or search head clusters (SHCs)
• One ES and one Ad-Hoc
Forwarder

A Splunk instance that forwards data to another Splunk Enterprise instance,


such as an indexer or another forwarder, or to a third-party system.

There are three types of forwarders:


• A universal forwarder is a dedicated, streamlined version of Splunk
Enterprise that contains only the essential components needed to
send data.
• A heavy forwarder is a full Splunk Enterprise instance, with some
features disabled to achieve a smaller footprint.
• A light forwarder is a full Splunk Enterprise instance, with most
features disabled to achieve a small footprint. The universal forwarder
supersedes the light forwarder for nearly all purposes. The light
forwarder has been deprecated as of Splunk Enterprise version 6.0.0.
Universal Forwarder

• A type of forwarder, which is a Splunk instance that sends data to


another Splunk Enterprise instance or to a third-party system.
• The universal forwarder is a dedicated, streamlined version of Splunk
Enterprise that contains only the essential components needed to
forward data. The universal forwarder does not support python and does
not expose a UI.
• In most situations, the universal forwarder is the best way to forward
data to indexers. Its main limitation is that it forwards only unparsed
data. You must use a heavy forwarder to route event-based data.
• Typically you will install these on any Windows or Linux system you
would like to collect logs from.
Syslog Receiver

Splunk can receive direct TCP or UDP inputs


• Best practice: never do this
• Why?
For receiving syslog, deploy syslog-ng on a heavy/universal forwarder
• Syslog-ng will receive incoming syslog, write logs to /var/log/network/<sourcetype>/<host>/syslog.log
• Splunk inputs will be configured to read this file
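
A sketch of the matching inputs.conf on that forwarder (the sourcetype, index, and per-sourcetype directory names are assumptions that depend on your syslog-ng configuration):

# one stanza per sourcetype directory written by syslog-ng
[monitor:///var/log/network/cisco_asa/*/syslog.log]
sourcetype = cisco:asa
index = network
# take the host name from the 5th path segment (/var/log/network/<sourcetype>/<host>/)
host_segment = 5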
Deployment Server

A Splunk Enterprise instance that acts as a centralized configuration


manager, grouping together and collectively managing any number of
Splunk instances.

Instances that are remotely configured by deployment


servers are called deployment clients.
• The deployment server downloads updated content,
such as configuration files and apps, to deployment
clients.
• Units of such content are known as deployment apps.
The forwarder management interface offers an easy way to
configure the deployment server.
Deployment Client

A Splunk instance that is remotely configured by a deployment


server.
• Deployment clients can be grouped together into one or more server
classes.
• Both Splunk Enterprise (full installations) or Universal Forwarders can
be deployment clients.

Each deployment client periodically polls its deployment server.


• If the deployment server has new content for the client's server class, it
distributes that content to the polling client.
Deployment App

• A unit of content deployed by the deployment server to a group of


deployment clients.
• Deployment apps can be fully developed apps, such as those
available in Splunkbase, or they can be simple groups of
configurations.
Server Classes

A group of deployment clients and associated apps.


• Server classes facilitate the management of a set of deployment
clients as a single unit.
• A server class can group deployment clients by application, operating
system, data type to be indexed, or any other feature of a Splunk
Enterprise deployment.
A deployment server uses server classes to determine what content
to deploy to groups of deployment clients.
• The forwarder management interface offers an easy way to create,
edit, and manage server classes.
License Master/License Server

• Splunk is licensed by the amount of data ingested per day


• The license server typically runs on the deployment server
Splunk Clustering

More than one type of cluster


• Indexer replication cluster
• Search head cluster
Splunk Clustering

• These next slides will cover high-level clustering


terminology
• Cluster administration will be outside of the scope of
this class
• Most larger Splunk environments have clustering in
some capacity (or multiple capacities)
• All sizable Splunk Cloud instances are clustered at
the indexer level (Splunk Cloud Ops manages this,
you do not)
Indexer Clustering/Replication

Indexer clustering:
• A specially configured group of Splunk Enterprise indexers that
replicate external data, so that they maintain multiple copies of the
data.
• Indexer clusters promote high availability and disaster recovery.
Index replication:
• A Splunk Enterprise feature that consists of clusters of indexers that
are configured to replicate data to achieve several goals: data
availability, data fidelity, disaster tolerance, and improved search
performance.
Think of indexer clustering as RAID for indexed data in Splunk
Multisite Clustering

• An indexer cluster that spans multiple physical sites, such as data


centers.
• Each site has its own set of peer nodes and search heads.
• Each site also obeys site-specific replication and search factor rules.
• Typically used for disaster recovery (DR) purposes for clients.
• Generally the best option for backing up running Splunk.
Search Head Clustering
• A type of Splunk Enterprise deployment that consists of groups of search heads
configured to serve as a central resource for searching.
Clustered Splunk Enterprise search heads serve as a central resource for
searching.
• The search heads in a cluster are interchangeable.
• You can run or access the same searches, dashboards, knowledge objects, and
so on, from any member of the cluster.
• To achieve this interchangeability, the search heads in the cluster share
configurations, apps, search artifacts, and job loads.
• Commonly used by clients where search volume is greater than what a single
search head can handle on its own.
• You need at least 3 search heads for a search head cluster.
Putting It All Together
Apps

At the most basic level, a Splunk app is a bundle of configurations


• Can be as simple as a single configuration file:
• Connect to a deployment server: deploymentclient.conf
• Configurations for reading/monitoring a file: inputs.conf

Other apps have views, dashboards, lookups, etc.
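
For example, a one-file app whose only job is pointing a forwarder at a deployment server might contain just this (the server name is a placeholder):

# deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
# 8089 is Splunk's default management port
targetUri = deploy.example.com:8089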


Splunk Enterprise Security App

• ES is Splunk’s SIEM - typically used for alerting and investigations (SOC)


More data onboarding
Making data useful through normalization
Lab 3: More Data

• Auth logs are interesting, but let’s add something else


• Start by making iptables log (allthethings)
iptables -I INPUT 1 -j LOG
iptables -I FORWARD 1 -j LOG
iptables -I OUTPUT 1 -j LOG
Time to Bring on Some More Data

• Let’s do this the hard(er) way


• Open your inputs.conf from the previous exercise
• Add an inputs stanza for the file containing iptables logs
• Restart Splunk
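
A sketch of what the added stanza might look like (assumptions: on Ubuntu the iptables LOG output typically ends up in /var/log/kern.log via the kernel log - check your system; the sourcetype below is just a starting point, and a later slide shows these events typed as linux:netfilter once the relevant add-on is installed):

[monitor:///var/log/kern.log]
sourcetype = iptables
index = os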
Now We Have iptables Logs:
Problem - Our Field Extractions Are Terrible
Normalizing Data
The Common Information Model
Field Anarchy is Bad, Very, Very Bad

Notice how the fields here are based on the


key=value pairs in the logs
• Default Splunk behavior
• Not consistent with how other firewalls might
log data

How is a source IP address represented?


• SRC
• source_ip
• source
• ip_addr_src
Consistency Matters!

Solution: data models


• Specifically, the common information model (CIM)
• Allows for common fields to be used regardless of data type
• Makes correlation of sources possible and practical
• http://docs.splunk.com/Documentation/CIM/4.10.0/User/NetworkTraffic
Apps to the Rescue
What Does the App Give Us?

• Note: the sourcetype changes to “linux:netfilter”


What Does the App Give Us? CIM
Splunk UI Overview
Hands on With Splunk
Lab 4: Intro to Searching

• Everything in Splunk is searchable


• * is a wildcard
• search terms are not case sensitive
• field names ARE case sensitive
• Boolean logic (AND, OR, NOT) must be in all caps
• Quoted phrases and parentheses are supported
• Eg, (“bad stuff” OR “not good things”) AND failed
Search Results

• Search results are returned with the newest events first


• Any matching results are highlighted
Selected Fields
Time Range Picker

Defaults to “Last 24 hours”


• This may be too broad, but is a good
starting point
• Previous versions defaulted to “All time”
Avoid real time searches whenever
possible
• CPU intensive
• Useful for troubleshooting incoming data
Working in Search Interface

• To add a term to a search, click on it


• You can also exclude a term from the search as well
Working with Search Results

• Click on the timeline to filter down the range of results


Lab 4: Hands On

• Using Splunk environment, explore the linux:netfilter source type


• These are the firewall logs we onboarded earlier
• Search for your (public) IP address and review the fields available
• Experiment with the time range picker
• Search for logs from a 15 minute window earlier today
• Practice with logical operators and by using different search terms
• Stop a running job, and share the results with a classmate
• Export raw search results
• View your search history/activity
Search is Powerful
Doing more Than Just a Basic Search
Lab 5: Fancier Searching

Fields
• Selected
• Interesting
• All fields
Interesting fields are those that are present in at least 20% of
your events
• You may need to look at all fields to find something you are looking
for, depending on the events
Field Names and Values

• Field names ARE case sensitive!


• Values are NOT case sensitive
• The easiest way to get the right field names is to run a search for
similar data and explore the fields available
How to Use Fields
• CIDR notation is valid for IP addresses
src_ip=“74.114.44.0/24”
src_ip=“74.114.44.*”
• Wildcards can be used as well
user=* host=*.hurricanelabs.com
• Comparison operators can be used for numeric values
sourcetype=linux:netfilter src_port>20 src_port<25
Where Are All My Fields?!?!
• Check to see if you’re in fast mode
Lab 5: Hands On

• Run a basic search on the firewall logs


• sourcetype=linux:netfilter src_ip=<your machine’s IP>
• Toggle between fast, smart, and verbose modes and observe the
results
• What fields are extracted in each mode?
• Expand the search
• sourcetype=linux:netfilter src_ip=<your machine’s IP> dest_port=8000
• Do the field extractions change at all?
Using the Search Pipeline
Put it in Your | and Splunk It
Lab 6: Search Pipeline

• So we have search results, that’s nice


• We probably want to be able to do more useful things with them
• Business people love charts and graphs
• How do we do this?
• Statistical reports!
• Built into the search pipeline
Top Values

• This can be created automagically in the web GUI


Top Values (continued)

• This adds a | top to the search pipeline


• By default, the top 20 are included
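
A hedged example of the generated search, using the firewall data from earlier (the dest_port field name assumes the netfilter add-on's extractions):

sourcetype=linux:netfilter | top limit=20 dest_port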
Change Visualization Type

• Because everyone <3s pie charts


Chart Formatting Options

• Drilldown - determines if clicking on the chart returns events for a


specific value
Lab 6: Hands On

• Create a search showing the top TCP/UDP ports passing through the
firewall from your machine over the past 15 minutes
• Experiment with different visualizations and limits on the search
• Save this search as a report
• Save As -> Report
• Title: <Your Name> Top Services
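
One possible starting point (a sketch, not the only answer - field names assume the netfilter add-on):

sourcetype=linux:netfilter src_ip=<your machine’s IP> earliest=-15m | top dest_port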
Working with Results
Reporting with Log Data
Lab 7: Tables

• We’re often looking to provide specific information to a client, and not


just raw results
• Tables are useful for reporting purposes, or dashboards where a
visualization won’t communicate the data appropriately
Fun with Tables

• Let’s pretend you have AWS instances that are exposed to the
internet
• Let’s also assume the firewalls on these instances allow SSH in from
the world
• For the sake of this example, let’s make a report of the users that are
trying to SSH in
• We’ll eventually turn this into a dashboard
Exploring the Logs

What do we know?
• index containing logs is os
• sourcetype is linux_secure
• How do we find the OS logs?
• Let’s search for it!
Let’s Explore the Fields Available

• Some potentially interesting fields include:


• app, src_ip, user, eventtype, action
Let’s Make a Table
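
A hedged example of such a table over the auth logs (the literal "Failed password" text and the src_ip/user fields are assumptions based on typical Ubuntu auth.log content and the Unix add-on):

index=os sourcetype=linux_secure "Failed password" | table _time, src_ip, user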
Saving a Search as a Report
Lab 7: Hands On

• Create a few tables using the firewall log and linux_secure data
• Save these searches as reports
Splunk Power Tools
Cooking With Gas
Lab 8: More Search Syntax

• We’ve seen how to do dashboards and tables - but there’s much


more we can do
• SPL is incredibly powerful, and there are a number of commands we
can use to manipulate the data or create different types of output
• We won’t cover every possible command or every usage of every
command - reference the Splunk docs for full syntax
Charts

When a search returns statistical values, the results can be displayed


as a chart
More than one way to do this:
• Create a table of the data you want to graph
• Use the chart command to do this
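
For example (a sketch against the firewall data; dest_port assumes the netfilter add-on's field extractions):

sourcetype=linux:netfilter | chart count by dest_port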
Timechart

• Use timechart to create a time series chart/statistics table


• You can use span to set the size of the time buckets (for example, span=5m), otherwise Splunk will pick
what it thinks is best
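
For example (again a sketch against the firewall data):

sourcetype=linux:netfilter | timechart span=5m limit=10 count by dest_port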
Timechart vs. Chart

• The chart command is used to format results for use in chart


visualizations
• “chart count by field” is the same as “stats count by field” - “field” here
is the “row split field”
• “chart” can also have a “column split” to further break down results -
“chart count over _time by field” - “_time” is the “column split field”
• “timechart” is the same as chart, but with “over _time” handled
automatically - use this for line charts with _time as the X-axis
• “Column split” is useful with bar/column charts - the X-axis will be
grouped by the “column split” field, with each of the “row split” field
values
Timechart vs. Chart

• index=asa | chart count over host by log_level


• “host” is the “column split” field
• “log_level” is the “row split” field
Timechart vs. Chart

• index=asa | timechart count by log_level


• “_time” is automatically the “column split” field
• “log_level” is the “row split” field
Geostats and IPlocation

• iplocation - associate physical location with IP address


• geostats - generate statistics for displaying geographic data
• Mainly useful for dashboards, probably not something you’d normally
use otherwise
• Pew, pew!
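
A hedged example against the auth logs (iplocation adds the lat/lon fields that geostats uses by default; the "Failed password" filter is an assumption about the raw log text):

index=os sourcetype=linux_secure "Failed password" | iplocation src_ip | geostats count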
Geostats and IPlocation

• Or see which countries are trying to log in


Eval

• Let’s say we want to make the output more…interesting


Eval for Modifying Fields

• eval "Country"=replace(Country,"China","Definitely not China")


• Note: this is a very contrived example that you probably wouldn’t
ever actually want to do
Rename

• Generally used to make table headers more friendly to management types or humans
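
For example (field names assume the earlier labs):

... | table src_ip, user | rename src_ip AS "Source IP", user AS "Username"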
Relative Time Syntax

• Splunk has a very robust syntax for describing relative times


• These can be used in lots of places:
• To limit the time range of a search (via time picker or search box)
• To modify events (using eval)
• To compare events (using where)
• Documentation for the syntax is available here:
• https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/SearchTimeModifiers
Relative Time Syntax

• Using the time picker, you can enter relative time syntax in the
“Advanced” selector
• A preview of the resulting date and time is shown under each text box
Relative Time Syntax

• You can also enter relative times using “earliest” and “latest” as fields
in your search string
• The time picker won’t update, but the time range below the search
box will reflect your chosen time range
• Unless there are no results, then you just see the end of your range
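
For example, to search the previous full hour of firewall logs (a sketch using the relative time syntax explained on the next slides):

sourcetype=linux:netfilter earliest=-1h@h latest=@h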
Relative Time Syntax

Two key components to the syntax


• Offset - “[+|-]<time_integer><time_unit>” - go forward or backward
this many of these things (“-3h” means “go back three hours”)
• “Snap to” - “@<time_unit>” - means “jump back to the previous even unit interval” (“@h” means “beginning of the hour”)

These can be joined together multiple times to form a pretty


elaborate time expression
• Fancy example (explanation coming in a few slides):
• -1y@y+10mon+27d@w4
Relative Time Syntax
• Time Units
• second: s, sec, secs, second, seconds
• minute: m, min, minute, minutes
• hour: h, hr, hrs, hour, hours
• day: d, day, days
• week: w, week, weeks
• month: mon, month, months
• quarter: q, qtr, qtrs, quarter, quarters
• year: y, yr, yrs, year, years
• w0 (Sunday), w1, w2, w3, w4, w5 and w6 (Saturday). For Sunday, you can
specify w0 or w7.
• Only valid in “snap to”, not offset (Can’t say “+4w4” for “Add 4
Thursdays”)
Relative Time Syntax

Examples:
• First of this year - “@y”
• First of next year - “@y+1y” or “+1y@y”
• Sunday of this week - “@w0” (or just “@w”)
• Sunday of last week - “-1w@w0” (or just “-1w@w”)
• Noon today - “@d+12h” or “+1d@d-12h”
• Start of the previous hour - “-1h@h”
• You could use “-1h@h” and “@h” as your earliest and latest times
to search the logs “during the previous hour”
Relative Time Syntax

• Really complex example: -1y@y+10mon+27d@w4


• “-1y@y” - subtract one year, snap to beginning of the year - so Jan
1 last year
• “+10mon” - add 10 months - so Nov 1 last year (*not* Oct 1)
• “+27d” - add 27 days - So Nov 28 (still last year)
• “@w4” - Snap to the Thursday on or before
• This effectively gets you the 4th Thursday of the 11th month of last
year
• Happy Turkey Day!
Search Performance

• Beware - the commands used and the order they are used can have
a significant impact on search performance
• For more info, see Steve McMaster’s blog post:
• https://www.hurricanelabs.com/blog/splunk-search-optimization-paleo-diet-spl
Expert Commands

You probably shouldn’t use these unless you know what you’re doing
transaction
• The transaction command finds transactions based on events that
meet various constraints.
map
• The map command is a looping operator that runs a search
repeatedly for each input event or result.
join
• Use the join command to combine the results of a subsearch with
the results of a main search. One or more of the fields must be
common to each result set.
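
A minimal, hedged illustration of one of these (it groups auth events from the same source IP into transactions separated by gaps of five or more minutes - fields assume the earlier labs):

index=os sourcetype=linux_secure | transaction src_ip maxpause=5m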
Making Logs Pretty
Time for Some Dashboards
Lab 9: Building a Dashboard

• Dashboards are also referred to as views


• Apps with visualizations often contain dashboards
• Dashboards are powered by reports/saved searches
• If a dashboard isn’t working, you can view the underlying search for
troubleshooting purposes
• Many pre-built dashboards use a variety of knowledge objects
(saved searches, macros, datamodels) to visualize data
Let’s Create a New Dashboard

• In searching and reporting, click Dashboards -> Create New


Dashboard
Your New Dashboard

• Doesn’t have any panels by default - so it’s pretty lame


Add Panel from Report

• Locate your saved report in the list of reports


Add Panel from Report

• Preview report and add to dashboard


Add Additional Reports, and Adjust Dashboard
Lab 9: Hands On

• Build a dashboard using the report created earlier


• Create new reports using the firewall and auth logs and add them to
the dashboard
• Adjust the look and feel of the dashboard
• Re-arrange panels
• Rename panels
Now You Know Splunk!
Tom Kopchak // tom@hurricanelabs.com //
@tomkopchak // #tomjoke
Lab 10: Archive Your System

• If you want to access your Splunk config later, archive your system:
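
A sketch, assuming the default /opt/splunk install path from the labs (stop Splunk first so files are not changing while you archive):

/opt/splunk/bin/splunk stop
tar -czvf /tmp/splunk-backup.tgz /opt/splunk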

• If you run out of space:
