Commands
Convert Unix Timestamps
| makeresults
| eval unix_time = 1725573600
| eval datetime = strftime(unix_time, "%Y-%m")
# Examples
%Y-%m-%d -> 2021-12-31
%y-%m-%d -> 21-12-31
%b %d, %Y -> Feb 11, 2022
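The reverse conversion, from a date string back to a Unix timestamp, uses strptime. A minimal sketch (the date value is illustrative):

```
| makeresults
| eval datetime = "2021-12-31"
| eval unix_time = strptime(datetime, "%Y-%m-%d")
```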
streamstats
Used for calculating statistics and adding them as new fields in your search results, based on the order in which the events are encountered.
# current - whether the current event's value is included in the calculation (f = false, t = true)
# window - number of events to calculate over; with current=f, window=1 uses only the previous event
| streamstats current=f window=1 last(value) as last_value by cve
where
- Search command used to filter results by comparing two fields or evaluating an expression.
- Supports eval functions such as:
  - isnotnull()
  - isnull()
  - like()
  - …
# Compare two field values
index=_index
| where a > b
# Search for wildcards
index=perfmon counter=*
| where counter like "%Disk%"
... | where like(ipaddress, "198.%")
# Search for a specific string in a field
... | where foo="bar"
When you want to know when data was written to an index or sourcetype, you can use the metadata command.
# Show the metadata lastTime, firstTime, and recentTime for the sourcetype
| metadata type=sourcetypes index=_internal
# Format the output to be human-readable
| metadata type=sourcetypes index=_internal
| rename totalCount as Count firstTime as "First Event" lastTime as "Last Event" recentTime as "Last Update"
| fieldformat Count=tostring(Count, "commas")
| fieldformat "First Event"=strftime('First Event', "%c")
| fieldformat "Last Event"=strftime('Last Event', "%c")
| fieldformat "Last Update"=strftime('Last Update', "%c")
eventstats
The eventstats command calculates statistics like stats, but appends the results to each event in the search results rather than returning only the aggregated statistics.
# Add the total count of events as a new field to each event
index=_internal
| eventstats count as total_events
# Calculate the average response time and append it to each event
index=web_logs
| eventstats avg(response_time) as avg_response_time by endpoint
Key Differences Between stats and eventstats:
- stats: Returns only the aggregated statistics.
- eventstats: Appends the aggregated statistics to each event.
Use Cases:
- Enrich events with aggregated data for further analysis.
- Compare individual event values against group-level statistics.
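The difference is easiest to see by running the same aggregation through both commands (index and field names reuse the web_logs example above):

```
# stats: collapses everything into one row per endpoint
index=web_logs
| stats avg(response_time) as avg_response_time by endpoint

# eventstats: keeps every event and appends avg_response_time to each
index=web_logs
| eventstats avg(response_time) as avg_response_time by endpoint
```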
Sourcetype
JSON
## props.conf
[json]
KV_MODE = json
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
Note: “If you set KV_MODE = json, do not also set INDEXED_EXTRACTIONS = JSON for the same sourcetype. If you do this, the JSON fields are extracted twice, once at index time and again at search time.”
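If index-time extraction is preferred instead, Splunk's guidance is to disable search-time JSON extraction for that sourcetype. A minimal props.conf sketch:

```
## props.conf
[json]
INDEXED_EXTRACTIONS = json
# Disable search-time extraction so fields are not extracted twice
KV_MODE = none
```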
Environment Variables
| /opt/splunk/bin/splunk envvars
Split Data from One Sourcetype into Different Indexes
- Create indexes.
- It is recommended to create a new app to gather all configuration files in one place.
- transforms.conf:
  - Define the transformation.
  - Use regex to find the data.
  - FORMAT defines the target index.
- props.conf:
  - Combine the sourcetype with the transformation.
  - TRANSFORMS-<Key> defines the transformation that will be used on the data. <Key> can be any value you want; it doesn’t matter.
# Input
## inputs.conf
[monitor://path/to/data]
index = <index>
sourcetype = some_data_sourcetype
# Transformation
## transforms.conf
[transformation_for_somedata]
REGEX = <matching_criteria>
DEST_KEY = _MetaData:Index
FORMAT = <target-index>
# Sourcetype
## props.conf
[some_data_sourcetype]
TRANSFORMS-index = transformation_for_somedata
RBA (Risk-Based Alerting)
Where Do Risk Scores Come From?
- Adaptive Response Actions.
- Manual Risk Score (| eval risk_score=50).
- Risk Factor Editor:
  - Can use addition or multiplication to raise the risk score.
  - Multiply by 0 to change the risk score to 0.
# See all combined
| from datamodel:"Risk"
| table source risk_factor_add risk_factor_mult risk_score
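The additive and multiplicative factors surfaced above combine roughly as follows. This eval is only an illustration of the arithmetic, not the exact search the Risk Factor Editor runs internally:

```
# Hypothetical sketch: additive factors applied before multiplicative ones
| eval calculated_risk = (risk_score + risk_factor_add) * risk_factor_mult
```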
How to Weight?
- MITRE ATT&CK Weight + Use Case Weight
- Weight by Volume:
- Track how often detections are firing.
- Create more depth to risk scores.
| from datamodel:"Risk"."All_Risk"
| search `risk_notable_sources`
| stats count by search_name
| eval avg=round(count/30)
| eval velocity=case(avg<=1,1.25,avg>1 AND avg<=50,1,avg>50 AND avg<=100,0.75,avg>100 AND avg<=500,0.5,avg>500,0.25)
| outputlookup risk_velocity.csv
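One way to apply the lookup produced above is to scale each risk event's score by its detection's velocity multiplier. The field and lookup names follow the search above, but this usage is a sketch, not a prescribed ES step:

```
# Fetch the velocity multiplier for each event's detection and scale the score
| lookup risk_velocity.csv search_name OUTPUT velocity
| eval risk_score = round(risk_score * velocity, 2)
```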