[cyberarkpas] Collect monitoring data #11478

Merged (14 commits) on Oct 28, 2024
Changes from 1 commit

Large diffs are not rendered by default.

4 changes: 4 additions & 0 deletions packages/cyberarkpas/data_stream/audit/manifest.yml
@@ -1,5 +1,9 @@
type: logs
title: CyberArk PAS audit logs
dataset: cyberarkpas.audit
elasticsearch:
dynamic_dataset: true
dynamic_namespace: true
streams:
- input: logfile
enabled: false
8 changes: 8 additions & 0 deletions packages/cyberarkpas/data_stream/audit/routing_rules.yml
@@ -0,0 +1,8 @@
- source_dataset: cyberarkpas.audit
rules:
- target_dataset: cyberarkpas.monitor
if: |
ctx.message?.contains('"Product":"VaultMonitor"') == true
namespace:
- "{{data_stream.namespace}}"
- default
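
For reference, a message like the following would match this rule and be rerouted to the cyberarkpas.monitor dataset (the payload shape is illustrative, inferred from the grok patterns and renames in the monitor pipeline below):

<5>1 2021-03-04T17:28:23Z VAULT {"format":"elastic","version":"1.0","syslog":{"monitor_record":{"Product":"VaultMonitor", ... }}}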

Large diffs are not rendered by default.

@@ -0,0 +1,360 @@
---
description: Pipeline for CyberArk PAS monitor
processors:
#
# Set ECS version.
#
- set:
field: ecs.version
value: '8.11.0'
#
# Set event.original from message, unless reindexing.
#
- rename:
field: message
target_field: event.original
if: ctx.event?.original == null
ignore_missing: true
#
# Parse syslog headers (if any) and extract JSON payload.
#
- grok:
field: event.original
patterns:
# RFC5424 from CyberArk.
# UseLegacySyslogFormat=No
# <5>1 2021-03-04T17:28:23Z VAULT {"format":"elastic","version":"1.0",...}
- "^<%{NONNEGINT:log.syslog.priority:long}>%{NONNEGINT} %{TIMESTAMP_ISO8601:_tmp.syslog_ts} %{SYSLOGHOST:_tmp.hostname} %{JSON_PAYLOAD:_tmp.payload}"

# Legacy format.
# UseLegacySyslogFormat=Yes
# Mar 08 02:57:42 VAULT {"format":"elastic","version":"1.0",...}
- "^%{SYSLOGTIMESTAMP:_tmp.syslog_ts} %{SYSLOGHOST:_tmp.hostname} %{JSON_PAYLOAD:_tmp.payload}"

# Catch-all mode, just JSON payload.
- "%{JSON_PAYLOAD:_tmp.payload}"
pattern_definitions:
JSON_PAYLOAD: '{"format":"elastic","version":"1.0",.*}'
on_failure:
- fail:
message: "unexpected event format: {{{_ingest.on_failure_message}}}"

- json:
field: _tmp.payload
target_field: _tmp.json
on_failure:
- fail:
message: "malformed JSON event: {{{_ingest.on_failure_message}}}"

- rename:
field: _tmp.json.syslog.monitor_record
target_field: cyberarkpas.monitor
on_failure:
- fail:
message: "unexpected event structure: {{{_ingest.on_failure_message}}}"


#
# Remove all empty fields
#
- script:
lang: painless
description: 'Removes empty monitor fields'
source: >-
ctx.cyberarkpas.monitor.entrySet().removeIf(entry -> entry.getValue() == "");
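#
# For example, an entry such as "DriveFreeSpaceInGB": "" would be dropped here, while
# entries with non-empty values pass through untouched (field name and value illustrative).
#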

- rename:
field: _tmp.json.raw
target_field: cyberarkpas.monitor.raw
ignore_missing: true

# The following processors populate @timestamp from the different sources that can exist in an event.
# In the following order of precedence:
# - IsoTimestamp field (expected ISO8601). Present when new syslog format is used (rfc5424: yes).
# - Timestamp (expected MMM dd HH:mm:ss). Also present only when new syslog format is used.
# - Syslog header timestamp. Either ISO8601 or legacy MMM dd HH:mm:ss, depending on the syslog format in use.
# - Original @timestamp from Filebeat.
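# For example (values illustrative): an event carrying both
#   IsoTimestamp: "2021-03-04T17:28:23Z" and Timestamp: "Mar 04 17:28:23"
# takes @timestamp from IsoTimestamp; later sources are only consulted when the
# higher-precedence ones are absent or fail to parse.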
- date:
if: ctx.cyberarkpas.monitor.IsoTimestamp != null
field: cyberarkpas.monitor.IsoTimestamp
target_field: _tmp.timestamp
formats:
- ISO8601
on_failure:
- append:
field: error.message
value: "failed to parse ISO timestamp field: {{{cyberarkpas.monitor.IsoTimestamp}}}: {{{_ingest.on_failure_message}}}"

- date:
if: 'ctx._tmp.timestamp == null && ctx.cyberarkpas.monitor.Timestamp != null'
field: cyberarkpas.monitor.Timestamp
target_field: _tmp.timestamp
formats:
# This is the default format.
- 'MMM dd HH:mm:ss'
# Fall back to a few other formats in case the above fails.
- ISO8601
- 'MMM d HH:mm:ss'
- "EEE MMM dd HH:mm:ss"
- "EEE MMM d HH:mm:ss"
- "MMM d HH:mm:ss z"
- "MMM dd HH:mm:ss z"
- "EEE MMM d HH:mm:ss z"
- "EEE MMM dd HH:mm:ss z"
- "MMM d yyyy HH:mm:ss"
- "MMM dd yyyy HH:mm:ss"
- "EEE MMM d yyyy HH:mm:ss"
- "EEE MMM dd yyyy HH:mm:ss"
- "MMM d yyyy HH:mm:ss z"
- "MMM dd yyyy HH:mm:ss z"
- "EEE MMM d yyyy HH:mm:ss z"
- "EEE MMM dd yyyy HH:mm:ss z"
on_failure:
- append:
field: error.message
value: "failed to parse timestamp field: {{{cyberarkpas.monitor.Timestamp}}}: {{{_ingest.on_failure_message}}}"

- date:
if: ctx._tmp.timestamp == null && ctx._tmp.syslog_ts != null && ctx.event?.timezone == null
field: _tmp.syslog_ts
target_field: _tmp.timestamp
formats:
# This is the default format.
- 'MMM dd HH:mm:ss'
# Fall back to a few other formats in case the above fails.
- ISO8601
- 'MMM d HH:mm:ss'
- "EEE MMM dd HH:mm:ss"
- "EEE MMM d HH:mm:ss"
- "MMM d HH:mm:ss z"
- "MMM dd HH:mm:ss z"
- "EEE MMM d HH:mm:ss z"
- "EEE MMM dd HH:mm:ss z"
- "MMM d yyyy HH:mm:ss"
- "MMM dd yyyy HH:mm:ss"
- "EEE MMM d yyyy HH:mm:ss"
- "EEE MMM dd yyyy HH:mm:ss"
- "MMM d yyyy HH:mm:ss z"
- "MMM dd yyyy HH:mm:ss z"
- "EEE MMM d yyyy HH:mm:ss z"
- "EEE MMM dd yyyy HH:mm:ss z"
on_failure:
- append:
field: error.message
value: "failed to parse legacy syslog timestamp: {{{_tmp.syslog_ts}}}: {{{_ingest.on_failure_message}}}"

- date:
if: ctx._tmp.timestamp == null && ctx._tmp.syslog_ts != null && ctx.event?.timezone != null
field: _tmp.syslog_ts
target_field: _tmp.timestamp
timezone: '{{{event.timezone}}}'
formats:
# This is the default format.
- 'MMM dd HH:mm:ss'
# Fall back to a few other formats in case the above fails.
- ISO8601
- 'MMM d HH:mm:ss'
- "EEE MMM dd HH:mm:ss"
- "EEE MMM d HH:mm:ss"
- "MMM d HH:mm:ss z"
- "MMM dd HH:mm:ss z"
- "EEE MMM d HH:mm:ss z"
- "EEE MMM dd HH:mm:ss z"
- "MMM d yyyy HH:mm:ss"
- "MMM dd yyyy HH:mm:ss"
- "EEE MMM d yyyy HH:mm:ss"
- "EEE MMM dd yyyy HH:mm:ss"
- "MMM d yyyy HH:mm:ss z"
- "MMM dd yyyy HH:mm:ss z"
- "EEE MMM d yyyy HH:mm:ss z"
- "EEE MMM dd yyyy HH:mm:ss z"
on_failure:
- append:
field: error.message
value: "failed to parse legacy syslog timestamp: {{{_tmp.syslog_ts}}}: {{{_ingest.on_failure_message}}}"

- set:
field: '@timestamp'
value: '{{{_tmp.timestamp}}}'
ignore_empty_value: true
override: true

#
# Convert field names from CamelCase to snake_case.
#
- script:
lang: painless
description: "Converts monitor field's names from CamelCase to snake_case"
source: >
String to_snake_case(String s) {
/* faster code path for strings that won't need an underscore */
if (s.chars().skip(1).noneMatch(Character::isUpperCase)) {
return s.toLowerCase();
}
int run = 0;
boolean first = true;
StringBuilder result = new StringBuilder();
for (char c : s.toCharArray()) {
char o = Character.toLowerCase(c);
if (c != o) {
if (run == 0 && !first) {
result.append('_');
}
run ++;
} else {
if (run > 1) {
char prev = result.charAt(result.length()-1);
result.setCharAt(result.length()-1, (char)'_');
result.append(prev);
}
run = 0;
first = false;
}
result.append(o);
}
return result.toString();
}
def keys_to_snake_case_recursive(Map object) {
return object.entrySet().stream().collect(
Collectors.toMap(
e -> to_snake_case(e.getKey()),
e -> e.getValue() instanceof Map ? keys_to_snake_case_recursive(e.getValue()) : e.getValue()
)
);
}
ctx.cyberarkpas.monitor = keys_to_snake_case_recursive(ctx.cyberarkpas.monitor);
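# Illustrative conversions produced by the script above (exact source field names assumed):
#   IsoTimestamp       -> iso_timestamp
#   CPUUsage           -> cpu_usage
#   DriveFreeSpaceInGB -> drive_free_space_in_gb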

########################################################
# All processors from this point use the snake_case form
# to access CyberArk fields.
########################################################

#
# Parse integers
#
- convert:
field: cyberarkpas.monitor.average_execution_time
type: integer
ignore_missing: true
- convert:
field: cyberarkpas.monitor.max_execution_time
type: integer
ignore_missing: true
- convert:
field: cyberarkpas.monitor.average_queue_time
type: integer
ignore_missing: true
- convert:
field: cyberarkpas.monitor.max_queue_time
type: integer
ignore_missing: true
- convert:
field: cyberarkpas.monitor.number_of_parallel_tasks
type: integer
ignore_missing: true
- convert:
field: cyberarkpas.monitor.max_parallel_tasks
type: integer
ignore_missing: true
- convert:
field: cyberarkpas.monitor.transaction_count
type: integer
ignore_missing: true
- convert:
field: cyberarkpas.monitor.cpu_usage
type: integer
ignore_missing: true
- convert:
field: cyberarkpas.monitor.memory_usage
type: integer
ignore_missing: true
- convert:
field: cyberarkpas.monitor.drive_free_space_in_gb
type: integer
ignore_missing: true
- convert:
field: cyberarkpas.monitor.drive_total_space_in_gb
type: integer
ignore_missing: true
- convert:
field: cyberarkpas.monitor.syslog_queue_size
type: integer
ignore_missing: true

########################################################
# ECS enrichment
########################################################

- set:
field: event.kind
value: metric

#
# Observer fields
#
- rename:
field: cyberarkpas.monitor.vendor
target_field: observer.vendor
ignore_missing: true
- rename:
field: cyberarkpas.monitor.product
target_field: observer.product
ignore_missing: true
- set:
field: observer.version
copy_from: cyberarkpas.monitor.version
ignore_empty_value: true
- rename:
field: cyberarkpas.monitor.hostname
target_field: observer.hostname
ignore_missing: true
# Use hostname from syslog if monitor record's Hostname field is missing.
- rename:
field: _tmp.hostname
target_field: observer.hostname
ignore_missing: true
if: ctx.observer?.hostname == null

#
# Populate related.hosts
#
- append:
field: related.hosts
value: '{{{observer.hostname}}}'
if: ctx.observer?.hostname != null
allow_duplicates: false

#
# Set host fields, unless already set
#
- script:
lang: painless
description: 'Set host.cpu.usage'
if: ctx.host?.cpu?.usage == null
source: >-
if (ctx.host == null) ctx.host = [:];
if (ctx.host.cpu == null) ctx.host.cpu = [:];
ctx.host.cpu.usage = ctx.cyberarkpas.monitor.cpu_usage/100.0;
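# e.g. a reported cpu_usage of 57 (percent, value illustrative) becomes host.cpu.usage: 0.57,
# matching the 0..1 scale ECS uses for host.cpu.usage.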
- set:
field: host.name
value: '{{{observer.hostname}}}'
ignore_empty_value: true
if: ctx.host?.name == null

#
# Cleanup
#
- remove:
field: _tmp
ignore_missing: true

on_failure:
- append:
field: error.message
value: '{{{_ingest.on_failure_message}}}'
- remove:
field: _tmp
ignore_missing: true
- set:
field: event.kind
value: pipeline_error
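
As a rough way to exercise this pipeline manually, a simulate request along these lines could be used (the installed pipeline name depends on the package version, and the sample field names and values are illustrative):

POST _ingest/pipeline/logs-cyberarkpas.monitor-<version>/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "<5>1 2021-03-04T17:28:23Z VAULT {\"format\":\"elastic\",\"version\":\"1.0\",\"syslog\":{\"monitor_record\":{\"Product\":\"VaultMonitor\",\"Hostname\":\"VAULT\",\"CPUUsage\":\"5\"}}}"
      }
    }
  ]
}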