Content
This hands-on module covers the fundamentals of deploying a Splunk DSP cluster and designing pipelines for core use cases: installation, source and sink configuration, pipeline design and backup, and monitoring a DSP environment.
Please note that this class may run across four days, in 4.5-hour sessions each day, for a total of 18 hours of content.
Who should attend
This 4-day module is designed for experienced Splunk administrators who are new to Splunk DSP.
Prerequisites
Required:
Recommended:
- Architecting Splunk Enterprise Deployments (ASED)
- Working knowledge of:
  - Distributed system architectures
  - Apache Kafka (user level)
  - Apache Flink (user level)
  - Kubernetes (admin level)
Objectives
- Introduction to Splunk Data Stream Processor
- Deploying a DSP cluster
- Prepping Sources and Sinks
- Building Pipelines - Basics
- Building Pipelines - Deep Dive
- Working with 3rd party Data Feeds
- Working with Metric Data
- Monitoring DSP Environment
Outline: Implementing Splunk Data Stream Processor (DSP) (ISDSP)
Topic 1 – Introduction to DSP
- Review Splunk deployment options and challenges
- Describe the purpose and value of Splunk DSP
- Understand DSP concepts and terminologies
Topic 2 – Deploying a DSP Cluster
- List DSP core components and system requirements
- Describe installation options and steps
- Check DSP service status (see the readiness sketch after this list)
- Navigate the DSP UI
- Use scloud
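DSP runs on a Kubernetes-based cluster, so a quick way to check service status is to look at pod readiness. The sketch below is a convenience only, assuming kubectl on an admin node is already pointed at the DSP cluster; it is not the documented health-check procedure.

```python
# Illustrative readiness check: list pods in every namespace and flag any whose
# containers are not all ready. Assumes kubectl is on PATH and targets the DSP cluster.
import json
import subprocess

def not_ready_pods():
    out = subprocess.run(
        ["kubectl", "get", "pods", "--all-namespaces", "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    problems = []
    for pod in json.loads(out)["items"]:
        statuses = pod["status"].get("containerStatuses", [])
        if not statuses or not all(c["ready"] for c in statuses):
            problems.append(f'{pod["metadata"]["namespace"]}/{pod["metadata"]["name"]}')
    return problems

if __name__ == "__main__":
    bad = not_ready_pods()
    print("All pods ready" if not bad else "Not ready: " + ", ".join(bad))
```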
Topic 3 – Prepping Sources and Sinks
- Ingest data with the DSP REST API service (a Python sketch follows this list)
- Configure DSP source connections for Splunk data
- Configure DSP sink connections for Splunk indexers
- Create Splunk-to-Splunk pass-through pipelines
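A minimal ingestion sketch in Python, assuming the DSP Ingest REST API is reachable and that a bearer token has already been obtained (for example via scloud). The hostname, port, API path, and event fields below are placeholders, not guaranteed values for every DSP release.

```python
# Illustrative only: post a small batch of events to the DSP Ingest REST API.
# DSP_HOST, the port, the API path, and the token handling are assumptions;
# substitute whatever your deployment and DSP release actually use.
import json
import os

import requests

DSP_HOST = os.environ.get("DSP_HOST", "dsp.example.com")  # hypothetical hostname
INGEST_URL = f"https://{DSP_HOST}:31000/default/ingest/v1beta2/events"
TOKEN = os.environ["DSP_TOKEN"]  # bearer token obtained out of band

events = [
    {"body": "user login succeeded", "sourcetype": "auth", "host": "web-01"},
    {"body": "user login failed", "sourcetype": "auth", "host": "web-02"},
]

resp = requests.post(
    INGEST_URL,
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    data=json.dumps(events),
    verify=False,  # lab-only: skip TLS verification for self-signed certificates
    timeout=30,
)
resp.raise_for_status()
print("Ingest API accepted the batch:", resp.status_code)
```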
Topic 4 – Building Pipelines - Basics
- Describe the basic elements of a DSP pipeline
- Create data pipelines with the DSP canvas and SPL2
- List DSP pipeline commands
- Use scalar functions to convert data types and schema
- Filter and route data to multiple sinks
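Filtering and routing in the labs is expressed on the DSP canvas or in SPL2. Purely as a language-agnostic illustration of the branch-per-sink idea (one predicate per sink, every record offered to every branch), here is a small Python sketch with made-up field and sink names; it is not DSP syntax.

```python
# Conceptual filter-and-route: each sink has its own predicate, and a record
# can satisfy more than one predicate and therefore reach more than one sink.
records = [
    {"source": "fw01", "severity": "high", "body": "blocked outbound traffic"},
    {"source": "app7", "severity": "low", "body": "cache refresh complete"},
    {"source": "fw02", "severity": "high", "body": "port scan detected"},
]

routes = {
    "security_index": lambda r: r["severity"] == "high",
    "ops_index": lambda r: r["source"].startswith("app"),
}

sinks = {name: [] for name in routes}
for record in records:
    for sink, predicate in routes.items():
        if predicate(record):
            sinks[sink].append(record)

for sink, routed in sinks.items():
    print(sink, "<-", len(routed), "records")
```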
Topic 5 – Building Pipelines - Deep Dive
- Manipulate pipeline options:
  - Extract
  - Transform
  - Obfuscate (a masking sketch follows this list)
  - Aggregate data and apply conditional triggers
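The obfuscation lab uses DSP's own evaluation and regex functions; the Python sketch below only illustrates the masking idea (hash a stable identifier, redact card-number-like substrings), with field names, salt, and patterns invented for the example.

```python
# Conceptual field obfuscation: hash a user identifier and mask card-like numbers.
# The field names ("user", "body") and the salt are illustrative, not a DSP schema.
import hashlib
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")
SALT = "rotate-me"  # illustrative; manage real salts/keys outside the code

def obfuscate(record: dict) -> dict:
    out = dict(record)
    if "user" in out:
        out["user"] = hashlib.sha256((SALT + out["user"]).encode()).hexdigest()[:16]
    if "body" in out:
        out["body"] = CARD_PATTERN.sub("****-REDACTED-****", out["body"])
    return out

print(obfuscate({"user": "jsmith", "body": "payment with 4111 1111 1111 1111 approved"}))
```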
Topic 6 – Working with 3rd party Data Feeds
- Read from and write data to pub-sub systems like Kafka (a producer/consumer sketch follows this list)
- List sources supported with the collect service
- Transform and normalize data from Kafka
- Write to S3
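A minimal produce-and-read sketch with the kafka-python client, assuming a reachable broker at localhost:9092 and a topic named dsp-demo (both placeholders). In the course the reading and writing is done by DSP's Kafka connectors; this only shows what the feed looks like from the Kafka side.

```python
# Illustrative Kafka round trip with kafka-python (pip install kafka-python).
# Broker address and topic name are placeholders for whatever the lab provides.
import json

from kafka import KafkaConsumer, KafkaProducer

BROKER = "localhost:9092"
TOPIC = "dsp-demo"

producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"source": "sensor-42", "temp_c": 21.5})
producer.flush()

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating when no new messages arrive
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.topic, message.offset, message.value)
```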
Topic 7 – Working with Metric Data
- Onboard metric data into DSP
- Transform metric data for Splunk indexers and SignalFx
- Send metric data to Splunk indexers
- Send metric data to Splunk SignalFx
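To make the two metric targets concrete, the sketch below reshapes one generic metric reading into an HEC-style Splunk metrics payload and a SignalFx-style gauge datapoint. The input record layout is invented for the example, and the exact schemas used in the labs may differ.

```python
# Illustrative reshaping of one metric reading into two target payload shapes.
# The input record layout is made up; check the lab data for the real schema.
import time

reading = {"name": "cpu.utilization", "value": 73.2, "host": "web-01", "region": "us-east-1"}

def to_splunk_metrics(r: dict) -> dict:
    # HEC-style metrics payload: the metric value and dimensions live under "fields".
    return {
        "time": time.time(),
        "event": "metric",
        "host": r["host"],
        "fields": {f"metric_name:{r['name']}": r["value"], "region": r["region"]},
    }

def to_signalfx_datapoint(r: dict) -> dict:
    # SignalFx-style gauge datapoint with dimensions.
    return {
        "gauge": [
            {
                "metric": r["name"],
                "value": r["value"],
                "dimensions": {"host": r["host"], "region": r["region"]},
            }
        ]
    }

print(to_splunk_metrics(reading))
print(to_signalfx_datapoint(reading))
```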
Topic 8 – Monitoring DSP Environment
- Back up DSP pipelines (a backup sketch follows this list)
- Monitor DSP environment
- Describe steps to isolate DSP service issues
- Scale DSP
- Replace DSP master node
- Upgrade DSP cluster
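A backup sketch, assuming pipeline definitions can be exported over the DSP Streams REST API with a bearer token. The host, port, API path, and response shape below are hypothetical placeholders; use whatever export mechanism (scloud or REST) your DSP version documents.

```python
# Illustrative pipeline backup: fetch pipeline definitions and write them to disk.
# The Streams API URL (host, port, version path), the token, and the "items"
# response shape are all assumptions for this sketch.
import json
import os
import pathlib

import requests

DSP_HOST = os.environ.get("DSP_HOST", "dsp.example.com")  # hypothetical
PIPELINES_URL = f"https://{DSP_HOST}:31000/default/streams/v3beta1/pipelines"
TOKEN = os.environ["DSP_TOKEN"]

resp = requests.get(
    PIPELINES_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify=False,  # lab-only: self-signed certificates
    timeout=30,
)
resp.raise_for_status()

backup_dir = pathlib.Path("dsp_pipeline_backup")
backup_dir.mkdir(exist_ok=True)
for pipeline in resp.json().get("items", []):
    name = pipeline.get("name") or pipeline.get("id", "unnamed")
    (backup_dir / f"{name}.json").write_text(json.dumps(pipeline, indent=2))
    print("saved", name)
```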