
Conversation


@vivekr-splunk vivekr-splunk commented Oct 22, 2025

Description

Add a new guide docs/HeavyForwarder-Standalone.md that walks users through creating a Splunk Heavy Forwarder on Kubernetes using the Splunk Operator Standalone custom resource. The guide covers:

  • ConfigMap-driven configuration with defaultsUrl
  • Reliable Splunk-to-Splunk forwarding via outputs.conf (indexAndForward=false, useACK=true, compressed=true, autoLBFrequency=30)
  • Operator-managed HEC enablement via the namespace secret
  • Deployment steps, validation, troubleshooting, cleanup, and references
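
The outputs.conf settings listed above boil down to a short stanza pair. A minimal sketch, assuming the group name `idx_svc` and an illustrative indexer service address (both are assumptions, not taken from the guide):

```shell
# Minimal outputs.conf sketch of the settings above; the group name
# "idx_svc" and the indexer service address are illustrative assumptions.
cat <<'EOF' > outputs.conf
[tcpout]
defaultGroup = idx_svc
indexAndForward = false
useACK = true

[tcpout:idx_svc]
server = splunk-idx-indexer-service.splunk.svc.cluster.local:9997
compressed = true
autoLBFrequency = 30
EOF
grep -c '^\[tcpout' outputs.conf   # prints 2 (both stanzas present)
```

`indexAndForward = false` is what keeps the instance in heavy-forwarder mode: it parses data but forwards everything instead of indexing locally.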

Key Changes

  • Added docs/HeavyForwarder-Standalone.md

No code or charts changed.

Testing and Verification

  • Rendered the Markdown locally to verify headings, anchors, and code blocks

  • Ran Markdown linting to check formatting

  • Dry-ran the provided kubectl commands against a test cluster to validate:

    • Standalone CR becomes Ready
    • Operator secret contains a HEC token
    • inputs.conf shows HEC enabled
    • outputs.conf shows heavy forwarder settings
    • Test HEC event posts successfully and is searchable at the indexers
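
The HEC-token check above involves decoding base64 secret data before it can be used with curl; a small sketch (the secret name `splunk-forwarder-secret`, key `hec_token`, and namespace `forwarder` are illustrative and depend on your CR metadata):

```shell
# Sketch of extracting the operator-generated HEC token from the namespace
# secret; names below are illustrative assumptions.
decode_token() {
  # Secret data comes back base64-encoded from kubectl; decode it for curl use
  printf '%s' "$1" | base64 -d
}
# Against a real cluster, roughly:
#   decode_token "$(kubectl get secret splunk-forwarder-secret \
#     -n forwarder -o jsonpath='{.data.hec_token}')"
decode_token "$(printf 'example-hec-token' | base64)"
```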

No automated tests are added since this is documentation only.

Related Issues

  • Jira: CSPL-4133

PR Checklist

  • Code changes adhere to the project's coding standards.
  • Relevant unit and integration tests are included.
  • Documentation has been updated accordingly.
  • All tests pass locally.
  • The PR description follows the project's guidelines.


Copilot AI left a comment


Pull Request Overview

This PR adds comprehensive documentation for deploying a Splunk Heavy Forwarder on Kubernetes using the Splunk Operator's Standalone Custom Resource. The guide demonstrates ConfigMap-based configuration management, operator-managed HEC enablement, and reliable Splunk-to-Splunk forwarding.

Key Changes:

  • Added complete heavy forwarder deployment guide covering architecture, configuration, deployment steps, validation procedures, and troubleshooting
  • Documented operator-managed HEC token generation and automatic inputs.conf configuration
  • Provided detailed testing procedures with expected outputs for each validation step



The ConfigMap contains a `default.yml` file with three main configuration sections:

##### 1. outputs.conf - Forwarding Configuration (Lines 9-22)

Copilot AI Oct 22, 2025


The line number reference '(Lines 9-22)' is misleading as it refers to lines in the original YAML file structure, not the documentation itself. Consider removing the line number reference or clarifying what it refers to (e.g., 'Lines 9-22 in the ConfigMap YAML structure shown below').

Suggested change
##### 1. outputs.conf - Forwarding Configuration (Lines 9-22)
##### 1. outputs.conf - Forwarding Configuration (Lines 9-22 in the ConfigMap YAML structure shown below)

- `compressed: true` - Reduces network bandwidth
- `autoLBFrequency: 30` - Distributes load across indexers every 30 seconds

##### 2. props.conf - Data Parsing Rules (Lines 23-30)

Copilot AI Oct 22, 2025


Similar to the previous section, the line number reference '(Lines 23-30)' is unclear. These line numbers don't correspond to lines in this document and may confuse readers.

Suggested change
##### 2. props.conf - Data Parsing Rules (Lines 23-30)
##### 2. props.conf - Data Parsing Rules

- Can filter out unwanted events
- Can route to specific indexer groups

##### 3. transforms.conf - Data Transformation (Lines 31-42)

Copilot AI Oct 22, 2025


The line number reference '(Lines 31-42)' should be clarified or removed to avoid confusion, as these numbers don't map to the current document structure.

Suggested change
##### 3. transforms.conf - Data Transformation (Lines 31-42)
##### 3. transforms.conf - Data Transformation

- `drop_noise` - Filters out DEBUG messages to reduce noise
- `to_idx_svc` - Routes specific host data to the indexer cluster
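
The two transforms named above could look like the following conf pair; the regex patterns and the `idx_svc` routing group are assumptions, not taken from the guide:

```shell
# Illustrative transforms.conf for the transforms listed above; regexes
# and the "idx_svc" routing group are assumptions.
cat <<'EOF' > transforms.conf
[drop_noise]
REGEX = \sDEBUG\s
DEST_KEY = queue
FORMAT = nullQueue

[to_idx_svc]
SOURCE_KEY = MetaData:Host
REGEX = ^host::app-.*
DEST_KEY = _TCP_ROUTING
FORMAT = idx_svc
EOF
grep -c '^\[' transforms.conf   # prints 2 (both stanzas present)
```

Routing to `nullQueue` discards matching events before indexing; `_TCP_ROUTING` overrides the default tcpout group per event.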

### Part 2: Standalone Custom Resource (Lines 44-55)

Copilot AI Oct 22, 2025


The line number reference '(Lines 44-55)' should be removed or clarified, as it doesn't correspond to lines in this documentation.

Suggested change
### Part 2: Standalone Custom Resource (Lines 44-55)
### Part 2: Standalone Custom Resource


coveralls commented Oct 22, 2025

Pull Request Test Coverage Report for Build 18706328521

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 86.552%

Totals

  • Change from base Build 18653942794: 0.0%
  • Covered Lines: 10710
  • Relevant Lines: 12374

💛 - Coveralls


1. **Kubernetes cluster** with kubectl access
2. **Splunk Operator** installed and running
3. **Two namespaces**:
Collaborator


I think it's optional to run these in the following namespaces since we support both single-namespace and multiple-namespace deployments, so it doesn't have to be a part of the prerequisites. Same with the service name.


**Transforms Explained:**
- `drop_noise` - Filters out DEBUG messages to reduce noise
- `to_idx_svc` - Routes specific host data to the indexer cluster
Collaborator


Is it idx_svc as above or to_idx_svc as here?

### Official Documentation
- [Splunk Operator GitHub](https://github.com/splunk/splunk-operator)
- [Splunk Operator Documentation](https://splunk.github.io/splunk-operator/)
- [Splunk Heavy Forwarder Documentation](https://docs.splunk.com/Documentation/Forwarder/latest/Forwarder/Aboutforwardingandreceivingdata)
Collaborator


It returns 404

Collaborator


- [transforms.conf](https://docs.splunk.com/Documentation/Splunk/latest/Admin/Transformsconf)

### Splunk Operator Custom Resources
- [Standalone CR Specification](https://splunk.github.io/splunk-operator/StandaloneSpec.html)
Collaborator


It returns 404


### Splunk Operator Custom Resources
- [Standalone CR Specification](https://splunk.github.io/splunk-operator/StandaloneSpec.html)
- [Common Spec Parameters](https://splunk.github.io/splunk-operator/CommonSpec.html)
Collaborator


It returns 404

kubectl get standalone -n forwarder

# Optional: Delete PVCs (data will be lost)
kubectl delete pvc -n forwarder -l app.kubernetes.io/instance=splunk-hf-standalone-standalone
Collaborator


There are finalizers, so it should happen automatically

indexAndForward: false # CRITICAL: False = Heavy Forwarder mode
autoLBFrequency: 30 # Load balance every 30 seconds
compressed: true # Compress data during transmission
"tcpout:idx_svc":
Collaborator


Is there a reason that this has double quotes while tcpout on line 117 does not? They appear to be at the same indentation, so it might make sense to make them consistent.

