Experimental project simulating the Detection-as-Code (DaC) lifecycle as it would operate in a Google Chronicle SecOps environment. The focus is a bare-bones build, since getting access to a Chronicle license is not an option. Instead, I've built it bottom-up from the expected detection flow, and later dive into how the Chronicle API would fit into the picture.
Note: What's not included, as this is experimental:
- A full Detection-as-Code CI/CD pipeline (testing, validating content and use cases, linting and formatting) --> something to explore later.
```
.
├── docker-log-generator/         # Synthetic mock UDM logs to Fluentd
├── fluentd/                      # Collects and buffers logs locally
├── processed-logs/udm_log/       # Logging sink (simulates Chronicle UDM ingestion)
├── scripts/                      # Custom Python logic to match logs with rules
├── rules/                        # YARA-L and YAML rule definitions
├── reference_lists/              # Chronicle-style reference lists (IP ranges, etc.)
├── terraform/                    # Example rule and datasource definitions (Chronicle-like)
├── secops_rules.yaml             # DaC-friendly YAML defining rules to deploy (how it would be operationalised)
├── secops_reference_lists.yaml
├── alerts/                       # Simulated alerts generated from matching rules
├── docs/                         # Extra documentation
├── Makefile                      # Easy CLI for bringing the system up/down
└── tests/                        # Rule test cases (to be continued)
```
Engineering Parts:
- Docker container running Fluentd
- UDM-compatible synthetic logs
- Custom Python detection engine and alert creation
- A companion script that generates logs for Fluentd

Detection Life Cycle Management:
- Detection logic authored in YAML/YARA-L format
- Terraform IaC to manage rules and data sources (exploratory)
- Simulated GitOps model for lifecycle management

Logs are emitted by `docker-log-generator/log_emitter.py`, simulating user activity like logins, admin changes, geolocation anomalies, etc. The logs are already UDM-shaped (based on Chronicle's Unified Data Model) and pushed to Fluentd:
```json
{
  "metadata": {
    "event_timestamp": "2025-08-06T12:34:56Z",
    ...
  },
  "principal": {
    "ip": "8.8.8.8",
    "location": {
      "country_or_region": "RU"
    },
    ...
  },
  "security_result": {
    "action": "LOGIN_SUCCESS",
    ...
  }
}
```
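For illustration, an event shaped like the one above could be produced by a small helper; `make_login_event` is a hypothetical name for this sketch, not the actual function in `log_emitter.py`:

```python
import json
from datetime import datetime, timezone

def make_login_event(ip: str, country: str, action: str = "LOGIN_SUCCESS") -> dict:
    """Build a minimal UDM-shaped login event (illustrative fields only)."""
    now = datetime.now(timezone.utc).isoformat()
    return {
        "metadata": {"event_timestamp": now, "ingested_timestamp": now},
        "principal": {
            "ip": ip,
            "location": {"country_or_region": country},
        },
        "security_result": {"action": action},
    }

# Emit one event as a JSON line, as Fluentd would receive it
print(json.dumps(make_login_event("8.8.8.8", "RU")))
```

In the real project the event would be forwarded to Fluentd rather than printed; the shape of the dict is what matters here.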
Logs are consumed by a custom Python engine in `scripts/match_logs_from_fluentd.py`, which performs:
- UDM field normalization, e.g. a normalized event:

  ```json
  {
    "metadata": {
      "event_timestamp": "2025-08-06T13:07:12.122409+00:00",
      "ingested_timestamp": "2025-08-06T13:07:12.122409+00:00"
    },
    "product": "gcp",
    "event_type": "LOGIN",
    "vendor_name": "Google",
    "principal": {
      "email_addresses": ["[email protected]"],
      "ip": "103.27.4.178",
      "hostname": "host-103-27-4-178.reliancejio.com"
    },
    "geo": {
      "country": "IN",
      "is_admin": true
    },
    "network": {
      "asn": "AS55410",
      "asn_name": "Reliance Jio",
      "ip": "103.27.4.178",
      "reverse_dns": "host-103-27-4-178.reliancejio.com"
    },
    "security_result": {
      "severity": "HIGH",
      "rule_name": "None"
    }
  }
  ```
- Matching UDM fields against rule conditions
- Cross-referencing with reference lists (e.g., internal IP ranges)
- Emitting alerts to the `alerts/` folder
This simulates what Chronicle’s Detection Engine would do using the YARA-L rule format.
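In spirit, the matching step walks each rule's field conditions against the nested UDM event. A minimal sketch, assuming rules are expressed as dotted-path conditions (the real logic in `scripts/match_logs_from_fluentd.py` may differ):

```python
def get_field(event: dict, dotted: str):
    """Resolve a dotted UDM path like 'principal.ip' against a nested event."""
    node = event
    for part in dotted.split("."):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node

def matches(event: dict, conditions: dict) -> bool:
    """True only if every field condition equals the event's value."""
    return all(get_field(event, field) == want for field, want in conditions.items())

rule = {
    "security_result.action": "LOGIN_SUCCESS",
    "principal.location.country_or_region": "RU",
}
event = {
    "principal": {"ip": "8.8.8.8", "location": {"country_or_region": "RU"}},
    "security_result": {"action": "LOGIN_SUCCESS"},
}
print(matches(event, rule))  # → True
```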
YAML/YARA-L rules define detection logic:

```yaml
rule: impossible_travel
condition: |
  principal.ip NOT IN private_ip_ranges.txt AND
  principal.location.country_or_region CHANGED_WITHIN 1 HOUR
severity: HIGH
type: anomaly
```
Reference list example (`reference_lists/private_ip_ranges.txt`):

```
10.0.0.0/8
192.168.0.0/16
```
If a match is found, alerts are generated to the `alerts/` folder in JSON format. You can also run test cases in `tests/` against your rules.
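The reference-list check maps naturally onto Python's standard `ipaddress` module; a minimal sketch of how the engine could test an IP against `private_ip_ranges.txt`:

```python
import ipaddress

def load_ranges(lines):
    """Parse CIDR strings (one per line) into network objects."""
    return [ipaddress.ip_network(line.strip()) for line in lines if line.strip()]

def in_ranges(ip: str, ranges) -> bool:
    """True if the IP falls inside any of the loaded networks."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ranges)

ranges = load_ranges(["10.0.0.0/8", "192.168.0.0/16"])
print(in_ranges("192.168.1.5", ranges))  # → True
print(in_ranges("8.8.8.8", ranges))      # → False
```

In the project, the lines would come from reading the file under `reference_lists/` instead of an in-memory list.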
New Detection: impossible_travel_alerts
```json
{
  "metadata": {
    "event_timestamp": "2025-08-06T13:07:14.148646+00:00",
    "ingested_timestamp": "2025-08-06T13:07:14.148646+00:00"
  },
  "locations": [
    {
      "ip": "3.32.132.92",
      "hostname": "host-3-32-132-92.googlellc.com",
      "geo": {
        "country": "US",
        "is_admin": true
      }
    },
    {
      "ip": "142.112.180.19",
      "hostname": "host-142-112-180-19.teluscommunicationsinc.com",
      "geo": {
        "country": "CA",
        "is_admin": true
      }
    }
  ],
  "note": "Two admin logins from different locations within the same second"
}
```
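The impossible-travel logic can be sketched as tracking each user's last login country and flagging a change inside the time window. The tuple shape below is a simplification for illustration, not the full UDM events the engine consumes:

```python
from datetime import datetime, timedelta

def impossible_travel(events, window=timedelta(hours=1)):
    """Flag logins from a new country within `window` of the previous login.
    Events are (timestamp, user, country) tuples, assumed time-sorted."""
    alerts = []
    last = {}  # user -> (timestamp, country)
    for ts, user, country in events:
        if user in last:
            prev_ts, prev_country = last[user]
            if country != prev_country and ts - prev_ts <= window:
                alerts.append({"user": user, "from": prev_country, "to": country})
        last[user] = (ts, country)
    return alerts

t0 = datetime(2025, 8, 6, 13, 7, 14)
events = [(t0, "alice", "US"), (t0 + timedelta(seconds=1), "alice", "CA")]
print(impossible_travel(events))  # → [{'user': 'alice', 'from': 'US', 'to': 'CA'}]
```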
New Detection: high_volume_login_alerts
```json
[
  {
    "rule_name": "High Volume Login from CA",
    "severity": "MEDIUM",
    "event_count": 15,
    "events": [
      {
        "metadata": {
          "event_timestamp": "2025-08-06T13:07:29.552018+00:00",
          "ingested_timestamp": "2025-08-06T13:07:29.552018+00:00"
        },
        "product": "gcp",
        "event_type": "LOGIN",
        "vendor_name": "Google",
        "principal": {
          "email_addresses": [
            "[email protected]"
          ],
          "ip": "184.108.217.157",
          "hostname": "host-184-108-217-157.teluscommunicationsinc..com"
        },
        "geo": {
          "country": "CA",
          "is_admin": true
        },
        "network": {
          "asn": "AS852",
          "asn_name": "TELUS Communications Inc.",
          "ip": "184.108.217.157",
          "reverse_dns": "host-184-108-217-157.teluscommunicationsinc..com"
        },
        "security_result": {
          "severity": "HIGH",
          "rule_name": "None"
        }
      },
```
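A minimal sketch of the aggregation behind this alert, assuming events have already been filtered to the time window of interest; threshold and field names are illustrative:

```python
from collections import Counter

def high_volume_alerts(events, threshold=10):
    """Count LOGIN events per country and alert when the count
    meets the threshold. Events are UDM-shaped dicts."""
    counts = Counter(
        e.get("geo", {}).get("country")
        for e in events
        if e.get("event_type") == "LOGIN"
    )
    return [
        {
            "rule_name": f"High Volume Login from {country}",
            "severity": "MEDIUM",
            "event_count": n,
        }
        for country, n in counts.items()
        if n >= threshold
    ]

events = [{"event_type": "LOGIN", "geo": {"country": "CA"}} for _ in range(15)]
print(high_volume_alerts(events))
```

The real alert also embeds the matching events, as shown in the JSON sample above.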
The `terraform/` folder simulates how you would deploy these resources in a real-world environment using the Chronicle API. Each rule can be defined in `.tf` files and governed via `secops_rules.yaml`.
You would need to add the UDM fields that are populated for your data sources here:

File Name: `gcp_audit_logsource.tf`

In a real Chronicle deployment, these `.tf` files would use the Chronicle Terraform provider to push rules via API.
This is a focused table of Chronicle API endpoints useful for creating, updating, validating and managing detection content such as rules, reference lists and data sources.
| Resource Type | Endpoint | Method | Description | DaC Use Case |
| ------------------------- | ---------------------------------------------- | -------- | -------------------------------- | -------------------------------------------------------------- |
| **Detection Rules** | `/v1/detect/rules` | `POST` | Create new YARA-L detection rule | Create rule from GitHub-pushed code |
| | `/v1/detect/rules/{rule_id}` | `PATCH` | Update an existing rule | Update rule when a PR is merged |
| | `/v1/detect/rules/{rule_id}` | `DELETE` | Delete a rule | Remove deprecated or broken rules |
| | `/v1/detect/rules` | `GET` | List all existing rules | Validate drift between Git and Chronicle |
| | `/v1/detect/rules:validate` | `POST` | Validate rule syntax (dry run) | Lint rule pre-merge or CI check |
| **Rule Alert Counts** | `/v1/detect/rules/{rule_id}:getAlertStats` | `GET` | Get alert stats (volume) | Prioritize tuning noisy rules |
| **Rule Execution Logs** | `/v1/detect/rules/{rule_id}:getExecutionStats` | `GET` | Get match execution stats | Analyze performance of rules |
| **Reference Lists** | `/v1/detect/referenceLists` | `POST` | Create new reference list | Add threat intel or allowlists from Git |
| | `/v1/detect/referenceLists/{list_id}` | `PATCH` | Update reference list | Update indicators/IPs regularly |
| | `/v1/detect/referenceLists` | `GET` | List all reference lists | Audit existing lists vs repo |
| | `/v1/detect/referenceLists/{list_id}` | `DELETE` | Delete reference list | Remove stale lists |
| **Data Sources** | `/v2/assets/dataSources` | `POST` | Create new data source | Register new log sources programmatically |
| | `/v2/assets/dataSources` | `GET` | List data sources | Validate ingestion sources match code |
| **Parsers (UDM Mapping)** | `/v1/ingestion/parsers` *(limited access)* | `POST` | Deploy custom parsers | Align parser changes with rule logic (requires partner access) |
| **Detection Reports** | `/v1/detect/reports` | `GET` | Retrieve rule reports | Track rule health and triage history |
| **Auth (OAuth2)** | `https://oauth2.googleapis.com/token` | `POST` | Get access token for API | Required for all programmatic DaC actions |
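A sketch of calling the validate endpoint from `tools/push_to_chronicle.py`-style code. The base URL and payload shape are assumptions (regional endpoints vary), and the OAuth2 access token would come from the token endpoint in the table:

```python
import json
import urllib.request

CHRONICLE_BASE = "https://backstory.googleapis.com"  # assumption: region endpoint varies

def build_validate_request(rule_text: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a rules:validate call. The endpoint path follows
    the table above; the JSON payload shape is an assumption for this sketch."""
    body = json.dumps({"ruleText": rule_text}).encode()
    return urllib.request.Request(
        f"{CHRONICLE_BASE}/v1/detect/rules:validate",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_validate_request("rule demo { ... }", token="<oauth2-access-token>")
print(req.full_url, req.get_method())
```

Sending is a single `urllib.request.urlopen(req)` call once a real token is in hand; in CI, a non-2xx response would fail the pipeline step.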
Template Chronicle GitHub Repo
```
.
├── .github/
│   └── workflows/
│       └── detect.yaml            <-- CI/CD pipeline
├── rules/
│   ├── detection_rule_1.yaral
│   ├── detection_rule_2.yaral
│   └── metadata.yaml
├── reference_lists/
│   └── high_risk_ips.yaml
├── tests/
│   └── test_rules.py              <-- (optional) unit tests
├── tools/
│   └── push_to_chronicle.py       <-- Helper script for API calls
├── README.md
└── terraform/                     <-- Optional DaC Terraform setup
```
```mermaid
flowchart TD
    A[Push to GitHub - main/dev branch] --> B[GitHub Actions Triggered]
    B --> C[Validate Syntax - rules:validate API]
    C --> D{Did Validation Pass?}
    D -- Yes --> E[Deploy to Chronicle API - rules & lists]
    D -- No --> F[Fail CI - Comment on PR]
    E --> G[Check Drift or Sync State]
    G --> H[Post Summary as GitHub PR Comment]
```
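The "Check Drift or Sync State" step could compare rule names tracked in Git against the live listing from `GET /v1/detect/rules`; a minimal sketch, with the rule-name sets standing in for the parsed repo and API responses:

```python
def detect_drift(local_rules: set, remote_rules: set) -> dict:
    """Compare rule names in Git against rules deployed in Chronicle."""
    return {
        "missing_in_chronicle": sorted(local_rules - remote_rules),
        "not_in_git": sorted(remote_rules - local_rules),
    }

local = {"impossible_travel", "high_volume_login"}   # parsed from rules/
remote = {"impossible_travel", "legacy_rule"}        # from the list-rules API
print(detect_drift(local, remote))
# → {'missing_in_chronicle': ['high_volume_login'], 'not_in_git': ['legacy_rule']}
```

The resulting dict is exactly what the pipeline would post back as a PR comment summary.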