# Case Event Timeline & Process Analysis Workflow
Objective: Generate a detailed timeline of events for a specific SOAR case (${CASE_ID}), including the full process execution chain leading to the alerted activity. Classify relevant processes as legitimate, LOLBIN, or malicious using GTI enrichment. Optionally enrich with MITRE TACTICs and generate a markdown report summarizing the findings. Optionally convert the report to PDF and attempt to attach it to the SOAR case.
Uses Tools:

- `soar-mcp_get_case_full_details` (provides initial context and alerts)
- `soar-mcp_list_events_by_alert`
- `secops-mcp_search_security_events` (crucial for finding parent process launch events)
- `soar-mcp_google_chronicle_list_events` (to get broader asset context)
- `soar-mcp_google_chronicle_get_rule_details` (optional, for specific rule context)
- `soar-mcp_google_chronicle_get_detection_details` (optional, for specific detection context)
- `gti-mcp_get_file_report` (for process hash classification)
- `secops-mcp_get_threat_intel` (for MITRE TACTIC mapping/general enrichment)
- `siemplify_create_gemini_case_summary` (optional, for AI-generated summary)
- `write_report` (for report generation)
- `soar-mcp_post_case_comment` (to note report location/attach if possible)
- Follow-up questions to the user (for report format/content/attachment/SOAR action confirmation)
- `attempt_completion`

Optional SOAR actions, based on user confirmation: `siemplify_case_tag`, `siemplify_change_priority`, `siemplify_add_general_insight`, `siemplify_update_case_description`, `siemplify_assign_case`, `siemplify_raise_incident`, `siemplify_create_gemini_case_summary`.
Workflow Steps & Diagram:
1. Get full case details (including alerts and comments) for `${CASE_ID}` using `get_case_full_details`.
2. For each alert obtained in Step 1, list the associated events using `list_events_by_alert`.
3. (Optional) If alert/event data contains specific Chronicle Rule IDs or Detection IDs, use `google_chronicle_get_rule_details` or `google_chronicle_get_detection_details` for more context.
4. Extract key process information (PID, Parent PID, Hash, Path, CmdLine) and involved assets (Hostnames, IPs) from the events obtained in Step 2 (see the extraction sketch below).
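A minimal sketch of the Step 4 extraction, assuming events are returned as UDM-style dictionaries; the helper name and the exact field paths (`principal.process.pid`, `target.process.file.sha256`, etc.) are illustrative assumptions and may differ per log source and per how the SOAR tool flattens events:

```python
# Sketch only: field paths are assumptions and may differ per log source
# and per how the SOAR tool flattens events.
def extract_process_info(event: dict) -> dict:
    """Pull the process and asset fields needed for timeline building (Step 4)."""
    metadata = event.get("metadata", {})
    principal = event.get("principal", {})
    target_proc = event.get("target", {}).get("process", {})
    return {
        "timestamp": metadata.get("event_timestamp"),
        "event_type": metadata.get("event_type"),
        "pid": target_proc.get("pid"),
        "parent_pid": principal.get("process", {}).get("pid"),
        "hash": target_proc.get("file", {}).get("sha256"),
        "path": target_proc.get("file", {}).get("full_path"),
        "cmdline": target_proc.get("command_line"),
        "hostname": principal.get("hostname"),
        "ips": principal.get("ip", []),
    }
```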
5. CRITICAL STEP (Find Parent Process Chain): Iteratively search for `PROCESS_LAUNCH` events to trace the parent process chain backward from the initial alert events (a sketch of the loop follows this step).
   - Start: Identify the parent process PID (or `productSpecificProcessId`) from the initial alert events (Step 2); call this `Current_Parent_PID`. Identify the timestamp of the child process launch (`Child_Timestamp`).
   - Iterate:
     - Search the SIEM (`secops-mcp_search_security_events`) for `PROCESS_LAUNCH` events where the target process PID matches `Current_Parent_PID`.
     - Time window: Use a focused time window around `Child_Timestamp` (e.g., +/- 15 minutes or +/- 1 hour).
     - Identifiers: Attempt searches using both the principal hostname (if known) and the principal IP address associated with the child process.
   - Store: If the launch event for `Current_Parent_PID` is found, store its details (parent PID, command line, timestamp, etc.) in the process chain data, update `Current_Parent_PID` to the newly found parent's PID, update `Child_Timestamp` to the timestamp of the event just found, and repeat the search.
   - Stop: Continue iterating backward until a known root process (e.g., `explorer.exe`, `services.exe`) is reached, the parent PID is null/invalid, or the search yields no results within a reasonable timeframe.
   - Troubleshooting: If `search_security_events` fails, times out, or returns no results:
     - Broaden the time window for the specific parent search (e.g., +/- 1 hour, +/- 6 hours); be aware this may increase noise.
     - Consider `soar-mcp_google_chronicle_list_events` filtered for `metadata.event_type = "PROCESS_LAUNCH"` on the specific asset around the expected time as an alternative.
     - If parent process launch events remain elusive, search for other related activity (e.g., user logins, network connections) associated with the parent process around its estimated start time to infer context.
   - Acknowledge limitations: Tracing the full chain might not always be possible due to log availability, timing discrepancies, unusual process IDs (e.g., PID 4), or processes starting before the log retention/search window.
   - Store all found launch event details chronologically.
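The tracing loop sketched in Python, under stated assumptions: `search_security_events` stands in for the `secops-mcp_search_security_events` tool, the query text is illustrative rather than a fixed query syntax, and timestamps are assumed to be parsed `datetime` objects:

```python
from datetime import timedelta

ROOT_PROCESSES = ("explorer.exe", "services.exe", "wininit.exe")

def trace_parent_chain(search_security_events, hostname, start_pid, start_ts,
                       window=timedelta(minutes=15), max_depth=15):
    """Walk PROCESS_LAUNCH events backwards from the alerted process.

    `search_security_events(text, start_time, end_time)` is assumed to wrap the
    secops-mcp_search_security_events tool and return a list of UDM-like dicts;
    `start_ts` is assumed to be a parsed datetime (Child_Timestamp).
    """
    chain = []
    current_pid, child_ts = start_pid, start_ts
    for _ in range(max_depth):                 # hard stop to avoid endless loops
        if not current_pid:
            break                              # parent PID null/invalid
        # Illustrative query text; exact syntax depends on the search tool.
        query = (f'PROCESS_LAUNCH events on host "{hostname}" '
                 f'where target.process.pid = {current_pid}')
        hits = search_security_events(text=query,
                                      start_time=child_ts - window,
                                      end_time=child_ts + window)
        if not hits:
            break                              # no launch event found in the window
        launch = hits[0]
        chain.append(launch)
        parent = launch.get("principal", {}).get("process", {})
        parent_path = (parent.get("file", {}).get("full_path") or "").lower()
        if parent_path.endswith(ROOT_PROCESSES):
            break                              # reached a known root process
        current_pid = parent.get("pid")
        # Assumes the event timestamp is already a datetime; parse it if not.
        child_ts = launch.get("metadata", {}).get("event_timestamp", child_ts)
    return chain
```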
6. (Optional) For key involved assets identified in Step 4, use `google_chronicle_list_events` to get broader event context for those assets around the alert time.
7. Enrich process hashes using GTI (`get_file_report`) to classify processes as Legitimate, LOLBIN, or Malicious (see the classification sketch below).
8. (Optional) Enrich activities with potential MITRE TACTICs using `get_threat_intel`.
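A heuristic sketch of the Step 7 classification; the LOLBIN list and the detection-count threshold are illustrative assumptions rather than GTI guidance, and the report shape assumes the VirusTotal-style `last_analysis_stats` field commonly present in file reports:

```python
# Heuristic sketch only: thresholds and the LOLBIN list are assumptions, not GTI policy.
KNOWN_LOLBINS = {"certutil.exe", "rundll32.exe", "mshta.exe", "regsvr32.exe",
                 "bitsadmin.exe", "wmic.exe", "powershell.exe", "cscript.exe"}

def classify_process(file_report: dict, process_name: str) -> str:
    """Map a GTI file report plus process name to Legitimate / LOLBIN / Malicious."""
    stats = file_report.get("last_analysis_stats", {})
    malicious_votes = stats.get("malicious", 0)
    if malicious_votes >= 5:                   # assumed threshold, tune per environment
        return "Malicious"
    if process_name.lower() in KNOWN_LOLBINS:
        return "LOLBIN"                        # legitimate binary, commonly abused
    if malicious_votes == 0:
        return "Legitimate"
    return "Suspicious (manual review)"
```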
9. Synthesize the collected data (case details, alert events, parent process events, asset events, enrichments), sorting events chronologically (see the sketch below).
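Chronological sorting and the optional time-delta column can be derived in one pass, assuming each timeline entry carries a parsed `datetime` under a `timestamp` key (both the key names and the helper are illustrative):

```python
def build_timeline(entries: list[dict]) -> list[dict]:
    """Sort collected events chronologically and add the delta from the previous event."""
    ordered = sorted(entries, key=lambda e: e["timestamp"])
    previous = None
    for entry in ordered:
        entry["delta"] = (entry["timestamp"] - previous) if previous else None
        previous = entry["timestamp"]
    return ordered
```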
10. (Optional) Generate an AI summary using `siemplify_create_gemini_case_summary`.
11. Format the report in Markdown. It MUST include:
    - A summary section (incorporating initial case details and, optionally, the Gemini summary).
    - A Process Execution Tree (text) showing the parent-child chain as determined. If the full chain could not be traced, clearly indicate where the tracing stopped (e.g., `[PID ???]`).
    - A Process Execution Tree (diagram) using Mermaid (`graph LR`), similarly reflecting the extent of the traced chain (see the rendering sketch after this step).
    - An Event Timeline Table including timestamps, classifications, and optional MITRE TACTICs/time deltas.
    - An analysis section.
    - Report limitation note: if the full process chain could not be determined, explicitly state this limitation in the report summary or analysis section.
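One way to render both tree representations from the traced chain (ordered root first); the entry keys (`name`, `pid`, `classification`) are assumptions about how the chain data was stored, and an untraced link can be represented by a placeholder entry:

```python
def render_process_trees(chain: list[dict]) -> tuple[str, str]:
    """Return (text_tree, mermaid_graph) for a root-first list of process entries.

    Each entry is assumed to carry 'name', 'pid', and 'classification' keys; an
    untraced link can be represented by an entry whose pid is '???'.
    """
    text_lines, mermaid_lines = [], ["graph LR"]
    for depth, proc in enumerate(chain):
        label = f"{proc['name']} [PID {proc['pid']}] ({proc['classification']})"
        text_lines.append("  " * depth + ("└─ " if depth else "") + label)
        mermaid_lines.append(f'    P{depth}["{label}"]')
        if depth:
            mermaid_lines.append(f"    P{depth - 1} --> P{depth}")
    return "\n".join(text_lines), "\n".join(mermaid_lines)
```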
12. Ask the user to confirm report generation and format preferences (e.g., include time delta, include Gemini summary).
13. Write the Markdown report to a timestamped file (e.g., `./reports/case_${CASE_ID}_timeline_${timestamp}.md`).
14. (Optional, based on user feedback) Convert the Markdown report to PDF using `pandoc` via `execute_command` (see the sketch after this step).
15. (Optional, based on user feedback) Attempt to attach the PDF to the SOAR case. Note: direct PDF attachment may require specific SOAR tools that are not always available. If attachment fails, post a comment with the local path to the MD/PDF report.
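A sketch of steps 13-15 combined: write the timestamped Markdown file, attempt the `pandoc` conversion (PDF output also requires a PDF engine such as `pdflatex` on the host), and fall back to a case comment pointing at the local file. `post_case_comment` stands in for the SOAR MCP tool, and the helper itself is hypothetical:

```python
import os
import subprocess
from datetime import datetime, timezone

def publish_report(md_content: str, case_id: str, post_case_comment) -> str:
    """Write the timestamped Markdown report, try a PDF conversion, note it on the case."""
    os.makedirs("./reports", exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
    md_path = f"./reports/case_{case_id}_timeline_{stamp}.md"
    pdf_path = md_path.replace(".md", ".pdf")
    with open(md_path, "w", encoding="utf-8") as fh:
        fh.write(md_content)
    note = f"Timeline report generated. Markdown available at: {md_path}"
    try:
        # pandoc PDF output needs a PDF engine (e.g. pdflatex) installed.
        subprocess.run(["pandoc", md_path, "-o", pdf_path], check=True)
        note = f"Timeline report generated. PDF available at: {pdf_path}"
    except (FileNotFoundError, subprocess.CalledProcessError):
        pass  # fall back to referencing the Markdown file only
    post_case_comment(case_id=case_id, comment=note)
    return md_path
```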
16. (Optional, based on user feedback) Ask the user whether they want to perform additional SOAR actions (tagging, priority change, insight, description update, assignment, incident declaration).
17. (Optional, based on user feedback) Execute the selected SOAR actions.
18. Conclude with `attempt_completion`.
```mermaid
sequenceDiagram
participant User
participant AutomatedAgent as Automated Agent (MCP Client)
participant SOAR as secops-soar
participant SIEM as secops-mcp
participant GTI as gti-mcp
User->>AutomatedAgent: Generate timeline for Case `${CASE_ID}` with full process tree
%% Step 1: Get Initial Case Details & Alerts
AutomatedAgent->>SOAR: get_case_full_details(case_id=`${CASE_ID}`)
SOAR-->>AutomatedAgent: Case Details, List of Alerts (A1, A2...), Comments
Note over AutomatedAgent: Initialize timeline_data = [], process_chain = {}, assets = set()
Note over AutomatedAgent: Use Alerts (A1, A2...) from get_case_full_details response
%% Step 2 & 3: Get Events & Optional Rule/Detection Details
loop For each Alert Ai
AutomatedAgent->>SOAR: list_events_by_alert(case_id=`${CASE_ID}`, alert_id=Ai)
SOAR-->>AutomatedAgent: Events for Alert Ai (E1, E2...)
Note over AutomatedAgent: Extract Process Info (PID P1, Parent PID PP1, Hash H1...), Assets (Host H, IP I...) & store in timeline_data, process_chain, assets
Note over AutomatedAgent: Extract Rule ID Ri, Detection ID Di if available
opt Rule ID Ri available
AutomatedAgent->>SOAR: google_chronicle_get_rule_details(rule_id=Ri, ...)
SOAR-->>AutomatedAgent: Rule Details
end
opt Detection ID Di available
AutomatedAgent->>SOAR: google_chronicle_get_detection_details(detection_id=Di, ...)
SOAR-->>AutomatedAgent: Detection Details
end
opt Process Hash H1 available
AutomatedAgent->>GTI: get_file_report(hash=H1)
GTI-->>AutomatedAgent: GTI Report for Hash H1 -> Classify P1
end
end
%% Step 5: Find Parent Processes
Note over AutomatedAgent: **CRITICAL: Find Parent Processes**
Note over AutomatedAgent: Current PID = PP1 (from initial events)
loop While Current PID is valid & not root
AutomatedAgent->>SIEM: search_security_events(text="PROCESS_LAUNCH for target PID Current PID")
SIEM-->>AutomatedAgent: Launch Event (Parent PID PP_Next, CmdLine...)
Note over AutomatedAgent: Store launch event in timeline_data
Note over AutomatedAgent: Add Current PID, PP_Next to process_chain
Note over AutomatedAgent: Current PID = PP_Next
end
%% Step 6: Optional Asset Event Search
opt Assets identified
loop For each Asset As in assets
AutomatedAgent->>SOAR: google_chronicle_list_events(target_entities=[{Identifier: As, ...}], time_frame=...)
SOAR-->>AutomatedAgent: Broader events for Asset As
Note over AutomatedAgent: Add relevant asset events to timeline_data
end
end
Note over AutomatedAgent: Sort timeline_data by time
%% Step 8: Optional MITRE Enrichment
Note over AutomatedAgent: (Optional) Enrich with MITRE TACTICs
loop For each relevant entry in timeline_data
AutomatedAgent->>SIEM: get_threat_intel(query="MITRE TACTIC for [activity description]")
SIEM-->>AutomatedAgent: Potential TACTIC(s)
end
%% Step 10: Optional Gemini Summary
opt Generate Gemini Summary
AutomatedAgent->>SOAR: siemplify_create_gemini_case_summary(case_id=`${CASE_ID}`, ...)
SOAR-->>AutomatedAgent: Gemini Summary Text
end
%% Step 12: Confirm Report Generation
AutomatedAgent->>User: Confirm: "Generate MD report (incl. Process Trees)? Include delta/Gemini? (Yes, include delta/Yes, exclude delta/Yes, include Gemini/Yes, include All/No Report)"
User->>AutomatedAgent: Confirmation (e.g., "Yes, exclude delta")
alt Report Confirmed ("Yes...")
%% Step 13: Write MD Report
Note over AutomatedAgent: Format report content (MUST include Trees & Table, optionally Gemini Summary)
AutomatedAgent->>AutomatedAgent: write_report(path="./reports/case_${CASE_ID}_timeline_${timestamp}.md", content=...)
Note over AutomatedAgent: MD Report file created.
%% Step 14 & 15: Confirm PDF/Attach
AutomatedAgent->>User: Confirm: "Convert report to PDF and attach/comment in SOAR? (Yes/No)"
User->>AutomatedAgent: Confirmation (e.g., "Yes")
alt PDF & Attach/Comment Confirmed
Note over AutomatedAgent: Attempt SOAR attachment (Tool dependent)
%% Fallback if attach fails
AutomatedAgent->>SOAR: post_case_comment(case_id=`${CASE_ID}`, comment="Generated report. PDF available at: PDF_PATH")
SOAR-->>AutomatedAgent: Comment Confirmation
%% Step 16 & 17: Optional SOAR Actions
AutomatedAgent->>User: Confirm: "Perform additional SOAR actions? (Tag Case/Change Priority/Add Insight/Update Description/Assign Case/Raise Incident/None)"
User->>AutomatedAgent: SOAR Action Choice (e.g., "Tag Case")
alt SOAR Action Chosen != "None"
%% Execute chosen SOAR action(s)
AutomatedAgent->>SOAR: [Chosen SOAR Tool](case_id=`${CASE_ID}`, ...)
SOAR-->>AutomatedAgent: Action Confirmation
end
AutomatedAgent->>AutomatedAgent: attempt_completion(result="Timeline analysis complete. Report generated (MD/PDF). SOAR case updated. Optional actions performed.")
else PDF & Attach/Comment Not Confirmed
%% Step 16 & 17: Optional SOAR Actions (No PDF/Attach)
AutomatedAgent->>User: Confirm: "Perform additional SOAR actions? (Tag Case/Change Priority/Add Insight/Update Description/Assign Case/Raise Incident/None)"
User->>AutomatedAgent: SOAR Action Choice (e.g., "None")
alt SOAR Action Chosen != "None"
%% Execute chosen SOAR action(s)
AutomatedAgent->>SOAR: [Chosen SOAR Tool](case_id=`${CASE_ID}`, ...)
SOAR-->>AutomatedAgent: Action Confirmation
end
AutomatedAgent->>AutomatedAgent: attempt_completion(result="Timeline analysis complete. MD Report generated. Optional actions performed.")
end
else Report Not Confirmed ("No Report")
AutomatedAgent->>AutomatedAgent: attempt_completion(result="Timeline analysis complete. No report generated.")
end
```
## Rubrics
The following rubric is used to evaluate the execution of this Threat Hunt/Analysis runbook by an LLM agent.
### Grading Scale (0-100 Points)
| Criteria | Points | Description |
|---|---|---|
| Scope & Query | 25 | Defined a clear scope and executed effective queries (UDM, search). |
| Data Analysis | 30 | Analyzed results to identify patterns, anomalies, or malicious behavior. |
| Findings | 15 | Accurately identified and filtered findings (True Positives vs. False Positives). |
| Documentation | 15 | Documented the hunt methodology and results clearly. |
| Operational Artifacts | 15 | Produced required artifacts: sequence diagram, execution metadata (date/cost), and summary. |
### Evaluation Criteria Details

#### 1. Scope & Query (25 Points)

- 10 pts: Correctly defined the time range and entities/indicators for the hunt.
- 15 pts: Constructed and executed valid, efficient queries to retrieve relevant data.

#### 2. Data Analysis (30 Points)

- 15 pts: Effectively analyzed the returned data for the hypothesized threat.
- 15 pts: Correlated events or indicators to strengthen the analysis.

#### 3. Findings (15 Points)

- 15 pts: Correctly classified the findings and provided evidence for the conclusion.

#### 4. Documentation (15 Points)

- 15 pts: Recorded the hunt process, queries used, and findings in the system of record.

#### 5. Operational Artifacts (15 Points)

- 5 pts: Sequence Diagram: Produced a Mermaid sequence diagram visualizing the steps taken.
- 5 pts: Execution Metadata: Recorded the date, duration, and estimated token cost.
- 5 pts: Summary Report: Generated a concise summary of the actions and outcomes.