# Runbook: Generate Detection Report

## Objective
To summarize the findings, logic, and performance of a specific detection rule or a set of related rules. This report is often used for periodic review, tuning documentation, reporting on detection capabilities, or providing context for incident investigations.
## Scope

This runbook covers:

- Retrieving the definition and logic of the specified detection rule(s).
- Gathering historical performance data (e.g., alert volume, severity, true/false positive rates if available from associated cases) for the rule(s) over a defined timeframe.
- Optionally gathering context from related SOAR cases or tuning documentation.
- Structuring the information into a standardized report format.
- Generating a Mermaid sequence diagram illustrating the data gathering process for the report.
- Writing the final Markdown report to a file.

This runbook explicitly excludes:

- Performing new validation or tuning steps for the detection rule (it reports on existing information).
- Implementing changes to the detection rule.
## Inputs

- `${RULE_ID}` or `${RULE_IDS}`: The identifier(s) of the detection rule(s) to report on. This is mandatory.
- (Optional) `${REPORT_TIMEFRAME_DAYS}`: Timeframe in days for gathering performance data (e.g., 30, 90). Defaults to 90 if not specified.
- (Optional) `${CASE_ID}`: Relevant SOAR case ID if the report relates to a specific incident, tuning effort, or if case data is needed to determine TP/FP rates.
- (Derived) `${RULE_DEFINITION}`: The logic/definition of the rule obtained from `secops-mcp_list_security_rules`.
- (Derived) `${PERFORMANCE_DATA}`: Historical alert data and statistics for the rule from `secops-mcp_get_security_alerts`.
- (Derived) `${SOAR_CONTEXT}`: Information from related SOAR cases obtained via `secops-soar_get_case_full_details`.
## Outputs

- `${REPORT_FILE_PATH}`: The full path to the generated Markdown report file.
- `${REPORT_CONTENT}`: The full Markdown content of the generated report.
- `${REPORT_GENERATION_STATUS}`: Confirmation or status of the report file writing attempt.
## Tools

- `secops-mcp`: `list_security_rules`, `get_security_alerts` (for performance data)
- `secops-soar`: `get_case_full_details`, `post_case_comment` (for context/documentation, though `post_case_comment` is not directly used in this workflow and might instead be used by a calling runbook)
- `write_to_file` (replaces the conceptual `write_report` tool)
## Workflow Steps & Diagram

1. **Gather Rule Details:** Retrieve the rule logic/definition for `${RULE_ID}` (or each ID in `${RULE_IDS}`) using `secops-mcp_list_security_rules`. Store in `${RULE_DEFINITION}`.
2. **Gather Performance Data:** Retrieve historical alerts generated by the rule(s) over `${REPORT_TIMEFRAME_DAYS}` (default 90) using `secops-mcp_get_security_alerts`. Analyze volume, severity, and potentially associated case statuses (TP/FP if `${CASE_ID}` is provided and allows such correlation). Store in `${PERFORMANCE_DATA}`.
3. **Gather Context (Optional):** If `${CASE_ID}` is provided, review related SOAR case(s) using `secops-soar_get_case_full_details` or tuning documentation if applicable. Store in `${SOAR_CONTEXT}`.
4. **Structure Report:** Organize information according to a standard template (referencing `rules-bank/reporting_templates.md`). Key sections might include: Rule Details (ID, Name, Logic Summary from `${RULE_DEFINITION}`), Performance Metrics (Alert Volume, TP/FP Ratio if known from `${PERFORMANCE_DATA}` and `${SOAR_CONTEXT}`), Key Findings/Observations, and Tuning History/Recommendations (if available from `${SOAR_CONTEXT}`).
5. **Generate Mermaid Diagram:** Create a Mermaid sequence diagram summarizing the tools used to gather data for this report.
6. **Format Report:** Compile the synthesized information and the Mermaid diagram into the final Markdown report. Store as `${REPORT_CONTENT}`.
7. **Write Report File:** Save the report using `write_to_file` with `path="./reports/detection_report_${RULE_ID}_${timestamp}.md"` (adjust the filename if multiple `${RULE_IDS}` are given) and `content=${REPORT_CONTENT}`. Store the path in `${REPORT_FILE_PATH}` and the status in `${REPORT_GENERATION_STATUS}`.
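The data-gathering steps (1–3) can be sketched in Python. This is a minimal illustration under stated assumptions, not a definitive implementation: the `mcp` client object and its `call` method are hypothetical stand-ins for however the agent invokes MCP tools, and the `severity` field is an assumption about the shape of the SIEM alert records.

```python
from collections import Counter

DEFAULT_TIMEFRAME_DAYS = 90  # default REPORT_TIMEFRAME_DAYS per the Inputs section

def gather_report_data(mcp, rule_id, timeframe_days=DEFAULT_TIMEFRAME_DAYS, case_id=None):
    """Steps 1-3: collect rule definition, performance data, and optional SOAR context."""
    # Step 1: Gather Rule Details (RULE_DEFINITION).
    rule_definition = mcp.call("secops-mcp", "list_security_rules", rule_id=rule_id)

    # Step 2: Gather Performance Data (PERFORMANCE_DATA).
    alerts = mcp.call(
        "secops-mcp", "get_security_alerts",
        rule_id=rule_id, hours_back=timeframe_days * 24,
    )
    performance_data = {
        "alert_volume": len(alerts),
        # "severity" is an assumed field name; adjust to the actual alert schema.
        "severity_breakdown": Counter(a.get("severity", "UNKNOWN") for a in alerts),
    }

    # Step 3: Gather Context (optional, only when a CASE_ID is supplied).
    soar_context = None
    if case_id is not None:
        soar_context = mcp.call("secops-soar", "get_case_full_details", case_id=case_id)

    return rule_definition, performance_data, soar_context
```

For multiple `${RULE_IDS}`, the same function would simply be called once per ID. The sequence diagram below shows the full workflow end to end.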
```mermaid
sequenceDiagram
    participant Analyst/User
    participant AutomatedAgent as Automated Agent (MCP Client)
    participant SIEM as secops-mcp
    participant SOAR as secops-soar

    Analyst/User->>AutomatedAgent: Generate Detection Report\nInput: RULE_ID, REPORT_TIMEFRAME_DAYS, CASE_ID (opt)

    %% Step 1: Gather Rule Details (iterate if RULE_IDS)
    AutomatedAgent->>SIEM: list_security_rules(rule_id=RULE_ID)
    SIEM-->>AutomatedAgent: Rule Definition/Logic (RULE_DEFINITION)

    %% Step 2: Gather Performance Data
    AutomatedAgent->>SIEM: get_security_alerts(rule_id=RULE_ID, hours_back=REPORT_TIMEFRAME_DAYS*24)
    SIEM-->>AutomatedAgent: Historical Alert Data (PERFORMANCE_DATA)
    Note over AutomatedAgent: Analyze performance metrics

    %% Step 3: Gather Context (Optional)
    opt Case ID Provided
        AutomatedAgent->>SOAR: get_case_full_details(case_id=CASE_ID)
        SOAR-->>AutomatedAgent: Case Context / Tuning Notes (SOAR_CONTEXT)
    end

    %% Steps 4 & 5: Structure Report & Generate Diagram
    Note over AutomatedAgent: Organize report sections (Rule Details, Performance, Findings...)
    Note over AutomatedAgent: Create Mermaid diagram summarizing report generation steps

    %% Steps 6 & 7: Format & Write Report
    Note over AutomatedAgent: Compile final Markdown content (REPORT_CONTENT)
    AutomatedAgent->>AutomatedAgent: write_to_file(path="./reports/detection_report_${RULE_ID}_${timestamp}.md", content=REPORT_CONTENT)
    Note over AutomatedAgent: Report file created (REPORT_FILE_PATH, REPORT_GENERATION_STATUS)

    AutomatedAgent->>Analyst/User: attempt_completion(result="Detection report generated for RULE_ID. Path: REPORT_FILE_PATH")
```
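Steps 4–7 then assemble and persist the report. The sketch below is likewise illustrative: the section headings come from step 4 of the workflow, but the actual layout is governed by `rules-bank/reporting_templates.md`, and writing via `pathlib` stands in for the agent's `write_to_file` tool.

```python
from datetime import datetime, timezone
from pathlib import Path

FENCE = "`" * 3  # avoids embedding a literal code fence in this snippet

def write_detection_report(rule_id, rule_definition, performance_data,
                           soar_context=None, mermaid_diagram=""):
    """Steps 4-7: structure, format, and write the Markdown report."""
    timestamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")

    # Steps 4 & 6: organize the sections named in the workflow into REPORT_CONTENT.
    report_content = "\n\n".join([
        f"# Detection Report: {rule_id}",
        "## Rule Details\n" + str(rule_definition),
        "## Performance Metrics\n"
        f"- Alert volume: {performance_data['alert_volume']}\n"
        f"- Severity breakdown: {dict(performance_data['severity_breakdown'])}",
        "## Key Findings / Observations",
        "## Tuning History / Recommendations\n"
        + (str(soar_context) if soar_context else "_No SOAR context provided._"),
        # Step 5: embed the Mermaid diagram generated for this run.
        "## Workflow Diagram\n" + FENCE + "mermaid\n" + mermaid_diagram + "\n" + FENCE,
    ])

    # Step 7: write REPORT_CONTENT using the path pattern from the runbook.
    report_file_path = Path(f"./reports/detection_report_{rule_id}_{timestamp}.md")
    report_file_path.parent.mkdir(parents=True, exist_ok=True)
    report_file_path.write_text(report_content)
    return report_file_path, report_content  # REPORT_FILE_PATH, REPORT_CONTENT
```

A calling runbook would capture the returned path and content as `${REPORT_FILE_PATH}` and `${REPORT_CONTENT}`, and set `${REPORT_GENERATION_STATUS}` based on whether the write succeeded.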
## Completion Criteria

- The definition and logic for the specified detection rule(s) (`${RULE_ID}` or `${RULE_IDS}`) have been retrieved.
- Historical performance data for the rule(s) over the specified timeframe has been gathered and analyzed.
- Optional context from SOAR cases or tuning documentation has been incorporated if provided.
- A comprehensive Markdown report summarizing the rule details, performance, and any relevant context has been structured and formatted, including a Mermaid diagram of the data gathering process.
- The final report has been successfully written to a file, and its path (`${REPORT_FILE_PATH}`) and generation status (`${REPORT_GENERATION_STATUS}`) are available.
## Rubrics

The following rubric is used to evaluate the execution of this Reporting runbook by an LLM agent.

### Grading Scale (0-100 Points)

| Criteria | Points | Description |
|---|---|---|
| Data Collection | 25 | Gathered all necessary data points and metrics for the report. |
| Report Generation | 30 | Generated the report in the correct format with accurate content. |
| Quality & Clarity | 15 | Ensured the report is readable, well-structured, and error-free. |
| Delivery | 15 | Delivered or saved the report to the correct location/recipient. |
| Operational Artifacts | 15 | Produced required artifacts: sequence diagram, execution metadata (date/cost), and summary. |
### Evaluation Criteria Details

#### 1. Data Collection (25 Points)

- **25 pts:** Successfully retrieved all required data (alerts, stats, summaries) from sources.

#### 2. Report Generation (30 Points)

- **15 pts:** Formatted the data correctly into the target template (Markdown, PDF, etc.).
- **15 pts:** Included all required sections (Executive Summary, Details, etc.).

#### 3. Quality & Clarity (15 Points)

- **15 pts:** The generated text is coherent, accurate, and professional.

#### 4. Delivery (15 Points)

- **15 pts:** Successfully saved the file or sent the notification/email as required.

#### 5. Operational Artifacts (15 Points)

- **5 pts:** Sequence Diagram: Produced a Mermaid sequence diagram visualizing the steps taken.
- **5 pts:** Execution Metadata: Recorded the date, duration, and estimated token cost.
- **5 pts:** Summary Report: Generated a concise summary of the actions and outcomes.