Detection Use-Case Package: [USE_CASE_NAME_Placeholder]

1. Use-Case Overview

  • Use-Case_ID: [YYYYMMDD_ShortName_Version] (e.g., 20241126_SuspiciousPowerShell_v1)

  • Detection_Title: [Clear, concise title for the detection use-case]

  • Date_Created: YYYY-MM-DD

  • Last_Updated: YYYY-MM-DD

  • Author/SME: [Name/Team]

  • Status: [e.g., Concept, Research, Development, Testing, Production, Archived]

  • Description:

    • [Briefly describe the threat, TTP, or malicious activity this use-case aims to detect. What problem does it solve?]

  • Business/Security Impact:

    • [Explain the potential impact if this activity is not detected (e.g., data exfiltration, ransomware deployment, lateral movement).]

2. Threat Alignment & Context

  • MITRE ATT&CK Mapping:

    • Tactic(s): [e.g., TA0002 - Execution, TA0005 - Defense Evasion]

    • Technique(s): [e.g., T1059 - Command and Scripting Interpreter, T1027 - Obfuscated Files or Information]

    • Sub-Technique(s) (if applicable): [e.g., T1059.001 - Command and Scripting Interpreter: PowerShell]

  • Relevant Threat Actors/Campaigns:

    • [List known threat actors or campaigns that utilize this TTP. Reference internal_threat_profile.md if applicable.]

  • Severity Assessment (Initial): [Low, Medium, High, Critical - based on incident_severity_matrix.md principles]

3. Technical Details & Detection Logic

  • Data_Sources_Required:

    • [List specific log sources needed, e.g., “Windows Security Event Logs (ID 4688)”, “EDR Process Events”, “Chronicle UDM: PROCESS_LAUNCH”. Reference log_source_overview.md.]

  • Detection_Logic_Hypothesis / Rule Pseudocode:

    • [Describe the core logic for detection. This can be in pseudocode or a clear English description.]

    • Example: “Detect when powershell.exe is launched with command-line arguments containing ‘Invoke-Expression’ AND ‘-EncodedCommand’ AND the parent process is not explorer.exe or cmd.exe.”
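
    • A minimal Python sketch of this example logic is shown below. The event field names (process_name, command_line, parent_process_name) are illustrative assumptions, not a specific platform schema; map them to your actual telemetry (e.g., Chronicle UDM fields):

```python
def is_suspicious_powershell(event: dict) -> bool:
    """Sketch of the example hypothesis: encoded PowerShell with an
    unusual parent process. Field names are illustrative placeholders."""
    cmdline = event.get("command_line", "").lower()
    parent = event.get("parent_process_name", "").lower()
    return (
        event.get("process_name", "").lower() == "powershell.exe"
        and "invoke-expression" in cmdline
        and "-encodedcommand" in cmdline
        and parent not in ("explorer.exe", "cmd.exe")
    )
```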

  • Key_IOC_Patterns_or_Behavioral_Signatures:

    • [List specific strings, regex patterns, commands, API calls, network patterns, or sequences of events that are indicative of this activity.]
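
    • As an illustration, behavioral signatures like these can be expressed as regular expressions. The patterns below are hypothetical starting points for the encoded-PowerShell example, not a vetted signature set; tune them against your own telemetry:

```python
import re

# Hypothetical signature patterns for obfuscated/encoded PowerShell.
SIGNATURE_PATTERNS = [
    re.compile(r"-e(?:nc(?:odedcommand)?)?\s+[A-Za-z0-9+/=]{40,}", re.I),  # long Base64 blob after -enc
    re.compile(r"invoke-expression|\biex\b", re.I),                        # IEX-style execution
    re.compile(r"frombase64string", re.I),                                 # inline Base64 decoding
    re.compile(r"-w(?:indowstyle)?\s+hidden", re.I),                       # hidden window flag
]

def matched_signatures(command_line: str) -> list[str]:
    """Return the patterns that fire on a given command line."""
    return [p.pattern for p in SIGNATURE_PATTERNS if p.search(command_line)]
```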

  • Known_False_Positive_Considerations:

    • [What legitimate activity might resemble this? How can it be filtered?]

    • [Reference common_benign_alerts.md or whitelists.md if applicable.]
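
    • One common filtering approach is an explicit allowlist applied before alerting, sketched below. The entries shown are hypothetical examples of what a whitelists.md-style list might contain:

```python
# Hypothetical known-benign entries; real entries would live in a
# maintained allowlist (see whitelists.md).
KNOWN_BENIGN = [
    {"parent_process_name": "sccm_agent.exe"},           # e.g., software deployment tooling
    {"command_line_contains": "corp-inventory-script"},  # e.g., an approved admin script
]

def is_allowlisted(event: dict) -> bool:
    """Return True if the event matches any known-benign entry."""
    for entry in KNOWN_BENIGN:
        if event.get("parent_process_name", "").lower() == entry.get("parent_process_name"):
            return True
        needle = entry.get("command_line_contains")
        if needle and needle in event.get("command_line", "").lower():
            return True
    return False
```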

  • Detection_Platform(s): [e.g., Chronicle SIEM, EDR Product X, Custom Script]

4. Incident Response & Automation Planning

  • Initial_IR_SOP_Considerations (Draft - to be expanded into a full runbook):

    1. Triage: [Initial steps to validate the alert - e.g., check AI confidence, verify key indicators. A minimal triage sketch follows this list.]

    2. Investigation: [Key questions to answer, e.g., What user executed it? What was the parent process? Any network connections made? Any files created/modified? Use indicator_handling_protocols.md and analytical_query_patterns.md.]

    3. Containment (if applicable): [Potential containment actions, e.g., isolate host, block C2 IP. Reference automated_response_playbook_criteria.md.]

    4. Eradication (if applicable): [Potential eradication steps.]

    5. Escalation: [When to escalate to Tier 2/3 or other teams, per escalation_paths.md.]
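
    • To make step 1 concrete, below is a minimal triage sketch. The alert fields (ai_confidence, matched_indicators) and the 0.9 threshold are assumptions for illustration, not mandated values:

```python
def triage(alert: dict) -> str:
    """Route an alert: 'close', 'investigate', or 'escalate'.
    Field names and the threshold are illustrative assumptions."""
    confidence = alert.get("ai_confidence", 0.0)
    indicators = alert.get("matched_indicators", [])
    if not indicators:
        return "close"       # no corroborating evidence; likely noise
    if confidence >= 0.9:
        return "escalate"    # strong match; hand to Tier 2/3 per escalation_paths.md
    return "investigate"     # needs analyst review
```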

  • Automation_Feasibility_Assessment:

    • Candidate Steps for Automation: [Which of the above SOP steps could be automated?]

    • Required Tools & Integrations: [List MCP tools, SOAR integrations needed. Reference mcp_tool_best_practices.md.]

    • Existing Automation Reuse: [Can any existing playbooks/scripts from automated_response_playbook_criteria.md be leveraged?]

    • Data Requirements for Automation: [What specific inputs would automated steps need? A sketch follows this list.]

    • Confidence Threshold for Automated Action: [If an automated response is considered, what confidence level is required from the detection/AI?]
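
    • As a sketch of the data-requirements item above, an automated step's inputs could be captured in a typed structure like the one below; all fields are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class AutomationInput:
    """Hypothetical inputs an automated response step would need."""
    alert_id: str                  # ties the action back to the originating alert
    hostname: str                  # target of any containment action
    detection_confidence: float    # 0.0-1.0, from the detection/AI layer
    asset_criticality: str         # e.g., "low", "medium", "high", "critical"
    matched_indicators: list[str]  # evidence supporting the action
```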

  • AI_Agent_Role_in_This_Use_Case:

    • [Describe how an AI agent can assist with this detection use-case.]

    • Examples:

      • “AI can monitor for alerts generated by this rule.”

      • “AI can perform the initial triage and investigation steps (1-2) of the SOP.”

      • “AI can enrich IOCs found in the alert using indicator_handling_protocols.md.”

      • “If detection confidence >90% AND asset is not critical, AI can trigger automated host isolation as per automated_response_playbook_criteria.md.”
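
    • The last example above could be implemented as a simple gate, sketched below. isolate_host() stands in for a hypothetical SOAR/EDR integration, not a real API:

```python
def isolate_host(hostname: str) -> None:
    """Stub for a hypothetical SOAR/EDR isolation integration."""
    print(f"[SOAR] isolating host: {hostname}")

def maybe_isolate(alert: dict) -> bool:
    """Trigger isolation only when the documented conditions hold."""
    high_confidence = alert.get("detection_confidence", 0.0) > 0.90
    non_critical = alert.get("asset_criticality") != "critical"
    if high_confidence and non_critical:
        isolate_host(alert["hostname"])
        return True
    return False  # otherwise fall back to the manual SOP
```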

5. Testing & Validation Plan

  • Test Scenarios (Simulated & Real-World):

    • [Describe how this detection will be tested. E.g., Use Atomic Red Team, custom scripts, replay of historical data.]

  • Expected True Positive Outcome: [What should happen when the malicious activity is correctly detected?]

  • Expected False Positive Scenarios to Test: [What legitimate activities will be tested to ensure they don’t trigger the alert?]

  • Success Criteria for Testing: [e.g., Detection fires on X% of true positive tests, <Y% false positives on benign tests.]
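
  • A minimal validation harness in this spirit is sketched below, assuming the is_suspicious_powershell() predicate from Section 3; the sample events are synthetic fixtures (the encoded blob is a fake placeholder value):

```python
# Synthetic test fixtures for the Section 3 predicate.
MALICIOUS_EVENTS = [{
    "process_name": "powershell.exe",
    "command_line": "powershell.exe Invoke-Expression -EncodedCommand SQBFAFgAdABlAHMAdAA=",
    "parent_process_name": "winword.exe",
}]
BENIGN_EVENTS = [{
    "process_name": "powershell.exe",
    "command_line": "powershell.exe -File inventory.ps1",
    "parent_process_name": "explorer.exe",
}]

def validate(detect) -> None:
    """Assert the success criteria: all TPs fire, no benign event does."""
    tp_rate = sum(map(detect, MALICIOUS_EVENTS)) / len(MALICIOUS_EVENTS)
    fp_rate = sum(map(detect, BENIGN_EVENTS)) / len(BENIGN_EVENTS)
    assert tp_rate == 1.0, f"missed true positives (TP rate {tp_rate:.0%})"
    assert fp_rate == 0.0, f"benign activity fired (FP rate {fp_rate:.0%})"
```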

6. Delivery & Deployment Notes

  • Rule ID (once deployed): [PlatformSpecificRuleID]

  • Associated Runbook/SOP Document: [Link to the full runbook, e.g., handle_suspicious_powershell.md]

  • Deployment Date:

  • Version History:

7. Optimization & Metrics (Post-Deployment)

  • Key Performance Indicators (KPIs) to Track:

    • True Positive / False Positive rate for this specific detection (a computation sketch follows at the end of this section).

    • Analyst feedback volume/sentiment (from ai_decision_review_guidelines.md).

    • Time spent by analysts/AI on alerts from this detection.

  • Review Cycle: [e.g., Quarterly, or after X number of alerts]

  • Lessons Learned / Improvement Areas: [To be filled in during optimization reviews]
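
  • A sketch of the first KPI's computation, assuming alert dispositions are recorded as "true_positive" or "false_positive" during analyst review:

```python
from collections import Counter

def precision_kpis(dispositions: list[str]) -> dict:
    """Compute the true-positive rate over reviewed alerts for this detection."""
    counts = Counter(dispositions)
    reviewed = counts["true_positive"] + counts["false_positive"]
    tp_rate = counts["true_positive"] / reviewed if reviewed else 0.0
    return {"alerts_reviewed": reviewed, "true_positive_rate": tp_rate}

# Example: 8 TPs out of 10 reviewed alerts -> 80% true-positive rate.
print(precision_kpis(["true_positive"] * 8 + ["false_positive"] * 2))
```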

