Unit 6: Communication and Reporting
1. Communication Path
The communication path defines the formal lines of authority and information flow between the penetration testing team (Red Team) and the client organization (Blue Team/Management). Establishing this prior to the engagement is critical for incident response and de-confliction.
Key Components
- Primary Points of Contact (POC):
- Tester Side: The lead penetration tester responsible for the engagement.
- Client Side: Usually the CISO, Security Manager, or a designated technical lead who authorizes the test.
- Escalation Matrix: A hierarchical list of contacts to be notified in specific scenarios (e.g., system crash, detection of prior compromise).
- Secure Channels: Communication must never occur over unencrypted media.
- Encrypted Email: Use PGP/GPG keys for all email correspondence containing findings.
- Secure Portals: Web-based repositories with MFA where reports are uploaded.
- Out-of-Band (OOB) Communication: Phone numbers or Signal/Wire groups used if the corporate network is compromised or unavailable.
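The escalation matrix and OOB fallback above can be modeled as plain data. A minimal sketch (all names, scenarios, and channels are placeholders, not from a real engagement):

```python
# Sketch of an escalation matrix as plain data. Scenario names,
# contacts, and channels are illustrative placeholders.
ESCALATION_MATRIX = {
    "critical_finding": ["client_ciso", "client_security_manager"],
    "service_disruption": ["client_noc_lead", "client_ciso"],
    "prior_compromise": ["client_ciso", "client_legal"],
}

CONTACTS = {
    "client_ciso": {"name": "CISO", "channel": "signal", "oob": True},
    "client_security_manager": {"name": "Security Manager", "channel": "pgp_email", "oob": False},
    "client_noc_lead": {"name": "NOC Lead", "channel": "phone", "oob": True},
    "client_legal": {"name": "Legal Counsel", "channel": "pgp_email", "oob": False},
}

def notify_order(scenario, network_down=False):
    """Return contact names in escalation order; if the corporate
    network is unavailable, keep only out-of-band (OOB) channels."""
    contacts = [CONTACTS[c] for c in ESCALATION_MATRIX[scenario]]
    if network_down:
        contacts = [c for c in contacts if c["oob"]]
    return [c["name"] for c in contacts]

print(notify_order("service_disruption", network_down=True))
```

Keeping the matrix as structured data (rather than prose buried in the rules of engagement) makes it easy to verify before testing begins that every trigger scenario has at least one OOB contact.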
2. Communication Triggers
Triggers are specific events that require immediate interruption of the standard testing timeline to notify the client. Not all findings can wait for the final report.
Critical Triggers
- Critical Security Findings: Discovery of a vulnerability that poses an imminent threat to the business (e.g., Remote Code Execution on a core banking server).
- Service Disruption (DoS): If a test inadvertently causes a server crash or network outage, the client must be notified immediately to initiate disaster recovery.
- Indicators of Compromise (IoC): If the tester discovers that the system has already been breached by a malicious actor.
- Action: Stop testing immediately to preserve forensic evidence and notify the POC.
- Scope Deviation: If the tester realizes the scope provided is incorrect (e.g., IP addresses belong to a third party not authorized for testing).
- Legal/Regulatory Issues: Discovery of illegal content (e.g., CSAM) on the target systems.
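The triggers above can be encoded as an event-to-action table so that the decision to interrupt testing is consistent rather than ad hoc. A minimal sketch; the event names and action wording are illustrative, not a formal standard:

```python
# Sketch: classify engagement events against the communication
# triggers and decide whether testing must pause immediately.
IMMEDIATE_TRIGGERS = {
    "critical_finding": "Notify POC immediately; continue only with client approval.",
    "service_disruption": "Notify POC immediately; support disaster recovery.",
    "prior_compromise": "Stop testing, preserve forensic evidence, notify POC.",
    "scope_deviation": "Pause testing on the affected assets; confirm scope with POC.",
    "illegal_content": "Stop testing and escalate to legal counsel via the POC.",
}

def handle_event(event):
    """Return (interrupt_testing, action). Events not listed as
    triggers follow the normal reporting timeline."""
    if event in IMMEDIATE_TRIGGERS:
        return True, IMMEDIATE_TRIGGERS[event]
    return False, "Record finding for the final report."

print(handle_event("prior_compromise"))
```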
3. Reporting Tools
Tools used to aggregate data, manage evidence, and generate the final deliverable.
Data Collection and Management
- Note-Taking Apps: Obsidian, CherryTree, Microsoft OneNote. Used for raw data, screenshot organization, and command logging during the test.
- Collaboration Platforms: Slack, Microsoft Teams, Mattermost. (Must be configured securely/self-hosted).
Automated Reporting Platforms
These tools import data from scanners (Nessus, Burp Suite, Nmap) and allow for manual editing to generate consistent PDFs/HTML reports.
- Dradis: An open-source framework that integrates with many common security tools. It consolidates findings and aids in collaborative report writing.
- Serpico: Designed specifically for report generation; allows testers to create templates and reusable finding descriptions.
- Plextrac: A commercial platform focusing on "purple teaming" and vulnerability management workflows.
4. Identify Report Audience
A penetration test report is read by two distinct groups with different needs. A successful report must cater to both.
The Executive Audience (C-Suite, Board, Directors)
- Focus: Risk, Budget, Reputation, Compliance.
- Needs:
- High-level summary of security posture.
- Financial impact of vulnerabilities.
- Risk score (e.g., "High Risk").
- No technical jargon or code snippets.
- ROI on security investments.
The Technical Audience (Sysadmins, Developers, DevOps)
- Focus: Remediation, Reproduction, Root Cause.
- Needs:
- Exact steps to reproduce the exploit (PoC).
- Affected endpoints and parameters.
- Code snippets (diffs).
- Specific configuration changes (e.g., "Update nginx.conf to disable TLS 1.0").
- Reference links (CVEs, Vendor patches).
5. Report Content
The standard structure of a professional penetration test report includes the following sections:
I. Executive Summary
A non-technical overview summarizing the engagement.
- Objective: Why was the test performed? (Compliance, M&A, routine check).
- Scope Summary: High-level description of what was tested.
- Key Findings: The top 3-5 most critical issues.
- Risk Matrix: A visual representation of the overall risk.
- Strategic Recommendations: Long-term security improvements.
II. Methodology
Describes how the test was conducted.
- Frameworks: (e.g., OSSTMM, PTES, OWASP).
- Tools Used: List of software and versions.
- Timeline: Dates and times of active testing.
III. Detailed Findings (Technical Report)
For each vulnerability discovered:
- Title & ID: (e.g., "SQL Injection in Login Form").
- Severity: CVSS Score (Base/Temporal/Environmental).
- Affected Hosts: IP addresses/URLs.
- Description: What is the vulnerability?
- Evidence/PoC: Screenshots, HTTP requests/responses, scripts.
- Remediation: Technical fix steps.
IV. Appendices
- Full asset lists.
- Clean/Raw scan output (if required).
- Compliance mapping (PCI-DSS, HIPAA).
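The per-finding fields in the "Detailed Findings" section above map naturally onto a small data model, which reporting tools use internally to keep findings consistent. A minimal sketch (the example finding and its values are illustrative):

```python
# Sketch of the per-finding structure from the report outline as a
# data model; field names mirror the "Detailed Findings" section.
from dataclasses import dataclass

@dataclass
class Finding:
    finding_id: str
    title: str
    cvss_score: float
    affected_hosts: list
    description: str
    evidence: str
    remediation: str

def render(finding):
    """Render one finding as plain text for the technical report body."""
    return "\n".join([
        f"{finding.finding_id}: {finding.title}",
        f"CVSS: {finding.cvss_score}",
        f"Affected: {', '.join(finding.affected_hosts)}",
        f"Description: {finding.description}",
        f"Evidence: {finding.evidence}",
        f"Remediation: {finding.remediation}",
    ])

f = Finding("VULN-001", "SQL Injection in Login Form", 9.8,
            ["https://app.example.com/login"],
            "The 'username' parameter is concatenated into a SQL query.",
            "See request/response capture in the appendix.",
            "Use parameterized queries; validate input server-side.")
print(render(f))
```

Structuring findings this way is what lets platforms like Dradis or Serpico regenerate the same content as PDF, HTML, or CSV without re-editing.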
6. Presentation of Findings
How data is visualized impacts how well the client understands the risk.
Visualization Techniques
- Attack Graphs/Paths: Visual diagrams showing how a tester moved from the Internet -> Web Server -> Database -> Domain Controller.
- Pie Charts: Breakdown of vulnerabilities by severity (Critical, High, Medium, Low, Info).
- Trend Analysis: Comparing current findings to previous pentest reports (if applicable) to show improvement or degradation.
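Behind both the severity pie chart and the trend comparison is the same simple data: a count of findings per severity band. A minimal sketch (the two finding lists are illustrative):

```python
# Sketch: the data behind a severity pie chart or trend comparison
# is a count of findings per severity band.
from collections import Counter

current = ["Critical", "High", "High", "Medium", "Low", "Low", "Info"]
previous = ["Critical", "Critical", "High", "Medium", "Medium", "Low"]

SEVERITY_ORDER = ["Critical", "High", "Medium", "Low", "Info"]

def breakdown(severities):
    counts = Counter(severities)
    return {s: counts.get(s, 0) for s in SEVERITY_ORDER}

def trend(current, previous):
    """Positive delta = more findings than the prior test (degradation);
    negative delta = improvement."""
    cur, prev = breakdown(current), breakdown(previous)
    return {s: cur[s] - prev[s] for s in SEVERITY_ORDER}

print(breakdown(current))
print(trend(current, previous))
```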
Narrative Flow
The presentation should tell a story: "We entered through X, pivoted to Y, and exfiltrated Z." This provides context rather than a disjointed list of bugs.
7. Define Best Practices for Reports
- Clarity and Conciseness: Avoid fluff. Be direct.
- Tone: Maintain an objective, professional tone. Avoid "shaming" the developers (e.g., avoid saying "The admin failed to..."; instead use "The system was configured to...").
- Verification: Never rely solely on automated scanner output. False positives must be manually weeded out.
- Screenshot Hygiene: Redact sensitive live data (PII, credit card numbers, production passwords) in screenshots unless explicitly authorized to demonstrate impact.
- Consistency: Use consistent formatting, fonts, and terminology throughout the document.
- Risk Scoring: Use a standard scoring system like CVSS v3.1 to justify severity ratings objectively.
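To keep severity labels consistent with numeric scores, CVSS v3.1 defines fixed qualitative bands (None 0.0, Low 0.1–3.9, Medium 4.0–6.9, High 7.0–8.9, Critical 9.0–10.0). A direct translation:

```python
# CVSS v3.1 qualitative severity bands (per the FIRST.org
# specification), mapping a numeric base score to a severity label.
def cvss_severity(score):
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))  # → Critical
```

Deriving the label from the score, rather than assigning it by hand, prevents the common inconsistency of a "Critical" label on a 6.5 finding.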
8. Recommending Remediation
Remediation advice typically falls into three categories:
1. Immediate/Tactical Fixes
- "Apply Patch KB123456."
- "Sanitize input on the 'search' parameter."
- "Close port 23 (Telnet) on the firewall."
2. Strategic/Architectural Fixes
- "Implement a Web Application Firewall (WAF)."
- "Move to a centralized Identity and Access Management (IAM) solution."
- "Implement Network Segmentation to prevent lateral movement."
3. Mitigation/Compensating Controls
Used when a direct fix is impossible (e.g., legacy systems).
- "Isolate the legacy server on a VLAN with no internet access and strict ACLs."
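A tactical fix like "close port 23 (Telnet)" can be verified with a quick TCP connection attempt. A minimal sketch; the host and port are placeholders, and such checks should only ever be run against systems you are authorized to test:

```python
# Sketch: a TCP check to verify a tactical fix such as closing
# port 23 (Telnet). Host/port values are placeholders.
import socket

def is_port_open(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After the firewall change, this should report False for Telnet.
print(is_port_open("127.0.0.1", 23, timeout=0.5))
```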
9. Performing Post-Report Delivery Activities
The engagement does not end when the PDF is sent.
Cleanup
- Client Side: The tester must provide a list of artifacts to remove (e.g., test user accounts, webshells, cron jobs, uploaded binaries).
- Tester Side: Secure deletion (wiping) of client data from tester laptops and servers according to the NDA and data retention policy.
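Tester-side secure deletion can be sketched as overwrite-then-delete. Note the caveat: overwriting in place is not reliable on SSDs or copy-on-write filesystems, so full-disk encryption with key destruction is the safer policy; the filename here is illustrative:

```python
# Sketch: best-effort overwrite-then-delete of a client data file.
# Caveat: overwriting is NOT reliable on SSDs or copy-on-write
# filesystems; prefer full-disk encryption plus key destruction.
import os

def wipe_file(path, passes=1):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # overwrite with random bytes
            f.flush()
            os.fsync(f.fileno())       # force the write to disk
    os.remove(path)

# Demo on a throwaway file:
with open("client_notes.tmp", "wb") as f:
    f.write(b"sensitive engagement data")
wipe_file("client_notes.tmp")
print(os.path.exists("client_notes.tmp"))  # → False
```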
Debriefing (The Read-Out)
A meeting with the client to walk through the report.
- Clarify technical findings for the dev team.
- Explain risk context to management.
- Accept feedback on the engagement process.
Attestation / Re-testing
- After the client applies fixes, the tester re-evaluates specific vulnerabilities.
- Deliverable: A "Letter of Attestation" or an addendum report confirming that critical findings have been closed.
10. Attacks on IoT Devices
Internet of Things (IoT) devices present unique challenges due to the intersection of hardware, firmware, radio, and cloud components.
IoT Attack Surface
- Hardware Interfaces:
- UART/JTAG: Debug ports often left open on the circuit board. Attackers connect physically to gain a root shell or dump firmware.
- Side-Channel Attacks: Analyzing power consumption or electromagnetic emissions to extract encryption keys.
- Firmware:
- Hardcoded Credentials: Analyzing the binary (using tools like binwalk) often reveals hardcoded root passwords or API keys.
- Outdated Components: Linux kernels or libraries (e.g., BusyBox) that are years old and vulnerable.
- Radio/Wireless Protocols:
- Zigbee/Z-Wave: Proprietary protocols often lacking encryption or susceptible to replay attacks.
- Bluetooth Low Energy (BLE): Sniffing and spoofing attacks (e.g., using Ubertooth One).
- SDR (Software Defined Radio): Intercepting proprietary sub-GHz communications.
- Cloud/Mobile Ecosystem:
- Attacking the companion mobile app or the API endpoints the device communicates with.
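After extracting a firmware image (e.g., with binwalk), hunting for hardcoded credentials often comes down to pattern-matching over the extracted filesystem. A minimal sketch; the patterns, paths, and demo file are illustrative and will produce false positives on real firmware:

```python
# Sketch: scan an extracted firmware filesystem for likely hardcoded
# credentials. Patterns are illustrative and prone to false positives.
import os
import re

PATTERNS = [
    re.compile(rb"passwd\s*=\s*\S+", re.IGNORECASE),
    re.compile(rb"password\s*[:=]\s*\S+", re.IGNORECASE),
    re.compile(rb"api[_-]?key\s*[:=]\s*\S+", re.IGNORECASE),
]

def scan_tree(root):
    """Yield (path, match) pairs for suspicious strings in extracted files."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    data = f.read()
            except OSError:
                continue
            for pattern in PATTERNS:
                for m in pattern.finditer(data):
                    yield path, m.group(0)

# Demo on a tiny fake "extracted firmware" tree:
os.makedirs("fw/etc", exist_ok=True)
with open("fw/etc/app.conf", "wb") as f:
    f.write(b"api_key=AKIA_EXAMPLE_ONLY\n")
hits = list(scan_tree("fw"))
print(hits)
```

In practice this is combined with `strings`, entropy analysis, and manual review of init scripts; a regex sweep is only the first pass.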
OWASP IoT Top 10 Risks (Key Study Areas)
- Weak, Guessable, or Hardcoded Passwords.
- Insecure Network Services.
- Insecure Ecosystem Interfaces (APIs).
- Lack of Secure Update Mechanism (Firmware signing).
- Use of Insecure or Outdated Components.