
Incident Response: What to Do in the First 24 Hours After a Breach

Prootego Team

A breach has happened. Maybe your monitoring platform fired a critical alert at 2 AM. Maybe an employee reported a ransomware screen. Maybe a client called to say their data appeared on a dark web forum. However you found out, the clock is now running – and the next 24 hours will determine whether this incident costs your business weeks of disruption or is contained within hours.

According to IBM’s 2025 Cost of a Data Breach Report, organizations that contain a breach within 200 days spend significantly less than those that take longer. Breaches contained within the first 24 to 48 hours carry a far smaller financial and reputational impact. Yet only 26% of small businesses have a formal incident response plan. The gap between risk and readiness is where the real damage happens – not from the attack itself, but from the chaos that follows.

This article provides a practical, step-by-step framework for the first 24 hours after a breach. It is designed for businesses of any size – whether you have a dedicated SOC or a two-person IT team – and covers containment, investigation, communication, compliance, and the critical mistakes that make everything worse.


What Should You Do in the First 0–2 Hours After Discovering a Breach?

The first two hours are about one thing: stopping the bleeding without destroying the evidence.

Confirm the incident is real. Not every alert is a breach. Before escalating, verify the signal. Check whether the alert correlates with other suspicious activity – unusual login patterns, unexpected network connections, anomalous file access. False positives waste critical time and resources. But if multiple indicators align, treat the situation as a confirmed incident and escalate immediately.

Activate your incident response team. This team should be predefined – not assembled for the first time during a crisis. At minimum, it includes someone with technical authority to make containment decisions (IT lead or security engineer), someone with business authority to approve communications and expenditures (executive sponsor), legal counsel (internal or external), and a communications lead. If you use an MDR (Managed Detection and Response) service, contact your provider immediately – their SOC analysts should be your first call.

Contain the threat. Containment means preventing the attacker from expanding their access while preserving the evidence needed for investigation. The specific actions depend on the type of incident. For a compromised endpoint: isolate it from the network. For compromised credentials: disable the affected accounts and force password resets. For active lateral movement: segment the affected network zone. For data exfiltration in progress: block the destination IP or domain at the firewall level.
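
To make this concrete, here is a minimal sketch of what a scripted containment runbook could look like. The client calls behind each step (EDR isolation, identity provider, firewall) are hypothetical placeholders, not a real vendor SDK – the point is the mapping from incident type to first action, with every step timestamped for the incident record.

```python
# Minimal containment-runbook sketch. The actions are printed here; in practice
# each branch would call your EDR, identity provider, or firewall API.
from datetime import datetime, timezone

def log_action(action: str, target: str) -> None:
    # Timestamp every containment step for the incident record.
    print(f"{datetime.now(timezone.utc).isoformat()} CONTAIN {action} -> {target}")

def contain(incident_type: str, target: str) -> None:
    # Map each incident type to its first containment move.
    if incident_type == "compromised_endpoint":
        log_action("network_isolate", target)       # e.g. EDR isolation API
    elif incident_type == "compromised_credentials":
        log_action("disable_account", target)       # e.g. IdP API + forced reset
    elif incident_type == "lateral_movement":
        log_action("segment_network_zone", target)  # e.g. switch/firewall ACL
    elif incident_type == "data_exfiltration":
        log_action("block_destination", target)     # e.g. firewall block rule
    else:
        raise ValueError(f"no containment playbook for {incident_type!r}")

contain("compromised_endpoint", "LAPTOP-0423")
contain("data_exfiltration", "203.0.113.57")
```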

A critical principle during containment: isolate, don’t wipe. The instinct to reformat compromised machines or delete suspicious files is understandable but destructive. Every action the attacker took – every process they launched, every file they modified, every connection they established – is evidence. Destroying it eliminates your ability to understand what happened, how far the attacker got, and whether they left backdoors for re-entry.


What Should You Investigate in Hours 2–6?

Once the immediate threat is contained, the focus shifts to understanding the scope and nature of the breach.

Determine the attack vector. How did the attacker get in? The most common initial access methods in 2025–2026 include phishing emails with credential harvesting links, exploitation of unpatched vulnerabilities in internet-facing systems, compromised credentials purchased from infostealer marketplaces, and abuse of remote access tools (RDP, VPN) with weak or reused passwords. Understanding the entry point is essential for two reasons: it tells you what to patch to prevent re-entry, and it helps scope how far the attacker may have moved after gaining initial access.

Map the blast radius. Which systems did the attacker access after the initial compromise? This requires examining authentication logs (which accounts were used, from which devices, at which times), network connection logs (which internal systems communicated with the compromised endpoint), file access and modification logs (what data was accessed, copied, or exfiltrated), and process execution logs (what tools, scripts, or malware the attacker ran). If your organization uses an XDR platform, this investigation is dramatically faster because all of this telemetry is already collected, correlated, and searchable from a single console. Without centralized detection, this step requires manually pulling logs from multiple systems – a process that can take days instead of hours.
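
As an illustration of the underlying logic, the sketch below scopes a blast radius from flat authentication events. The field names (user, host, ts) and the sample data are assumptions, not a specific log schema – an XDR platform performs this correlation automatically, but the question it answers is the same.

```python
# Blast-radius sketch: list every host a compromised account touched
# during the incident window, from exported authentication events.
from datetime import datetime

events = [
    {"user": "j.doe", "host": "FILESRV-01", "ts": "2026-01-10T02:14:00"},
    {"user": "j.doe", "host": "DC-02",      "ts": "2026-01-10T02:31:00"},
    {"user": "a.kim", "host": "FILESRV-01", "ts": "2026-01-10T03:02:00"},
]

def blast_radius(events, account, start, end):
    t0, t1 = datetime.fromisoformat(start), datetime.fromisoformat(end)
    return sorted({
        e["host"]
        for e in events
        if e["user"] == account and t0 <= datetime.fromisoformat(e["ts"]) <= t1
    })

print(blast_radius(events, "j.doe", "2026-01-10T02:00:00", "2026-01-10T04:00:00"))
# ['DC-02', 'FILESRV-01'] – every host listed is now in scope for investigation
```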

Assess data exposure. Was sensitive data accessed or exfiltrated? This question drives your legal and regulatory obligations. The types of data involved – personal data (names, emails, national ID numbers), financial data (payment information, bank details), health data, intellectual property, authentication credentials – determine which notification requirements apply and which regulatory bodies must be informed.


What Happens in Hours 6–12?

By hour six, you should have a working understanding of what happened and how far the attacker got. The focus now shifts to validating your containment, beginning remediation, and preparing for communication.

Verify containment is holding. Check that the attacker does not still have access through a secondary pathway. Common persistence mechanisms include scheduled tasks or cron jobs that re-establish connections, new user accounts created by the attacker, modified authentication tokens or certificates, web shells planted on internet-facing servers, and backdoor processes disguised as legitimate services. A thorough containment verification requires scanning all potentially affected systems for indicators of compromise (IoCs) – not just the systems you already know were involved.
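
One simple, scriptable form of this hunt is a baseline diff. The sketch below assumes scheduled tasks and local accounts have already been exported from a system (collection itself is OS-specific) and flags anything that was not present in a known-good baseline.

```python
# Persistence-hunt sketch: diff collected artifacts against a known-good
# baseline. All names below are illustrative sample data.
baseline_tasks = {"backup-nightly", "av-update", "log-rotate"}
baseline_users = {"root", "svc-backup", "j.doe"}

current_tasks = {"backup-nightly", "av-update", "log-rotate", "sys-helper"}
current_users = {"root", "svc-backup", "j.doe", "support2"}

def flag_new(current: set, baseline: set, kind: str) -> None:
    # Anything absent from the baseline is a candidate persistence mechanism.
    for item in sorted(current - baseline):
        print(f"REVIEW {kind}: {item!r} not in baseline")

flag_new(current_tasks, baseline_tasks, "scheduled task")
flag_new(current_users, baseline_users, "local account")
```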

Begin remediation of the root cause. If the attack exploited a vulnerability, patch it. If it exploited weak credentials, enforce password resets and enable multi-factor authentication. If it exploited a misconfigured service, fix the configuration. Remediation must happen before systems are brought back online – otherwise you are restoring the same vulnerability the attacker originally used.

Prepare your communication plan. By this point, you need to brief internal leadership on what is known, what is still being investigated, and what the expected timeline looks like. Do not communicate externally until you have verified facts. Premature public statements based on incomplete information create confusion, erode trust, and may need to be corrected later – which looks worse than a brief delay in communication.


What Are Your Notification and Compliance Obligations?

Regulatory notification requirements vary by jurisdiction, industry, and the type of data involved. Understanding your obligations before an incident occurs is essential – trying to research them during a crisis wastes time you don’t have.

GDPR (European Union). If the breach involves personal data of EU residents, the General Data Protection Regulation requires notification to the relevant supervisory authority within 72 hours of becoming aware of the breach. If the breach is likely to result in a high risk to the rights and freedoms of individuals, affected individuals must also be notified without undue delay. The notification must include the nature of the breach, the categories and approximate number of individuals affected, the likely consequences, and the measures taken or proposed to address the breach.

NIS2 Directive (European Union). Organizations classified as essential or important entities under the NIS2 Directive must submit an early warning to their national CSIRT (Computer Security Incident Response Team) within 24 hours of becoming aware of a significant incident, followed by a full incident notification within 72 hours. A final report must be submitted within one month.
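
Because these clocks all start at the moment of awareness, it is worth computing the concrete deadlines immediately. A minimal sketch, treating NIS2’s one-month final report as 30 days for illustration:

```python
# Notification-deadline sketch: GDPR 72h to the authority; NIS2 24h early
# warning, 72h full notification, and (approximated as 30 days) final report.
from datetime import datetime, timedelta, timezone

aware = datetime(2026, 1, 10, 2, 15, tzinfo=timezone.utc)  # moment of awareness

deadlines = {
    "NIS2 early warning (24h)":     aware + timedelta(hours=24),
    "GDPR authority notice (72h)":  aware + timedelta(hours=72),
    "NIS2 full notification (72h)": aware + timedelta(hours=72),
    "NIS2 final report (~30d)":     aware + timedelta(days=30),
}

for label, due in sorted(deadlines.items(), key=lambda kv: kv[1]):
    print(f"{label}: {due:%Y-%m-%d %H:%M} UTC")
```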

Industry-specific requirements. Financial services, healthcare, and critical infrastructure sectors typically have additional notification requirements with shorter timelines. Payment card breaches under PCI DSS generally require notification to the acquiring bank within 24 hours.

Documentation is critical. From the moment a breach is discovered, every action taken should be logged: who discovered it, when, what was done, by whom, what evidence was preserved, what systems were affected. This documentation serves three purposes: it supports regulatory compliance, it enables post-incident review, and it protects the organization in potential litigation or insurance claims.
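
The mechanism does not need to be sophisticated. An append-only log that timestamps every entry is enough – for example, a JSON Lines file per incident, as in this sketch (the file name and entries are illustrative):

```python
# Incident-log sketch: an append-only JSON Lines record of who did what, when.
# Never edit past entries; append corrections as new entries instead.
import json
from datetime import datetime, timezone

def record(logfile: str, actor: str, action: str, detail: str) -> None:
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "detail": detail,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record("incident-2026-001.jsonl", "j.doe", "isolated_endpoint", "LAPTOP-0423 via EDR")
record("incident-2026-001.jsonl", "a.kim", "disabled_account", "svc-backup, reset forced")
```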


What Should You Do in Hours 12–24?

By the twelve-hour mark, the acute crisis should be stabilizing. Containment is verified. The scope is understood. Remediation of the root cause is underway. The work now shifts to controlled recovery and stakeholder communication.

Begin phased system restoration. Bring systems back online in a controlled sequence, starting with the most critical business functions. Each system should be verified clean before reconnection – restored from known-good backups or rebuilt from scratch if the integrity of the existing image cannot be confirmed. Monitor restored systems closely for any signs of re-compromise.
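
A sketch of that gating logic, with the verification step left as a placeholder for your own checks (known-good backup or clean rebuild, patched root cause, negative IoC scan):

```python
# Phased-restoration sketch: restore in criticality order, and only reconnect
# a system once it passes verification. System names are illustrative.
systems = [  # (name, criticality: lower number restores first)
    ("ERP-DB",     1),
    ("MAIL-GW",    2),
    ("FILESRV-01", 3),
]

def verified_clean(name: str) -> bool:
    # Placeholder: confirm restoration from a known-good backup or a clean
    # rebuild, a remediated root cause, and a negative IoC scan.
    return True

for name, _ in sorted(systems, key=lambda s: s[1]):
    if verified_clean(name):
        print(f"RECONNECT {name}: monitor closely for signs of re-compromise")
    else:
        print(f"HOLD {name}: failed verification, rebuild from scratch")
```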

Communicate with stakeholders. Internal communication should be factual, concise, and ongoing. Employees need to know what happened (at an appropriate level of detail), what they should do differently (password resets, vigilance for follow-up phishing), and who to contact if they notice anything suspicious. External communication – to customers, partners, regulators – should be coordinated with legal counsel and follow the notification timelines applicable to your jurisdiction.

Engage cyber insurance. If your organization has a cyber insurance policy, notify your insurer as early as possible – ideally within the first few hours, not at the end of the first day. Many policies require prompt notification as a condition of coverage. Your insurer may also provide access to forensic investigators, legal counsel, and crisis communication specialists whose services are included in your policy.

Initiate evidence preservation for forensics. If the breach may result in legal proceedings, regulatory investigation, or insurance claims, preserve all evidence in its original state. This includes disk images of affected systems, memory dumps, log files, network captures, and copies of any malware or tools the attacker used. Chain of custody must be documented – who collected the evidence, when, how it was stored, and who has had access to it.
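
Fingerprinting each evidence file at collection time makes those later integrity checks possible. A minimal chain-of-custody sketch using SHA-256 – the file path and collector name are illustrative:

```python
# Chain-of-custody sketch: hash an evidence file (e.g. a disk image) at
# collection time and record who collected it and when.
import hashlib
from datetime import datetime, timezone

def custody_entry(path: str, collector: str) -> dict:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            h.update(chunk)
    return {
        "evidence": path,
        "sha256": h.hexdigest(),
        "collected_by": collector,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

print(custody_entry("laptop-0423.img", "j.doe"))  # store alongside the evidence
```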


What Are the Most Common Mistakes in the First 24 Hours?

Knowing what not to do is as important as knowing what to do. These are the mistakes that consistently turn containable incidents into catastrophic ones.

Wiping systems before forensic investigation. The urgency to “clean up” and get back to normal leads many organizations to reformat or reimage compromised machines immediately. This destroys the evidence needed to understand the full scope of the breach, identify all compromised systems, and confirm that the attacker has been fully removed. Always image before you wipe.

Assuming containment means the incident is over. Containing the initial point of compromise does not mean the attacker is gone. Sophisticated attackers establish multiple persistence mechanisms. If you only address the visible compromise without hunting for additional footholds, the attacker may re-enter through a backdoor you missed – sometimes weeks or months later.

Communicating too much too soon. Issuing a public statement before the scope is understood creates risk. If the initial statement says “10,000 customer records were affected” and the investigation later reveals 500,000, the organization appears either incompetent or dishonest. It is better to acknowledge the incident, confirm that an investigation is underway, and commit to providing updates than to release premature specifics.

Failing to involve legal counsel early. Legal counsel should be involved from the first hour, not brought in after decisions have already been made. Counsel advises on notification obligations, privilege protections (communications conducted under legal privilege may be protected from disclosure), and the organization’s exposure to regulatory penalties and litigation.

Neglecting to check backups. Ransomware operators increasingly target backup systems as part of their attack chain. Before relying on backups for restoration, verify that they have not been compromised, encrypted, or corrupted. Organizations that discover their backups are unusable after a ransomware attack face a dramatically worse recovery scenario.


Why Does Having an Incident Response Plan Before a Breach Matter?

The difference between organizations that handle breaches well and those that spiral into chaos is almost always preparation – not capability.

According to industry data, organizations with rehearsed incident response plans reduce breach costs by approximately 61%. The plan itself does not need to be complex. It needs to answer six questions in advance: who is on the incident response team and how are they contacted outside business hours? What authority does the team have to make containment decisions without waiting for executive approval? What are the first three actions for each major incident type (ransomware, credential compromise, data exfiltration, insider threat)? Who handles external communication and under what approval process? What are the regulatory notification timelines for the organization’s jurisdiction and industry? Where are the backups, and when were they last tested?

A plan that exists only as a document in a shared folder is marginally better than no plan at all. A plan that has been rehearsed through tabletop exercises – where the team walks through a realistic scenario and practices decision-making under pressure – is transformative. The time to discover that your backup restoration process takes 48 hours instead of the expected 4 is during a drill, not during an actual ransomware attack.


How Does an XDR Platform Change Incident Response?

An XDR (Extended Detection and Response) platform fundamentally changes the speed and effectiveness of incident response by providing three capabilities that are difficult or impossible to replicate with disconnected tools.

Faster detection. XDR correlates telemetry from endpoints, network, email, cloud, and identity systems in real time. An alert that would take hours to investigate manually – cross-referencing endpoint logs, firewall logs, authentication logs, and email gateway logs from separate consoles – is automatically correlated into a single incident view. The time from initial alert to confirmed incident drops from hours to minutes.
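
Mechanically, the correlation is a merge of per-source events into one timeline keyed on a shared entity. The toy sketch below shows the idea with hard-coded sample events; a real XDR platform does this continuously and at scale.

```python
# Correlation sketch: merge events from separate sources into a single
# incident timeline for one user. All sample events are illustrative.
endpoint = [{"ts": "02:14", "user": "j.doe", "event": "powershell spawned by outlook.exe"}]
email    = [{"ts": "02:02", "user": "j.doe", "event": "clicked link in reported phish"}]
identity = [{"ts": "02:31", "user": "j.doe", "event": "login to DC-02 from LAPTOP-0423"}]

def incident_timeline(user, *sources):
    merged = [e for src in sources for e in src if e["user"] == user]
    return sorted(merged, key=lambda e: e["ts"])  # zero-padded HH:MM sorts correctly

for e in incident_timeline("j.doe", endpoint, email, identity):
    print(e["ts"], e["event"])
```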

Faster scoping. When investigating a breach, the first critical question is “how far did the attacker get?” XDR answers this by showing every action taken by the compromised account or device across all monitored layers. Lateral movement, privilege escalation, data access, and exfiltration attempts are visible in a single timeline – not buried across five separate tools requiring manual correlation.

Faster response. XDR platforms include automated and manual response capabilities: isolating endpoints, blocking IPs, killing processes, quarantining files, disabling accounts. These actions can be executed directly from the investigation console, without switching tools or logging into individual systems. For organizations with MDR services, the provider’s SOC analysts can execute these response actions on the customer’s behalf – providing immediate, expert-driven containment even when the customer’s own team is unavailable or overwhelmed.

The combination of faster detection, faster scoping, and faster response directly compresses the breach timeline – which, as the data consistently shows, is the single most important factor in reducing breach cost and impact.


Frequently Asked Questions

How quickly should a business respond to a suspected breach?

Immediately. The moment a credible alert or report is received, the incident response process should be activated. Every hour of delay increases the attacker’s opportunity to expand access, exfiltrate data, or deploy destructive payloads. Organizations with detection platforms that provide real-time alerting and MDR services with 24/7 SOC coverage have a structural advantage because the response begins within minutes, regardless of when the incident occurs.

What if we don’t have a formal incident response plan?

If an incident occurs without a plan in place, prioritize these four actions in order: contain the threat (isolate affected systems from the network), preserve evidence (do not wipe or reformat anything), contact legal counsel and your cyber insurance provider, and begin documenting everything. Then, once the crisis is resolved, build the plan – because the probability of a second incident is higher after the first, and next time you need to be prepared.

Do we need to notify authorities for every breach?

Not necessarily. Under GDPR, notification to the supervisory authority is required when the breach is likely to result in a risk to individuals’ rights and freedoms. Purely internal incidents with no personal data exposure may not trigger notification requirements. However, the assessment must be documented – you need to show that you evaluated the risk and made an informed decision. When in doubt, consult legal counsel.

How long does full recovery from a breach typically take?

Recovery timelines vary dramatically based on the severity of the breach, the organization’s preparedness, and the quality of backups. Minor incidents contained within hours may be fully resolved in 2–5 days. Ransomware attacks without reliable backups can take weeks or months. In a 2025 study, 65% of organizations reported they had still not fully recovered from their most recent breach at the time of the survey. The strongest predictor of fast recovery is not the severity of the attack – it is the existence of a tested, rehearsed response plan.

Should we pay a ransomware demand?

This is a legal, business, and ethical decision that depends on the specific circumstances. Law enforcement agencies generally advise against paying, because payment funds criminal operations, does not guarantee data recovery, and may invite repeat attacks. However, organizations facing existential disruption with no viable backup may have limited options. This decision should always involve executive leadership, legal counsel, and law enforcement. If your organization has cyber insurance, your insurer should also be consulted – some policies cover ransom payments, others exclude them.

What should we do differently after the incident is resolved?

Conduct a formal post-incident review within two weeks of resolution. Identify what worked, what failed, and what was missing. Update your incident response plan based on the findings. Address the root cause – not just the symptoms. And invest in the detection and response capabilities that would have caught the breach earlier: centralized monitoring, behavioral analysis, and if possible, 24/7 managed security operations.


Prootego’s XDR and MDR platform provides the detection, investigation, and response capabilities that compress your breach timeline from days to hours – with 24/7 SOC monitoring, automated containment, and full forensic visibility across endpoints, network, and cloud. Book a demo to see how it works.
