
The First 60 Minutes of Incident Response (and After)
The first hour of an incident response sets the trajectory for everything that follows.
Not because you will solve the incident in 60 minutes. But because this is when evidence is lost, bad assumptions harden, and decisions get made under stress.
Good incident response in the first hour is not about heroics. It is about slowing down damage without making things worse.
This guide exists to help you do exactly that, whether the incident turns out to be ransomware or not.
What the First 60 Minutes Are Really For
The goal of the first hour is not to:
- find the attacker
- identify malware
- determine root cause
The goal is to:
- stabilize the situation
- preserve evidence
- reduce uncertainty
If you try to do more than that, you usually fail at all three.
This is especially true in ransomware cases, where panic-driven containment can permanently destroy evidence and recovery options.
A Critical Branch Early On: Ransomware vs Non-Ransomware
You do not need to know for sure whether this is ransomware in the first hour.
What you do need to know is whether:
- systems are being encrypted or destroyed
- data may be actively exfiltrated
- backups may be at risk
Think of ransomware not as a different incident, but as an incident with irreversible consequences if mishandled early.
When ransomware is suspected:
- preservation mistakes are catastrophic
- backup isolation becomes urgent
- containment decisions carry higher risk
Keep that in mind throughout every step below.
Minute 0–10: Establish Control and Reality
This is where most incidents go wrong.
People jump to conclusions. Systems get rebooted. Logs get wiped accidentally.
Your job here is to stop the chaos.
Confirm Whether the Incident Is Ongoing
You must determine whether something is still happening right now.
Ask:
- Is suspicious activity still occurring?
- Are alerts firing right now?
- Are systems actively misbehaving?
If activity is ongoing and destructive (especially encryption), containment may take priority. If the activity appears to have stopped, preservation is more important.
Understand Why You Were Called
Ask:
- What triggered the concern?
- Who noticed it?
- What exactly looked wrong?
This tells you what signal started the response and how reliable it might be.
In ransomware cases, this might be:
- ransom notes
- file extensions changing
- sudden system failures
In non-ransomware cases, it is often:
- alerts
- unusual access
- data exposure concerns
Identify Suspected Systems and Accounts
Do not accept “everything” as an answer.
Push for:
- specific systems
- specific accounts
- specific symptoms
You are not confirming scope yet. You are building an initial map.
Find Out What Has Already Been Done
This is critical.
Ask directly about:
- password resets
- server reboots
- account disablements
- log deletions
- containment attempts
In ransomware incidents, premature shutdowns can destroy encryption keys in memory. In non-ransomware incidents, they can erase forensic evidence.
Minute 10–25: Preserve What Matters Most
Once you have basic control, shift focus to evidence and impact.
Check Log Availability and Retention
Ask:
- what logs exist
- where they are stored
- how long they are retained
- whether retention is at risk
If logs are short-lived, preservation becomes urgent.
This applies equally to ransomware and non-ransomware incidents.
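One way to make the retention question concrete is a quick calculation: given each source's retention window, do the logs still reach back to the earliest suspected activity? A minimal sketch, where the source names and retention figures are illustrative, not a real inventory:

```python
from datetime import datetime, timedelta

def retention_at_risk(sources, earliest_suspicious, now=None):
    """Return log sources whose retention window no longer covers the
    earliest suspected activity, so they need urgent export.

    sources: {source_name: retention_in_days}  (illustrative schema)
    """
    now = now or datetime.utcnow()
    at_risk = []
    for name, retention_days in sources.items():
        oldest_available = now - timedelta(days=retention_days)
        # If the oldest retained entry is already newer than the suspected
        # start, evidence from that source is rolling off right now.
        if oldest_available > earliest_suspicious:
            at_risk.append(name)
    return at_risk

# Illustrative sources and retention periods; replace with the client's real inventory.
sources = {"edr": 30, "vpn": 14, "firewall": 7}
incident_start = datetime.utcnow() - timedelta(days=10)
print(retention_at_risk(sources, incident_start))  # -> ['firewall']
```

A result like this turns "check the logs" into a ranked preservation task: export the firewall logs first, because they are the ones actively being lost.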
Verify Backup Status and Isolation
You are not restoring yet.
You need to know:
- whether backups exist
- whether they are writable
- whether they might already be impacted
For ransomware, this is existential. For non-ransomware incidents, this still affects recovery and confidence.
Never assume backups are safe.
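The three questions above can be captured as a simple triage over whatever backup inventory exists. The record fields here are illustrative, not a real backup API:

```python
def triage_backups(backups, incident_start):
    """Flag backups that cannot yet be trusted for recovery.

    Each backup record is a dict with illustrative fields:
    {"name": str, "exists": bool, "writable": bool, "last_modified": float}
    where last_modified is a Unix timestamp. A backup is suspect if it is
    missing, writable from production, or was modified after the suspected
    incident start.
    """
    suspect = []
    for b in backups:
        if not b["exists"]:
            suspect.append((b["name"], "missing"))
        elif b["writable"]:
            suspect.append((b["name"], "writable from production"))
        elif b["last_modified"] >= incident_start:
            suspect.append((b["name"], "modified during incident window"))
    return suspect

# Illustrative inventory: the offsite vault is clean, the NAS share is not.
backups = [
    {"name": "offsite-vault", "exists": True, "writable": False, "last_modified": 1_700_000_000},
    {"name": "nas-share", "exists": True, "writable": True, "last_modified": 1_700_100_000},
]
print(triage_backups(backups, incident_start=1_700_050_000))
```

The point is not the code; it is that "are backups safe?" decomposes into checks you can actually run, and any backup that fails one is treated as compromised until proven otherwise.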
Identify Privileged Access Involvement
Ask whether:
- admin accounts are suspected
- service accounts may be involved
- automation credentials could be abused
If privileged access is involved, escalation is immediate regardless of incident type.
Assess Potential Data Exposure
Even uncertainty matters.
Ask whether:
- customer data
- credentials
- regulated information
might be involved.
This affects legal, regulatory, and communication timelines early.
Minute 25–40: Build a Minimal Timeline
This is not full forensics. It is about anchoring reality.
Establish the Detection Time
When did someone first notice something was wrong?
This is not when the incident started, but it is your first anchor.
Identify the Earliest Known Suspicious Activity
Even approximate answers help.
This narrows the window for:
- log searches
- containment decisions
- impact assessment
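Even a crude, sortable list of what is known, with an explicit confidence label on each entry, is enough to anchor the first hour. A minimal sketch, with illustrative events and timestamps:

```python
from datetime import datetime

def build_timeline(events):
    """Sort known events chronologically while keeping facts and
    assumptions visibly separate.

    Each event: (timestamp, description, kind) where kind is
    "fact" or "assumption".
    """
    return sorted(events, key=lambda e: e[0])

# Illustrative entries gathered during the first calls.
timeline = [
    (datetime(2024, 5, 2, 9, 15), "helpdesk ticket: files renamed on FS-01", "fact"),
    (datetime(2024, 5, 1, 23, 40), "VPN login from new country (unverified)", "assumption"),
    (datetime(2024, 5, 2, 9, 40), "EDR alert fires on FS-01", "fact"),
]
for ts, desc, kind in build_timeline(timeline):
    print(f"{ts:%Y-%m-%d %H:%M} [{kind}] {desc}")
```

Labeling each entry forces the team to say out loud which parts of the timeline are evidence and which are guesses, which is exactly the discipline this phase needs.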
Ask About Similar Past Incidents
Repeat behavior matters.
Previous incidents often reveal:
- unresolved weaknesses
- recurring abuse paths
- ignored alerts
This is especially common in credential abuse and ransomware precursor activity.
Minute 40–55: Clarify Authority and Communication
Technical response fails if decision-making is unclear.
Identify the Decision-Maker
Ask explicitly:
- who can authorize containment actions
- who can accept downtime
- who can approve risky steps
Do not assume this is the security team.
Understand Who Is Already Involved
IT, security, legal, leadership, PR.
In ransomware incidents, legal and executive involvement often comes early. You need to know who is already in the loop.
Identify Hard Constraints
Ask what you absolutely must not do without approval.
This protects:
- fragile systems
- legal obligations
- business operations
Minute 55–60: Set Expectations and Next Steps
This minute is about calm framing.
You should clearly communicate:
- what is known
- what is unknown
- what assumptions exist
- what the immediate priorities are
- what will happen next
A simple statement like:
“Right now we are focused on stopping further damage, preserving evidence, and understanding scope. Our understanding will change as we gather more information.”
This reduces panic and builds trust.
The Questions to Ask the Client in the First 60 Minutes
These questions are designed to be asked out loud, under pressure. You do not need perfect answers. You need direction.
Situation and Urgency
- What made you call us right now?
- Do you believe the activity is still happening?
- What is the single thing you are most worried about?
Scope and Impact
- Which specific systems or accounts are affected?
- Has anything stopped working or behaved abnormally?
- Are privileged or admin accounts suspected?
Actions Already Taken
- What actions have already been taken?
- Has anyone deleted logs, wiped systems, or restored from backup?
- Has anyone tried to “fix” the issue?
Evidence and Visibility
- What logs exist and how long are they kept?
- Are logs centralized?
- Are backups available and isolated?
Data and Legal Exposure
- Could sensitive or regulated data be involved?
- Are notification timelines a concern?
- Is legal or compliance already involved?
Authority and Constraints
- Who can approve containment actions?
- Who should be kept informed?
- What must not be done without approval?
Closing
- Has this happened before?
- What does success look like in the next few hours?
- Is there anything we should know that we haven’t asked?
What the Incident Response Team Should Do at Each Step
Minute 0–10
- Take control of the conversation
- Stop uncoordinated actions
- Write everything down
- Classify urgency, not root cause
Goal: Prevent irreversible mistakes.
Minute 10–25
- Decide preservation vs containment
- Secure logging sources
- Protect backups
- Escalate on privilege involvement
Goal: Preserve evidence while limiting damage.
Minute 25–40
- Anchor time
- Separate facts from assumptions
- Narrow scope deliberately
- Look for patterns
Goal: Replace panic with structured uncertainty.
Minute 40–55
- Confirm decision authority
- Define communication paths
- Identify legal and business constraints
- Align on realism
Goal: Enable safe, fast decisions.
Minute 55–60
- Summarize clearly
- Define immediate priorities
- Set expectations
- Assign next actions
Goal: Turn panic into a plan.
What Responders Must Actively Avoid
Do not:
- guess root cause early
- overpromise answers
- allow parallel uncoordinated actions
- treat the first theory as truth
- optimize for speed over correctness
Fast and wrong is worse than slow and deliberate.
The Guard Role: When Containment Must Come Before Investigation
In specific high-impact scenarios, waiting to investigate is a luxury you cannot afford. This is where the concept of a "Guard Role" becomes critical.
If an alert triggers for an action that is irreversible and highly damaging, such as mass data deletion, credential compromise, or ransomware execution, containment must be the immediate and only priority.
A designated person with decision-making authority, following pre-approved playbooks, should be empowered to take immediate action based on the alert alone. The person in this Guard Role is also responsible for meticulously time-stamping all actions, providing brief but regular updates (e.g., every 10-15 minutes) on what has been done, and maintaining clear, concise notes for a clean handover to the Incident Commander (IC) once the broader response is mobilized. This ensures that even rapid, pre-authorized actions are documented and auditable.
The guiding principle is simple:
If the containment action is reversible and the potential damage is not, act first and escalate.
It is better to take a system offline for ten minutes based on a false positive than to wait ten minutes to investigate while an attacker encrypts your entire infrastructure.
In these moments, security maturity is not about perfect analysis. It is about decisive, pre-authorized action.
Early and loud beats late and perfect.
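The guiding principle is small enough to write down directly, which is what a pre-approved playbook effectively does. A sketch of the decision rule (the scenarios in the comments are illustrative):

```python
def act_now(containment_reversible: bool, damage_reversible: bool) -> bool:
    """Guard-role rule: act immediately when the containment step can be
    undone but the damage it would prevent cannot."""
    return containment_reversible and not damage_reversible

# Isolating a host is reversible; mass encryption is not: act first, escalate after.
assert act_now(containment_reversible=True, damage_reversible=False)
# Wiping and reimaging a host is itself irreversible: wait for authorization.
assert not act_now(containment_reversible=False, damage_reversible=False)
# Damage that can be undone does not justify unilateral action either.
assert not act_now(containment_reversible=True, damage_reversible=True)
```

Encoding the rule this plainly is the point: the guard should never be improvising an ethics calculation at 3 a.m.; they should be applying a test that was agreed on in advance.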
False Positives Are Not Failures
Isolating a benign system is not a mistake if:
- the playbook justified it
- the risk was real
The real failure is not acting when risk is present.
The One Thing to Always Remember
You are protecting time.
Time before encryption.
Time before exfiltration.
Time before impact.
Everything you do is about buying time.
Final Guard Wisdom (Worth Keeping)
- Early > perfect
- Reversible > irreversible
- Loud > silent
- Calm > clever
If you live by those, you’ll be a good guard.
The First 24 Hours of Incident Response
And What Threat Hunting Should Look Like After
The first 24 hours of an incident are not about solving everything.
They are about making sure the organization survives long enough to understand what actually happened.
Teams that handle this window well rarely look heroic.
They look calm, boring, and deliberate.
Teams that fail look busy.
How to Think About the First 24 Hours
The first day of incident response has three goals:
- Stop damage from spreading
- Preserve the ability to learn
- Enable informed decisions
Everything else is secondary.
Phase 1: Immediate Control (Hour 0–1)
This is the phase covered in detail in the first part of this guide, but it frames everything that follows.
The priorities are:
- stop uncontrolled actions
- preserve evidence
- establish authority and communication
- reduce panic
Success here means:
- no irreversible mistakes
- no false certainty
- no evidence loss
Failure here poisons the next 23 hours.
Phase 2: Stabilization and Containment (Hour 1–6)
Once the initial chaos slows, the organization moves into controlled response.
What matters in this phase:
- Confirm that destructive activity has stopped
- Validate that containment actions are working
- Ensure backups and recovery paths remain protected
- Prevent secondary infections or re-entry
This is where responders must balance:
- safety vs uptime
- containment vs observation
- speed vs accuracy
Bad containment decisions in this window often create bigger outages than the attack itself.
Phase 3: Understanding Scope and Impact (Hour 6–12)
This is where incident response becomes analytical.
The goal is not “find everything.”
The goal is to understand enough to make real decisions.
Key questions being answered:
- Which systems were actually touched?
- Which identities were actually abused?
- What data was realistically accessible?
- How confident are we in those answers?
Uncertainty is expected.
What matters is knowing where uncertainty still exists.
Phase 4: Executive and Legal Alignment (Hour 12–18)
By this point, leadership pressure increases.
Decisions start to stack:
- notify or wait
- restore or investigate further
- communicate publicly or internally
- involve law enforcement or not
What responders must do here:
- present facts clearly
- separate evidence from hypotheses
- explain tradeoffs without panic
- resist pressure to “just be done”
This is where trust between technical teams and leadership is tested.
Phase 5: Transition to Recovery (Hour 18–24)
The last part of the first day is about preparing for recovery without rushing into it.
This includes:
- validating cleanup plans
- ensuring access resets are meaningful
- confirming attackers cannot immediately return
- planning phased restoration
Recovery that begins without confidence often triggers a second incident.
What Success Looks Like After 24 Hours
You do not need all answers.
You do need:
- controlled systems
- preserved evidence
- bounded uncertainty
- aligned leadership
- a plan that can survive scrutiny
If you have those, the incident is manageable.
What Threat Hunting Should Look Like After the First 24 Hours
This is where many organizations go wrong.
They either:
- hunt randomly
- hunt everything
- or don’t hunt at all
Good threat hunting after an incident is focused, hypothesis-driven, and restrained.
Threat Hunting Is Not “Searching for More Bad”
Threat hunting exists to answer specific unresolved questions, not to prove competence.
Bad hunting looks like:
- running massive queries
- pulling endless logs
- looking for anything suspicious
Good hunting looks like:
- testing assumptions
- validating conclusions
- reducing unknowns
How Threat Hunting Should Be Framed Post-Incident
Threat hunting should start by asking:
- What do we believe happened?
- What would disprove that belief?
- Where would attackers hide if we’re wrong?
Every hunt should exist to challenge confidence, not reinforce it.
Priority Hunt Themes After an Incident
High-value hunt areas usually include:
- persistence mechanisms related to the incident
- credential reuse or abuse patterns
- lateral movement paths connected to known systems
- access outside expected time or behavior
- systems adjacent to confirmed compromise
These are not broad hunts.
They are surgical.
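One of those surgical hunts, access outside expected hours, can be expressed as a small filter over authentication events. The field names are illustrative, not a specific SIEM schema:

```python
def off_hours_access(events, start_hour=8, end_hour=18):
    """Return authentication events outside the expected working window.

    Each event (illustrative schema): {"user": str, "host": str, "hour": int}
    where hour is the local hour of the login, 0-23.
    """
    return [e for e in events if not (start_hour <= e["hour"] < end_hour)]

# Illustrative events: a 03:00 service-account login stands out.
events = [
    {"user": "svc-backup", "host": "FS-01", "hour": 3},
    {"user": "alice", "host": "WS-12", "hour": 10},
]
print(off_hours_access(events))
```

Note what makes this surgical rather than broad: it targets one hypothesis (abuse outside expected hours), on a bounded data set (authentication events for systems already in scope), with an obvious stopping point.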
How to Avoid Threat Hunting Fatigue
Threat hunting fails when it:
- runs forever
- has no stopping criteria
- produces only “nothing found”
Every hunt must define:
- what success looks like
- what failure means
- what decision the result informs
Hunts without decisions are noise.
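One way to enforce that discipline is to refuse to start a hunt unless those fields are filled in. A minimal sketch of such a gate (the structure is illustrative, not a standard):

```python
from dataclasses import dataclass

@dataclass
class Hunt:
    hypothesis: str        # what we believe, stated so it can be disproved
    success_criteria: str  # what evidence would confirm or refute it
    decision: str          # what the organization does with the answer
    max_duration_hours: int  # hard stopping point

def is_actionable(hunt: Hunt) -> bool:
    """Refuse to run a hunt that lacks a hypothesis, criteria for
    success or failure, an attached decision, or a time bound."""
    return all([hunt.hypothesis, hunt.success_criteria,
                hunt.decision, hunt.max_duration_hours > 0])

# Illustrative hunt tied to a decision: re-open incident response if found.
h = Hunt(
    hypothesis="attacker persisted via a scheduled task on compromised hosts",
    success_criteria="scheduled-task diff vs golden baseline on in-scope hosts",
    decision="if found, re-open IR and reset affected credentials",
    max_duration_hours=8,
)
print(is_actionable(h))  # -> True
```

A hunt that fails this gate is not rejected forever; it is sent back to be sharpened until its result would actually change something.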
How Incident Response and Threat Hunting Should Connect
Incident response stabilizes reality.
Threat hunting tests whether that reality is true.
They are not separate disciplines.
They are sequential.
If hunting contradicts assumptions, incident response resumes.
Threat Hunting Is Not a Phase. It Is an Ongoing Job.
One of the most dangerous misunderstandings in security is the idea that threat hunting is something you “do after an incident.”
Post-incident threat hunting is important, but it is not the whole job. It is a focused surge of activity designed to reduce uncertainty after something has gone wrong.
Threat hunting itself should already be happening.
How Ongoing Threat Hunting Differs From Post-Incident Hunting
Post-incident threat hunting is:
- narrow
- hypothesis-driven
- focused on validating or disproving specific assumptions
- time-bound
Ongoing threat hunting is:
- continuous
- environment-specific
- focused on understanding normal behavior
- designed to surface the unknown before it becomes an incident
If your first serious hunt starts after an incident, you are already behind.
Why Ongoing Threat Hunting Matters
Incidents are rarely clean.
Attackers:
- make mistakes before detection
- leave traces long before impact
- test access quietly
- reuse infrastructure and credentials
Ongoing threat hunting increases the chance that:
- activity is noticed earlier
- incidents are smaller
- post-incident hunts are shorter and more confident
It reduces both blast radius and response fatigue.
What Ongoing Threat Hunting Should Focus On
Good ongoing threat hunting is not about chasing every threat report.
It focuses on:
- behaviors that should not exist in your environment
- misuse of legitimate access
- patterns that do not trigger alerts
- gaps between what you think you see and what you actually see
This work builds the baseline that makes post-incident hunting effective.
How Ongoing Hunting and Incident Response Reinforce Each Other
Incident response reveals:
- blind spots
- missing telemetry
- flawed assumptions
- weak detection
Ongoing threat hunting turns those lessons into:
- better questions
- stronger hypotheses
- earlier detection
- fewer surprises
When this loop works, incidents become less dramatic and more manageable.
A Hard Truth
If threat hunting only happens after something goes wrong, it is not threat hunting.
It is damage control.
Real threat hunting is a discipline that:
- runs quietly
- asks uncomfortable questions
- rarely produces flashy results
- pays off when incidents don’t escalate
Final Perspective
Post-incident threat hunting answers the question:
Are we really done?
Ongoing threat hunting answers the more important one:
Would we have caught this sooner next time?
Organizations that understand this distinction don’t just recover from incidents.
They shorten them, limit them, and sometimes avoid them entirely.
That is the difference between reacting to attacks and slowly becoming harder to attack.
Final Thought
The first 24 hours of an incident are about control and clarity.
Threat hunting that follows is about humility.
The strongest teams are not the ones who assume they’ve seen everything.
They are the ones who actively try to prove themselves wrong.
That is how incidents end instead of repeating.
