I want to frame this from two operational roles:

  • the researcher building and refining threat hypotheses
  • the incident responder making fast decisions under pressure

Both often start with the same signal: an ATT&CK technique.

Example: a detection triggers an alert tagged T1059 (Command and Scripting Interpreter).

Most teams treat that tag as a reporting label.

Used properly, it is a pivot point that tells you where to look deeper:

  • what likely happened before this step
  • what is likely to happen next

That is the difference between taxonomy use and sequence inference.

In this post, I will outline a practical way to do that.


Why single-technique mapping is not enough

Technique labels are useful for consistency, reporting, and control mapping.

But incidents are not single nodes. They are chains.

If you want a deeper model for representing those chains explicitly, see Using ATT&CK Flow to Model the Procedure Layer Missing in ATT&CK.

If we only map what we already saw, we end up reactive:

  • detections trigger too late
  • hunts are too broad
  • response playbooks miss adjacent steps

The goal is to move from:

“We observed this technique”

to:

“Given this technique in this environment, these predecessor and successor techniques are now most probable.”

For a researcher, this creates testable hypotheses.

For an incident responder, this creates an immediate hunt plan.


Researcher view: infer what likely happened before

When a technique is observed, ask what prerequisites are usually required for it to succeed.

If you observe T1059, likely predecessor areas often include:

  • initial access (T1566 Phishing, T1190 Exploit Public-Facing Application)
  • execution setup (T1204 User Execution)
  • staging and delivery (T1105 Ingress Tool Transfer)

You are not claiming certainty.

You are ranking plausible prior paths and then testing them against telemetry.

A researcher workflow:

  1. Start from the confirmed technique.
  2. Pull commonly co-occurring or prerequisite techniques.
  3. Filter by platform and identity context in your environment.
  4. Convert each relationship into a hypothesis and expected evidence pattern.
  5. Hand the prioritized set to response and hunting teams.

This gives analysts a focused “look-back” plan instead of a blind search.
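The five-step researcher workflow above can be sketched in a few lines of Python. Everything here is illustrative: the association table, scores, and platform tags are invented stand-ins for whatever your inference source actually returns, not real TIE output.

```python
OBSERVED = "T1059"  # step 1: the confirmed technique

# Step 2: commonly co-occurring / prerequisite techniques
# (IDs are real ATT&CK techniques; scores and platform tags are invented)
ASSOCIATIONS = {
    "T1059": [
        ("T1566", "Phishing", {"windows", "linux", "macos"}, 0.81),
        ("T1190", "Exploit Public-Facing Application", {"linux"}, 0.64),
        ("T1204", "User Execution", {"windows", "macos"}, 0.77),
        ("T1105", "Ingress Tool Transfer", {"windows", "linux"}, 0.58),
    ]
}

def lookback_hypotheses(observed, environment_platforms, threshold=0.6):
    """Steps 2-4: pull associations, filter by environment, emit hypotheses."""
    hypotheses = []
    for tid, name, platforms, score in ASSOCIATIONS.get(observed, []):
        if score < threshold:
            continue  # drop weak associations
        if not platforms & environment_platforms:
            continue  # step 3: not applicable on our platforms
        hypotheses.append({
            "technique": tid,
            "hypothesis": f"{name} ({tid}) preceded {observed}",
            "score": score,
        })
    # Step 5: hand over a prioritized set
    return sorted(hypotheses, key=lambda h: h["score"], reverse=True)

for h in lookback_hypotheses(OBSERVED, {"windows"}):
    print(h["hypothesis"], h["score"])
```

The point is the shape of the output: each entry pairs a testable claim ("X preceded T1059") with a priority, which is exactly what a hunting team can action.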


Incident responder view: infer what happens next

The same logic applies forward.

If scripting execution is already confirmed, likely next objectives include:

  • credential access (T1003)
  • discovery (T1082, T1018)
  • persistence (T1547)
  • lateral movement (T1021)

If a detection with an ATT&CK tag triggers, responders can immediately use it as a branch point:

  • “If T1059 is true, check for T1003 and T1082 on the same host and identity”
  • “If discovery evidence appears, prioritize lateral movement telemetry (T1021)”
  • “If privilege abuse appears, escalate containment scope”

These predictions become concrete response tasks:

  • pre-stage detections for probable follow-on steps
  • prioritize log enrichment where the likely next techniques live
  • run short, targeted hunts before attacker progression completes

This is how technique mapping becomes operational tempo, not static documentation.
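The branch-point logic above amounts to a small playbook table: for each confirmed technique, expand to its likely follow-ons scoped to the same host and identity. This is a minimal sketch; the technique pairings are illustrative examples taken from the bullets above, not an authoritative mapping.

```python
# Illustrative follow-on table: confirmed technique -> likely next techniques
FOLLOW_ON = {
    "T1059": ["T1003", "T1082"],   # scripting -> credential access, discovery
    "T1082": ["T1021"],            # discovery -> lateral movement
    "T1018": ["T1021"],
}

def hunt_tasks(confirmed, host, identity):
    """Expand each confirmed technique into targeted follow-on hunt tasks."""
    tasks = []
    for tech in confirmed:
        for nxt in FOLLOW_ON.get(tech, []):
            tasks.append(
                f"hunt {nxt} on host={host} identity={identity} (follows {tech})"
            )
    return tasks

print(hunt_tasks(["T1059"], "WS-042", "jdoe"))
```

Pre-staging this table per detection rule is what turns "alert fired" into an immediate, scoped hunt queue rather than an open-ended investigation.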


MITRE’s Technique Inference Engine

MITRE’s Technique Inference Engine (TIE) is a machine learning model for inferring associated MITRE ATT&CK techniques from previously observed techniques.

At a practical level, TIE helps by doing one thing well: it converts an observed set of techniques into a ranked list of additional techniques that are statistically associated with them.

That gives both roles immediate value:

  • researchers get a structured hypothesis set to test
  • responders get a prioritized list of “hunt next” candidates

TIE is built on ATT&CK technique observations extracted from many CTI reports. From that data, it learns which techniques frequently appear together in real intrusions.

So when you provide one or more observed techniques, TIE returns likely associated techniques with confidence scores.

Important: this is not deterministic truth.

It is probabilistic guidance based on historical patterns. You still validate with telemetry in your own environment.

TIE output naturally fits the two directions you care about:

  • backward inference: “what likely happened before what we saw?”
  • forward inference: “what is likely to happen next?”

The model itself returns associations; your team applies sequence context (timestamps, tactic progression, host/user timeline) to decide which associations are likely predecessor vs successor behavior.
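One cheap first-pass heuristic for that predecessor/successor split is ATT&CK tactic ordering: techniques whose primary tactic sits earlier in the Enterprise matrix than the observed technique's tactic are look-back candidates, later ones are look-forward candidates. A hedged sketch, where the tactic order follows the ATT&CK Enterprise matrix but the technique-to-tactic pairs are a small illustrative subset (many techniques map to several tactics in reality):

```python
TACTIC_ORDER = [
    "initial-access", "execution", "persistence", "privilege-escalation",
    "defense-evasion", "credential-access", "discovery", "lateral-movement",
    "collection", "command-and-control", "exfiltration", "impact",
]
RANK = {t: i for i, t in enumerate(TACTIC_ORDER)}

TECHNIQUE_TACTIC = {           # illustrative primary tactic per technique
    "T1059": "execution",
    "T1566": "initial-access",
    "T1003": "credential-access",
    "T1021": "lateral-movement",
}

def classify(observed, associated):
    """Split associated techniques into look-back vs look-forward candidates."""
    pivot = RANK[TECHNIQUE_TACTIC[observed]]
    back, forward = [], []
    for tech in associated:
        (back if RANK[TECHNIQUE_TACTIC[tech]] < pivot else forward).append(tech)
    return {"predecessors": back, "successors": forward}

print(classify("T1059", ["T1566", "T1003", "T1021"]))
```

Treat this only as a default ordering: timestamps, host/user timelines, and actual tactic annotations on the alert should override it, exactly as described above.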

For a detailed walkthrough of how those sequences are represented in ATT&CK Flow objects, see Understanding the Structure of ATT&CK Flows to Model CTI Reports.


CTI Butler: applying TIE in live investigations

We’ve implemented technique inference in CTI Butler using MITRE’s TIE approach so analysts can move directly from ATT&CK-tagged detections to prioritized investigation paths.

Example 1: one alert, immediate forward hunt

Observed:

  • T1059 (Command and Scripting Interpreter) confirmed on an endpoint.

CTI Butler’s TIE returns high-probability associated techniques including Phishing (T1566), Masquerading (T1036), and Boot or Logon Autostart Execution (T1547).

These are association candidates, not guaranteed chronological next steps.

Responder actions:

  1. Look back for potential phishing entry evidence (T1566) tied to the same user/session.
  2. Hunt for masquerading patterns (T1036) on the affected endpoint and directly related hosts.
  3. Prioritize persistence checks for autorun/logon artifacts (T1547) to prevent re-entry.

Outcome:

  • instead of waiting for a second major alert, you pivot immediately into high-value hunts suggested by real technique associations.

Example 2: backward reconstruction for scoping

Observed:

  • T1003 (OS Credential Dumping) triggered in a server segment

CTI Butler’s TIE suggests strongly associated techniques including Command and Scripting Interpreter (T1059), Exploit Public-Facing Application (T1190), and Data Encrypted for Impact (T1486).

For this workflow, treat T1059 and T1190 as likely look-back hypotheses, and T1486 as a high-impact behavior to actively monitor during containment.

Researcher actions:

  1. Build a predecessor hypothesis list from associated techniques that fit timeline and tactic context.
  2. Map each hypothesis to required evidence (mail logs, auth anomalies, exploit traces, transfer artifacts).
  3. Run a parallel watch for destructive-impact signals (T1486) while scoping continues.
  4. Confirm or reject each hypothesis using time-bounded queries.

Outcome:

  • faster reconstruction of likely entry path, better containment prioritization, and less guesswork during root-cause analysis.

Example 3: multi-signal refinement (better than single-technique inference)

Observed set:

  • T1059 + T1082 + T1018

When multiple observed techniques are supplied, CTI Butler’s TIE can narrow predictions to paths consistent with that combination, reducing noise compared with single-tag inference.

Responder + researcher actions:

  1. Use top-ranked outputs as a shared hunt queue.
  2. Split by role: researcher validates prior-path hypotheses, responder deploys controls against near-term likely techniques.
  3. Feed confirmed findings back into detection and playbook logic.

Outcome:

  • higher precision hunts and faster coordination across intel, detection, and IR.
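To see why multi-signal input narrows predictions, consider a toy scoring scheme: keep only candidates associated with more than one observed technique, ranked by mean association score. This is purely illustrative reasoning, with invented scores, not how CTI Butler or TIE computes its rankings.

```python
# Invented association scores: observed technique -> {candidate: score}
ASSOC = {
    "T1059": {"T1003": 0.7, "T1566": 0.6, "T1021": 0.4},
    "T1082": {"T1003": 0.5, "T1021": 0.6},
    "T1018": {"T1021": 0.7, "T1003": 0.3},
}

def combined_candidates(observed, min_support=2):
    """Keep candidates backed by >= min_support observed techniques,
    ranked by mean association score."""
    scores = {}
    for tech in observed:
        for cand, s in ASSOC.get(tech, {}).items():
            scores.setdefault(cand, []).append(s)
    out = [
        (cand, sum(v) / len(v))
        for cand, v in scores.items()
        if len(v) >= min_support and cand not in observed
    ]
    return sorted(out, key=lambda x: x[1], reverse=True)

print(combined_candidates(["T1059", "T1082", "T1018"]))
```

With all three techniques supplied, a candidate supported by only one of them (here T1566) drops out, which is the noise reduction the example describes.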

See CTI Butler TIE in action


An operating model you can implement now

For each ATT&CK-tagged detection:

  1. Pass observed technique(s) into TIE.
  2. Take top-N inferred techniques and attach confidence.
  3. Classify each as likely predecessor or successor using local timeline context.
  4. Create two worklists:
    • look-back validation (research)
    • look-forward containment (response)
  5. Track which inferred techniques were later confirmed.
  6. Use confirmation rates to tune thresholds and playbook defaults.

This makes ATT&CK tags operational: not just labels on alerts, but starting points for prediction-driven hunting.


tl;dr

ATT&CK gives you a common language.

TIE gives you a way to turn that language into ranked, testable paths.

For researchers, that means better hypotheses.

For incident responders, it means immediate direction on where to hunt next and what to contain first.

The key habit is simple: treat every ATT&CK-tagged detection as a pivot, then use TIE to expand that pivot into a validated attack path.


CTI Butler

The most important cyber threat intelligence knowledgebases.

Discuss this post

Head on over to the dogesec community to discuss this post.

dogesec community

Open-Source Projects

All dogesec commercial products are built in-part from code-bases we have made available under permissive licenses.

dogesec Github

Posted by:

David Greenwood

David Greenwood, Do Only Good Everyday



Never miss an update


Sign up to receive new articles in your inbox as they are published.
