TI map IP entity rules in Microsoft Sentinel are one of the quickest ways to turn threat intelligence (TI) into action. They work by joining known bad IP indicators (IOCs) with your telemetry (such as Azure Activity, VM Connection, Office Activity and Common Security Log) so analysts can spot suspicious infrastructure early, often before an attacker completes a full kill chain.
This article explains how the rule works, how to configure it for reliable matches, where teams commonly go wrong (including Source IP mapping issues in Common Security Log), and how to improve entity mapping even if you don’t want your SOC dependent on deep KQL expertise.
What the TI map IP entity rule actually does
At a high level, the rule:
- Pulls IP-based indicators from your TI source(s) (typically via the Threat Intelligence platform integration that lands data into the ThreatIntelligenceIndicator table).
- Looks back across one or more log sources for IP fields that may represent an “IP entity” (source, destination, client, remote, etc.).
- Joins the two datasets and creates an alert/incident when a log IP matches a TI IP within the relevant time windows.
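Conceptually, the join can be sketched in KQL like this. Table and column names follow the built-in Sentinel schema; the lookback values and the choice of CommonSecurityLog are illustrative, not defaults:

```kusto
// Illustrative sketch of a TI map IP rule; window values are examples to tune
let ioc_lookBack = 14d;   // how far back an indicator is considered valid
let dt_lookBack  = 1h;    // how far back logs are searched for matches
ThreatIntelligenceIndicator
| where TimeGenerated >= ago(ioc_lookBack)
| where Active == true and ExpirationDateTime > now()
| where isnotempty(NetworkIP)
| summarize arg_max(TimeGenerated, *) by IndicatorId   // keep the latest copy of each indicator
| join kind=innerunique (
    CommonSecurityLog
    | where TimeGenerated >= ago(dt_lookBack)
    | where isnotempty(SourceIP)
) on $left.NetworkIP == $right.SourceIP
```

The `innerunique` join kind deduplicates the indicator side, which keeps one alert per matched event rather than one per indicator copy.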
The value is straightforward: you don’t just store IOCs, you operationalise them. But the difference between “noise” and “actionable” is almost always in the quality of the join and the entity mapping.
Why teams miss malicious IPs even with TI enabled
Most missed detections happen for one of four reasons:
- The IP field you’re joining on isn’t the true attacker IP (think NAT, proxies, load balancers, or a security device that logs itself as the source).
- The IP field format doesn’t match the TI format (IPv6 vs IPv4, ports attached, whitespace, embedded strings, or private IPs that will never exist in TI).
- Your lookback windows don’t overlap (the indicator is “active,” but the log search window is too small, or vice versa).
- Entity mapping is technically “present”, but operationally wrong (alerts fire, yet the incident shows the wrong entity, slowing triage and automation).
When these issues stack up, SOC teams stop trusting TI-based analytics and treat them as background noise—exactly the opposite of what you want.
Pick the right logs (and the right IP fields) for early detection
The context you choose determines whether you detect attacker reconnaissance early or only get a hit after the damage is done.
Azure Activity: great for control-plane visibility
Azure Activity is ideal for detecting unusual management-plane behaviour originating from known hostile infrastructure (for example, suspicious actions against subscriptions, resource groups, key vaults, or networking).
Best practice: focus on the caller IP fields relevant to the operation, and ensure your analytic output maps that IP as the key entity. If you map the wrong field, you’ll create incidents that look “real” but are slow to triage.
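A minimal sketch of a caller-IP join, assuming the standard AzureActivity schema (the 1-hour window is illustrative):

```kusto
// Sketch: match active TI IP indicators against the Azure Activity caller IP
ThreatIntelligenceIndicator
| where Active == true and ExpirationDateTime > now() and isnotempty(NetworkIP)
| join kind=innerunique (
    AzureActivity
    | where TimeGenerated >= ago(1h)
    | where isnotempty(CallerIpAddress)
) on $left.NetworkIP == $right.CallerIpAddress
// after the join, TimeGenerated1 is the event-side timestamp
| project EventTime = TimeGenerated1, AttackerIP = NetworkIP,
          Caller, OperationNameValue, ResourceGroup
```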
VM Connection: strong for network behaviour on hosts
VM Connection (often from Azure Monitor/VM Insights) can identify known bad IPs contacting servers, including lateral movement attempts or beaconing patterns.
Best practice: decide whether you care more about the remote IP (likely attacker infrastructure) or the local IP (your asset). Then map both entities where possible, but ensure the incident highlights the remote IP as the primary TI match.
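A sketch of that pattern against the VMConnection table (window and column choices illustrative):

```kusto
// Sketch: treat RemoteIp as the TI-match side and keep the local asset for context
ThreatIntelligenceIndicator
| where Active == true and ExpirationDateTime > now() and isnotempty(NetworkIP)
| join kind=innerunique (
    VMConnection
    | where TimeGenerated >= ago(1h)
    | where isnotempty(RemoteIp)
) on $left.NetworkIP == $right.RemoteIp
| project EventTime = TimeGenerated1, AttackerIP = RemoteIp,
          ImpactedHost = Computer, Direction
```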
Office Activity: useful, but be careful with proxies and front doors
Office Activity can surface access patterns where the apparent IP is a corporate egress, a Microsoft front end, or a trusted proxy rather than the true client.
If OfficeActivity is fronted by a proxy or CASB, you may need to join on a different client IP field (or enrich logs) to avoid matching your own proxy IPs against TI and generating false positives.
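One complication is that the OfficeActivity ClientIP field can carry a port or wrapper characters. A sketch of normalising it before the join (the IPv4-only extract is an assumption; adjust if your tenant logs IPv6 clients):

```kusto
// Sketch: normalise OfficeActivity ClientIP (which can carry a port) before joining
OfficeActivity
| where TimeGenerated >= ago(1h)
| extend ClientIPOnly = extract(@"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})", 1, ClientIP)  // IPv4-only assumption
| where isnotempty(ClientIPOnly)
| join kind=innerunique (
    ThreatIntelligenceIndicator
    | where Active == true and ExpirationDateTime > now() and isnotempty(NetworkIP)
) on $left.ClientIPOnly == $right.NetworkIP
```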
Common Security Log: powerful, but the most common place for Source IP matching flaws
Common Security Log is widely used (e.g., firewalls, IDS/IPS, secure web gateways), but IP fields vary across vendors and connectors. A frequent pitfall is assuming Source IP always represents the external attacker.
In practice, Source IP might be:
- Your internal host (true source), while the attacker is the destination
- A NAT gateway or firewall interface
- A proxy (masking the true client IP)
- A parsed field that’s present but not populated consistently
If the rule is matching the “wrong” side of the conversation, you’ll either miss real threats or generate high-confidence noise.
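When direction is ambiguous, one approach is to evaluate both sides and record which one matched. A sketch (note that `make_set` caps the set size, so very large indicator sets need a join instead):

```kusto
// Sketch: evaluate both sides of the session and record which side matched
let iocs = toscalar(
    ThreatIntelligenceIndicator
    | where Active == true and ExpirationDateTime > now() and isnotempty(NetworkIP)
    | summarize make_set(NetworkIP, 100000));
CommonSecurityLog
| where TimeGenerated >= ago(1h)
| extend MatchedSide = case(
    SourceIP in (iocs), "source",
    DestinationIP in (iocs), "destination",
    "none")
| where MatchedSide != "none"
| extend MatchedIP = iff(MatchedSide == "source", SourceIP, DestinationIP)
```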
Configuration best practices that improve match quality fast
You don’t need to over-engineer this. A few targeted settings usually deliver a step-change in results.
Tune your indicator filters before you tune your logs
Not all TI is equal. If you ingest everything, you inherit other people’s noise.
Prioritise:
- Active and recent indicators (set sensible indicator lookback and expiry handling)
- High-confidence sources (or reputable feeds with scoring)
- Indicator types you can actually match reliably (IP tends to be high-utility, but still benefits from scoring and expiry discipline)
A leaner indicator set often increases true positives and reduces incident fatigue.
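Those filters can be sketched as a reusable indicator pre-filter (the confidence threshold of 50 is illustrative; calibrate it per feed):

```kusto
// Sketch: a leaner indicator set — active, unexpired, reasonably confident
ThreatIntelligenceIndicator
| where TimeGenerated >= ago(14d)      // indicator lookback; tune to your feeds
| where Active == true
| where ExpirationDateTime > now()     // expiry discipline
| where ConfidenceScore >= 50          // illustrative threshold
| where isnotempty(NetworkIP)
| summarize arg_max(TimeGenerated, *) by IndicatorId   // latest copy of each indicator
```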
Align time windows: indicator lookback vs log lookback
Two windows must align:
- How far back you consider an indicator “valid”
- How far back you search logs for matches
If your log search window is 1 hour but your environment only forwards certain telemetry every few hours (or you have ingestion latency), you’ll miss matches and assume the rule “doesn’t work”.
A practical approach for many SOCs is:
- Start broader to validate value (e.g., 24 hours of logs)
- Then tighten once you understand volume and noise drivers
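Before tightening, it helps to measure how late your telemetry actually arrives. One way, using the built-in ingestion_time() function (shown here against CommonSecurityLog):

```kusto
// Sketch: measure ingestion latency before committing to a short log window
CommonSecurityLog
| where TimeGenerated >= ago(1d)
| extend LatencyMin = datetime_diff('minute', ingestion_time(), TimeGenerated)
| summarize AvgLatencyMin = avg(LatencyMin), P95LatencyMin = percentile(LatencyMin, 95)
```

If the 95th-percentile latency exceeds your log lookback, matches will silently fall outside the window.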
Normalise IP fields to reduce silent mismatches
Even small formatting differences can break joins.
Common normalisation steps include:
- Extracting IPs from strings that include ports
- Handling IPv6 consistently (or explicitly excluding it if your TI source is IPv4-only)
- Excluding private IP ranges from TI matching logic (unless you have a very specific internal TI use case)
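The normalisation steps above can be sketched in one pipeline (IPv4-focused; adjust or branch for IPv6):

```kusto
// Sketch: normalise candidate IPs before the TI join
CommonSecurityLog
| where TimeGenerated >= ago(1h)
| extend CandidateIP = trim(@"\s+", SourceIP)       // strip stray whitespace
| extend CandidateIP = extract(@"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})", 1, CandidateIP)  // drop ports/wrappers
| where isnotempty(CandidateIP)
| where not(ipv4_is_private(CandidateIP))           // private ranges won't exist in external TI
```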
If you’re trying to optimise without heavy KQL work, focus on what you can do in the connector configuration and analytic rule UI first (field selection, entity mapping, and rule scope). Then only add KQL where it’s genuinely needed.
Map entities for triage and automation, not just for “passing validation”
Entity mapping is what turns a raw alert into something your SOC (and automation) can act on.
Aim for mapping that supports:
- Fast analyst decision-making (which IP is malicious, which asset is impacted)
- Playbooks/automation (blocking, enrichment, ticket creation)
- Consistent incident views across different log sources
A good pattern is to map:
- IP entity: the matched TI IP (attacker infrastructure)
- Host/account: the affected resource (where available)
- Additional context: the log-specific “other side” of the connection (source/destination), so analysts aren’t forced into a manual pivot every time
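In practice that means ending the analytic query with stable, named columns and binding those names in the rule’s entity-mapping UI. A sketch (column aliases are illustrative):

```kusto
// Sketch: end the query with stable, named columns and bind them in the rule UI
ThreatIntelligenceIndicator
| where Active == true and ExpirationDateTime > now() and isnotempty(NetworkIP)
| join kind=innerunique (
    CommonSecurityLog
    | where TimeGenerated >= ago(1h)
    | where isnotempty(DestinationIP)
) on $left.NetworkIP == $right.DestinationIP
| project EventTime = TimeGenerated1,
          AttackerIP = NetworkIP,       // bind to the IP entity
          ImpactedHost = Computer,      // bind to the Host entity
          OtherSideIP = SourceIP        // context: the internal side of the session
```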
Practical strategies to avoid Source IP pitfalls in Common Security Log
If you only do one improvement, do this: validate what “source” and “destination” mean in your connector output.
Step-by-step validation (quick but effective)
- Pick three recent firewall events you understand (e.g., a user browsing, a server receiving inbound, a blocked outbound).
- Confirm which field contains the external IP in each case.
- Check whether that field is consistently populated across event types.
- Only then decide which field(s) your TI map rule should join on.
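A quick population check makes step three concrete. This sketch counts how often each candidate field is filled, broken down by product:

```kusto
// Sketch: check how consistently candidate IP fields are populated per product
CommonSecurityLog
| where TimeGenerated >= ago(1d)
| summarize Events = count(),
            WithSourceIP = countif(isnotempty(SourceIP)),
            WithDestinationIP = countif(isnotempty(DestinationIP))
  by DeviceVendor, DeviceProduct
```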
If you discover the “attacker IP” alternates between Source IP and Destination IP depending on direction, build logic that considers both—but apply safeguards to avoid doubling noise.
A common optimisation is to prioritise matches where the external/public IP is on the “remote” side of the session, and de-prioritise matches involving known corporate egress IPs (which you can maintain in a list).
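One way to maintain that list is a Sentinel watchlist queried via _GetWatchlist. A sketch (the watchlist name 'CorporateEgressIPs' is an assumption; substitute your own, keyed on the IP column):

```kusto
// Sketch: flag known corporate egress IPs kept in a watchlist
let egress = toscalar(
    _GetWatchlist('CorporateEgressIPs')   // assumed watchlist name
    | summarize make_set(SearchKey));
CommonSecurityLog
| where TimeGenerated >= ago(1h)
| where isnotempty(SourceIP)
| extend KnownEgress = SourceIP in (egress)   // true → de-prioritise or suppress
```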
Entity mapping without KQL expertise: what to do when you still want speed and consistency
Many MSPs and enterprises face the same operational problem: detections are possible, but sustaining them requires too much specialist KQL effort, and any staff change introduces risk.
There are three pragmatic approaches:
Standardise on a small set of validated rules and mappings
Instead of dozens of slightly different TI analytics, create a “golden” baseline:
- One TI map rule per major telemetry family (cloud control plane, endpoint/network, productivity)
- A consistent incident naming convention
- A consistent mapping approach (attacker IP vs impacted asset)
This reduces variance and makes triage repeatable.
Use guided investigation and remediation to reduce analyst dependency
This is where an AI assistant designed specifically for SOC work helps.
SecQube’s Harvey AI is built to support KQL-free Sentinel triage by guiding analysts through:
- What matched (which indicator, which log event)
- Why it matters (severity context and confidence)
- What to do next (enrichment steps and recommended actions)
- How to document and close (consistent ticketing/change management workflows)
For decision-makers, the key outcome is less time spent translating detections into actions—and fewer escalations that exist solely because “only one person knows the queries”.
Automate the “boring but essential” enrichment
Even if you keep the analytic rule simple, you can speed response by ensuring incidents are enriched consistently (ownership, criticality, asset context, known egress IP lists, and previous sightings).
That enrichment is what turns a TI hit from “interesting” to “actionable in minutes”.
What “faster threat detection” looks like in the real world
When TI map IP entity rules are tuned well, SOC teams typically see:
- Fewer low-confidence hits that waste analyst time
- Faster time-to-understand (because the incident shows the right IP entity and context)
- Faster time-to-contain (because response steps can be standardised and automated)
For MSPs and MSSPs, this matters even more: multi-tenant operations amplify every inefficiency. If analysts spend extra minutes per incident due to poor entity mapping or ambiguous Source IP fields, the commercial impact shows up quickly in margin and SLA pressure.
SecQube has seen environments where automation and consistent triage drastically reduce investigation time and SOC workload—freeing skilled staff to focus on higher-value work, and enabling leaner teams to deliver better outcomes.
A quick optimisation checklist you can use this week
- Validate which fields truly represent the external/attacker IP in each data source (especially Common Security Log).
- Align indicator validity windows with log search windows to avoid silent misses.
- Normalise IP formats to protect the join (ports, IPv6, whitespace, private ranges).
- Map entities for triage and automation, not just to “complete the rule”.
- Maintain a list of known corporate egress/proxy IPs to reduce predictable false positives.
- Standardise your rule set so your SOC isn’t dependent on a single KQL specialist.
Where SecQube fits: Sentinel-first, SOC-optimised, and built for decision-makers
If your goal is to improve detection speed without growing headcount, SecQube is designed for that exact outcome: an AI-powered Sentinel SOC platform that keeps your data in your environment, reduces triage time, and helps teams operate consistently across enterprises and multi-tenant MSP/MSSP models.
You can learn more about SecQube and Harvey AI at SecQube, and if you’re ready to move from “rules configured” to “SOC outcomes improved”, the natural next step is a POC trial via the Microsoft marketplace route your team already trusts.
The fastest win is rarely “more TI”. It’s better joins, better entity mapping, and a triage flow that doesn’t depend on KQL heroics.