Security teams have spent years training people to spot suspicious emails. Now the same social engineering playbook is showing up where many executives and operators feel safest: encrypted messaging apps.
In March 2026, the Dutch intelligence services MIVD and AIVD published a TLP:CLEAR cybersecurity advisory describing a large-scale campaign by Russian state actors to compromise the Signal and WhatsApp accounts of dignitaries, civil servants, and military personnel. Government employees were among the victims, and journalists are assessed as likely targets as well. (english.aivd.nl) The advisory is consistent with broader FBI reporting on malicious messaging campaigns that move targets from SMS into apps such as Signal and WhatsApp. (fbi.gov)
For CISOs, CTOs, and security managers, the lesson is practical: encryption does not neutralise account takeover. You need verification protocols, device hygiene, and detection that treats messaging accounts like privileged identities.
What the campaign looks like in practice
The Dutch advisory highlights two primary attack modes, both relying on social engineering rather than breaking encryption or exploiting a vulnerability in Signal or WhatsApp. (english.aivd.nl)
Account takeover using fake support outreach
Attackers pose as Signal support or a support chatbot and pressure the target to complete a verification step. The victim is asked to share:
- The SMS verification code triggered by the attacker's own registration attempt
- The Signal PIN, which can defeat protections such as Registration Lock
Once the attacker has those codes, they can take control of the account, access contacts, read messages sent to the victim, and impersonate the victim in one-to-one and group conversations. (english.aivd.nl)
Linked devices and QR code abuse
The second path is subtler. Attackers persuade a target to scan a QR code or click a link that silently links an attacker-controlled device to the victim's account. The victim often remains logged in and may not realise anything has changed, while the attacker gains access to chats and can send messages as the victim. (english.aivd.nl)
Why this works on high-value targets
These campaigns succeed because they exploit real workflows:
- Executives and public sector leaders routinely receive unsolicited outreach.
- People assume that in-app messages are safer than email.
- Support impersonation works because users expect security warnings.
- QR codes have become normal in corporate life, from conference badges to onboarding.
FBI guidance on malicious messaging campaigns also notes that adversaries commonly establish initial contact via SMS and then move the conversation to encrypted mobile applications such as Signal and WhatsApp. (fbi.gov)
The non-obvious business impact
Even if the attacker only compromises a handful of accounts, the blast radius can be large:
- Relationship mapping through contacts and group membership
- Credible follow-on phishing using real conversation context
- Executive impersonation for payments, access requests, or emergency change approvals
- Incident response disruption if staff coordinate containment in the compromised app
This is not just a privacy event. It is an identity and trust failure that can become a material business risk.
Treat messaging accounts used for leadership coordination and operational decisions as privileged identities. Build controls that assume adversaries will attempt account takeover and impersonation, not just eavesdropping.
Controls that actually reduce risk
Most organisations already know the basics. The gap is making them operational and enforceable.
Make verification codes non-transferable by policy and habit
Your policy should be blunt and repeated:
- Nobody in IT, security, or a vendor will ever ask for a one-time code or a PIN.
- Any request for a code is a phishing attempt, even if it appears to come from support.
This aligns with CISA's phishing guidance: users must be trained to recognise and report phishing across channels, not just email. (cisa.gov)
Require out-of-band verification for messaging requests
Create a simple protocol for leadership and high-risk roles:
- Any request that involves payments, access changes, travel, or unusual urgency must be verified via a second trusted channel.
- Use known numbers from the corporate directory, not numbers supplied in a message thread.
- For group invites, verify the inviter outside the app before joining.
The Dutch advisory explicitly recommends verifying invitations through a different trusted channel and ignoring unsolicited group invitations. (english.aivd.nl)
Turn on the right Signal and WhatsApp settings
Operationalise the settings that reduce takeover and persistence:
- Enable Signal Registration Lock where feasible.
- Regularly review linked devices and remove unknown devices immediately.
- Consider Disappearing Messages for sensitive discussions to reduce retained history exposure if compromise occurs. (english.aivd.nl)
Also consider restricting discoverability and phone number exposure where your environment allows it, because phone number knowledge can lower the barrier to targeted phishing. (english.aivd.nl)
Put mobile endpoints under the same rigour as laptops
If executives and incident commanders use mobile devices for security-relevant coordination, enforce:
- MDM or MAM baselines for OS patching and screen lock
- Backup and restore controls to reduce shadow device sprawl
- Rapid offboarding procedures that include messaging account hygiene, not just email
Define an incident playbook for messaging account compromise
The Dutch advisory recommends informing contacts via another channel if a compromise is discovered. (english.aivd.nl) Convert that into a runbook:
- Remove unknown linked devices, rotate relevant credentials, and re-establish account control
- Notify security and legal, then notify contacts using email or phone, not the compromised app
- Assume impersonation occurred and hunt for follow-on fraud attempts
- Recreate critical groups if an admin account may be compromised. (english.aivd.nl)
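A runbook only works if containment reliably happens before notification. The skeleton below shows one way to enforce that ordering; the step names paraphrase the list above, and the structure is an assumption rather than a prescribed standard.

```python
# Illustrative runbook skeleton for messaging-account compromise.
# Step names paraphrase the advisory-derived list above; the ordering
# enforcement is an assumption about how you might codify it.

from dataclasses import dataclass, field

STEPS = [
    "remove_unknown_linked_devices",
    "rotate_credentials_and_reestablish_account_control",
    "notify_security_and_legal",
    "notify_contacts_out_of_band",  # email or phone, never the compromised app
    "hunt_for_followon_fraud",
    "recreate_critical_groups_if_admin_compromised",
]

@dataclass
class MessagingCompromiseRunbook:
    completed: list[str] = field(default_factory=list)

    def complete(self, step: str) -> None:
        # Refuse out-of-order completion so that, for example, contacts
        # are never notified before account control is re-established.
        expected = STEPS[len(self.completed)]
        if step != expected:
            raise ValueError(f"expected '{expected}' before '{step}'")
        self.completed.append(step)

    @property
    def done(self) -> bool:
        return self.completed == STEPS
```

Encoding the sequence makes the runbook auditable: an incident record shows exactly which steps ran and in what order.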
What to log and detect in Microsoft Sentinel
Messaging apps are not always rich in enterprise audit logs, so your detection strategy should focus on what you can observe:
- User reports from executives, assistants, and staff, which are first-class signals
- Telecom and SMS anomalies that suggest verification codes are being abused
- Device posture drift and risky sign-in patterns for identities that support mobile workflows
- Downstream indicators such as unusual payment requests, new vendor banking details, or urgent access escalation requests
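Because no single one of these signals is conclusive, triage usually comes down to corroboration. The sketch below shows one hypothetical way to weight and combine them; the signal names and weights are illustrative assumptions, not fields from any real Sentinel schema.

```python
# Hypothetical triage scoring for messaging-compromise signals.
# Signal names and weights are assumptions standing in for whatever
# your detection pipeline actually emits.

WEIGHTS = {
    "user_reported_phishing": 3,   # first-class signal per the list above
    "sms_verification_anomaly": 2,
    "risky_sign_in": 2,
    "device_posture_drift": 1,
    "unusual_payment_request": 3,
}

def triage_score(signals: set[str]) -> int:
    """Sum the weights of observed signals; unknown signals score zero."""
    return sum(WEIGHTS.get(s, 0) for s in signals)

def should_escalate(signals: set[str], threshold: int = 4) -> bool:
    # Corroborated signals (e.g. a user report plus a risky sign-in)
    # cross the threshold; a single weak signal does not.
    return triage_score(signals) >= threshold
```

The design choice is that a user report alone does not auto-escalate, but a user report plus any identity or fraud signal does, which keeps analysts focused on corroborated cases.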
This is where Microsoft Sentinel SOC automation becomes valuable. Even when the initial compromise happens outside your email stack, you can still standardise response with consistent case handling, enrichment, and automated containment steps.
If your team is trying to scale these investigations without turning every alert into a KQL exercise, consider operating models that support KQL-free Sentinel triage, so analysts can quickly validate user-reported phishing, correlate identity and device context, and launch repeatable workflows.
For organisations that run multitenant environments, especially MSSPs, consolidating user-reported events into a consistent ticketing and change-control workflow is often the difference between isolated fixes and durable risk reduction. If it helps, you can review approaches and resources on SecQube for running standardised Sentinel operations across teams and tenants.
The leadership message to repeat internally
Encrypted messaging is a transport. Trust is still human.
If you want one sentence to align executives and operators, use this:
Never share a verification code or PIN; never trust support outreach in a chat; and always verify sensitive requests out of band.
That is the fastest path to reducing exposure while the geopolitical and social engineering pressure continues to rise.