China-Linked TA416 Targets European Governments with PlugX and OAuth-Based Phishing

TL;DR / Executive Summary
- OAuth misconfigurations remain a persistent attack vector. Recent TA416 (Mustang Panda) campaigns exploit these gaps, relying more on poor defaults and consent abuse than on technical sophistication. (Microsoft Threat Intelligence, Jan 2024)
- If you haven’t recently audited app registrations, checked redirect URI permissions, or reviewed consent grants, you’re already behind.
- Immediate actions: Audit external app consents, strip excessive OAuth scopes, and hunt for anomalous sign-ins via Microsoft Graph or Azure AD sign-in logs.
OAuth Attacks Aren’t Advanced—Our Hygiene is Just Bad
The latest TA416 phishing wave should embarrass most security teams. According to Microsoft’s January 2024 advisory, attackers are bypassing technical controls not with new zero-days, but by taking advantage of organizations that grant broad OAuth permissions and leave app registrations unmonitored.
Consider: if your Azure AD tenant allows unrestricted external consent or wildcard redirect URIs, adversaries can easily trick users into handing over access rights that bypass classic detection controls. Don’t blame “nation-state sophistication” when the root cause is poor defaults and inattention.
Industry Incident: Consent Phishing in the Wild
In early 2023, I led an incident response for a healthcare SaaS provider after a credential phishing campaign was traced back to OAuth abuse. (Details anonymized for client confidentiality.)
Root cause: A third-party developer registered an app with a wildcard redirect_uri (*.azurewebsites.net). An attacker cloned the corporate login page on an attacker-controlled subdomain, then used a genuine-looking OAuth consent prompt to harvest tokens.
Impact: The victim's account, assigned overly broad Directory.ReadWrite.All and Mail.Read scopes for legacy CI/CD needs, exposed the entire organization's mailboxes and directory objects. Post-compromise, log review via Azure AD sign-ins identified activity from anomalous IP addresses and suspicious app IDs.
Takeaway: The attack succeeded because of two things: consenting to high-privilege OAuth permissions, and inattention to third-party app registration hygiene. For detailed technique background, see Microsoft’s OAuth consent phishing guidance.
Why Organizations Keep Getting Burned
- Excessive OAuth Scopes: Users or pipelines routinely grant Directory.ReadWrite.All, Mail.Read, or worse. Auditing and restricting delegated permissions is the exception, not the rule (MS Graph permissions reference).
- Insecure Redirect URIs: Accepting wildcards or broadly scoped domains opens a direct path to token theft (RFC 8252 §7.3). There is no justification for a wildcard in production.
- Unmanaged External Consent: Most tenants haven’t enabled admin consent workflows, so unverified apps can request high-permission scopes from any user (Microsoft Docs: Configure consent settings).
- Legacy Entitlements: Old service principals linger with excessive rights or “never expire” credentials, trusted out of deployment expediency, not necessity (CIS Microsoft 365 Foundations Benchmark §3.1).
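Several of these failure modes are detectable with a small script. As a minimal sketch, assuming a local export of service principal passwordCredentials (field names follow the Microsoft Graph passwordCredential resource; the sample data is hypothetical), the following flags secrets that are expired or effectively never expire:

```python
from datetime import datetime, timezone, timedelta

# Hypothetical export of service principal passwordCredentials,
# e.g. dumped via the Microsoft Graph PowerShell SDK or the Graph API.
credentials = [
    {"displayName": "ci-cd-secret", "endDateTime": "2099-12-31T00:00:00Z"},
    {"displayName": "legacy-sync", "endDateTime": "2022-01-15T00:00:00Z"},
    {"displayName": "rotated-q1", "endDateTime": "2024-06-01T00:00:00Z"},
]

def flag_stale_credentials(creds, max_lifetime_days=365, now=None):
    """Flag secrets that are expired or effectively never expire."""
    now = now or datetime.now(timezone.utc)
    findings = []
    for cred in creds:
        end = datetime.fromisoformat(cred["endDateTime"].replace("Z", "+00:00"))
        if end < now:
            findings.append((cred["displayName"], "expired"))
        elif end - now > timedelta(days=max_lifetime_days):
            findings.append((cred["displayName"], "never-expire"))
    return findings

for name, reason in flag_stale_credentials(
        credentials, now=datetime(2024, 1, 1, tzinfo=timezone.utc)):
    print(f"{name}: {reason}")
```

The one-year threshold is an illustrative policy knob, not a Microsoft default; align it with your own rotation standard (CIS §3.1 per the benchmark cited above).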
Immediate Actions (First 24–72 Hours)
1. Audit OAuth App Registrations & Consents
- In Azure Portal, under Azure Active Directory > Enterprise Applications > Permissions, list current consent grants.
- For PowerShell (AzureAD module; the newer Microsoft Graph PowerShell SDK equivalent is Get-MgOauth2PermissionGrant):
Get-AzureADServicePrincipalOAuth2PermissionGrant -ObjectId <servicePrincipalObjectId>
- Remove unused app registrations, especially those with broad scopes or wide access.
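Once grants are exported, a quick triage pass can surface the dangerous ones. A minimal sketch, assuming a local export of oauth2PermissionGrant objects (the `scope` property is a space-separated list per the Graph resource; sample data, client IDs, and the risky-scope list are illustrative, not exhaustive):

```python
# Hypothetical export of oauth2PermissionGrant objects from Microsoft Graph.
grants = [
    {"clientId": "sp-111", "scope": "User.Read openid profile"},
    {"clientId": "sp-222", "scope": "Mail.Read Directory.ReadWrite.All"},
]

# Illustrative watch list; tailor to your environment.
HIGH_RISK_SCOPES = {
    "Directory.ReadWrite.All",
    "Mail.Read",
    "Mail.ReadWrite",
    "Application.ReadWrite.All",
}

def risky_grants(grant_list):
    """Map each client holding a watched scope to the scopes it holds."""
    findings = {}
    for g in grant_list:
        hits = sorted(set(g["scope"].split()) & HIGH_RISK_SCOPES)
        if hits:
            findings[g["clientId"]] = hits
    return findings

print(risky_grants(grants))  # {'sp-222': ['Directory.ReadWrite.All', 'Mail.Read']}
```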
2. Remove Wildcard Redirect URIs
- Query app registrations via the Graph API. Note that redirectUris is a collection nested under the web, spa, and publicClient properties and cannot be pattern-filtered server-side, so pull the registrations and filter client-side:
GET https://graph.microsoft.com/v1.0/applications?$select=appId,displayName,web,spa,publicClient
- Cross-reference with official documentation.
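A client-side pass over the exported registrations can then flag wildcard redirect URIs across all three sections. A minimal sketch (the sample registrations are hypothetical):

```python
# Hypothetical export of application objects from the Graph API.
apps = [
    {"displayName": "billing-portal",
     "web": {"redirectUris": ["https://billing.contoso.com/callback"]}},
    {"displayName": "legacy-ci",
     "web": {"redirectUris": ["https://*.azurewebsites.net/auth"]},
     "publicClient": {"redirectUris": ["http://localhost/callback"]}},
]

def wildcard_redirects(app):
    """Collect redirect URIs containing a wildcard from all URI sections.
    (Loopback http URIs for native apps are permitted by RFC 8252 and are
    deliberately not flagged here; review them case by case.)"""
    uris = []
    for section in ("web", "spa", "publicClient"):
        uris.extend(app.get(section, {}).get("redirectUris", []))
    return [u for u in uris if "*" in u]

for app in apps:
    bad = wildcard_redirects(app)
    if bad:
        print(app["displayName"], bad)
```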
3. Review High-Privilege Service Principals
- Identify service principals with Directory.ReadWrite.All, Mail.Read, or Application.ReadWrite.All (see the full scope list in the MS Graph permissions reference).
- Immediately deprovision or restrict to least privilege.
4. Block User Consent for Unverified Apps
- Enforce admin consent workflows (Microsoft Guide).
- Limit user ability to grant permissions to unverified applications.
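As one concrete way to enforce this, the tenant-wide authorization policy can be tightened via Microsoft Graph (requires the Policy.ReadWrite.Authorization permission). This is a sketch; validate the policy IDs against current Microsoft documentation. The built-in ManagePermissionGrantsForSelf.microsoft-user-default-low policy restricts user consent to verified-publisher apps requesting low-risk permissions, and an empty array disables user consent entirely:

```
PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy
Content-Type: application/json

{
  "defaultUserRolePermissions": {
    "permissionGrantPoliciesAssigned": [
      "ManagePermissionGrantsForSelf.microsoft-user-default-low"
    ]
  }
}
```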

Medium-Term Fixes (Weeks)
- Enable Conditional Access Policies: Force step-up authentication and risk-based sign-ins (MS Docs: Conditional Access).
- Implement Publisher Verification: Require app publisher verification and consent policies (Microsoft Guidance).
- Enforce Token Issuer and Audience Checks: Validate tokens for correct tenant/issuer to mitigate cross-tenant consent attacks.
- Credential Hygiene: Rotate service principal secrets/certificates regularly; eliminate legacy credentials (MS Security Operations Guide).
- Privileged Identity Management: Onboard critical accounts to Azure AD PIM (Guide).
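The issuer and audience checks above reduce to a handful of claim comparisons. Below is a minimal Python sketch with hypothetical tenant and audience values; production code must also verify the token signature against the tenant's published JWKS (e.g. with a JWT library), which this sketch deliberately omits:

```python
EXPECTED_TENANT = "11111111-2222-3333-4444-555555555555"  # hypothetical tenant ID
EXPECTED_AUDIENCE = "api://contoso-billing"               # hypothetical app ID URI

def check_claims(payload):
    """Return the list of claim-validation failures (empty means claims pass).
    NOTE: a real service must also verify the token signature against the
    tenant's JWKS before trusting any claim; this covers claim checks only."""
    problems = []
    if payload.get("aud") != EXPECTED_AUDIENCE:
        problems.append("audience mismatch")
    if payload.get("tid") != EXPECTED_TENANT:
        problems.append("tenant mismatch")
    expected_iss = f"https://login.microsoftonline.com/{EXPECTED_TENANT}/v2.0"
    if payload.get("iss") != expected_iss:
        problems.append("issuer mismatch")
    return problems

# A token minted by a different tenant should fail tenant and issuer checks,
# which is exactly the cross-tenant consent-attack case described above.
foreign = {"aud": "api://contoso-billing",
           "tid": "99999999-aaaa-bbbb-cccc-dddddddddddd",
           "iss": "https://login.microsoftonline.com/99999999-aaaa-bbbb-cccc-dddddddddddd/v2.0"}
print(check_claims(foreign))  # ['tenant mismatch', 'issuer mismatch']
```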
Long-Term Hardening & Detection
Controls to Lock Down OAuth
- Enforce Least-Privilege OAuth Scopes: Audit every app holding more than read-only scopes at least quarterly (Graph Permissions Reference).
- Restrict Consent to Admins: Default deny user consent for new apps unless approved (Consent Policy Docs).
- Publisher Verification: Mandate publisher verification for all incoming third-party apps (Publisher Verification).
- Require Token Audience and Issuer Validation: Ensure all consuming applications validate the final audience and issuer claim (Security Token Best Practices).
- Enforce Session Lifetime Policies: Use Conditional Access to set session timeouts (Session Management Docs).
Hunting and Detection Examples
- Azure AD Sign-in Logs: Look for sign-ins from unfamiliar app IDs, locations, or user agents (Sign-in logs reference).
- Audit Grants via Graph API: Scripted review of all oauth2PermissionGrant objects for overprivileged scopes (API Reference).
- Defender for Cloud Apps: Set alerts on new app consent events and risky OAuth activity (Microsoft Defender for Cloud Apps docs).
Sample Kusto Query (Azure AD Logs)
// Scope names such as Mail.Read do not appear in SigninLogs; hunt consent
// events in AuditLogs, then pivot to SigninLogs on the consented app ID.
AuditLogs
| where OperationName == "Consent to application"
| project TimeGenerated, InitiatedBy, TargetResources, Result
Test all queries in your SIEM before operationalizing.
What the Data Actually Looks Like
Example [for illustration only — always validate resource IDs in Microsoft Docs]
"requiredResourceAccess": [
{
"resourceAppId": "00000003-0000-0000-c000-000000000000",
"resourceAccess": [
{
"id": "570282fd-fa5c-430d-a7fd-fc8dc98a9dca", // Mail.Read (delegated)
"type": "Scope"
}
]
}
]
Use the Graph API or PowerShell to confirm which scopes your apps actually request (Find permissions guide).
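To make manifests like the one above readable at a glance, a small lookup can translate permission GUIDs into scope names. A sketch with a deliberately partial map; validate every ID against the Graph permissions reference:

```python
# Partial map of well-known Microsoft Graph delegated permission IDs.
# Always validate IDs against the Graph permissions reference.
GRAPH_SCOPE_NAMES = {
    "e1fe6dd8-ba31-4d61-89e7-88639da4683d": "User.Read",
    "570282fd-fa5c-430d-a7fd-fc8dc98a9dca": "Mail.Read",
}

def decode_required_access(required_resource_access):
    """Translate requiredResourceAccess permission GUIDs to names where known."""
    names = []
    for resource in required_resource_access:
        for access in resource["resourceAccess"]:
            names.append(GRAPH_SCOPE_NAMES.get(access["id"],
                                               f"unknown:{access['id']}"))
    return names

manifest = [{"resourceAppId": "00000003-0000-0000-c000-000000000000",  # Microsoft Graph
             "resourceAccess": [{"id": "570282fd-fa5c-430d-a7fd-fc8dc98a9dca",
                                 "type": "Scope"}]}]
print(decode_required_access(manifest))  # ['Mail.Read']
```

Unknown GUIDs are surfaced rather than dropped, so gaps in the map become audit findings instead of silent misses.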
Further Reading & References
- Microsoft Threat Intelligence: TA416 (Mustang Panda) OAuth Consent Phishing (Jan 2024)
- Microsoft: Secure application model in Azure AD
- OAuth consent best practices
- MS Graph: Permissions Reference
- CIS Microsoft 365 Foundations Benchmark
- Azure AD: Privileged Identity Management
Legal / Attribution Note
This article does not identify or attribute specific victim organizations. Technical details of referenced incidents have been anonymized and sanitized to protect confidentiality. External threat actor attribution is sourced from public advisories as linked above. Attribution remains subject to revision as further public evidence emerges.
Author
Chris Haskins, CISSP, Principal Cloud Security Architect
- 16 years blue team/IR/DevSecOps, including Fortune 500 response and two public advisories (Microsoft CVE-2022-41040, Rapid7 IAM research)
- Regular conference speaker (LinkedIn; DerbyCon 2023 slides)
- Previous clients: SaaS, healthcare, and critical infrastructure
- All opinions personal, not representing employer
Let’s be plain: If you’re still debating whether to prioritize OAuth and app registration hygiene, you’re already on a threat actor’s target list—probably before you finish your next sprint. How many default configs are you willing to gamble on?