Impersonation Attacks Are Reshaping Workplace Cyber Risk

Cybersecurity

Key Takeaways

Nearly a quarter of employees under 35 would respond to suspicious work messages, highlighting a critical vulnerability to impersonation attacks in workplace communication.

Advancements in AI have made it easier for attackers to create convincing impersonation messages, prompting a shift in focus for cybersecurity teams toward identity trust and communication workflows.

Employee training is essential as a frontline defense against cyber attacks, emphasizing the need for policies and practices that encourage verification and cautious behavior in digital interactions.

Nearly a quarter of employees under 35 would respond to a suspicious work message if it appeared to come from a colleague or senior leader, according to research from Accenture based on a survey of more than 1,000 UK workers.

The findings show that impersonation, rather than technical compromise, has become a growing source of cyber exposure in the workplace.

Impersonation and other social engineering attacks target routine workplace behavior. They rely on unauthenticated requests that appear to come from trusted colleagues or senior leaders, prompting employees to share data, approve payments, or bypass standard verification steps through familiar communication channels.
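One reason these requests slip through is that the "from" identity in a message is easy to forge unless the receiving system checks it. As a minimal sketch (not a method described in the Accenture research), the snippet below inspects the Authentication-Results header that many mail servers add to summarize SPF and DKIM checks; the addresses and header values are hypothetical.

```python
from email import message_from_string
from email.message import Message

def is_authenticated(raw_message: str) -> bool:
    """Return True only if the receiving server recorded passing
    SPF and DKIM results for this message (illustrative check only)."""
    msg: Message = message_from_string(raw_message)
    # Receiving mail servers commonly summarize SPF/DKIM/DMARC
    # verdicts in the Authentication-Results header (RFC 8601).
    results = msg.get("Authentication-Results", "").lower()
    return "spf=pass" in results and "dkim=pass" in results

# A spoofed "internal" payment request with failing checks:
raw = (
    "From: ceo@example.com\r\n"
    "Authentication-Results: mx.example.com; spf=fail; dkim=none\r\n"
    "Subject: Urgent payment\r\n"
    "\r\n"
    "Please wire funds today."
)
print(is_authenticated(raw))  # False
```

A real deployment would rely on the mail gateway's DMARC enforcement rather than string matching, but the principle is the same: treat an unauthenticated sender claim as untrusted by default.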

Advances in AI have lowered the barrier to carrying out these attacks, making it easier to generate convincing messages, mimic tone or context, and scale impersonation attempts.

As a result, cybersecurity teams are increasingly reassessing where vulnerability sits in the workplace, shifting attention from system-level weaknesses toward identity trust, communication workflows, and employee decision-making.

Where Workplace Habits Create Exposure

According to the research from Accenture, 15% of respondents said they would share company information or approve payments through messaging platforms such as WhatsApp without verifying the sender, provided the request appeared to come from within the organization.

That figure rises to 24% among professionals under the age of 35, which suggests a need for clearer organizational policies, consistent verification processes, and targeted training that reflects how younger employees interact through informal digital channels.

More than 80% of employees surveyed said they were confident in their ability to identify phishing or AI-enabled cyberattacks. That confidence, however, did not correspond with more cautious behavior, according to the report.

While 56% of businesses in the UK reported concerns about cyber threats, more than a third of workers (37%) said they had never received cybersecurity training.

Of those who had received training, 50% said it did not include guidance on using AI safely. Nearly one in five (17%) said they had no awareness of AI-driven cyber threats. Those who reported some knowledge cited deepfake videos (61%), AI-generated phishing emails (61%), voice cloning (47%), and identity theft (45%).

Cyber Risk Now Lives in Routine Workflows

Impersonation and other social engineering attacks continue to gain ground because they exploit everyday human interaction in the workplace.

AI has accelerated this pattern by making it easier to generate credible messages, replicate organizational context, and sustain deception across multiple channels.

Yet technology alone does not explain their effectiveness. These attacks succeed because modern workplaces prioritize speed, informality, and distributed decision-making, often without consistent verification embedded into everyday workflows.

For cybersecurity leaders, this shifts how vulnerability must be understood and managed. Risk no longer sits solely within compromised devices or misconfigured systems, but in how requests are made, verified, and acted on across the organization.

Messaging platforms, approval chains, and identity-based access therefore function as part of the security infrastructure, even when they sit outside traditional security tooling.
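Embedding verification into an approval chain can be as simple as a gate that holds sensitive requests arriving outside trusted channels until they are confirmed out of band. The sketch below is a hypothetical illustration of that idea; the channel names, action labels, and escalation rule are assumptions, not controls described in the report.

```python
from dataclasses import dataclass

# Illustrative channel and action lists, not from the article.
TRUSTED_CHANNELS = {"corporate_email", "ticketing_system"}
SENSITIVE_ACTIONS = {"approve_payment", "share_customer_data"}

@dataclass
class Request:
    sender: str
    action: str
    channel: str
    verified_out_of_band: bool = False  # e.g., confirmed by phone callback

def requires_escalation(req: Request) -> bool:
    """Hold a request for verification when it asks for a sensitive
    action without both a trusted channel and out-of-band confirmation."""
    if req.action not in SENSITIVE_ACTIONS:
        return False
    if req.channel in TRUSTED_CHANNELS and req.verified_out_of_band:
        return False
    return True

# A payment request arriving over a personal messaging app is held:
req = Request("cfo@example.com", "approve_payment", "whatsapp")
print(requires_escalation(req))  # True
```

The design choice worth noting is that the gate keys off the action and the path the request took, not the claimed sender, which is exactly the attribute impersonation attacks forge.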

In this environment, resilience depends as much on the design of workflows and identity controls as it does on technical detection and response. That often means clearer policies and training that reflects how employees use communication and collaboration tools.

As Kamran Ikram, security lead in the UK and Ireland for Accenture, concluded in a press release, “building a cyber-savvy workforce isn’t just about protecting your systems, it’s also what allows innovation and trust to scale together.”

What This Means for ERP Insiders

Identity-based workflows have become a security concern. Cyber risk increasingly arises from how identity and authority are exercised in everyday workflows. Requests, approvals, and informal communications now create exposure, requiring leaders to treat verification paths and decision flows as managed security risk.

AI shifts impersonation from edge case to scale. AI sharply reduces the effort needed to convincingly impersonate people, roles, and organizational context. This reframes AI risk as an operational challenge that affects identity, communication, and decision-making, extending beyond traditional malware detection or automated threat response.

Training has become a core security control. As impersonation attacks target judgment rather than systems, employee training functions as a frontline defense. This means designing training around real workflows, decision points, and verification expectations, ensuring human responses reinforce—not undermine—technical security controls.