Education

Does Your Personality Make You More Likely To Be Scammed?

By Dave Winsborough, Contributor

Copyright Forbes


This article was coauthored with Daniel Robertson of the Chaucer Group.

Samantha* led the HR department of a large regional police agency. Promoted three years ago, she’d driven a number of initiatives to change frontline policing attitudes and behaviour, which led to a prestigious HR award and invitations to speak at international conferences. Her LinkedIn profile blew up with recruitment offers, and while she wasn’t sure she wanted to leave, the attention was flattering. She struck up a chatty correspondence with a charming overseas recruiter with whom she shared many connections, and who tempted her with a role that read as if it had been written for her – and the salary range sounded amazing.

Some of us are more likely to be scammed than others.

Her friends all encouraged her to go for it – after all, it cost nothing to apply and she could make a call if she got the role. She emailed the recruiter, who sent a link to a secure portal and on Saturday Samantha clicked through to the portal without a second thought. She was asked to set a password and upload a scanned copy of her ID “to verify candidate authenticity,” something she’d seen before in government hiring. She completed her candidate profile, downloaded the role description, uploaded her formal CV and sat back, pleased.

On Sunday night, Samantha’s VPN wouldn’t connect to work. She restarted her laptop. When she opened Outlook, it crashed.

Monday morning came the follow-up from IT: unusual traffic had been detected from her account – 2.6GB of outbound data, which turned out to be emails, attachments, and personnel files from the HR team’s shared drive. The so-called recruitment portal had been a credential-harvesting front, designed to bypass corporate firewalls through a trusted user: Samantha. The LinkedIn account and all its connections looked real – but the whole thing had been fabricated for the express purpose of targeting someone like Samantha. Having her work through creating a profile wasn’t an idle step either: it used session hijacking to keep her online while the attack escalated privileges in the background and embedded a script in her browser that gave the attackers full access.

All of the internal HR documents were leaked to a file-sharing site registered in Belarus, and sensitive personal data on staff was compromised. Twenty frontline officers reported phishing emails referencing their confidential HR data. Hundreds of hours of internal and security-consultant time were spent cleaning up the attack.

If it were a country, Cybercrimeland would be a world-scale player.

Estimates suggest that in 2021 cybercrime’s cost exceeded $1 trillion worldwide; by 2024 that number had swelled to somewhere over $2 trillion. If it were a country, Cybercrimeland would now rank among the 20 largest economies in the world.


Samantha was the victim of a highly professional, personalized social engineering breach. LinkedIn, once just a professional networking tool, is now also reconnaissance terrain.

Phishing nets billions.

Criminals and state-sponsored actors alike are mining our social profiles to mount precision-targeted spear phishing and business email compromise campaigns. These attacks leverage the public information we all expose – connections, job history, education, job title, reporting lines, career updates, and even the tone of our public posts – to create a frighteningly accurate psychological profile of you. That profile becomes the basis for crafting highly persuasive messages specifically designed to reduce suspicion and motivate you to act.

Perhaps the most striking trend is the growing dominance of social engineering as a mode of attack. Manipulating a human user – via fraudulent emails, messages, or calls – has become the go-to tactic for many cybercriminals. One estimate found three out of every four breaches involved a human actor, whether through errors, stolen credentials, or social engineering trickery.

Generative AI is reshaping the way criminals attack. There has been a seven-fold increase in deepfake-related fraud attempts since 2022. True story: in Hong Kong, an employee was invited by his CFO to a video conference with colleagues, and during the meeting was instructed to wire a total of $25 million to five different bank accounts. The money he wired was real, but the invitation was deepfaked, and all the colleagues on the call were deepfaked AI avatars. While cybersecurity firms and governments are creating tools to detect deepfakes, they are in an arms race with the criminals, who have one significant advantage: the psychology of victims and targets.

Warm, outgoing and a little scatty?

Phishing training delivers approximately zero benefit.

The standard corporate response to head off attacks is to give staff computer-based awareness training. Most of us will have completed it – clicking quickly to the end to tick the compliance box.

The most up-to-date evidence shows that the effect of training like this on real-world behaviour is close to zero.

A more promising angle is to explore the psychology of those who are more susceptible to phishing scams. Personality traits may help us understand who is more vulnerable, or more resilient, to manipulation. In turn, self-awareness of one’s traits can inform better protection strategies – from alerting individuals that they might be more vulnerable, to user education, to tailored, personality-based training interventions in companies.

Modern personality measures are based on the Big Five framework, which describes human personality along five broad, bipolar dimensions of individual difference:

Openness to Experience ranges from intellectual curiosity and artistic sensitivity to being conventional and pragmatic.

Conscientiousness ranges from self-discipline, orderliness, and reliability to spontaneity, disorganization, and impulsivity.

Extraversion contrasts sociability, assertiveness, and energy with reserve, quietness, and a preference for solitude.

Agreeableness describes a continuum from compassion, trust, and cooperativeness to scepticism, competitiveness, and antagonism.

Neuroticism ranges from emotional reactivity and vulnerability to stress, to calmness, emotional stability, and resilience.

Across studies, a fairly clear pattern emerges: three traits seem to make a person more likely to respond to phishing scams, or put them at greater risk of being taken in by sophisticated social engineering attacks. We can think of these people as warm, outgoing, caring, somewhat scattered, and not always careful.

One robust finding is that the more disciplined, methodical and thorough people are, the less likely they are to fall for scams. On the other hand, people who are more flexible – that is, impulsive, intuitive and the kind who react quickly and ask questions later – are exactly the sort who might reactively click a malicious link.

Likewise, people who are warm and trusting, and more eager to help, are more open to exploitation. Agreeable people are also more likely to go along with authority figures, and studies have shown a link between agreeableness and falling prey to phishing.

And finally, sociable, gregarious and outgoing people might also show increased vulnerability, because they engage more readily with emails from people they don’t yet know and are motivated to make new connections.

Interestingly, a strong finding from the research is that people who are confident they wouldn’t be fooled – who believe they could tell a phishing email from a genuine one – are more likely to fall victim. A study of over 800,000 people showed that as confidence increased, detection rates of malicious emails fell. Trusting that “I wouldn’t fall for that” leads to:

• Reduced scrutiny of suspicious emails

• Dismissal of security warnings

• Lower engagement with training

• Failure to verify requests.

So who’s most at risk?

While anyone can fall for a scam under the right conditions, the prototypical profile is of an outgoing, helpful, and confident person who is not especially detail-focused. These are people who pride themselves on being savvy and capable, and who trust their ability to tell when something’s off. They’re friendly and warm, eager to help, and unlikely to push back against authority. They also tend to skim rather than scrutinise.

What makes this combination dangerous is how naturally it maps onto the tactics used in phishing and social engineering: urgency, flattery, authority, and plausibility. The more sociable and trusting you are, the more likely you are to engage. The more confident you are, the less likely you are to double-check. The less methodical you are, the more likely you are to click without thinking. It’s not stupidity—it’s human nature. And it’s precisely this psychological pattern that cybercriminals are now modelling, targeting, and exploiting at scale.

Samantha’s story is the shape of things to come

Samantha, of course, matched this profile almost perfectly. Her warmth and openness made her approachable. Her confidence, rightly earned, made her trust her own judgement. Her responsiveness, ambition, and habit of saying yes – rather than slowing down – meant she walked straight into the trap. She wasn’t naïve; she was simply human. And in the era of AI-powered social engineering, that’s what the criminals are counting on.

*Definitely not her real name.
