I was talking with some colleagues on LinkedIn recently about simulated phishing. Last week, a company struggling during the COVID-19 pandemic sent a particularly tone-deaf simulated phishing message. Employees had been furloughed and salaries had been cut, so when a simulated phishing message claiming to offer bonuses went out, employees were furious.
Cybercriminals don’t have to play by the rules or answer to a moral compass. So why not use every tactic in the book against your own employees?
I think the question answers itself. Sending simulated phishing messages to our employees is a privilege. We need their help in preventing cyber attacks, but we also need them to do their jobs to keep the company in business. If a simulated phishing message were aggressive enough to drive star performers away or to reduce productivity, the company as a whole would suffer. Successful leaders need their teams to trust them, and there is a real risk that an overly aggressive program will prevent a CISO from winning or keeping the trust of the business.
I’ve learned this the hard way running simulated phishing campaigns on campus. It’s easy to create a perfect simulated email that anyone will click. It’s much more challenging to craft a simulated phishing message that is convincing, yet still contains lessons to learn. And we know from research that users learn best in an environment where they feel safe. I think in the long term we set ourselves up for failure by using aggressive tactics that make users feel victimized.
While it’s true that hackers don’t have to follow rules, security teams need to ensure they don’t go too far when sending simulated phishing emails.
One of the biggest concerns when we began sending simulated phishing messages five years ago was whether the program would hurt employee productivity, responsiveness, or morale. We didn’t want SMU users to stop clicking links in legitimate emails, or to stop responding to email altogether, because of the program.
It hasn’t been a huge issue because we follow several rules:
1. We always send an email a week or two in advance of a campaign to give people a heads-up. This allows us to go back to any user who might be upset to show that we provided some notice. Bad guys don’t have to provide notice, but I want users to feel comfortable reaching out to me if they do see a phishing message in the future.
2. We do copy the templates that real hackers use to target employees. Often, the lessons I want to teach are about the context of what “angle” the bad guys are taking when reaching out.
3. We always include multiple red flags in phishing messages to give users ample opportunity to spot them. Bad guys today don’t have bad grammar, but I want to create opportunities to teach specific lessons. Making a phishing message too difficult to detect obscures those lessons.
4. We avoid simulating legitimate SMU business emails or using real names that users would recognize. Bad guys can use real names; however, I don’t want to create a situation where users have a negative impression of that employee moving forward. I would completely destroy my relationship with that person unless I got their permission first.
5. We never reveal the names of users who click on links in simulated phishing messages, and we report only aggregate click-through rates. This is very important because of the point I made earlier – if a person is afraid, they can’t learn or grow. Some believe people should be fired for clicking on phishing messages… however, most organizations aren’t willing to fire their CEOs or Presidents or Vice Presidents for clicking, and often these are the most targeted individuals as well as the biggest stakeholders in your security program. Not enforcing penalties equally destroys your credibility.
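To make rule 5 concrete, here is a minimal sketch of aggregate-only reporting. The `Event` record and field names are assumptions for illustration, not any particular platform’s API; the point is simply that the reporting function never exposes which individuals clicked.

```python
from collections import namedtuple

# Hypothetical event record from a phishing simulation platform;
# the field names here are assumptions for illustration only.
Event = namedtuple("Event", ["user_id", "clicked"])

def aggregate_click_rate(events):
    """Return the campaign's overall click-through rate as a percentage.

    Only the aggregate number leaves this function -- individual
    user IDs are never included in the report.
    """
    if not events:
        return 0.0
    clicks = sum(1 for e in events if e.clicked)
    return round(100.0 * clicks / len(events), 1)

# Example campaign: 3 of 8 recipients clicked.
events = [Event(f"u{i}", i < 3) for i in range(8)]
print(aggregate_click_rate(events))  # 37.5
```

In practice you would likely also bucket results by department or campaign rather than by person, which keeps the trend data useful without singling anyone out.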
As security practitioners, we need our users to trust us. Ultimately, using deception should create more trust with the business, not less.
This month, I’ve started using an opt-in model for simulated phishing by playing a game I call The Biggest Phisher. Our employees can win prizes like vacation days or gift cards, and since they’ve opted in, I have more flexibility to be aggressive with my campaigns and use fewer red flags. Users submit their own phishing messages, so they are more willing to accept devious ones. And we post weekly updates on an anonymous leaderboard that tracks players by codename.
We can play fair and still help our users be more secure.