Lessons Learned from 5 Real Insider Threat Examples

An insider threat can be devastating to a business. But what are these types of attacks, and what do they teach us about how to protect data and systems?

We’ll look at several real-world examples, drawing out the lessons each one teaches and the solutions that put those lessons into practice.

What Is an Insider Threat?

An insider threat is any kind of cyberattack or data breach that happens because of someone inside a company or organization. That can mean an employee, a contractor, or anyone else with access to trade secrets or sensitive data.

In some cases, the insider threat involves a malicious employee working on their own to damage or steal company data. Other situations involve collusion between internal and external parties, or situations where outside hackers exploit an employee or contractor to get access to sensitive information or systems.

Although there are many different types of insider threats, and each one tends to look a little different, there are some general principles and guidelines that can help companies to avoid this kind of problem.

5 Real Insider Attack Examples and Their Consequences

Some prominent insider threats affecting large companies show us how these types of attacks happen, and how they can be avoided or mitigated.


1. Coca-Cola

At the Coca-Cola Company, a few years ago, a high-ranking engineer was indicted for taking trade secrets and delivering them to parties associated with Chinese companies and the Chinese government.

In this case, the guilty party was a principal engineer for global research. That has led some company leaders to assert that it would have been hard for them to restrict that person’s access to sensitive data.

Still, there are lessons to take away from this insider threat. Rather than limiting access, monitoring how the data itself was handled and applying stricter data controls could have detected, or even prevented, the exfiltration Coca-Cola fell victim to. The employee’s access level also matters: the more access a malicious insider has, the more damage they can do. Privileged user monitoring specifically addresses this issue.

Then there is the question of the format in which sensitive data is kept. Storing the data in less accessible formats might have made it harder for even a top-level employee to pilfer it and send it away unnoticed. For example, some of the chemical analysis could have been kept in non-reproducible PDF form, where simple copy-pasting would be more difficult. Some companies even glue USB ports shut to prevent file transfers.


2. Tesla

Another major insider threat scenario happened at Tesla, Elon Musk’s electric car company, a leader in its field and an enormous brand name on the American stock market. It even drew a response from the technology mogul himself.

Musk said the hacker caused “quite extensive and damaging sabotage” to the company by exporting large amounts of data, including photo and video assets, along with many gigabytes of Tesla data associated with the company’s MOS source code.

One takeaway from this insider threat involves being able to monitor each user account, and verify each one independently. If the hacker needed several accounts in order to pull off the caper, this type of analysis may have prevented the plan from working.

Also, logging and monitoring the use of sensitive data in real time could have caught this cybercrime in action. If human operators had been looking closely at user behavior analytics, they could have been tipped off in a number of ways: by the abnormal numbers of accounts being created, or by things like time-stamps. Another example of better IAM is privilege escalation monitoring, where observers can raise red flags when it looks like someone is doing something above their pay grade.
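The kinds of signals described above can be caught with fairly simple rules. The sketch below flags a burst of account creations and off-hours activity from an event log; all of the usernames, timestamps, and thresholds are illustrative assumptions, not details from the Tesla incident.

```python
from collections import Counter
from datetime import datetime

# Hypothetical event records: (username, action, ISO timestamp).
EVENTS = [
    ("svc_build01", "account_created", "2018-06-10T02:14:00"),
    ("svc_build02", "account_created", "2018-06-10T02:16:00"),
    ("svc_build03", "account_created", "2018-06-10T02:19:00"),
    ("jdoe", "login", "2018-06-10T09:05:00"),
]

BUSINESS_HOURS = range(7, 20)  # 07:00-19:59 counts as normal

def flag_anomalies(events, creation_threshold=2):
    """Return alerts for account-creation bursts and off-hours actions."""
    alerts = []
    # Count account creations per calendar day (first 10 chars of timestamp).
    created = Counter(e[2][:10] for e in events if e[1] == "account_created")
    for day, n in created.items():
        if n > creation_threshold:
            alerts.append(f"{n} accounts created on {day}")
    # Flag any action outside normal working hours.
    for user, action, ts in events:
        if datetime.fromisoformat(ts).hour not in BUSINESS_HOURS:
            alerts.append(f"off-hours {action} by {user} at {ts}")
    return alerts
```

Real user behavior analytics products do far more, but even this toy rule set would surface three pre-dawn account creations on the same day as a single combined alert plus per-event flags.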


3. Twitter

This one also happened in the earlier days of the coronavirus pandemic.

At Twitter, three people were charged with using a phone spearphishing attack to compromise the accounts of a small number of employees and then hijack the Twitter accounts of some very big names, including Jeff Bezos.

The black hat hackers then used these prominent profiles to run a Bitcoin scam, making it appear that the account owners were giving away cryptocurrency.

Post-incident analysis revealed that the employees involved had access to internal tools as well as data.

So one takeaway is to protect systems, not just data, from cyberattack, and to practice good identity and access management so employees can’t run amok with corporate tools. The processes through which Twitter accounts get updated would have been a good place to start: lock down the actual privileges attached to publishing changes to public profiles. Broader analysis of user behavior will often turn up abnormalities that help identify suspicious actions on the network.


4. Cisco

In the case of Cisco, a former employee was able to delete 456 virtual machines and compromise parts of the company’s WebEx Teams application, which handles things like video meetings and file sharing.

The 2018 attack was carried out by an employee who had resigned five months earlier. Working from his own Google Cloud resources, the attacker reportedly gained access to Cisco’s AWS-hosted cloud systems and damaged the parts of the virtualization platform mentioned above.

Cisco spokespersons cited a low dwell time for this attack, and said the company added safeguards after the fact. But the attack underscored the need to scrutinize cloud vendors, and it pointed to a failure to properly decommission the accounts of departed employees. Attacks that happen after someone has left the company can be avoided by fully cutting their digital ties to its infrastructure.
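Cutting those ties can be as simple as routinely cross-referencing active accounts against the HR roster. The minimal audit below does exactly that; the account names and data structures are illustrative assumptions, not details from the Cisco incident.

```python
# Accounts still enabled in the identity system (hypothetical).
ACTIVE_ACCOUNTS = {"asmith", "bjones", "former_dev"}
# People currently on the payroll, per HR (hypothetical).
CURRENT_EMPLOYEES = {"asmith", "bjones"}

def orphaned_accounts(active, employed):
    """Accounts still enabled for people no longer with the company."""
    return sorted(active - employed)

def deprovision(active, employed):
    """Return the account set after revoking all orphaned access."""
    return active & employed
```

Run on a schedule, a check like this surfaces accounts such as "former_dev" the day the roster changes, rather than five months later.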

On the VM side, there’s also the question of how well a company handles VM sprawl: decommissioning old machines (as well as old accounts, as mentioned above) and carefully counting the nodes in a virtualization schema.



5. Target

Target’s massive data breach happened back in late 2013, and showed the world what insider-related threats can do, generating quite a bit of press at the time.

In the space of less than one month, the attackers compromised up to 110 million payment cards and personal records.

The attack involved malware on point-of-sale infrastructure that siphoned off the financial and identification data of shoppers, some 11 GB of data all told.

In Target’s case, the source of the data breach was instructive. Attackers exploited something very specific – Target’s account with a vendor that provided Internet-connected HVAC services, according to Computerworld reporting.

So in this case, the abuse of credentials happened in the field of remote facilities management.

The source of this insider threat showcases how all vendors, even those not directly connected to merchant transactions or internal core services, need to be properly vetted. It also shows how new markets like facilities management can unfortunately blossom without appropriate safeguards and controls, because companies haven’t figured out the security end of a new type of vendor provision.

As part of its response, Target cited updated access controls and limited access to parts of its platform, but that came only after the breach had already occurred.

One aspect of hardening these systems would be to isolate the vendor’s access to only the parts of the network that deal in facilities management – so that somebody in that capacity can’t get the other sensitive data at all. The principle of data segmentation can be important here, as a safeguard against people coming in from a tangential part of the system and getting core data assets. But good identity and access management works, too, in order to keep an eye on what contractors and vendors are doing when they do have access.
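That kind of segmentation boils down to a deny-by-default policy: each vendor role maps to the few network segments it legitimately needs. The sketch below shows the idea; the role and segment names are assumptions for illustration, not Target’s actual architecture.

```python
# Each vendor role is granted only the segments it needs (least privilege).
VENDOR_SEGMENTS = {
    "hvac_vendor": {"facilities"},        # remote facilities management only
    "pos_maintainer": {"point_of_sale"},  # register maintenance only
}

def access_allowed(role, segment):
    """Deny by default; allow only segments explicitly granted to the role."""
    return segment in VENDOR_SEGMENTS.get(role, set())
```

Under this policy an HVAC vendor’s credentials, even if stolen, open the facilities segment and nothing else; the point-of-sale network never answers to them.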

Insider threat prevention through endpoint monitoring

Types of Insider Threats

As mentioned, types of insider threats vary depending on how attackers are able to leverage the assistance of someone inside the organization.

Negligent Employees and Passive Threats

In some cases, the insider threat happens because of something that employees simply failed to do. Somebody dropped the ball on proactive cybersecurity, ignoring standard practices or neglecting more active systems management.

In these negligence scenarios, no one inside the company is actively rooting for its exposure to malware or other damage. The attackers are outsiders who walked in through an open door left by a missing security component: perhaps effective identity and access management, proper firewalls, or event logging and behavior monitoring. When some deficiency like this is apparent, people often categorize the threat as a “negligence-related” problem.
Disgruntled Employees

Other insider threats are classic examples of what happens when someone is not happy with an employer, or, in some cases, a previous employer.

In the examples provided, we see employees stealing data or compromising systems of their own employers. We also see other cases where the attacker had already left the company prior to a threat emerging.

But in these “disgruntled employee” cases, the source of the attack is very clear: a rogue agent with a reason to target the company acted out of malice or a grievance against it.

Collusion with Third Parties

Then there are those insider threats where employees or contractors are actively colluding with outsiders. They may be promised large sums of money, recruited through the dark web, or otherwise enticed to give up access, credentials, or permissions. The mastermind may be an outsider, but he or she gets in with an insider’s help.


Social Engineering and Spearphishing

Another category of insider threats has more to do with active work by hackers to seek out attack vectors in any way they can. Often, people are the weakest link.

Social engineering attacks include situations where hackers pretend to be someone they’re not, or create honeypot scenarios to trap unsuspecting users, or otherwise trick someone into giving up the goods.

Spearphishing can happen through email, or through text messaging, over the phone, or through some other company platform. In these days of multichannel communications, the attack surface available to spearphishers is wide open.

However it happens, an insider threat enabled through trickery can be devastating, and trickery is as old as humanity itself.

Double Agents

Then there’s also a type of insider threat where there’s a ‘mole’ or ‘double agent’ inside the company. Procedures like penetration testing and physical site security may help, but employees and contractors also need thorough vetting. And if the double agent is good enough, the threat becomes even harder to detect, as in the Coca-Cola case above. There is simply no reliable way to know whether a senior, trusted specialist might be willing to sell out secrets. It all comes back to data security that assumes the worst and builds more ironclad protections.


Possible Consequences of Insider Threats

The above cases show what happens when companies suffer from insider threats. The results can harm a business profoundly in a few different ways.

First, there’s the actual cost of the attacks: a widely cited Ponemon study puts the average annual cost at $15.38 million per affected organization, noting in February of 2022 that the frequency of insider threats had increased 44% since 2020.

There’s also the reality that as the details are made public, the company’s name is splashed across various headlines – and not in a good way. Loss of reputation is one of the major soft costs of successful insider attacks.

Businesses may also face costs related to industry standards compliance and penalties from regulators. In any medically related business, HIPAA rules can trigger major penalties for compromised PHI (Protected Health Information), and companies handling payment data must answer to industry standards such as PCI DSS.

Insider Threat Statistics

More numbers tell us a lot about the incidence rate of insider attacks: an estimated 2,500 internal security breaches happen daily, and around 34% of businesses are affected each year. There has also been a double-digit rise in insider attacks over the last two years (pegged at 44-47% by some expert sources).

How to Detect and Identify Insider Threats

Using User Activity Monitoring

One of the most fundamental ways to protect data is to have a robust system for observing what users are doing on a network. Teramind’s comprehensive user activity monitoring toolkit, paired with user and entity behavior analytics (UEBA) capabilities, supercharges insider threat detection efforts through the power of big data.

Reviewers can assess what time activities take place, what files and folders were accessed and viewed, and who initiated the session. Through UEBA, Teramind aggregates user activity to establish behavior baselines, then detects anomalous behaviors indicative of threats. All of this together amounts to a real source of cybersecurity knowledge that can nip insider attacks in the bud.
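At its core, this baseline-then-detect approach compares today’s activity against a user’s own history. The toy example below flags a daily file-access count more than three standard deviations above the user’s baseline; the data and the 3-sigma threshold are illustrative assumptions, not Teramind’s actual scoring model.

```python
import statistics

# Hypothetical baseline: files accessed per day over the past two weeks.
HISTORY = {
    "jdoe": [12, 9, 14, 11, 10, 13, 12, 11, 9, 12, 10, 13, 11, 12],
}

def is_anomalous(user, todays_count, sigmas=3.0):
    """Flag counts more than `sigmas` standard deviations above baseline."""
    baseline = HISTORY[user]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return todays_count > mean + sigmas * stdev
```

A user who normally touches a dozen files a day but suddenly pulls hundreds trips the threshold immediately, while ordinary day-to-day variation does not.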

Another pillar of the superior business cybersecurity approach is training. This will help to prevent some of the spearphishing and social engineering attacks, or negligence-related attacks, mentioned above. Additional training can show staff how to harden parts of the system, and what not to do in order to avoid empowering hackers. The activity monitoring, as an analysis tool, can also inform the training: after reviewing where threats might be, and looking at weaker areas of operations, planners can build that into training sessions. Understanding how your workforce behaves helps create focused cybersecurity training that targets the risks and vulnerabilities that exist among users on your network.

Detecting and Identifying Insider Threats Resources

Teramind’s comprehensive monitoring platform has the resources to help companies achieve their cybersecurity goals and combat insider threats. By collecting and analyzing desktop behavior, and with tools like proprietary risk scoring, Teramind’s user behavior intelligence system girds companies against threats from within. Automated alerts and responses to violating behaviors, plus highly scriptable rule logic, equip companies to detect and prevent the insider threats specific to their organization. Teramind DLP helps companies strengthen their defensive posture and preemptively meet the threats they face.

Take Teramind for a spin with our Live Demo!