Social Engineering: A Serious Risk to Your Organization’s Secure Data…And Overall Facilities Safety

In the industries we serve, safety is always at the forefront of your mind. From the staff in the field to those in your offices, keeping a focus on safety is the number one priority. Another top priority is keeping your critical business data secure…and one specific attack remains a high-risk, quickly evolving tactic: social engineering.

What is Social Engineering?

Cybercriminals deploy a litany of methods to breach your organization. Social engineering is a common tactic, but one many people may not truly understand. Unlike other threats, social engineering is a psychological attack that can result in your own staff unknowingly opening the proverbial door to cybercriminals, putting your data – and your reputation – at risk.

While social engineering might be new to some, you are familiar with it in many forms, most notably phishing. Phishing is the most common form of social engineering and something you have likely been warned about since you were assigned your first email account. Phishing is not limited to email, though; SMS, social media and other forms of personal communication fall into the same classification.

According to the FBI, upwards of 98% of cyberattacks use a form of social engineering to cause a breach. But what does that mean? It means that cybercriminals are capitalizing on the human element and manipulating it every chance they get.

Why is Social Engineering So Effective?

First and foremost, social engineering attacks target people. They are not systematic attacks against technical defenses, such as a brute force attack. Social engineering tactics attempt to trick staff into clicking a link and either downloading and executing ransomware or inadvertently sharing credentials that grant hackers direct access whenever they desire. Cybercriminals are looking for an access point, and with social engineering, your staff is the target.

As technology evolves, so do cybercriminals. They have moved beyond mass phishing to an attack called whaling, which is phishing personalized and targeted at one particular high-value victim, often an executive. Over time, they build rapport as they set the stage for the attack. Hackers also employ watering hole attacks, in which they compromise an established website and trick visitors into sharing their credentials or downloading malware, unbeknownst to both the site owners and the users.

The bottom line is that they are getting cleverer. Even the most educated, most careful person can slip up, and not for lack of knowledge or out of carelessness. The cybercriminals are simply that good at exploiting the human element.

The success of social engineering hinges on one thing: deception. Attackers build trust, gain access and execute their nefarious plans, often without the victim ever realizing they were targeted and compromised.

What Can Your Organization Do?

Cybercriminals are harnessing technology to keep their attacks evolving, staying ahead of trends and leveraging new tools to harvest vital information and wreak havoc on your systems.

Your staff is your first line of defense. It is critical that all staff are aware a single misstep can relinquish control over a network or grant access to critical information, negatively affecting your organization. In this regard, education is the key to safeguarding your data and securing physical assets. It also requires constant, daily reinforcement of your security policies, backed by a solid cybersecurity plan securing your infrastructure.

It is imperative your staff is always on guard, questioning the legitimacy of each email, social media contact and even phone call into your organization.

Yes, phone calls.

Why?

Because it is now easier than ever to convincingly clone someone’s voice. And it could even be your CEO’s.

…Which Brings Us to Deepfakes

Machine learning is not new, and its impact across the world and every industry is revolutionary. Artificial Intelligence (AI) is among the fastest-adopted technologies in history, and it is leading the charge to develop new tools to protect against cybercriminals. The downside is that those very same hackers can use AI to assist their attacks as well.

With all the good AI brings, there is a burgeoning risk, and it is called deepfakes.

Deepfakes are media that have been created or altered by AI. You’ve probably seen one and didn’t realize it wasn’t real. Many are so convincing that you cannot tell they were computer generated unless you catch a subtle flutter of an eyelid or a slightly mechanical hand movement. While this technology may have initially been developed for entertainment purposes, it was quickly reimagined for criminal behavior – and your organization could be targeted.

Deepfakes are bringing greater challenges to organizations in the form of identity theft and fraud. AI can be used to create fake identities and realistic documents. Moreover, the technology could be used for misinformation campaigns, spreading a false narrative about your organization, your practices and the reliability of your services. Most concerning is the opportunity it brings to manipulate video and fabricate incidents at your organization, leading to false accusations or regulatory issues. All of these scenarios can damage your reputation and your customers’ trust – and can even lead to market manipulation.

The main concern is deepfake technology being used in a social engineering attack to impersonate executives or employees, granting access to sensitive information or to secure areas within your facilities. With the ability to impersonate virtually anyone, hackers can now launch elaborate social engineering attacks with alarming precision.

How Can You Mitigate These Risks?

While deepfakes are highly targeted and hard to detect, spotting them is not impossible. As with social engineering attacks, knowledge is power. Work with your staff to bring greater awareness to synthetic content and how to spot it, educate them to be highly critical of online personas and encourage them to report anything that seems a bit “off.”

While social engineering targets and exploits humans, there is one element neither cybercriminals nor AI can account for: human intuition.

Human intuition is a powerful tool. Harness it. Reward it. Encourage it. It could ultimately be the final line of defense against these new AI-powered attacks.