FBI Warns Against a New Cyberattack Vector Called Business Identity Compromise (BIC)
The FBI warns that synthetic content may be used in a “newly defined cyberattack vector” called Business Identity Compromise (BIC).
Imagine you’re on a conference call with your colleagues discussing the latest sales numbers: information your competitors would love to get hold of.
Suddenly, your colleague Steve’s image flickers. It catches your attention, and when you look closer, you notice something odd. It looks like Steve, and it sounds like him, but something seems off. The area around his face appears to shimmer, and its edges look blurry.
You write it off as a technical glitch and continue the meeting as usual, only to find out a week later that your organization suffered a data leak and the information you discussed during the meeting is now in the hands of your biggest competitor.
This sounds like a plot from a bad Hollywood movie. But with today’s advancements in technology, like artificial intelligence and deepfakes, it could happen.
Deepfakes (a blend of “deep learning” and “fake”) can be videos, images, or audio generated by machine learning algorithms. A common technique, the Generative Adversarial Network (GAN), pits two neural networks against each other: one generates synthetic content while the other tries to detect the fakes, and each round of this contest makes the output more convincing. The result can be used to superimpose synthesized content onto real footage or to create entirely new, highly realistic content.
And with the increasing sophistication of GANs, deepfakes can be incredibly realistic and convincing. Designed to deceive their audience, they are often used by bad actors to facilitate cyberattacks, fraud, extortion, and other scams.
The technology has been around for several years and was already used to create fake graphic content featuring celebrities. Initially, creating a deepfake was a complicated endeavor that required hours and hours of existing footage. But the technology has now advanced to the point where almost anyone, without much technical knowledge, can use it.
Anyone with a powerful computer can use programs like DeepFaceLive and NVIDIA’s Maxine to fake their identity in real time. And for audio, tools like Adobe VoCo (demonstrated back in 2016) showed that software can imitate someone’s voice convincingly. This means you can join a Zoom or Teams meeting looking and sounding like almost anyone else. Install the program, configure it, choose a pre-generated identity or supply one you created yourself, and you are good to go. It really is that simple.
That is one of the reasons organizations are so wary of deepfakes. The ease of use. Combine that with realistic content, and it can become scary very fast. How would you like it if a scammer used your identity in a deepfake? In today’s digital age, where business is just as easily done through a phone or video call, who can you trust?
And this is one of the fundamental dangers of deepfakes. When used in an enhanced social engineering attack, they are intended to instill a false sense of trust in the victim. Because of this danger, the FBI issued a warning about the rising threat of synthetic content, even going as far as giving these attacks a new name: Business Identity Compromise (BIC).
So, what can you do to protect yourself from deepfakes? Can you defend against a form of attack specifically designed to fool us? Yes, you can, but given the pace of technological advances, it isn’t easy. Things designed to trick your senses generally succeed.
Investing in a Security Awareness Training program and implementing more robust cybersecurity measures like Zero Trust can be good places to start. Our cybersecurity professionals at Yeo & Yeo Technology can review and reconfigure your existing IT infrastructure to enhance security and meet your business goals.