Generative AI isn’t new, but releases by artificial intelligence developers such as OpenAI and Stability AI in 2022 represent real leaps forward in the availability and usability of generative AI applications — advances so significant that the internet is filled with predictions of doom.
Humanity hasn’t ceased to exist in the months since chatbots became super-powered, however, and cooler heads are pondering the implications of these technologies, industry by industry.
To understand the cybersecurity impact, BizTech convened a roundtable of security leaders — Equifax CISO Jamil Farshchi, Mastercard Chief Security Officer Ron Green and Patricia Titus, CISO of the insurance company Markel — and asked them for their takes on the challenges and opportunities they see on the horizon.
BIZTECH: What was your initial reaction to the news of breakthroughs around generative AI such as ChatGPT?
Farshchi: My first thought was, “Man, this is really cool! There’s a lot of opportunity here.” But as a security guy, my next thought was that it’s going to generate some risks for us. Social engineering is going to get really bad, and I think a whole host of additional risks will manifest themselves over time.
Titus: To be honest with you, it struck fear in my heart. The threat actors have gotten more sophisticated and better at what they’re doing. This is going to take them to a new level. Our industry has gotten a lot better about reacting very quickly to changes in the behavior of threat actors. So, I’m interested to see what will come out of the cybersecurity startup community.
Green: The generative AI stuff pops up and it’s like, “Oh, man! We’ve got to work on that, but the financial services sector can’t take as long to think about it as we did with cloud.”
The speed of availability to everybody, that was a surprise. And then, “Wow, we’ve got to get in front of this so that we can use it appropriately.” There’s a lot of goodness that comes with it, but then there’s also the opportunity for badness.
BIZTECH: What threats does AI pose to your company’s network security?
Green: First, how will bad actors intentionally use it for evil? We’re seeing the bad guys use it, not in ways that are like their machines attacking our machines, but in ways that make attacks, such as phishing, even more difficult for humans to discern. A lot of the things that we point out — grammar errors, spelling errors, urgency — generative AI smooths all that out for bad actors.
Second, as we take advantage of the opportunity, what might we do to cause inadvertent lapses or weaknesses in our security?
When you first adopt stuff, you don’t know quite what you’re doing. And then, because of that, you’ve created a big old hole. There are a number of rules that we put in place. We want our people to experiment with it, try it, but we’re not going to let them run wild and throw it into production.
You need to be smart about it but not too slow. In the next 12 months, I think you’ll see the financial services sector come up with the guardrails that we’ll need to enable practical business uses around generative AI.
Titus: I think we recognized, with the advent of cloud and Software as a Service applications, that there is no border anymore. There’s a lot of gray space where our data and our people might be working; the traditional concept of perimeter security is gone. For every bit of good we do, someone figures out how to get around it, so I’m sure the threat actors will be fast followers.
It’s my tagline: “Security is a journey, not a destination.” It’s a constant cat-and-mouse game. Deepfakes will create a new level of social engineering that we haven’t seen before. When you get into the ability to make a voice sound identical to someone’s, that’s going to be a problem.
Farshchi: I think that it’s a short- and long-term kind of discussion. In the short term, I don’t know that there’s a whole lot that I’m thinking about changing. When you look at what it offers today, what does it do? It potentially increases the volume of attacks because it reduces the barrier to entry in the space.
Most of us are dealing with tens of thousands if not millions of attacks a day anyway, so what’s a few more? In the short term, that’s not a huge deal. It’s the long-term picture where this whole thing starts to really change.
I think the winners are going to use generative AI for the greater good of their organizations. By doing that, we’re going to be able to offset — in fact, I think we’re going to be able to largely eliminate — a lot of the risks that generative AI creates, by virtue of using generative AI itself.
BIZTECH: How are your security solutions using AI now? How would you like them to apply the technology in the future?
Green: We’re using orchestration and machine learning to help in our security operations. Some of the technologies can learn from the way that our operators review. That becomes an iterative task, and the machine can now handle that. That leaves the people to then think about the harder things that require more attention.
Farshchi: I am trying to figure out ways that we can incorporate this new technology into our security stack. When I look at AI, its strengths fundamentally lie in models and in data. And when I look at it from an internal perspective, I’m thinking to myself, who knows more about the way my environment works or the models themselves than me? Nobody does. Who has more data about my organization from a security standpoint than me? Nobody does. With the right AI toolkit, I’m able to put all that to work and see a meaningful differentiation against the attackers that would be using generative AI against me.
Titus: We’ve had ML and AI in our tools for some time, either through behavior detection or user behavior analytics. The capability is there, and it’s good and it’s working. But I do think we need capabilities to inventory generative AI, to find what we don’t know. Some of the regulations, like the European Union’s AI Act, are going to ask us to inventory AI. We’re going to need tools that automatically look for places where someone might be using AI without our knowledge.
User prompt to ChatGPT (GPT-3.5):
How great a challenge do hackers armed with generative AI pose to businesses?
ChatGPT:
Hackers armed with generative AI pose a significant and evolving challenge to businesses in the realm of cybersecurity. Generative AI, including variants like GPT-3 (the technology behind this AI), can create remarkably convincing and sophisticated content, mimicking human writing patterns and styles.
Here are some key challenges posed by hackers armed with generative AI to businesses:
Phishing and Social Engineering: Generative AI can craft highly convincing phishing emails, messages, or voice recordings that trick employees or customers into divulging sensitive information like passwords, credit card numbers, or access credentials.
Malware and Malicious Code Generation: Hackers can use generative AI to create malware variants that evade traditional detection methods by generating new, unique malicious code that cybersecurity systems may not recognize.
Automated Attack Variants: Generative AI can automate the creation of diverse attack vectors, making it challenging for cybersecurity defenses to keep up with evolving threats. This could include generating different attack scenarios, exploiting vulnerabilities, or devising novel intrusion strategies.
Spear Phishing and Targeted Attacks: Hackers can use generative AI to tailor attacks specifically for a particular organization or individual, making the phishing attempts highly personalized and difficult to detect.
Fake News and Disinformation Campaigns: Generative AI can be used to create highly persuasive and authentic-looking content, including fake news articles or social media posts, which can be leveraged for spreading disinformation, damaging a company’s reputation, or influencing public opinion.
Password Cracking and Brute Force Attacks: Generative AI can accelerate password cracking and brute force attacks by generating and testing a vast number of potential passwords or encryption keys.
Voice and Video Deepfakes: Generative AI can produce convincing deepfake videos or audio, potentially leading to scenarios where a hacker impersonates an executive or another trusted figure within a company to manipulate or deceive employees.
Evading AI-Based Security Systems: As AI-based security systems become more prevalent, hackers using generative AI may attempt to create attacks that evade detection by these very systems, leading to a constant arms race between cybersecurity and malicious AI.
To mitigate these challenges, businesses need to invest in advanced cybersecurity measures, including AI-powered security solutions that can detect and respond to attacks leveraging generative AI. Employee training and awareness programs, multi-factor authentication, regular security updates, and robust incident response plans are also crucial in building resilience against the evolving threat landscape. Collaboration between cybersecurity experts, AI developers, and policy makers is essential to stay ahead of emerging threats in this domain.