During GISEC 2024, Telecom Review conducted an exclusive interview with Rodrigo Brito, Head of Cybersecurity Portfolio at Nokia, to discuss the company’s AI-powered security innovation and its commitment to ensuring that GenAI integration is implemented efficiently and securely.

Delving into the influence of GenAI on telco security, could you provide a concise overview of Nokia's journey with AI-powered security operations? Additionally, what lies at the core of this strategy?

First of all, we need to look at the term ‘telco security.’ This is a special combination that poses some challenges. A security engineer not only needs to worry about resolving security incidents but also needs to be an expert in telecommunications networks. With that in mind, we’ve been working with AI for a while to help security engineers deal with this complexity.

We’ve been using descriptive AI to identify outliers. Any security incident or detection that cannot be caught by rules, or that is otherwise unexpected, is flagged as an outlier and sent to the security incident response team.
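To illustrate the idea in the simplest possible terms, a rule-free outlier detector such as an isolation forest can flag events that signature-based rules would miss and escalate them for human review. The sketch below is purely illustrative: the feature set, thresholds, and escalation step are invented for the example and do not reflect Nokia's pipeline.

```python
# Rough sketch of rule-free outlier detection over security events.
# Features and thresholds are invented for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login_failures, bytes_out_mb, config_changes] for one network element.
baseline_events = np.array([
    [1, 120, 0],
    [0, 95, 1],
    [2, 110, 0],
    [1, 130, 2],
])
new_events = np.array([
    [1, 115, 1],      # looks like normal behaviour
    [40, 900, 12],    # unexpected burst -> likely outlier
])

# Fit on known-good baseline behaviour, then score incoming events.
detector = IsolationForest(contamination=0.1, random_state=0).fit(baseline_events)
labels = detector.predict(new_events)  # -1 = outlier, 1 = inlier

for event, label in zip(new_events, labels):
    if label == -1:
        print(f"Escalating to incident response: {event.tolist()}")
```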

More recently, we have taken a step further with generative AI. We are using large language models (LLMs) specifically trained on telecommunications and security. We augmented that training by combining static knowledge, covering security, telecommunications, and security regulations, with dynamic knowledge that makes the large language model aware of the context it is operating in.

The model can answer questions such as ‘Which incidents are open?’, ‘What is the topology of the network?’ and ‘What are the security engineers doing when they access the network elements?’

The end result? The security engineer can chat with the large language model about a specific security incident. The model understands all the dynamics of the environment and can help speed up and automate the resolution actions.
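As a purely illustrative sketch of the pattern described here, retrieving static telecom and security knowledge and combining it with live network context into a single prompt, the snippet below uses invented placeholder names (StaticKnowledgeBase, build_incident_prompt, the live_context fields); none of it reflects Nokia's actual implementation.

```python
# Minimal sketch of combining static knowledge with dynamic network context.
# Every name here is a hypothetical placeholder, not a Nokia API.

class StaticKnowledgeBase:
    """Stands in for a store of curated security/telecom/regulatory documents."""

    def __init__(self, documents):
        self.documents = documents

    def search(self, query, top_k=3):
        # Naive keyword overlap instead of a real embedding search.
        query_words = set(query.lower().split())
        scored = sorted(
            self.documents,
            key=lambda doc: len(query_words & set(doc.lower().split())),
            reverse=True,
        )
        return scored[:top_k]


def build_incident_prompt(incident_id, knowledge_base, live_context):
    """Assemble one prompt about a specific incident for the specialized LLM."""
    incident = live_context["incidents"][incident_id]

    # Static knowledge: curated reference material relevant to this incident.
    references = knowledge_base.search(incident["description"])

    # Dynamic knowledge: the current state of the environment the model operates in.
    context_lines = [
        f"Open incidents: {sorted(live_context['incidents'])}",
        f"Network topology: {live_context['topology']}",
        f"Recent operator actions: {live_context['audit_log'][-5:]}",
    ]

    return (
        "You are assisting a telco security engineer.\n"
        "Reference material:\n" + "\n".join(references) + "\n"
        "Environment context:\n" + "\n".join(context_lines) + "\n"
        f"Question: How should incident {incident_id} be resolved?"
    )
```

In a real system the keyword overlap would be replaced by an embedding index and the resulting prompt sent to the telecom-trained LLM; the sketch only shows how static and dynamic knowledge might be merged into one context.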

How crucial is it to strengthen national and international cooperation in developing cybersecurity defense strategies in today's digital era? Who is Nokia partnering with for AI-powered security innovation?

I think collaboration is key in cybersecurity, even across countries; it is paramount to share threat intelligence, best practices for responsible AI, and technology solutions for challenges like post-quantum security.

In terms of partnerships, industry partnerships are extremely important as well. Nokia is collaborating with Microsoft to offer our customers a specialized, state-of-the-art security solution, combining our telco security knowledge with their AI platform, to build a product for owners of public and private networks.

As GenAI continues to evolve, how do you expect it to influence the role of telco security in maintaining the safety, reliability, and trustworthiness of mission-critical infrastructure?

When a network goes down, there are many impacts on society. The security engineers handling security incidents in such a mission-critical environment need to ensure that the network does not go down so that society at large won’t suffer. Besides understanding the methods used to attack a telco network, they also need to understand the core, transport, and radio technologies.

When we look at all the possible attack vectors and all the technologies involved, coupled with the human factors, I cannot imagine a security operations center running a few years from now without using GenAI to speed up incident resolution and automate responses.

How does Nokia intend to address the long-term cyber risks associated with integrating GenAI to ensure the security of its customers?

I would look at it from two perspectives. One is implementing GenAI solutions on the defense side, which allows our customers to fight the bad guys who are using similar technologies to advance their attacks. We will continue providing state-of-the-art solutions together with our partners, so our customers can defend themselves against sophisticated attacks.

On top of that, it's also very important to have a disciplined method for augmenting the GenAI or LLM models. For example, it's very important to use high-quality data, because the quality of the data we use to augment the models influences their outcomes. Human-curated data must be read, verified, and tested, and the delivery of knowledge-augmentation data should be controlled through CI/CD.
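As a concrete but entirely hypothetical example of such a control, a CI-style gate could reject augmentation documents that lack a verified source or a human reviewer before they are allowed into the knowledge base. The document schema below (source, reviewed_by, text) is an assumption made for the example, not Nokia's format.

```python
# Illustrative CI-style gate for knowledge-augmentation data.
# The document schema is an assumed example format, not a real one.

REQUIRED_FIELDS = ("source", "reviewed_by", "text")

def validate_document(doc: dict) -> list[str]:
    """Return a list of problems; an empty list means the document may be ingested."""
    problems = [f"missing field: {field}" for field in REQUIRED_FIELDS if not doc.get(field)]
    if doc.get("text") and len(doc["text"]) < 20:
        problems.append("text too short to be a meaningful knowledge entry")
    return problems

def gate(documents: list[dict]) -> bool:
    """Fail the pipeline run if any document does not pass validation."""
    ok = True
    for doc in documents:
        problems = validate_document(doc)
        if problems:
            ok = False
            print(f"REJECTED {doc.get('source', '<unknown>')}: {problems}")
    return ok

if __name__ == "__main__":
    sample = [
        {"source": "3GPP TS 33.501 summary", "reviewed_by": "analyst-1",
         "text": "Security architecture and procedures for 5G systems..."},
        {"source": "pasted forum snippet", "reviewed_by": "", "text": "trust me"},
    ]
    if not gate(sample):
        raise SystemExit(1)  # block the CI/CD run so unverified data never ships
```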

These methods prevent manipulated data and, with it, manipulated outcomes. We are committed to preventing this.

We have teams working to ensure that the GenAI/large language models within our security products, as well as all the augmentation data that we feed into the system, are carefully verified to ensure the quality of the outcome.
