
Artificial intelligence: ally or threat in the face of cybercrime?
Thursday, March 7, 2024 - 10:05

Artificial intelligence, especially generative AI, can play an important role in the fight against cybercrime. But while many analysts advocate its defensive use, its use to refine cybercriminal attacks is also growing.

Ubiquitous artificial intelligence (AI) and its growing adoption in industry and society were the technology story of 2023, and will probably remain so throughout 2024.

Although it does not yet account for a large share of the major technology firms' profits, Microsoft, Alphabet, Amazon, Apple and Meta all reported in January that monetization from the use of AI is growing. Global spending on AI is estimated to exceed US$500 billion by 2027, a figure that includes workforce training, with special emphasis on cybersecurity measures. MarketsandMarkets, for its part, predicts that the AI market will grow 23.3% annually through 2026, generating more than US$397 billion by early 2028.

Within all this promise of change, however, clouds are darkening the horizon. Alongside its advance for the benefit of companies, generative AI is playing an increasingly important role in making cyberattacks more sophisticated, and it is making phishing ever harder to detect, according to the report Digital Technologies for a New Future, prepared by ECLAC.

Enemy or ally?

In some ways, generative AI is becoming the “best friend” of cyberattackers. "Cybercriminals are using AI to refine their techniques. They can now find network vulnerabilities faster, impersonate identities more convincingly and automate phishing attacks," says Gery Coronel, Check Point country manager for Chile, Argentina and Peru.

The irony is that this same technology promises to help detect and repel those threats, through systems that learn behavioral patterns and identify anomalies more accurately. "It is a fundamental tool in an era of intense cyberattacks that are increasingly difficult to manage at human scale," warns Fabiana Ramírez, computer security researcher at ESET Latin America.

Today, generative AI models can learn the normal patterns in network traffic, user behavior and other security-related data. "When significant deviations from these patterns are detected, a possible attack can be identified. This is a major advance, because threat identification becomes a faster and more efficient process," adds Fabiana Ramírez.
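A minimal sketch of this idea, using the open-source scikit-learn library: the model is trained only on "normal" traffic and flags sessions that deviate from the learned baseline. The features (bytes sent, request rate, distinct ports) and values are assumptions invented for illustration, not any vendor's implementation.

```python
# Anomaly-detection sketch: learn "normal" traffic, flag deviations.
# Features (bytes sent, request rate, distinct ports) are illustrative
# assumptions, not taken from any product described in the article.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated baseline: normal sessions cluster around typical values.
normal_traffic = rng.normal(loc=[500, 20, 3], scale=[100, 5, 1], size=(1000, 3))

# Train only on normal behavior; the model learns its boundaries.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# New sessions: one ordinary, one far outside the learned pattern.
new_sessions = np.array([
    [520, 22, 3],      # looks like baseline traffic
    [50000, 900, 60],  # huge transfer, high request rate, port-scan-like
])
print(model.predict(new_sessions))  # 1 = normal, -1 = flagged as anomalous
```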

In addition to generative AI, several other branches of artificial intelligence are used to address cybersecurity. There is, for example, machine learning, and there are neural networks used to model sequences of data, such as network traffic patterns or temporal sequences in event logs. "They are useful for detecting complex patterns and attacks that can evolve over time," emphasizes Ramírez.
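As a toy illustration of the sequence idea, the sketch below learns which event-to-event transitions are common in normal log sessions and scores new sequences by how many unseen transitions they contain. The event names are invented for illustration; production systems typically use recurrent or transformer networks rather than this simple transition count.

```python
# Toy sequence model for event logs: learn which transitions are common
# in normal sessions, then score new sequences by how unusual theirs are.
# Event names are illustrative assumptions. Requires Python 3.10+.
from collections import Counter
from itertools import pairwise

normal_logs = [
    ["login", "read_file", "read_file", "logout"],
    ["login", "read_file", "write_file", "logout"],
    ["login", "write_file", "logout"],
]

# Count transitions observed in normal behavior.
transitions = Counter(pair for log in normal_logs for pair in pairwise(log))

def anomaly_score(sequence):
    """Fraction of transitions never seen in the normal logs."""
    pairs = list(pairwise(sequence))
    unseen = sum(1 for p in pairs if transitions[p] == 0)
    return unseen / len(pairs)

print(anomaly_score(["login", "read_file", "logout"]))  # 0.0: familiar path
print(anomaly_score(["login", "dump_credentials",
                     "exfiltrate", "logout"]))          # 1.0: novel path
```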

But it is not hard to find examples of cyberattacks built on malicious uses of AI either.

During 2022 and 2023, for example, deepfakes spread: AI-generated images impersonating famous figures such as Elon Musk and Bill Gates, calling for investment in non-existent projects and cryptocurrencies. Targeted phishing emails also multiplied, with success rates of up to 70%, as did exploits (functional programs or code that take advantage of vulnerabilities) and malicious chatbots built and trained with AI to deceive victims.

Trained models

To counter this, the most common approach among companies has been to train machine learning models on large data sets that include both known threats and attack patterns and normal, safe behavior in their systems. In this way, the AI can learn to distinguish between what is dangerous and what is not. "When the model is trained for these purposes, the result is models that are deployed to monitor in real time and detect anomalous activities and suspicious patterns that could indicate an attack. Around 80% of the organizations adopting AI in their applications use it to detect anomalies in cybersecurity," says Pablo Prieto, digital business manager at TIVIT, a technology multinational that handles cloud cybersecurity for companies.
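A minimal sketch of that supervised approach, again with scikit-learn: a classifier is trained on examples labeled malicious or benign, then used to score new observations. The synthetic dataset, feature names (URL length, digit count, special characters) and thresholds are all assumptions for illustration, not TIVIT's actual pipeline.

```python
# Supervised sketch: train on examples labeled malicious vs. benign so the
# model learns to distinguish them, then score new events at deploy time.
# Features and data distributions are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic training data: benign samples (label 0) and phishing-like
# samples (label 1) drawn from different distributions.
benign = rng.normal([30, 2, 1], [8, 1, 1], size=(500, 3))
malicious = rng.normal([80, 12, 6], [15, 4, 2], size=(500, 3))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

# Deployment step: score a new, suspicious-looking observation.
print(clf.predict([[95, 14, 7]]))  # [1] -> flagged as likely malicious
```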


Developing and implementing these AI solutions in cybersecurity requires interdisciplinary teams: data scientists, AI engineers, cybersecurity analysts versed in threats and vulnerabilities, cloud infrastructure architects, software developers and AI ethics experts, among others, according to Prieto. But such a team of professionals is neither easy to assemble nor cheap.

Neither is the investment needed to implement generative artificial intelligence in companies, which can vary depending on several factors, such as the scope of the project, the scale of implementation, the complexity of the solutions required and the resources available, according to ESET. To this must be added the acquisition of powerful hardware, especially GPUs (graphics processing units), and of development tools specific to deploying generative models. Research firm Acumen Research projects that the global market for AI-based cybersecurity products will reach US$135 billion by 2030.

At a micro level, the investment required in AI solutions for cybersecurity also varies widely, depending on each organization's needs. "Some open-source options with limited functionality are low cost, while other highly sophisticated and specialized solutions can run into millions of dollars. What the evidence has made clear to us is that the investment in prevention is lower than the costs after an attack, from the technical point of view as well as the reputational one and the direct impacts on the business," says the TIVIT manager.

No single solution

Today, it is clear that AI, both general and generative, still has a long road ahead in cybersecurity. "Some jurisdictions are exploring specific regulations for artificial intelligence. These can address issues of ethics, transparency and accountability," warns Rodrigo Stefanini, country manager of the Stefanini Group for Argentina and Chile.

In fact, in 2021 UNESCO issued its "Recommendation on the Ethics of Artificial Intelligence", which establishes principles and guidelines for developing the technology in an international and ethical framework. "From this document, to which around 140 countries adhered, projects and regulations emerged in some countries," notes Ramírez.

Although the document is not binding, the specialists consulted for this article agree that AI is valuable in cybersecurity and recommend it for early threat detection and the analysis of large data sets. At the same time, it must be integrated into a holistic security strategy, one "that involves both artificial intelligence and security professionals," stresses Rodrigo Stefanini.

Whether the promise of 100% effective generative AI in cybersecurity can be fulfilled this year remains an open question. "It will depend largely on the specific implementation, what type of models are used, the quality of the data and the type of threats they are meant to detect," explains Ramírez, from ESET.

At TIVIT, for now, they point to studies indicating real-world accuracy of AI in cybersecurity above 99%, a figure that keeps improving as model training and customization increase, along with the involvement of multidisciplinary teams and technological capabilities. "The margin of error in these models generally comes from false positives on new or zero-day threats," clarifies Prieto.

What is clear is that the uses of generative AI will depend entirely on the training and objectives behind it, whether to attack or to defend. "Therefore, it is essential to have a clear ethical framework that guides the people and companies linked to these technologies responsibly and for the benefit of society," concludes Prieto.

Author

Gwendolyn Ledger