Secured integration to the future

Adversarial Intelligence: Malicious Scenarios of AI Usage

13.08.2024

As part of an ongoing series by IT Specialist in collaboration with CLICO Ukraine, a distributor of cybersecurity, network technology, and management products, we bring you an analysis drawn from a report by Recorded Future, a global leader in cyber threat intelligence.

Research Overview

Cyber threat analysts and R&D engineers at Recorded Future joined forces to experiment with four malicious artificial intelligence (AI) scenarios, shedding light on the 'art of the possible' for cybercriminals. They rigorously tested the boundaries and capabilities of contemporary AI models, ranging from large language models (LLMs) to multimodal image models and text-to-speech (TTS) models. All experiments utilised a mix of commercially available and open-source models, deliberately avoiding any fine-tuning or retraining to simulate a realistic level of access for potential cybercriminals.
Based on the findings from these experiments, Recorded Future predicts that in 2024, criminals are most likely to exploit targeted deepfakes and conduct influence operations. With widely accessible tools, deepfakes can be crafted to impersonate executives in social engineering campaigns, combining AI-generated audio and video with video conferencing software and VoIP services. The cost of content creation for influence operations could plummet by a factor of 100, while AI-powered tools may assist in cloning legitimate websites or fabricating fake media outlets.

Malware developers might employ AI in tandem with detection methods like YARA rules to alter malware and evade detection. Cybercriminals of varying sophistication could also leverage AI for reconnaissance, including identifying vulnerable industrial control system (ICS) equipment and geolocating sensitive sites using open-source intelligence (OSINT).

AI Usage Trends

Current limitations centre on the availability of open models that perform at the level of state-of-the-art (SOTA) models, along with techniques for circumventing the security restrictions of commercial solutions. Given the varied applications of deepfakes and generative AI models, several sectors are expected to make substantial investments in these technologies, which will in turn enhance the capabilities of open-source tools. A similar dynamic has been observed in the offensive security tools (OST) domain, where cybercriminals have leveraged open frameworks or exploited leaks of proprietary tools such as Cobalt Strike.
Falling costs and shorter production times will likely result in more widespread use of these attack methods by cybercriminals of varying technical proficiency, targeting a broader range of organisations.

This year, organisations must expand their understanding of their attack surface, encompassing the voices and appearances of their executives, websites, branding, and public images of facilities and equipment. Moreover, organisations need to prepare for more advanced AI applications, such as self-improving malware capable of evading YARA-based detection. This will necessitate complementing YARA with additional detection methods such as Sigma or Snort rules.

Malicious AI Usage Scenarios

Scenario I: Utilizing Deepfakes to Impersonate Executives
Deepfakes enable the creation of highly realistic video and audio forgeries. Criminals can use deepfakes to impersonate organisational leaders, potentially causing severe financial and reputational damage.
● Technologies available to the public allow the production of pre-recorded deepfakes using publicly available videos and audio, such as interviews and presentations.
● Cybercriminals can train models on short clips (less than 1 minute); however, pre-processing the audio to achieve the highest quality still requires human intervention.
● More advanced scenarios, such as live cloning, will likely necessitate bypassing the protective mechanisms of commercial solutions, as the latency of open models limits their effectiveness for streaming audio and video.
Scenario II: Influence Operations with Imitation of Legitimate Websites
AI empowers criminals to craft counterfeit websites that mirror the appearance of genuine ones. This enables the dissemination of disinformation and the manipulation of public opinion on a large scale.
● Through AI, vast quantities of disinformation can be generated and tailored to specific audiences, creating intricate narratives designed to achieve malicious objectives.
● Furthermore, AI can autonomously generate high-quality content, such as realistic images, from generated text, facilitating the cloning of authentic news and government websites.
● The cost of conducting disinformation campaigns could drop a hundredfold compared to traditional troll farms and human content creators.
● However, developing templates that convincingly mimic legitimate websites still requires human intervention to produce more credible forgeries.
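On the defensive side, one simple signal of a cloned page is how closely its HTML mirrors the legitimate original. Below is a minimal sketch of that idea in Python, using only the standard library; the URLs are placeholders rather than real sites, and a production check would also examine assets, certificates, and domain registration data.

# Rough sketch: fetch two pages and report how similar their HTML is.
# A near-identical ratio on an unrelated domain can indicate a cloned site.
import difflib
import urllib.request

def fetch_html(url: str) -> str:
    # Download the page and decode it as text (decoding errors ignored for brevity).
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read().decode("utf-8", errors="ignore")

def page_similarity(original_url: str, suspect_url: str) -> float:
    # Return a 0..1 ratio of how similar the two pages' HTML is.
    original = fetch_html(original_url)
    suspect = fetch_html(suspect_url)
    return difflib.SequenceMatcher(None, original, suspect).ratio()

# Placeholder domains for illustration only.
score = page_similarity("https://example.org", "https://example-news.example.net")
print(f"HTML similarity: {score:.2f}" + (" - review for cloning" if score > 0.8 else ""))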
Scenario III: Self-Evolving Malware Evading Detection via YARA
Generative AI can help malware evade detection by YARA rules: a model can modify a sample's source code so that the strings and byte patterns the rules match on no longer appear, reducing the likelihood of detection (a brief sketch of rule-based YARA scanning follows below).
● Yet, contemporary AI models struggle to produce syntactically correct code, resolve linting issues, and maintain functionality after code encryption.
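For context on what such rules look like, the sketch below compiles and runs a single YARA rule with the yara-python bindings; the rule and the sample bytes are illustrative assumptions, not taken from the report. Self-modifying malware of the kind described above would aim to rewrite exactly the strings and patterns that rules like this match on.

# Minimal sketch of rule-based scanning with the yara-python bindings
# (installed separately, e.g. via pip). The rule and sample bytes are illustrative.
import yara

RULE_SOURCE = r'''
rule SuspiciousMarker
{
    strings:
        $marker = "malicious_config_v1" ascii
    condition:
        $marker
}
'''

rules = yara.compile(source=RULE_SOURCE)        # compile the rule text
sample = b"... payload containing malicious_config_v1 ..."
for match in rules.match(data=sample):          # scan an in-memory buffer
    print(f"Matched rule: {match.rule}")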
Scenario IV: Reconnaissance of Industrial Control Systems and Aerial Imagery
Multimodal AI can process public images and videos to geolocate facilities and identify industrial control system (ICS) equipment. This includes determining the manufacturers, models, software, and integration methods with other systems. 
● Transforming this information into actionable intelligence for large-scale applications is challenging. Human analysis is still essential for processing this data for use in physical or cyber threats.

Risk Assessment and Security Measures

This demonstrated "art of the possible" underscores the imperative of continuously updating cybersecurity strategies. Organisations must broaden their understanding of their attack surface to include executive voices, images, websites, and branding. Implementing multi-layered and behavioural malware detection methods is crucial, as is the ongoing analysis of publicly accessible photos and videos of equipment and facilities.
Executive voices and likenesses have now become integral components of an organisation's attack surface. Organisations must assess the risk of impersonation in targeted attacks. For large transactions and sensitive operations, multiple methods of communication and verification should be employed beyond conference calls and VoIP, such as encrypted messaging or emails.
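One way to put such verification into practice, sketched below with entirely hypothetical parameters, is a short-lived code derived from a pre-shared secret: the requester generates it, reads it out over a second channel such as an encrypted messenger, and the approver recomputes it independently before releasing the transaction.

# Minimal sketch of out-of-band verification for high-value requests.
# The shared secret and time window are illustrative assumptions; in practice
# the secret would be provisioned and stored securely.
import hashlib
import hmac
import time

SHARED_SECRET = b"provisioned-out-of-band-secret"
WINDOW_SECONDS = 300  # the code changes every five minutes

def verification_code(transaction_id: str) -> str:
    # Derive a six-digit code bound to the transaction and the current time window.
    window = int(time.time() // WINDOW_SECONDS)
    message = f"{transaction_id}:{window}".encode()
    digest = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
    return str(int(digest, 16))[-6:]

# Both sides compute the code independently and compare over a second channel.
print(verification_code("TX-2024-0001"))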

Brand Usage Monitoring

Organisations, particularly in the media and public sectors, must vigilantly monitor the use of their brand or content in influence operations. Recorded Future clients can leverage the Brand Intelligence module to track new domain registrations and online content that misuses their brand.
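For organisations without such tooling, a rough starting point, sketched below with a placeholder brand and domain names, is to score newly observed domain registrations against the brand and flag close or containing matches for manual review.

# Rough sketch for triaging newly registered domains against a brand name.
# The brand and the domain list are placeholders for illustration.
import difflib

BRAND = "examplebank"

def lookalike_score(domain: str) -> float:
    # Compare the domain's first label against the brand name.
    label = domain.lower().split(".")[0]
    return difflib.SequenceMatcher(None, BRAND, label).ratio()

def is_suspicious(domain: str, threshold: float = 0.8) -> bool:
    label = domain.lower().split(".")[0]
    return BRAND in label or lookalike_score(domain) >= threshold

for domain in ["examp1ebank.com", "examplebank-support.net", "weather-news.org"]:
    if is_suspicious(domain):
        print(f"Review: {domain} (similarity {lookalike_score(domain):.2f})")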
Critical Infrastructure Protection
Publicly available images and videos of equipment and facilities must be meticulously analysed and sanitised, especially in critical infrastructure and sensitive sectors like defence, government, energy, manufacturing, and transportation. These materials could be exploited for both physical and cyberattacks.
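Part of that sanitisation can be automated. The sketch below, which assumes the Pillow imaging library and uses placeholder file names, re-encodes an image's pixel data into a fresh file, dropping EXIF metadata such as embedded GPS coordinates before publication; it does not, of course, remove identifying details visible in the image itself.

# Minimal sketch: strip metadata (e.g. GPS EXIF tags) from an image before
# publishing it. Requires the Pillow library; file names are placeholders.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    # Copy only the pixel data into a new image, leaving EXIF and other metadata behind.
    with Image.open(src_path) as img:
        rgb = img.convert("RGB")  # normalise the mode so JPEG output works
        clean = Image.new(rgb.mode, rgb.size)
        clean.putdata(list(rgb.getdata()))
        clean.save(dst_path)

strip_metadata("facility_photo.jpg", "facility_photo_clean.jpg")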
Investing in Malware Detection Tools
Organisations must invest in multi-layered and behavioural malware detection tools to counteract adversaries' potential development of AI-assisted malware. Adversaries can leverage detection rules published by cybersecurity researchers to refine their malware and evade detection, but Sigma, Snort, and advanced YARA rules will remain reliable indicators of malicious activity.

For detailed information on each scenario and the full report, visit: https://go.recordedfuture.com/hubfs/reports/cta-2024-0319.pdf