As cyber threats escalate globally, the imperative to design, deploy and utilise AI securely has never been more pressing. There’s an urgent call to adopt proactive measures, heighten threat awareness and prioritise cybersecurity education to safeguard not only ourselves but also our organisations and data.
Recently, Microsoft unveiled its sixth edition of Cyber Signals, a quarterly cyberthreat intelligence brief, which draws from the latest Microsoft research, offering expert insights into the current threat landscape. This edition of the report underscores the importance of securing artificial intelligence (AI) technologies to prevent misuse and highlights Microsoft’s efforts in protecting AI platforms from emerging threats posed by nation-state cyber actors.
The report delves into the emergence of Large Language Models (LLMs) as tools of interest for threat actors and emphasises the growing use of AI in both offensive and defensive cyber operations. Microsoft introduces guiding principles aimed at mitigating these risks, particularly from Advanced Persistent Threats, Advanced Persistent Manipulators and Cybercriminal Syndicates that leverage AI platforms and APIs. These principles include identifying and acting against malicious use of AI by threat actors, notifying other AI service providers, collaborating with other stakeholders and operating transparently.
Microsoft detects a tremendous amount of malicious traffic: more than 65 trillion cybersecurity signals per day. Various AI-driven methods, including threat detection, behavioural analytics, machine learning and Zero Trust models, are employed to safeguard Microsoft and its customers against cyber threats. Multifactor authentication (MFA) is rigorously applied across Microsoft, prompting attackers to resort to social engineering tactics, particularly around high-value offers such as free trials or promotional pricing. To counter such attacks, AI models are developed to detect them promptly.
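To make the behavioural analytics idea concrete, the sketch below trains a small unsupervised anomaly detector on sign-in telemetry. It is purely illustrative: the feature set, sample values and the use of scikit-learn's IsolationForest are assumptions chosen for demonstration, not a description of Microsoft's detection pipeline.

```python
# Illustrative sketch only: anomaly detection over sign-in telemetry,
# in the spirit of the behavioural analytics the report describes.
# Features and values below are hypothetical, not Microsoft's.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features:
# [logins_last_hour, distinct_countries, failed_mfa_prompts, new_device_flag]
baseline_sessions = np.array([
    [3, 1, 0, 0],
    [5, 1, 1, 0],
    [2, 1, 0, 1],
    [4, 2, 0, 0],
])

# Fit an unsupervised model on normal behaviour, then score a new session.
model = IsolationForest(contamination=0.1, random_state=0)
model.fit(baseline_sessions)

suspicious_session = np.array([[40, 4, 9, 1]])  # burst of risky sign-ins
label = model.predict(suspicious_session)[0]    # -1 = anomalous, 1 = normal
score = model.decision_function(suspicious_session)[0]
print("anomalous" if label == -1 else "normal", score)
```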
Additionally, Microsoft employs AI to identify fake students, accounts and organisations attempting to evade detection by altering data or concealing identities. GitHub Copilot, Microsoft Copilot for Security and other copilot chat features integrated into Microsoft's internal engineering and operations infrastructure help prevent incidents that could disrupt operations. One of the key insights from the report is the critical role of AI in addressing the global shortage of cybersecurity professionals.
With approximately four million cybersecurity experts needed worldwide, AI has emerged as a vital tool for augmenting human capabilities and enhancing productivity. Microsoft’s Copilot for Security, for instance, has demonstrated remarkable efficacy in assisting security analysts across various tasks, resulting in a 44 percent increase in accuracy and a 26 percent boost in speed.
The proliferation of AI in the hands of threat actors has led to more convincingly written emails, with phishing attempts showing fewer of the obvious language and grammatical errors that once gave them away. This makes phishing attacks harder to detect and underscores the need for ongoing employee education and public awareness campaigns, which Microsoft notes have historically been effective in changing behaviour.
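As a rough illustration of why fluent, AI-polished lures are harder to catch, the toy classifier below leans on the spelling and wording cues that simple phishing filters often exploit. The training samples, labels and scikit-learn pipeline are invented for this example and are not drawn from the report.

```python
# Illustrative sketch only: a tiny text classifier showing why well-written
# AI-generated phishing offers fewer surface cues than error-riddled mail.
# All examples and labels are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Dear custmer, you acount is lockd, click here immediatly",    # crude phish
    "Please verify you password now or lose acces",                # crude phish
    "Hi team, attaching the minutes from today's planning call",   # legitimate
    "Quarterly invoice attached, let me know if anything is off",  # legitimate
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, labels)

# A fluent, grammatically clean lure shares little vocabulary with the
# crude examples the model learned from, so it scores as far less suspicious.
polished_lure = ["Hello, your account requires a routine security review today."]
print(clf.predict_proba(polished_lure))
```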
Moreover, Microsoft anticipates AI-driven advancements in social engineering tactics, potentially including deepfakes and voice cloning, especially where AI technologies lack responsible practices and built-in security controls. However, by adopting responsible AI practices, implementing robust measures such as multifactor authentication and staying proactive about prevention, meaningful steps can be taken to defend national security against evolving cyberthreats, whether traditional or AI-enabled. While the cyber threat landscape is ever-evolving, national cybersecurity can be bolstered by embracing cutting-edge technology and prioritising continuous cybersecurity education.