Securing Generative AI: Balancing Innovation with Cybersecurity

Generative AI has rapidly become a pivotal topic in technology discussions, especially since the release of tools like ChatGPT. This shift has pushed companies like Microsoft to adapt, building on advanced AI models from OpenAI while fielding customer questions about how AI changes the security landscape. Siva Sundaramoorthy, a senior cloud solutions security architect at Microsoft, frequently handles these inquiries. He outlined the potential benefits and security risks of generative AI for cybersecurity professionals at an event in Las Vegas on October 14.

A primary concern with generative AI in security is its accuracy. The technology works as a predictor, returning the most probable answer, which can vary with context. Cybersecurity experts therefore need to evaluate AI use cases along three dimensions: usage, application, and platform. Sundaramoorthy advises first understanding the specific use case that needs protection, since many developers and companies are now deeply involved in building AI applications, and each enterprise may have its own integrated bot or pre-trained AI model. Once the use cases are identified, AI can be safeguarded much like traditional systems, albeit with additional risks.

Sundaramoorthy identifies seven adoption risks associated with generative AI: bias, misinformation, deception, lack of accountability, over-reliance, intellectual property issues, and psychological effects. Generative AI also poses distinct security threats in three contexts: usage can lead to unauthorized information leaks or insider threats; applications are vulnerable to data breaches or malicious inputs; and platforms can suffer from flaws introduced through injected data, denial-of-service attacks, or model hijacking.

Attackers might exploit AI using prompt converters or malicious instructions to bypass content controls, potentially causing data poisoning or unauthorized access. Sundaramoorthy warns of the risks that arise when AI systems are connected to APIs that execute external code. "Could AI inadvertently create a backdoor?" he asks, urging a balanced approach that weighs AI's risks against its benefits. Despite these concerns, Sundaramoorthy finds value in tools like Microsoft’s Copilot, adding that the high value of AI systems makes them appealing targets for hackers.
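
To make the concern concrete, the sketch below shows one minimal form such a control might take: an allowlist check applied to tool calls an AI model proposes before any connected API is allowed to execute them. The tool names, blocked patterns, and policy here are hypothetical examples, not details from Sundaramoorthy's talk.

# Illustrative sketch: validate tool calls an AI model proposes before executing them.
# The tool names, arguments, and policy below are hypothetical, not from the talk.

ALLOWED_TOOLS = {
    "lookup_ticket": {"ticket_id"},   # read-only helpdesk lookup
    "get_weather": {"city"},          # harmless external API
}

BLOCKED_PATTERNS = ("rm -rf", "curl ", "powershell", "base64 -d")

def is_safe_tool_call(tool_name: str, arguments: dict) -> bool:
    """Reject tool calls outside the allowlist or with suspicious argument content."""
    if tool_name not in ALLOWED_TOOLS:
        return False
    if set(arguments) - ALLOWED_TOOLS[tool_name]:
        return False  # unexpected parameters may signal prompt injection
    flat = " ".join(str(v).lower() for v in arguments.values())
    return not any(pattern in flat for pattern in BLOCKED_PATTERNS)

# Example: a model manipulated by a malicious prompt asks to run a shell command.
print(is_safe_tool_call("run_shell", {"cmd": "rm -rf /"}))      # False -> do not execute
print(is_safe_tool_call("lookup_ticket", {"ticket_id": "42"}))  # True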

Integrating AI can introduce new vulnerabilities and requires training users on the technology. Processing sensitive data introduces further risks, necessitating transparency and control throughout the AI lifecycle. The AI supply chain can introduce harmful code, and the lack of conventional compliance standards raises questions about how to secure AI technologies.
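
One common mitigation for this kind of supply chain risk, sketched below with a placeholder file path and hash, is to pin and verify the checksum of a model artifact before loading it. The control is illustrative and was not part of the talk.

# Illustrative sketch: verify a downloaded model artifact against a pinned hash
# before loading it, a basic supply-chain control. The file path and hash are
# placeholders, not real values.

import hashlib
from pathlib import Path

PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder

def verify_model_artifact(path: Path, expected_sha256: str = PINNED_SHA256) -> bool:
    """Return True only if the artifact's SHA-256 digest matches the pinned value."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

if __name__ == "__main__":
    artifact = Path("models/example-model.bin")  # hypothetical path
    if not artifact.exists() or not verify_model_artifact(artifact):
        raise SystemExit("Model artifact missing or failed integrity check; refusing to load.")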

Sundaramoorthy stresses the importance of a secure, top-down approach to integrating AI into applications while challenges like AI hallucinations persist and the full return on AI investment (ROI) remains uncertain in practical environments. Generative AI can also fail in innocuous or malevolent ways. In a malevolent failure, an intruder breaches AI defenses to extract sensitive data such as passwords; in an innocuous failure, biased information slips into AI outputs because training data was improperly filtered. Securing AI solutions effectively requires established methodologies, even amid these uncertainties.

Standards bodies like NIST and OWASP provide frameworks for AI risk management. MITRE’s ATLAS Matrix catalogues techniques used in attacks on AI. Microsoft and Google offer governance tools and frameworks, such as Google's Secure AI Framework, to help organizations secure AI. Scrubbing sensitive data out of training sets, applying least-privilege principles to model fine-tuning, and enforcing stringent access controls on external data can significantly improve AI security.
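
As a simple illustration of the scrubbing step, the sketch below redacts a few obvious kinds of sensitive strings before text reaches a fine-tuning pipeline. The patterns are assumptions made for the example and are nowhere near exhaustive.

# Illustrative sketch: scrub obvious sensitive strings from text before it is used
# for fine-tuning. The regular expressions below are simplistic examples, not a
# complete or production-grade redaction policy.

import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def scrub(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

sample = "Contact jane.doe@example.com, SSN 123-45-6789, token sk-abcdef1234567890abcd."
print(scrub(sample))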

Sundaramoorthy concludes that foundational cybersecurity practices apply equally well to AI security. On the question of whether to use AI at all, AI researcher Janelle Shane suggests that some security teams may simply choose to avoid it because of its risks. Sundaramoorthy, however, argues that if an AI system accesses protected documents, the problem lies with access control rather than with the AI itself, and that robust control measures are the answer.
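
A minimal sketch of that argument, using made-up users, groups, and documents, is to filter what an AI assistant can retrieve by the requesting user's existing permissions, so the model never sees content the user could not already read.

# Illustrative sketch: enforce document-level access control before an AI assistant
# sees any content, so the model only receives what the requesting user is already
# allowed to read. The users, documents, and ACL are made up for the example.

from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: frozenset

ACL = {"alice": frozenset({"finance", "all-staff"}), "bob": frozenset({"all-staff"})}

DOCS = [
    Document("d1", "Quarterly revenue forecast...", frozenset({"finance"})),
    Document("d2", "Office holiday schedule...", frozenset({"all-staff"})),
]

def retrievable_for(user: str) -> list[Document]:
    """Return only the documents whose groups intersect the user's groups."""
    groups = ACL.get(user, frozenset())
    return [d for d in DOCS if d.allowed_groups & groups]

print([d.doc_id for d in retrievable_for("bob")])    # ['d2'] -- no finance documents
print([d.doc_id for d in retrievable_for("alice")])  # ['d1', 'd2']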