Thought Leadership

Strategy Guide to Adopting Generative AI

Best practices for building and scaling secure AI applications
November 15, 2024

Enterprises are adopting generative AI to streamline operations, boost productivity, and create new value. However, generative AI applications come with inherent risks, such as biased decision-making, increased privacy breaches, jailbreaking, and more. A structured approach is necessary to navigate these challenges effectively. This guide offers actionable strategies for executives and managers to adopt generative AI in an organization.

 

Step 1: Define Your AI Maturity Level

AI adoption varies across organizations, ranging from early proof-of-concept (PoC) stages to full-scale production. Executives should implement different measures based on their organization's AI maturity.

 

  • PoC Stage: Is the AI initiative in the exploratory phase? Are you testing its potential impact on specific processes?
    • Build an initial understanding of the risks associated with large language models (LLMs).
    • Use the Enkrypt AI LLM Safety Leaderboard to review popular LLMs and identify associated risks such as jailbreaking, bias, toxicity, and malware. Compare the model's performance with these risks.
  • Production Stage: Are you leveraging AI for customer-facing applications or internal operations?
    • Deploy a monitoring system to evaluate the system's output for interpretability and user-friendliness.
    • Use the OWASP Top 10 for LLMs to identify common vulnerabilities in AI systems.
    • Implement guardrails to prevent malicious or off-topic usage. However, avoid overly strict guardrails that may compromise usability. Adjust guardrails to balance security and usability effectively.
    • Conduct red teaming to assess AI risks. Enkrypt AI red teaming provides a high-level risk overview of models along with customized risk evaluations for specific use cases.
  • Scaling Stage: Are you preparing to scale solutions across multiple teams or regions?
    • Ensure your AI security system's scalability by stress-testing for the number of requests and token loads per request.
    • Develop a comprehensive understanding of potential misuse scenarios using the MITRE ATLAS framework.
    • Implement data security layers to prevent sensitive information leaks and indirect injection attacks.
    • Test for potential misuse of various capabilities. For example, an AI system designed to send emails or schedule appointments might be exploited in unintended ways.

Internal vs. External AI Applications

Internal AI Projects involve data analysis for internal decision-making. Applications should be tested for bias and hallucinations.

 

External AI Applications pose a risk of losing customer trust. Conduct use case testing with Enkrypt AI red teaming that covers frameworks like the OWASP Top 10 for LLMs to address risks such as AI misuse and data leakage vulnerabilities. External applications should also be tested for moderation and hallucination risks, in addition to security and privacy concerns.

 

Step 2: Developing Use Case Policies

Tailor AI policies to specific use cases, as each requires different levels of scrutiny. Bias detection and content moderation should be prioritized in sensitive areas like recruitment or customer-facing applications.

 

The MITRE ATLAS framework details adversary tactics for AI systems, such as data manipulation techniques that can poison training data, leading to compromised outcomes. Mitigate these risks by monitoring irregular input and model behavior.

 

  • Create a checklist defining key requirements for AI systems based on your use case. Develop a policy outlining expected system behavior.
  • Use Enkrypt AI to ensure policy adherence.

 

Step 3: Securing the Data Powering AI Applications

Adding security layers around AI systems alone isn’t sufficient. AI applications can be compromised through the data fed into the system. For example, an AI system that uses a knowledge base to answer customer questions could be compromised by a single line of malicious text in the knowledge base.

 

  • Implement security layers for data in the knowledge base. Use Enkrypt AI Data Security Audit to scan the knowledge base.
  • Deploy recurring knowledge base scans to identify AI-specific vulnerabilities.
  • Ensure compliance with GDPR or CCPA by conducting privacy checks.

 

Step 4: Enforcing Guardrails and Monitoring (The Human Element)

Even the best AI systems require human oversight. Establish a team of AI governance officers or data stewards responsible for regularly reviewing AI outputs to ensure compliance with ethical and privacy standards.

 

The MITRE ATLAS framework outlines techniques under the "Misuse of AI," where systems can be manipulated for unintended purposes. Have processes in place for ongoing monitoring and human intervention when issues arise. Ensure employees know how to escalate problems, particularly in high-risk areas like hiring or financial decision-making.

 

Step 5: Closing the Gap Between Theory and Action

AI governance should move beyond theoretical frameworks and be embedded in daily business operations. This guide emphasizes actionable steps for executives to identify key risk areas and implement effective controls within their organizations.

 

  • Develop a cross-functional team involving IT, security, compliance, and legal departments to enforce AI policies enterprise-wide.
  • Regularly update policies as both AI technologies and regulatory environments evolve.

 

Step 6: Call to Action: Be Proactive, Not Reactive As AI evolves, so do its associated risks. Whether your organization is experimenting with PoCs or scaling AI enterprise-wide, a proactive approach is essential. Establish clear guidelines now to ensure you are well-positioned to mitigate potential AI risks. Start by adopting a comprehensive company-wide AI policy that includes guidelines for usage, monitoring, and compliance. Collaborate with key stakeholders to create a robust process for assessing AI systems and enforcing guardrails at every level.

Satbir Singh