AI Risk Removal with Guardrails

Remove and safeguard against vulnerabilities in your AI apps

Set up customized, enterprise-ready guardrails for Generative AI use cases with Enkrypt AI.

Why Guardrails?

Different Gen AI Use Cases Require Customized Guardrails

Delivering secure and reliable Gen AI apps requires continuous surveillance and active interception. Enkrypt AI provides highly accurate and domain-specific Guardrail policies for all your specific use cases.

Gen AI Use Case Examples: Guardrails Needed

Personalized Customer Service Chatbots

Intelligent search recommendation systems

Content creation for marketing

Gen AI apps for increasing operational efficiency

Data analysis and reporting

Domain-specific like legal, finance, medical

Domain Specific Requirements

Customizable topic detectors

Domain specific injection attacks

Customizable keyword detectors

Customizable detectors for sensitive information

Introducing Guardrails by Enkrypt AI

Remove Risk Before it Impacts You.

Our Guardrails capabilities empower organizations to build GenAI applications without worrying about privacy, security, or moderation.
All powered by the world's most advanced AI security and safety platform.

Compliance and Policy Adherence Guardrails

Remove Compliance Regulation Risks from your AI solutions

Once you’ve uploaded your industry regulations to Enkrypt AI and the platform has detected all the regulatory / policy risks (via Red Teaming), those known risks are then added to our Guardrails feature so they can be removed from your AI applications. See figure below.

Upload your Industry’s Regulation /
Policy PDF
Enkrypt AI Generates Compliance Tests
Enkrypt AI Detects Risks Violating  Compliance

This is just one part of our AI compliance management solution that Enkrypt AI provides for achieving automated and continuous compliance.

User Loved. Hacker Hated.

Enkrypt AI Guardrails detects and removes LLM vulnerabilities for every use case. 

Data Privacy

Sanitize sensitive information (PII, PHI) using redaction

Enhance Security

Prevent prompt injection attacks. Detect secret keys and code.

Enable Moderation

Detect and moderate harmful content. Ban toxic topics and competitors

Why Choose Enkrypt AI Guardrails

Building Trustworthy AI

Enterprise-Ready

Customized for your use case

  • Role-based Guardrails

  • Domain-specific customizations

  • Adherence to regulatory frameworks

Higher Performance

Accuracy and Latency

  • Longer context windows

  • Higher accuracy, low latency

  • Multi-lingual support

Model Agnostic

Diverse model requirements

  • Cater to any of 700k+ models on Hugging face

  • Support for Small Language Models

  • Domain-specific models

Increase Security and Safety of your RAG Chatbot

Use Guardrails throughout your RAG workflow for optimal accuracy and security

Implement guardrails at the most critical areas of your RAG workflow: (1) before data enters the Vector DB, (2) before the query reaches the embedding model, and (3) before the response. Each step is fraught with vulnerabilities that could expose your company to brand damage and revenue losses.

Get Started with Guardrails

Fast deployment, accurate results, quick time to value

You’re in a race to build AI apps at the speed of innovation. Enkrypt AI seamlessly secures your apps so you can achieve that goal. No delays. Just world domination.
01
Step 1

Try Guardrails on Playground

Login to app.enkryptai.com and visit Guardrails Playground. Try sample prompts for different Guardrails.

Figure 1:  Guardrails Playground
02
Step 2

Integrate Guardrails code into your Workflow

Copy the code from the bottom pane in Guardrails Playground. Integrate this code into your codebase. The API key is embedded in the Nodejs or Python code. API key can also be taken from Settings in app.enkryptai.com

Figure 2:  Code for Guardrails integration in Bottom Pane
Figure 3:  API Key From Settings
03
Step 3

Monitor Usage and Threats on Dashboard

Check the Guardrails usage on Guardrails Dashboard as well as Threats Prevented

Figure 4: Guardrails Usage and Threats Prevented