fine-grained access control and policy enforcement for llm and agentic apps (apps sdk + mcp)
once a request is tied to a real user and tenant (via identifiabl) and its content is safely preprocessed (via transformabl), the next question is simple:
is this request allowed?
validatabl answers that question.
it is the policy engine of gatewaystack, enforcing permissions, scopes, org-level rules, schemas, and safety constraints with user-level precision.
validatabl is a user-scoped policy enforcement layer for llm apps.
it lets you enforce user-level permissions, scopes, schemas, and safety constraints before any request reaches a model.
📦 implementation: ai-policy-gateway (roadmap)
as orgs shift to agentic systems using personal data, tools, and workflows, they need to ensure that access is governed per user, per role, and per tenant, with clear rules about which models, tools, and data each identity may touch. shared api keys cannot express or enforce this.
validatabl brings real authorization to llm systems, something the industry has been missing.
validatabl evaluates every request after identity is verified but before routing and execution.
all gatewaystack modules operate on a shared RequestContext object.
validatabl is responsible for:
reading: identity (from identifiabl), content and metadata (from transformabl), modelRequest (requested model, tools, params), and policy definitions
writing: policyDecision, a structured authorization outcome (allow | deny | modify, plus reasons and modifications)
the policyDecision determines whether the request proceeds, is blocked, or is modified before reaching the model.
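The shape of the policyDecision described above can be sketched as a small structured type. The action, reasons, and modifications fields come from this doc; everything else here is an illustrative stand-in, not gatewaystack's actual type definitions.

```python
from dataclasses import dataclass, field
from typing import Literal

# minimal sketch of the policyDecision shape: allow | deny | modify,
# plus reasons and modifications, as described above.
@dataclass
class PolicyDecision:
    action: Literal["allow", "deny", "modify"]
    reasons: list = field(default_factory=list)
    modifications: dict = field(default_factory=dict)

decision = PolicyDecision(
    action="modify",
    reasons=["free tier cannot use gpt-4 directly"],
    modifications={"model": "gpt-3.5-turbo"},
)
print(decision.action)  # modify
```

Downstream modules can branch on `action` and, for modify, apply `modifications` before routing.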
validatabl provides:
1. checkPermissions: verify user/model/tool access
ensures a user or agent can call a specific model, tool, or operation.
2. checkScopes: enforce scope boundaries
validates that requested actions fall within granted scopes (similar to oauth).
3. checkSchema: validate request structure
ensures tool calls, structured payloads, and params conform to expected schemas.
4. checkSafety: evaluate content against safety rules
applies safety policies on transformed content (categories, risk, sensitivity).
5. applyPolicies: run org- and tenant-wide governance rules
regulatory constraints, internal controls, business rules, environment-specific policies.
6. decision: return allow, deny, or modify
returns a structured decision object and, when configured, can modify the request
(for example, stripping fields, masking values, or downgrading access).
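The six capabilities above can be composed into a single validate step. This is a hypothetical sketch: the check names mirror the list, but the bodies, signatures, and context shape are illustrative stand-ins, not gatewaystack's actual API.

```python
# illustrative stand-ins for two of the checks above; real gatewaystack
# signatures and logic will differ.

def check_permissions(ctx):
    # 1. can this user call the requested model at all?
    if ctx["modelRequest"]["model"] not in ctx["identity"]["allowed_models"]:
        return "model not permitted for this user"

def check_scopes(ctx):
    # 2. does an oauth-style scope cover the requested model?
    needed = "models:" + ctx["modelRequest"]["model"]
    if needed not in ctx["identity"]["scopes"]:
        return "missing scope " + needed

def validate(ctx, checks):
    # run each check in order; the first failure denies the request
    for check in checks:
        reason = check(ctx)
        if reason is not None:
            return {"action": "deny", "reasons": [reason]}
    return {"action": "allow", "reasons": []}

ctx = {
    "identity": {"allowed_models": ["gpt-4"], "scopes": ["models:gpt-4"]},
    "modelRequest": {"model": "gpt-4"},
}
print(validate(ctx, [check_permissions, check_scopes]))
# {'action': 'allow', 'reasons': []}
```

The same pattern extends to checkSchema, checkSafety, and applyPolicies: each check either passes silently or contributes a structured deny reason.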
validatabl writes the policyDecision field in RequestContext.
validatabl does not replace the other gatewaystack modules:
- identifiabl to authenticate users
- transformabl to preprocess content
- limitabl to apply rate limits or quotas
- proxyabl to perform provider routing
- explicabl to store or ship audit logs
instead, it works alongside them:
- identifiabl to verify identity
- transformabl to analyze and classify content
- limitabl to enforce quotas or spend based on policy outcomes
- proxyabl to route or proxy traffic using the policyDecision
validatabl policies are defined in yaml and evaluate identity, content, and metadata:
policies:
  - name: "restrict-medical-models"
    priority: 1
    condition: |
      user.role != "physician" AND
      request.model in ["gpt-4-medical", "claude-medical"]
    action: deny
    reason: "Medical models require physician role"

  - name: "block-pii-for-contractors"
    priority: 2
    condition: |
      user.type == "contractor" AND
      content.metadata.contains_pii == true
    action: deny
    reason: "Contractors cannot process PII"

  - name: "downgrade-free-tier"
    priority: 3
    condition: |
      user.tier == "free" AND
      request.model == "gpt-4"
    action: modify
    modification:
      model: "gpt-3.5-turbo"
policies are defined declaratively (yaml) and loaded at startup. gatewaystack compiles them into an internal representation for fast evaluation.
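The compile step can be sketched as turning each declarative condition into a plain predicate once, at startup, so per-request evaluation is just function calls. This is a hypothetical miniature of the idea: the (field, op, value) clause form and the helper names are assumptions, not gatewaystack's internal representation.

```python
import operator

# supported comparison operators for this sketch
OPS = {"==": operator.eq, "!=": operator.ne, "in": lambda a, b: a in b}

def get_field(ctx, dotted):
    # resolve a dotted path like "user.role" against the nested context
    value = ctx
    for part in dotted.split("."):
        value = value[part]
    return value

def compile_condition(clauses):
    # clauses are AND-ed together, mirroring conditions like
    # user.role != "physician" AND request.model in [...]
    def test(ctx):
        return all(OPS[op](get_field(ctx, f), v) for f, op, v in clauses)
    return test

deny_medical = compile_condition([
    ("user.role", "!=", "physician"),
    ("request.model", "in", ["gpt-4-medical", "claude-medical"]),
])
ctx = {"user": {"role": "engineer"}, "request": {"model": "gpt-4-medical"}}
print(deny_medical(ctx))  # True
```

Compiling once keeps hot-path evaluation cheap and makes it easy to version the compiled set, which is what lets explicabl answer "which policy version applied?"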
explicabl can record "which policy version applied?"
# policy 1: allow only licensed doctors to use medical models
- name: "medical-model-access"
  condition: user.role == "doctor" AND user.licensed == true
  action: allow
  models: ["gpt-4-medical", "claude-medical"]

# policy 2: deny non-admin users from sending pii
- name: "pii-block-non-admin"
  condition: user.role != "admin" AND content.contains_pii == true
  action: deny
  reason: "Non-admin users cannot send PII"

# policy 3: remove attachments for basic tier users
- name: "tier-based-attachments"
  condition: user.tier == "basic" AND request.has_attachments
  action: modify
  modification: remove_attachments
validatabl uses a deny-by-default, priority-ordered evaluation:
- the first matching deny or allow wins
- modify actions are applied cumulatively
this ensures security by default while allowing granular control.
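The evaluation order can be sketched as follows. This is an illustrative interpretation, not gatewaystack's actual engine: here a request that only matches modify policies proceeds with the accumulated changes, and a request that matches nothing is denied by default.

```python
# sketch of priority-ordered, deny-by-default evaluation:
# the first matching deny or allow is final; modify actions accumulate.

def evaluate(policies, ctx):
    mods = {}
    for p in sorted(policies, key=lambda p: p["priority"]):
        if not p["condition"](ctx):
            continue
        if p["action"] in ("allow", "deny"):
            # first matching allow or deny wins
            return {"action": p["action"],
                    "reasons": [p.get("reason", p["name"])],
                    "modifications": mods}
        mods.update(p["modification"])  # modify: keep evaluating
    if mods:
        return {"action": "modify", "reasons": [], "modifications": mods}
    return {"action": "deny", "reasons": ["no policy matched"], "modifications": {}}

free_gpt4 = {
    "name": "downgrade-free-tier", "priority": 3,
    "condition": lambda c: c["user"]["tier"] == "free" and c["request"]["model"] == "gpt-4",
    "action": "modify", "modification": {"model": "gpt-3.5-turbo"},
}
ctx = {"user": {"tier": "free"}, "request": {"model": "gpt-4"}}
print(evaluate([free_gpt4], ctx))
# {'action': 'modify', 'reasons': [], 'modifications': {'model': 'gpt-3.5-turbo'}}
```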
what does modify actually do?
policies that return modify can adjust the request while still allowing it to proceed. common modification actions include downgrading the model, removing attachments, and stripping or masking sensitive fields.
example:
a free-tier user requests gpt-4.
a policy modifies the request to use gpt-3.5 instead.
the request continues, but with a cheaper model and within plan limits.
modification actions are recorded in policyDecision.modifications so downstream modules and explicabl can reconstruct "what changed and why."
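Applying recorded modifications while keeping an audit trail can be sketched like this; the before/after trail shape is an assumption for illustration, not gatewaystack's actual record format.

```python
# sketch: apply each modification to the outgoing request and record
# before/after values so explicabl can reconstruct "what changed and why."

def apply_modifications(model_request, modifications):
    applied = []
    for field_name, new_value in modifications.items():
        applied.append({"field": field_name,
                        "before": model_request.get(field_name),
                        "after": new_value})
        model_request[field_name] = new_value
    return model_request, applied

request = {"model": "gpt-4", "max_tokens": 2000}
request, trail = apply_modifications(request, {"model": "gpt-3.5-turbo"})
print(request["model"])  # gpt-3.5-turbo
print(trail)  # [{'field': 'model', 'before': 'gpt-4', 'after': 'gpt-3.5-turbo'}]
```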
validatabl enforces oauth-style scopes:
common scopes:
- models:gpt-4 - can access gpt-4
- tools:calendar - can use the calendar tool
- data:read:org - can read org-level data
- data:write:user - can write user-scoped data
scopes are granted via:
scopes become part of the identity section of RequestContext and are evaluated alongside roles and tenant metadata.
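Scope checks can be sketched with simple wildcard matching; the `tools:*` pattern follows the example context shown later in this doc, and `fnmatch` is one convenient stand-in for a scope matcher.

```python
from fnmatch import fnmatch

# sketch of an oauth-style scope check: a required scope is satisfied
# if any granted scope pattern matches it (e.g. "tools:*").
def has_scope(granted, required):
    return any(fnmatch(required, pattern) for pattern in granted)

granted = ["models:gpt-4", "tools:*"]
print(has_scope(granted, "tools:calendar"))  # True
print(has_scope(granted, "data:read:org"))   # False
```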
validatabl evaluates policies on every request. to keep latency low:
decisions can be cached per (user, model, tool, scope) tuple for a short ttl (for example, 30-60s), when safe. the default configuration prioritizes correctness and safety; caching is opt-in per policy or per tenant.
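A per-tuple TTL cache can be sketched as below; the class name and the 45-second default are illustrative choices within the 30-60s range mentioned above, not gatewaystack's actual cache.

```python
import time

# sketch of opt-in decision caching keyed by a
# (user, model, tool, scope) tuple with a short ttl.
class DecisionCache:
    def __init__(self, ttl_seconds=45):
        self.ttl = ttl_seconds
        self.entries = {}  # key tuple -> (decision, expiry)

    def get(self, key):
        hit = self.entries.get(key)
        if hit and hit[1] > time.monotonic():
            return hit[0]
        self.entries.pop(key, None)  # expired or missing
        return None

    def put(self, key, decision):
        self.entries[key] = (decision, time.monotonic() + self.ttl)

cache = DecisionCache(ttl_seconds=45)
key = ("user_123", "gpt-4", "web_search", "models:gpt-4")
cache.put(key, {"action": "allow"})
print(cache.get(key))  # {'action': 'allow'}
```

Keeping the TTL short bounds how stale a cached allow can be after a policy or role change.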
user
→ identifiabl (who is calling?)
→ transformabl (prepare, clean, classify, anonymize)
→ validatabl (is this allowed?)
→ limitabl (how much can they use? pre-flight constraints)
→ proxyabl (where does it go? execute)
→ llm provider (model call)
→ [limitabl] (deduct actual usage, update quotas/budgets)
→ explicabl (what happened?)
→ response
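The flow above can be sketched as a simple stage chain; the stage names mirror the gatewaystack modules, but the bodies are stub implementations invented for illustration.

```python
# sketch: each gatewaystack module as a stage that enriches the shared
# request context; a deny decision short-circuits the chain.

def identifiabl(ctx):
    ctx["identity"] = {"user_id": "user_123"}  # stub: who is calling?
    return ctx

def transformabl(ctx):
    ctx["metadata"] = {"contains_pii": False}  # stub: classify content
    return ctx

def validatabl(ctx):
    ctx["policyDecision"] = {"action": "allow"}  # stub: is this allowed?
    return ctx

def run(stages, ctx):
    for stage in stages:
        ctx = stage(ctx)
        if ctx.get("policyDecision", {}).get("action") == "deny":
            break  # blocked at the governance checkpoint
    return ctx

ctx = run([identifiabl, transformabl, validatabl], {"messages": []})
print(ctx["policyDecision"])  # {'action': 'allow'}
```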
validatabl sits at the governance checkpoint of gatewaystack, enforcing rules before traffic enters the llm layer.
validatabl receives enriched request context (simplified view of RequestContext):
{
  identity: {            // from identifiabl
    user_id: "user_123",
    org_id: "org_456",
    roles: ["engineer"],
    scopes: ["models:gpt-4", "tools:*"]
  },
  content: {             // original + transformed
    messages: [...],
    attachments: [...]
  },
  metadata: {            // from transformabl
    contains_pii: false,
    classification: ["technical"],
    risk_score: 0.2,
    topics: ["code", "debugging"]
  },
  modelRequest: {        // requested model + tools
    model: "gpt-4",
    tools: ["web_search"],
    max_tokens: 2000
  }
}
policies can reference any field in this context.
example 1: healthcare compliance
a hospital uses gatewaystack to ensure only licensed physicians can access medical diagnosis models, and all requests containing patient identifiers are logged for hipaa compliance. validatabl enforces:
if user.role != "physician" then deny access to medical models
example 2: multi-tenant saas
a crm platform ensures sales reps can only access ai features for their own customers. validatabl enforces:
if request.customer_id not in user.assigned_customers then deny
example 3: cost control
a startup allows free-tier users to access gpt-3.5 but requires paid plans for gpt-4. validatabl enforces:
if user.tier == "free" and request.model == "gpt-4" then modify to "gpt-3.5"
validatabl plugs into gatewaystack and your existing llm stack without requiring application-level changes, exposing http middleware and sdk hooks.
for policy examples and patterns:
→ policy examples library
→ policy testing guide
for implementation:
→ integration guide
want to explore the full gatewaystack architecture?
→ view the gatewaystack github repo
want to contact us for enterprise deployments?
→ reducibl applied ai studio
every request flows from your app through gatewaystack's modules before it reaches an llm provider: identified, transformed, validated, constrained, routed, and audited.