Connect an AI Application
Navigate to the Applications tab and click New Application to add your AI application or LLM model. Configuration is organized into the following sections:

- Application Details
- Evaluation
- Monitoring
- Guardrails
- Application Name – e.g., MyUniqueApp
- Description (optional) – Adds clarity, e.g., “Streamlines project management by integrating task tracking, team collaboration, and progress reporting.”
- Encrypt Output – When enabled, Trusys automatically encrypts any PII (Personally Identifiable Information) detected in AI responses. This prevents exposure of sensitive user data and helps ensure compliance with privacy and data protection regulations. Recommended for applications handling user-generated content, customer data, or regulated information.
Evaluation

This section configures how Trusys runs functional and security evaluations on your AI application. Set up the appropriate connection to validate functional behavior, accuracy, and compliance using structured test runs before production use.

To set up a connection, select the type of AI application you are working with:
LLM Model
- Select Model – Pick your LLM from available options
- Authentication – Supply credentials to securely connect
Voice / Call-based Application
Connect AI applications that interact with users over phone calls.
- Provide the Phone Number associated with the AI-powered calling system
- Trusys automatically captures and evaluates conversational interactions
Ideal for voice bots, IVR systems, and AI-powered customer support calls.
Agentic Application

Connect applications built using agentic AI frameworks.

Supported frameworks:

- Flowise
- Langflow
- Dify
- Microsoft Foundry Agents
- CrewAI
- n8n

To connect:

- Select the agentic framework
- Provide framework-specific authentication details
- Configure connection parameters required by the selected framework

If your agentic framework is not listed, select Custom Agentic Application and follow the Custom AI Application flow.

This enables Trusys to:

- Monitor multi-agent workflows
- Evaluate agent decisions and outputs
- Track security and functional risks across agent interactions
HTTP/HTTPS Application

Use this option when integrating an AI application that is not natively supported by Trusys or requires a custom request/response format.

- Select the Custom Provider Type.
- Choose the Request Type (e.g., GET, POST).
- Enter the Base URL of the application’s API.
- Choose how to structure the request body sent to your API:
  - Raw JSON – Write the request body as a JSON string directly.
  - Key-Value Pairs – Add individual fields using a structured editor.
  - Raw HTTP request
- Mandatory: The API request must contain the {{prompt}} variable, as Trusys passes all test cases through this placeholder. Without {{prompt}}, evaluations and monitoring will not function correctly.
- Provide API authentication details for secure communication.
- Define how to transform response data for compatibility with Trusys’s systems. Extract or reshape the API response before it reaches Trusys. If unset, Trusys parses the response as JSON, falling back to raw text.
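For illustration, a minimal Raw JSON request body that satisfies the {{prompt}} requirement might look like the following (the `model` and `temperature` fields are hypothetical; use whatever fields your API expects):

```json
{
  "model": "my-model",
  "temperature": 0.2,
  "prompt": "{{prompt}}"
}
```

During evaluation, Trusys substitutes each test case into the {{prompt}} placeholder before sending the request.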
Example 1 (Simple)

For example, given this response from a chat completions endpoint:
{
"choices": [
{
"message": {
"role": "assistant",
"content": "This is a test!"
},
"finish_reason": "stop",
"index": 0
}
]
}
The transform response configuration below extracts just the message content:

providers:
  - id: https
    config:
      url: 'https://example.com/chat/completions'
      transformResponse: 'json.choices[0].message.content'

Output: This is a test!
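For clarity, the transformResponse expression is a dotted path into the parsed JSON response; here is a quick Python sketch of the same traversal (resp mirrors the example response above):

```python
# Parsed chat-completions response from the example above
resp = {
    "choices": [
        {
            "message": {"role": "assistant", "content": "This is a test!"},
            "finish_reason": "stop",
            "index": 0,
        }
    ]
}

# Equivalent of the expression json.choices[0].message.content
output = resp["choices"][0]["message"]["content"]
```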
Example 2

For applications that return streamed or event-based responses, you may need a custom function to extract the final output.

For example, given this response:

data: {"event":"token","data":{"content":"Hello"}}
data: {"event":"complete","data":{"answer":"This is the final answer.","confidence":0.92}}

You can define a JavaScript transform response function like this:

(json, text) => {
  const line = text
    .split('\n')
    .find((l) => l.includes('"event":"complete"'));
  if (!line) {
    return '';
  }
  const payload = JSON.parse(line.replace(/^data:\s*/, ''));
  return payload.data.answer || '';
}

This extracts only the final answer from the stream.

Output: This is the final answer.
Prefer the command line? See the Command-Line Usage guide.
For Conversation-Based Applications

If your application is a conversational AI (chat-based or multi-turn), Trusys requires conversation context to accurately evaluate conversational metrics. For HTTP-based applications, this context must be passed in the request body as a structured chat history in the format shown below.
{
"chat_history": [
{% for completion in _conversation %}
{
"role": "user",
"content": "{{ completion.input }}"
},
{
"role": "assistant",
"content": "{{ completion.output }}"
},
{% endfor %}
{
"role": "user",
"content": "{{prompt}}"
}
]
}
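To illustrate how that template expands, here is a rough Python sketch that mimics the loop for a one-turn history plus a new prompt (build_chat_history is illustrative; the actual rendering happens inside Trusys):

```python
def build_chat_history(conversation, prompt):
    """Mimic the template: alternate user/assistant turns, then append the new prompt."""
    history = []
    for completion in conversation:
        history.append({"role": "user", "content": completion["input"]})
        history.append({"role": "assistant", "content": completion["output"]})
    history.append({"role": "user", "content": prompt})
    return {"chat_history": history}

body = build_chat_history(
    [{"input": "Hi", "output": "Hello! How can I help?"}],
    "What's the weather?",
)
```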
Custom Provider

To use a custom provider, you typically need to:

- Develop your custom provider script: Write a JavaScript or Python file that implements the callApi function (or call_api for Python) to interact with your AI model or API. This function receives the prompt generated by the Trusys platform as an argument, submits it to the AI application, and returns the output. See the examples below.
- Select the custom provider: When configuring the application, choose the language in which you implemented the custom provider.
- Run the tests using the CLI: Run the evaluations using the CLI by providing the path to your custom provider file. See Command-Line Usage.
- JavaScript Example (echoProvider.js)
- Python Example (echoprovider.py)
This example demonstrates a simple JavaScript custom provider that echoes the input prompt.
class EchoProvider {
  // Unique identifier for this provider.
  id() {
    return 'echo';
  }

  // Called by Trusys with each test prompt; returns the application's output.
  async callApi(prompt, context, options) {
    return {
      output: `Echo: ${prompt}`,
    };
  }
}
module.exports = EchoProvider;
This example demonstrates a simple Python custom provider that echoes the input prompt.
def call_api(prompt, options, context):
"""Simple provider that echoes the prompt with a prefix."""
config = options.get("config", {})
prefix = config.get("prefix", "Echo: ")
return {
"output": f"{prefix}{prompt}"
}
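For conversational applications, a custom provider can also consume the _conversation history that Trusys injects (see the note on conversation-based applications below). The exact location of _conversation in the provider arguments is an assumption here; treat this as a sketch, not the canonical implementation:

```python
def call_api(prompt, options, context):
    """Hypothetical conversational provider: forward prior turns plus the new prompt."""
    # Assumption: the injected history is reachable via context["vars"]["_conversation"]
    history = (context or {}).get("vars", {}).get("_conversation", []) or []
    messages = []
    for turn in history:
        messages.append({"role": "user", "content": turn["input"]})
        messages.append({"role": "assistant", "content": turn["output"]})
    messages.append({"role": "user", "content": prompt})
    # A real provider would send `messages` to its model API here;
    # a stub output keeps the sketch runnable.
    return {"output": f"received {len(messages)} messages"}

result = call_api(
    "Next question",
    {},
    {"vars": {"_conversation": [{"input": "Hi", "output": "Hello"}]}},
)
```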
For Conversation-Based Applications

If your custom provider serves a conversational application, Trusys automatically injects an additional variable when using a text conversational prompt: _conversation, which contains the full conversation history up to the current turn. Your custom provider implementation must consume this variable to maintain conversational context. If _conversation is not passed or is ignored, conversational metrics may be incomplete or inaccurate.

Monitoring

This section configures live production monitoring for your AI application. Trusys continuously analyzes real-time inputs and outputs to detect security risks, compliance violations, and performance issues using TRU PULSE.
Connect Application Logs
Select how you wish to connect your application logs to Trusys. This is crucial for data ingestion and analysis:
Folder Method
Input the folder path where your application logs are stored. Select the cloud provider (currently Azure and AWS are supported) and provide the necessary authentication details.
SDK Method
Select this option and choose an existing API key to save the configuration. This API key should be integrated into your application’s project to send logs directly to Trusys. This sets up observability and monitoring hooks automatically for your application.
Python

1. Install the SDK

Install openlit using pip:

pip install openlit

2. Set Environment Variables

Configure the required environment variables for exporting telemetry data:

export OTEL_EXPORTER_OTLP_ENDPOINT="https://otel.trusys.ai"
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=abc123"
export OTEL_SERVICE_NAME="xyz123"

💡 Replace abc123 with your actual API key. Refer to the documentation on “How to Create an API Key” for more details.
💡 Replace xyz123 with the application ID of your application on the Trusys platform.

3. Import the SDK

In your application code, import the openlit SDK:

import openlit

4. Initialize the SDK

Initialize openlit:

openlit.init()

Typescript

1. Install the SDK

Install openlit using npm:

npm install openlit

2. Set Environment Variables

Configure the required environment variables for exporting telemetry data:

export OTEL_EXPORTER_OTLP_ENDPOINT="https://otel.trusys.ai"
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=abc123"
export OTEL_SERVICE_NAME="xyz123"

💡 Replace abc123 with your actual API key. Refer to the documentation on “How to Create an API Key” for more details.
💡 Replace xyz123 with the application ID of your application on the Trusys platform.

3. Import the SDK

In your application code, import the openlit SDK:

import Openlit from "openlit"

4. Initialize the SDK

Initialize Openlit:

Openlit.init();
OpenLIT Operator Integration

Trusys supports integration with the OpenLIT Operator, enabling seamless AI observability for your Kubernetes-based applications. This integration allows you to automatically instrument your AI workloads and send telemetry data to Trusys for comprehensive evaluation and monitoring.

The OpenLIT Operator provides automated instrumentation for AI applications running in Kubernetes environments. It enables zero-code observability by automatically injecting OpenTelemetry instrumentation into your pods, capturing traces and metrics from LLM calls, AI frameworks, and vector databases without requiring any code modifications.
Getting Started
Prerequisites
- Kubernetes cluster with OpenLIT Operator installed
- Trusys account with monitoring enabled
- OTLP endpoint URL from your Trusys application settings
Configuration Steps
- Install OpenLIT Operator in your Kubernetes cluster using Helm
- Create AutoInstrumentation resource targeting your application pods using label selectors
- Configure OTLP endpoint to point to your Trusys monitoring endpoint
- Restart your pods to enable automatic instrumentation
- View traces in Trusys dashboard under the Traces section
Example Configuration
Configure your AutoInstrumentation resource to send data to Trusys:

apiVersion: openlit.io/v1alpha1
kind: AutoInstrumentation
spec:
selector:
matchLabels:
app: your-ai-application
otlp:
endpoint: "https://your-trusys-endpoint:4318"
headers:
- name: "Authorization"
value: "Bearer YOUR_TRUSYS_API_KEY"
Replace your-trusys-endpoint and YOUR_TRUSYS_API_KEY with your actual Trusys monitoring endpoint and API key.

Define Collection Settings
Configure how Trusys collects data from your logs:
- Sampling Frequency – Choose between percentage-wise or count-wise sampling.
- Enter Percentage or Count – Specify the exact percentage of logs to sample (e.g., 10% of logs) or the number of logs to sample (e.g., 100 logs).
By default, sampling is performed and evaluated every hour.
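The difference between the two sampling modes can be pictured with a short sketch (sample_logs and its parameters are illustrative only, not Trusys’s sampler):

```python
import random

def sample_logs(logs, mode, value, seed=None):
    """Illustrative: percentage-wise vs count-wise sampling of an hourly batch."""
    rng = random.Random(seed)
    if mode == "percentage":
        k = max(1, round(len(logs) * value / 100))  # e.g. 10 -> 10% of logs
    else:  # count-wise
        k = min(value, len(logs))                   # e.g. 100 -> 100 logs
    return rng.sample(logs, k)

hourly_batch = [f"log-{i}" for i in range(1000)]
pct_sample = sample_logs(hourly_batch, "percentage", 10, seed=0)
cnt_sample = sample_logs(hourly_batch, "count", 100, seed=0)
```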
Define Functional Monitoring Metrics
Select the functional metrics against which you want to monitor your production logs and define their respective expected values. These metrics assess the performance and accuracy of your AI application in real time.

Session Tracking for Conversational Applications

For applications that maintain user sessions (such as chatbots, virtual assistants, or multi-turn conversational interfaces), implement session tracking to enable comprehensive evaluation and monitoring of complete user journeys.
Evaluation is performed on traces with the same application.id, and sessions are identified by grouping traces that share a session.id attribute. The end of a session is explicitly marked with the ended status.

On every request:

from opentelemetry import trace

span = trace.get_current_span()
span.set_attribute("session.id", "<user_session_id>")

On session end:

from opentelemetry import trace

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("session.end") as span:
    span.set_attribute("session.id", "<user_session_id>")
    span.set_attribute("session.status", "ended")
Upon successfully enabling monitoring, you will begin to see traces from your application appear in the Traces section, detailed evaluation results for each log, and a Monitoring Dashboard providing an overview of your application’s health, functional metric evaluations, and security evaluations.
Guardrails

TRUGUARD allows you to define input and output guardrails for your application by combining validators with configurable enforcement actions. This ensures that unsafe, non-compliant, or undesired content is handled consistently before it reaches end users or downstream systems.

To configure TRUGUARD, first select one or more validators, then define how the system should respond when a validation fails.
Select Validators

TRUGUARD currently supports the validators listed below. Each validator can be enabled independently for input and output guardrails.
Content Moderation

The Content Moderation validator detects unsafe or policy-violating content such as hate speech, violence, sexual content, or self-harm using predefined safety categories.

Configuration Parameters

- threshold – Confidence score required to flag unsafe content (0 to 1). Default: 0
- categories – Specific safety categories to validate against. Supported categories:
- Violent Crimes
- Non-Violent Crimes
- Sex Crimes
- Child Exploitation
- Defamation
- Specialized Advice
- Privacy
- Intellectual Property
- Indiscriminate Weapons
- Hate
- Self-Harm
- Sexual Content
- Elections
- Code Interpreter Abuse
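The threshold and categories parameters interact roughly as sketched below (flag_unsafe and its score input are hypothetical; the real validator’s scoring is internal to Trusys):

```python
def flag_unsafe(category_scores, threshold=0.0, categories=None):
    """Hypothetical sketch: flag categories whose confidence meets the threshold."""
    selected = categories if categories is not None else list(category_scores)
    return sorted(c for c in selected if category_scores.get(c, 0.0) >= threshold)

scores = {"Hate": 0.91, "Privacy": 0.12}
flagged = flag_unsafe(scores, threshold=0.5)                          # only the high-confidence category
scoped = flag_unsafe(scores, threshold=0.5, categories=["Privacy"])   # restricted scope, nothing flagged
```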
Prompt Injection

The Prompt Injection validator detects attempts to manipulate system instructions, override safeguards, or extract hidden prompts through malicious user input. This helps protect against jailbreaks, role confusion, and instruction hijacking.

Configuration Parameters

- threshold – Confidence score required to flag a prompt injection attempt. Default: 0
PII Detection

The PII Detection validator identifies Personally Identifiable Information (PII) in inputs or outputs to prevent sensitive data exposure and support privacy compliance.

Configuration Parameters

- entities – List of PII entity types to detect (for example, names, phone numbers, email addresses, IDs).
- threshold – Confidence score required to flag detected PII. Default: 0.5
Regex

The Regex validator detects patterns in text using regular expressions. It is useful for identifying forbidden formats, sensitive tokens, secrets, or enforcing strict input/output structures.

Configuration Parameters

- pattern – One or more regular expression patterns to validate against.
- match_type – Defines how matches are interpreted:
  - blacklist – Fails validation when the pattern is matched (default)
  - whitelist – Passes validation only when the pattern is matched
- flags – Regex flags controlling matching behavior (for example, ignore case or multiline). By default, matching is case-sensitive and single-line.
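The blacklist/whitelist semantics can be sketched as follows (regex_validate is illustrative, not Trusys’s implementation):

```python
import re

def regex_validate(text, patterns, match_type="blacklist", flags=0):
    """Illustrative: blacklist fails on a match, whitelist passes only on a match."""
    matched = any(re.search(p, text, flags) for p in patterns)
    return not matched if match_type == "blacklist" else matched

ssn_pattern = [r"\d{3}-\d{2}-\d{4}"]
passes = regex_validate("order #123", ssn_pattern)       # no SSN-like token: passes
fails = regex_validate("SSN 123-45-6789", ssn_pattern)   # a match fails the blacklist
```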
Blocklist

The Blocklist validator flags predefined words or phrases that should not appear in inputs or outputs. It is commonly used to block profanity, restricted keywords, or sensitive terms.

Configuration Parameters

- blocklist – List of words or phrases to be flagged.
- Case Sensitivity – Whether matching respects letter case. Default: false
- Enable Fuzziness – Enables approximate matching to detect obfuscated or misspelled terms. Default: false
- Fuzziness – Maximum allowed Levenshtein distance when fuzzy matching is enabled. Default: 1
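Fuzzy blocklist matching with a Levenshtein budget can be sketched like this (blocklist_hit is illustrative, not Trusys’s matcher):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]

def blocklist_hit(text, blocklist, case_sensitive=False,
                  enable_fuzziness=False, fuzziness=1):
    """Illustrative: flag a word matching a blocked term exactly or within `fuzziness` edits."""
    words = text.split() if case_sensitive else text.lower().split()
    terms = blocklist if case_sensitive else [t.lower() for t in blocklist]
    return any(
        w == t or (enable_fuzziness and levenshtein(w, t) <= fuzziness)
        for w in words for t in terms
    )

fuzzy_hit = blocklist_hit("This is f0rbidden", ["forbidden"], enable_fuzziness=True)
exact_miss = blocklist_hit("This is f0rbidden", ["forbidden"])
```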
Validators can be applied at two points:

- Input guardrails – Applied before the model processes the request
- Output guardrails – Applied before the response is returned to the user
Configure On-Fail Actions
For every enabled validator, users must define an On-Fail Action. This determines what Trusys should do when a validation fails.

Available On-Fail Actions

- Fix – Clean the content: Trusys replaces the detected PII with the corresponding entity type. For example, John Doe → <PERSON>
- Filter – Trusys replaces the unsafe portion with a safe alternative.
- Mask – Trusys masks the unsafe portion.
- Encrypt – Trusys encrypts the detected PII so it is not exposed in plaintext.
- Fail – Trusys suppresses the response entirely to protect end users or the AI system.
- Ignore but log the issue – The response is returned as-is, but the failure is logged for auditing and analysis.
Run Guardrails in Your Application
Once your guardrail configurations are set in the Trusys portal, running TruGuard in your application takes just three steps. Once initialized, TruGuard is active and ready.
Set Credentials
TruGuard needs credentials to fetch guardrail configurations for your application.
Set the following environment variables (these values are available in the Trusys portal under your application details):
export TRUSYS_APPLICATION_ID=app_xxx
export TRUSYS_API_KEY=sk_xxx
Initialize TruGuard
This is the startup step that activates TruGuard. Call truguard.init() once when your application boots (for example, in main.py, app startup, or worker initialization):

from trusys import truguard

truguard.init()

What happens during initialization:
- TruGuard fetches input and output guardrail configurations from Trusys
- Configurations are cached in memory
- A background refresh loop keeps configs up to date
- No redeploy is required when guardrails change
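The cache-and-refresh behavior described above follows a common pattern; the generic sketch below is not TruGuard’s internals, just an illustration of it:

```python
import threading
import time

class ConfigCache:
    """Illustrative: fetch config at startup, then refresh it on a background loop."""

    def __init__(self, fetch, interval=30.0):
        self._fetch = fetch
        self._lock = threading.Lock()
        self._config = fetch()  # initial fetch when the application boots
        t = threading.Thread(target=self._refresh_loop, args=(interval,), daemon=True)
        t.start()

    def _refresh_loop(self, interval):
        while True:
            time.sleep(interval)
            new_config = self._fetch()
            with self._lock:
                self._config = new_config  # picked up without a redeploy

    def get(self):
        with self._lock:
            return self._config

calls = []
def fetch_config():
    calls.append(1)
    return {"version": len(calls)}

cache = ConfigCache(fetch_config, interval=0.05)
first = cache.get()["version"]   # 1: fetched at init
time.sleep(0.3)                  # let the background loop refresh a few times
latest = cache.get()["version"]
```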
For more details, see the Developer Doc.
Application List Overview
The Application List offers a centralized view of all AI applications and LLM models linked to your project. This list allows you to quickly access key information, including:

- Functional Evaluations – Count of performance tests
- Security Evaluations – Count of security audits
- Production Monitoring – Indicates whether the app is currently monitored live
Application Details
Clicking a specific application within the Application List takes you to the Application Details page, offering an in-depth view of its configuration, evaluation results, and monitoring settings.

View Functional and Security Evaluation Details

Access a comprehensive list of the functional and security evaluations performed on your application. This includes insights from TRU EVAL (functional performance, accuracy, etc.) and TRU SCOUT (security vulnerabilities, compliance adherence). Click each evaluation in the list to view the report and details of that test run.

Connection Details for Evaluation and Monitoring

This section displays all the parameters and credentials used to connect your AI application or LLM model to Trusys for both evaluation and continuous monitoring.

View Monitoring Settings

This part of the Application Details page outlines the specific configurations for live monitoring of your production application. It includes details such as:

- Sampling Rates – Frequency of data collection.
- Monitored Metrics – PII leak, hallucination, and specific AI performance indicators.
- Monitored Security Categories – Hate speech, data leakage, adversarial attacks.