This section explains how to connect, review, and monitor your AI applications and LLM models within the Trusys platform.

1. Connect an AI Application

Navigate to the Applications tab and click New Application to add your AI application or LLM model.
Step 1: Add application details

Enter the following details:
  • Application Name – e.g., MyUniqueApp
  • Description – A short summary for clarity, e.g., “Streamlines project management by integrating task tracking, team collaboration, and progress reporting.”
  • Use Case (optional) – e.g., “Ideal for teams focused on productivity and real-time updates.”
Step 2: Choose connection type

If you’re working with an LLM Model:
  • Select Model – Pick your LLM from available options
  • Authentication – Supply credentials to securely connect
If you’re using a Custom AI Application:
  • Select the Custom Provider Type
  • Choose the Request Type (e.g., GET, POST)
  • Enter the Base URL of the application’s API
  • Provide API authentication details for secure communication
  • Define how to transform response data for compatibility with Trusys’s systems
Mandatory: The API request must contain the {{prompt}} variable, as Trusys passes all test cases through this placeholder. Without {{prompt}}, evaluations and monitoring will not function correctly.
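For example, if your custom application expects a POST request with a JSON body, the request template might embed the placeholder like this (a minimal sketch; the model and messages fields are illustrative assumptions, and only the {{prompt}} placeholder itself is required by Trusys):
{
  "model": "my-model",
  "messages": [
    { "role": "user", "content": "{{prompt}}" }
  ]
}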
Prefer the command line? See the Command-Line Usage guide.

Using a Custom Application

To use a custom provider, you typically need to:
  1. Develop your custom provider script: Write a JavaScript or Python file that implements the callApi function (or call_api for Python) to interact with your AI model or API. This function receives the prompt generated by the Trusys platform as an argument, submits it to the AI application, and returns the output. See the examples below.
  2. Select the custom provider: When configuring the application, choose the language in which you have implemented the custom provider.
  3. Run the tests using the CLI: Run the evaluations by providing the path to your custom provider file. See Command-Line Usage.
Below are examples of how to create a simple echo provider in both JavaScript and Python.
  • JavaScript Example (echoProvider.js)
  • Python Example (echoprovider.py)
This example demonstrates a simple JavaScript custom provider that echoes the input prompt.
// echoProvider.js
class EchoProvider {
  // Unique identifier for this provider.
  id() {
    return 'echo';
  }

  // Called by Trusys with each test prompt; returns the provider's output.
  async callApi(prompt, context, options) {
    return {
      output: `Echo: ${prompt}`,
    };
  }
}

module.exports = EchoProvider;
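The Python version below is a minimal sketch of the same echo provider (echoprovider.py); it assumes call_api receives the prompt as its first argument and returns a dictionary with an output key, mirroring the JavaScript example above.
# echoprovider.py
# A minimal Python custom provider that echoes the input prompt.
# Assumption: call_api receives the prompt as its first argument and
# returns a dictionary with an "output" key, mirroring callApi above.
def call_api(prompt, options=None, context=None):
    return {"output": f"Echo: {prompt}"}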

2. Application List Overview

The Application List offers a centralized view of all AI applications and LLM models linked to your project. This list allows you to quickly access key information, including:
  • Functional Evaluations – Count of performance tests
  • Security Evaluations – Count of security audits
  • Production Monitoring – Indicates whether the app is currently monitored live

3. Application Details

Clicking on a specific application within the Application List will take you to the Application Details page, offering an in-depth view of its configuration, evaluation results, and monitoring settings.

Connection Details for Evaluation and Monitoring

This section displays all the parameters and credentials used to connect your AI application or LLM model to Trusys for both evaluation and continuous monitoring.

View Functional and Security Evaluation Details

Access a comprehensive list of the functional and security evaluations performed on your application. This includes insights from TRU EVAL (functional performance, accuracy, etc.) and TRU SCOUT (security vulnerabilities, compliance adherence). Click an entry in the evaluation list to view the report and details of each test run.

View Monitoring Settings

This part of the Application Details page outlines the specific configurations for live monitoring of your production application. It includes details such as:
  • Sampling Rate – Frequency of data collection.
  • Monitored Metrics – PII leak, hallucination, and specific AI performance indicators.
  • Monitored Security Categories – Hate speech, data leakage, adversarial attacks.
These settings ensure Trusys provides continuous, relevant insights into your application’s real-world performance and security posture.

Evaluation

Access results from:
  • TRU EVAL – Functional performance, accuracy, etc.
  • TRU SCOUT – Security vulnerabilities and compliance findings
Each entry links to a detailed test report.

Monitoring

View your live monitoring configuration:
  • Sampling Rate – How often data is captured
  • Monitored Metrics – Metrics like PII leaks, hallucinations, performance indicators
  • Security Categories – Includes hate speech, data leakage, adversarial attacks
These settings ensure continuous awareness of your app’s real-world behavior.