Enabling Monitoring

To set up continuous monitoring for your AI application, navigate to the Applications section, choose the AI application or LLM model you wish to monitor from your list of connected applications, and select Enable Monitoring.
1. Connect Application Logs

Select how you want to connect your application logs to Trusys. This connection is what drives data ingestion and analysis:
  • Cloud storage: Enter the folder path where your application logs are stored, select the cloud provider (currently Azure and AWS are supported), and provide the necessary authentication details.
  • API key: Choose an existing API key and save the configuration. Integrate this API key into your application’s project to send logs directly to Trusys.
1. Install the SDK

Install openlit using pip:
pip install openlit
2. Set Environment Variables

Configure the required environment variables for exporting telemetry data:
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otel.trusys.ai"
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=abc123"
export OTEL_SERVICE_NAME="xyz123"
💡 Replace abc123 with your actual API key; see “How to Create an API Key” in the documentation for details.
💡 Replace xyz123 with the application ID of your application on the Trusys platform.
3. Import the SDK

In your application code, import the openlit SDK:
import openlit
4. Initialize the SDK

Initialize openlit:
openlit.init()
This sets up observability and monitoring hooks automatically for your application.
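Because openlit reads the OTEL_* variables at initialization, they must be set before openlit.init() runs. A small stdlib-only guard like the following (a hypothetical helper, not part of the openlit API) can fail fast on misconfiguration; the endpoint, abc123, and xyz123 values are the same placeholders used above:

```python
import os

# The three variables openlit's exporter relies on (see the steps above).
REQUIRED_VARS = [
    "OTEL_EXPORTER_OTLP_ENDPOINT",
    "OTEL_EXPORTER_OTLP_HEADERS",
    "OTEL_SERVICE_NAME",
]

def check_otel_env() -> list[str]:
    """Return the names of any required OTel variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not os.environ.get(name)]

# Placeholder values, as in the export commands above.
os.environ.setdefault("OTEL_EXPORTER_OTLP_ENDPOINT", "https://otel.trusys.ai")
os.environ.setdefault("OTEL_EXPORTER_OTLP_HEADERS", "x-api-key=abc123")
os.environ.setdefault("OTEL_SERVICE_NAME", "xyz123")

missing = check_otel_env()
assert not missing, f"Missing OTel configuration: {missing}"
# Only now is it safe to call openlit.init().
```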
Trusys supports integration with the OpenLIT Operator, enabling seamless AI observability for your Kubernetes-based applications. This integration lets you automatically instrument your AI workloads and send telemetry data to Trusys for comprehensive evaluation and monitoring.

The OpenLIT Operator provides automated instrumentation for AI applications running in Kubernetes environments. It enables zero-code observability by automatically injecting OpenTelemetry instrumentation into your pods, capturing traces and metrics from LLM calls, AI frameworks, and vector databases without requiring any code modifications.

Getting Started

Prerequisites

  • Kubernetes cluster with OpenLIT Operator installed
  • Trusys account with monitoring enabled
  • OTLP endpoint URL from your Trusys application settings

Configuration Steps

  1. Install OpenLIT Operator in your Kubernetes cluster using Helm
  2. Create AutoInstrumentation resource targeting your application pods using label selectors
  3. Configure OTLP endpoint to point to your Trusys monitoring endpoint
  4. Restart your pods to enable automatic instrumentation
  5. View traces in Trusys dashboard under the Traces section

Example Configuration

Configure your AutoInstrumentation resource to send data to Trusys:
apiVersion: openlit.io/v1alpha1
kind: AutoInstrumentation
metadata:
  name: trusys-instrumentation   # any valid resource name
spec:
  selector:
    matchLabels:
      app: your-ai-application
  otlp:
    endpoint: "https://your-trusys-endpoint:4318"
    headers:
      - name: "Authorization"
        value: "Bearer YOUR_TRUSYS_API_KEY"
Replace your-trusys-endpoint and YOUR_TRUSYS_API_KEY with your actual Trusys monitoring endpoint and API key.
2. Define Collection Settings

Configure how Trusys collects data from your logs:
  • Sampling Frequency – Choose between percentage-wise or count-wise sampling.
  • Enter Percentage or Count – Specify the exact percentage of logs to sample (e.g., 10% of logs) or the number of logs to sample (e.g., 100 logs).
By default, sampling and evaluation run every hour.
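To illustrate the difference between the two sampling modes, here is a conceptual sketch (not Trusys internals) of percentage-wise versus count-wise selection:

```python
import random

def sample_logs(logs, mode, value, seed=None):
    """Select a subset of logs either percentage-wise or count-wise.

    mode="percent" takes value% of the logs; mode="count" takes up to
    value logs. An optional seed makes the selection reproducible.
    """
    rng = random.Random(seed)
    if mode == "percent":
        k = round(len(logs) * value / 100)
    elif mode == "count":
        k = min(value, len(logs))
    else:
        raise ValueError(f"unknown sampling mode: {mode}")
    return rng.sample(logs, k)

logs = [f"log-{i}" for i in range(1000)]
print(len(sample_logs(logs, "percent", 10)))  # 100 (10% of 1000 logs)
print(len(sample_logs(logs, "count", 100)))   # 100 logs
```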
3. Define Functional Monitoring Metrics

Select the functional metrics to monitor your production logs against, and define their expected values. These metrics assess the performance and accuracy of your AI application in real time.
4. Define Security Monitoring Metrics

Select the vulnerability categories you wish to monitor for your application. This ensures continuous vigilance against potential security threats and compliance breaches.
Session Tracking for Conversational Applications

For applications that maintain user sessions (such as chatbots, virtual assistants, or multi-turn conversational interfaces), implement session tracking to enable comprehensive evaluation and monitoring of complete user journeys. Evaluation is performed on traces with the same application.id, and sessions are identified by grouping traces that share a session.id attribute. The end of a session is explicitly marked with the ended status.

On every request:
from opentelemetry import trace
span = trace.get_current_span()
span.set_attribute("session.id", "<user_session_id>")
On session end:
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("session.end") as span:
    span.set_attribute("session.id", "<user_session_id>")
    span.set_attribute("session.status", "ended")
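Conceptually, the grouping described above — traces with the same session.id form one session, and the session is considered closed once any trace carries session.status set to "ended" — can be sketched in plain Python (an illustration of the grouping logic, not Trusys internals):

```python
from collections import defaultdict

def group_sessions(traces):
    """Group trace attribute dicts by session.id and flag ended sessions.

    Each trace is modeled as a dict of span attributes; a session is
    considered ended once any of its traces has session.status == "ended".
    """
    sessions = defaultdict(lambda: {"traces": [], "ended": False})
    for t in traces:
        sid = t.get("session.id")
        if sid is None:
            continue  # traces without a session id are not grouped
        sessions[sid]["traces"].append(t)
        if t.get("session.status") == "ended":
            sessions[sid]["ended"] = True
    return dict(sessions)

traces = [
    {"session.id": "s1", "prompt": "hi"},
    {"session.id": "s1", "prompt": "bye"},
    {"session.id": "s1", "session.status": "ended"},
    {"session.id": "s2", "prompt": "hello"},
]
result = group_sessions(traces)
print(result["s1"]["ended"], result["s2"]["ended"])  # True False
```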
Upon successfully enabling monitoring, you will begin to see traces from your application appear in the Traces section, detailed evaluation results for each log, and a Monitoring Dashboard providing an overview of your application’s health, functional metric evaluations, and security evaluations.