Boards allow users to create customized dashboards to monitor and analyze AI applications in a way that best fits their workflows.
With Boards, you can combine widgets from different Trusys modules into a single view and collaborate with other members of your workspace.
Boards help teams centralize insights across monitoring, observability, security, evaluation, and guardrails.

Create a Board

To create a new board:
  1. Navigate to Boards from the left navigation menu.
  2. Click Create Board.
  3. Select the Application for which the board will be created.
  4. Enter the following details:
    • Board Title
    • Board Description
  5. Click Create to start building your board.
Once created, you can begin adding widgets to customize the dashboard.
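The creation flow above can be sketched in code. The snippet below is a hypothetical illustration only: the `create_board` helper and its field names (`application`, `title`, `description`) mirror the UI form described here, not any actual Trusys API.

```python
# Hypothetical sketch of the board-creation flow (not a real Trusys API).
import uuid

def create_board(application: str, title: str, description: str = "") -> dict:
    """Build a board record for the given application (illustrative only)."""
    if not application:
        raise ValueError("A board must be created for an application")
    if not title:
        raise ValueError("Board Title is required")
    return {
        "id": str(uuid.uuid4()),
        "application": application,
        "title": title,
        "description": description,
        "widgets": [],  # widgets are added after the board is created
    }

board = create_board(
    "checkout-assistant",
    "Latency Overview",
    "Tracks response times and token usage",
)
```

The empty `widgets` list reflects the flow above: a board starts blank, and widgets are added in a separate step.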

Adding Widgets

Boards are built using widgets, which display metrics, insights, or evaluation results from different Trusys modules. To add widgets:
  1. Click + Add Widget.
  2. Select a widget category.
  3. Choose a widget.
  4. Add it to the board.
  5. Resize or reposition the widget as needed.
Widgets can be resized and rearranged to build a layout that best suits your monitoring needs.
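The add/resize steps can be sketched as operations on a simple grid layout. This is a minimal illustration under assumed names: the `layout` fields (`x`, `y`, `w`, `h`) are a common dashboard-grid convention, not Trusys's actual data model.

```python
# Hypothetical sketch of widget placement on a board grid.
# Layout fields (x, y, w, h) are assumptions, not the real schema.

def add_widget(board: dict, category: str, name: str,
               x: int = 0, y: int = 0, w: int = 4, h: int = 3) -> dict:
    """Append a widget to the board with an initial grid position/size."""
    widget = {"category": category, "name": name,
              "layout": {"x": x, "y": y, "w": w, "h": h}}
    board.setdefault("widgets", []).append(widget)
    return widget

def resize_widget(widget: dict, w: int, h: int) -> None:
    """Resize a widget in place, as step 5 above describes."""
    widget["layout"].update(w=w, h=h)

board = {"title": "Latency Overview", "widgets": []}
ttft = add_widget(board, "Traffic & Latency", "Time to First Token (TTFT)")
resize_widget(ttft, w=6, h=4)
```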

Widget Categories

Widgets are grouped into the following categories:
Health Analysis
  • Current Health Score: Overall application health score derived from evaluation metrics over time.
  • Traces Evaluated: Percentage of traces that passed functional evaluation checks.
  • Sessions Evaluated: Pass vs fail distribution of evaluated user sessions.

Security Monitoring
  • Security Probes: Count of total security probes conducted.
  • Vulnerabilities Detected: Total vulnerabilities identified with severity tagging.
  • Vulnerabilities Over Time: Trend of security vulnerabilities detected across the selected time window.
  • Category-wise Volume Analysis: Distribution of security events across categories such as safety, compliance, and trust.

Functional Monitoring
  • Evaluation Coverage: Volume of evaluated vs non-evaluated sessions over time.
  • Metric Failure Analysis: Highlights metrics contributing most to failures; shows pass and failure volume for each evaluation metric.
  • Metric Volume Analysis: Distribution of metric scores across different metric groups.

Metric Evaluation
  • Metric Call Outcome: Breakdown of call outcomes across evaluation results such as positive, negative, or neutral.
  • Metric Avg Response Time (Pass/Fail): Pass vs fail distribution for response-time-based metrics.

System Health
  • System Health: Overall success vs failure rate of requests across the selected time range.
  • Failed Requests: Total number of failed requests detected during the selected period.
  • Passed Requests: Total number of successfully processed requests.
  • Total Traces: Number of traces captured for observability and debugging.
  • Total Cost: Total inference and execution cost incurred.
  • Total Request Duration: Aggregate execution time across all requests.
  • Total Tokens: Total tokens consumed across all model calls.
  • Avg Cost per Request: Average cost incurred per request.
  • Avg Duration per Request: Average latency per request.
  • Avg Tokens per Request: Average token usage per request.

Traffic & Latency
  • Traces per Time: Volume of traces generated over time to spot traffic trends and spikes.
  • Time to First Token (TTFT): Measures how quickly the model starts responding after request initiation.
  • Time to First Token Distribution: Distribution of TTFT values to understand latency spread.
  • Response Time: End-to-end response latency over time.
  • Total Response Time Distribution: Frequency distribution of total response times.

Agent & Tool Analysis
  • Agent and Tool Calls: Comparison of agent executions vs tool invocations over time.
  • Token Usage: Tokens consumed per day across all agents and tools.
  • Agent Run Breakdown: Execution count split by agent.
  • Tool Call Breakdown: Invocation count split by tool.

Performance Bottlenecks
  • Slowest Prompt Analysis: Identifies prompts with the highest execution time and their evaluation results.
  • Expensive Prompt Analysis: Prompts with the highest cost impact, along with evaluation outcomes.

Response Time Analysis
  • Response Time Analysis by Model: Response time trends broken down by model, including percentile analysis.
  • Response Time Analysis by Agent: Latency trends per agent to identify slow orchestrations.

Token Usage Analysis
  • Token Consumption per Model: Percentage share of tokens consumed by each model.
  • Token Consumption Trend per Model: Token usage trends over time per model.
  • Token Consumption per Agent: Token usage split across agents.
  • Token Consumption Trend per Agent: Token usage over time per agent.
Security Evaluations
  • Security Evaluations Completed: Total number of security evaluation runs executed.
  • Security Evaluations Trend: Trend of security evaluation runs executed over time.
  • Recent Security Evaluations: Latest security evaluation runs with completion status.
  • Total Probes: Total number of security probes executed across evaluations.
  • Probes Passed: Number of probes that passed security validation.
  • Probes Failed: Number of probes that failed security checks.
  • Probe Results Over Time: Trend of passed vs failed probes across days to identify attack spikes.
  • Top Security Control Failures: Security controls or vulnerability checks with the highest failure rates.

Functional Evaluations
  • Functional Evaluations Completed: Total number of functional evaluation runs executed in the selected time range.
  • Functional Evaluations Trend: Trend of functional evaluation runs executed over time.
  • Recent Evaluations: List of the most recent functional evaluation runs with status and metadata.
  • Total Test Cases: Total number of functional test cases evaluated across all runs.
  • Passed Test Cases: Number of test cases that passed functional evaluation.
  • Failed Test Cases: Number of test cases that failed functional evaluation.
  • Test Case Results Over Time: Daily distribution of passed vs failed test cases to track quality trends.
  • Top Metric Failures: Metrics with the highest failure rate, highlighting the weakest functional areas.
Guardrail Request & Validation Overview
  • Guardrails Requests Trend: Tracks total incoming requests vs validations over time to understand guardrail coverage.
  • Total Requests: Total number of requests evaluated by guardrails.
  • Validation Checks: Total number of guardrail validation checks executed.
  • Exceptions: Number and percentage of requests that failed guardrail validation.

Guardrail Performance
  • Guardrail Latency: End-to-end latency introduced by guardrails, split by input and output rails with percentile views.
  • Input Rail Latency Distribution: Distribution of latency added by input guardrails across requests.
  • Output Rail Latency Distribution: Distribution of latency added by output guardrails across requests.

Guardrail Failure & Category Insights
  • Guardrail Exceptions Over Time: Daily trend of guardrail exceptions split by input and output guardrails.
  • Input Validator Exception Analysis: Breakdown of which input validators contribute most to guardrail failures.
  • Output Validator Exception Analysis: Breakdown of which output validators contribute most to guardrail failures.
Widgets depend on the modules enabled for the selected application. If a module is not enabled for that application, widgets from that module will not be available for selection. For example:
  • If Guardrails is not enabled, Guardrail widgets cannot be added.
  • If Security Evaluation is not configured, those widgets will not appear.
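The gating rule above can be expressed as a small filter: a widget category is offered only when its backing module is enabled for the application. The category-to-module mapping below is illustrative, assumed for this sketch rather than taken from Trusys's configuration.

```python
# Sketch of module-gated widget availability (mapping is illustrative).
WIDGET_MODULES = {
    "Guardrail Performance": "guardrails",
    "Security Monitoring": "security_evaluation",
    "System Health": "observability",
}

def available_categories(enabled_modules: set) -> list:
    """Return widget categories whose backing module is enabled."""
    return [category for category, module in WIDGET_MODULES.items()
            if module in enabled_modules]

# Guardrails is not enabled here, so Guardrail widgets are not offered.
cats = available_categories({"observability", "security_evaluation"})
```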

Collaborating on Boards

Boards are collaborative and can be shared with other members of the workspace. You can:
  • Invite users to collaborate on a board
  • Remove users from a board
  • Allow team members to view and interact with the board
This enables teams across engineering, security, and operations to work from a shared monitoring dashboard.

View Your Boards

All boards you create or are invited to appear in the Boards section. From the Boards page you can:
  • View the list of available boards
  • Open and interact with a board
  • Edit existing boards
Use this section to manage your custom dashboards and quickly access the insights that matter most to your team.