{% extends "base.html" %} {% from "components/help_macros.html" import tooltip, help_panel, help_step, help_tip %} {% set active_page = 'metrics' %} {% block title %}Token Usage & Context Analytics - Deep Research System{% endblock %} {% block extra_head %} {% endblock %} {% block content %}

Token Usage & Context Analytics

← Back to Metrics
Time Range:
{% call help_panel('context-how', 'Understanding Context Limits', icon='exclamation-triangle', collapsed=true, dismissible=true) %}
{{ help_step(1, "Context Window", "Every AI model can only process a limited amount of text at once, measured in tokens. This limit is the model's 'context window'.") }} {{ help_step(2, "Truncation", "When your input exceeds the limit, the excess text is cut off (truncated). A high truncation rate means information is being lost.") }} {{ help_step(3, "Prevention", "To avoid truncation, use a larger-context model, reduce your query size, or ask more specific research questions.") }}
{{ help_tip("Color guide: the per-model truncation rate uses Green (<10%) / Orange (10–20%) / Red (>20%). The scatter chart instead colors each request by its context utilization (prompt tokens ÷ context limit): Green circle (<50%) / Amber triangle (50–80%, or utilization unknown) / Red diamond (>80%) / Gray triangle (no context limit reported by the provider). Shapes provide a redundant cue for colorblind users; lower opacity marks requests served by local providers.") }} {% endcall %}

Loading token usage data...

{% endblock %} {% block component_scripts %} {% endblock %}