Using RAG features
The admin console integrates knowledge bases, model configuration, retrieval-augmented chat, and an agent designer. Below is a concise map of where to click and what to configure.
Knowledge bases
Create a knowledge base, upload or import documents, choose chunking and parsing options, and wait for indexing to finish.
Bind an embedding model and an optional rerank model per knowledge base, or fall back to the system defaults, so that retrieval runs against your configured vector store.
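The chunking step above typically splits documents into fixed-size pieces with some overlap before indexing. The console exposes this as parsing options; as a rough illustration of what such a chunker does (a hypothetical sketch, not the console's actual implementation), the function and its parameter names below are assumptions:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into fixed-size chunks with overlap, a common default strategy.

    Overlap keeps context that straddles a chunk boundary retrievable
    from both neighboring chunks.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # how far the window advances each iteration
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece:
            chunks.append(piece)
    return chunks
```

Smaller chunks tend to give more precise retrieval; larger chunks preserve more context per hit. The console's chunking options trade off along the same axis.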
Models & providers
Configure LLM, embedding, rerank, and other model providers under system settings. Many OpenAI-compatible endpoints and local runtimes (e.g., Ollama) are supported through a provider registry pattern.
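The registry pattern mentioned above usually means provider factories are registered under a name and looked up at configuration time. A minimal sketch of that pattern (all names and the dict-shaped "clients" here are illustrative assumptions, not the console's real API):

```python
from typing import Callable

# name -> factory; new providers register themselves here
PROVIDER_REGISTRY: dict[str, Callable] = {}

def register_provider(name: str):
    """Decorator that registers a provider factory under a name."""
    def wrap(factory):
        PROVIDER_REGISTRY[name] = factory
        return factory
    return wrap

@register_provider("openai-compatible")
def make_openai_client(base_url: str, api_key: str):
    # Stand-in for constructing a real OpenAI-compatible client.
    return {"base_url": base_url, "api_key": api_key}

@register_provider("ollama")
def make_ollama_client(base_url: str = "http://localhost:11434", api_key: str = ""):
    # Ollama's default local port is 11434; no API key is required.
    return {"base_url": base_url, "api_key": api_key}

def get_provider(name: str, **kwargs):
    """Look up a registered factory by name and build a client."""
    return PROVIDER_REGISTRY[name](**kwargs)
```

Adding a new backend then only requires registering one more factory; no dispatch code elsewhere has to change, which is the main appeal of the pattern.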
Conversational RAG
Start a RAG conversation from the chat UI: select knowledge bases, ask questions, and review inline citations that map to retrieved chunks.
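Under the hood, a conversational RAG turn retrieves top-ranked chunks and numbers them in the prompt so the model can emit citations that map back to sources. The sketch below uses naive word-overlap scoring as a stand-in for real vector search (both function names and the prompt wording are hypothetical):

```python
def retrieve(query: str, chunks: list[str], k: int = 2) -> list[tuple[int, str]]:
    """Rank chunks by word overlap with the query (stand-in for vector search)."""
    qwords = set(query.lower().split())
    scored = sorted(
        enumerate(chunks),
        key=lambda pair: len(qwords & set(pair[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, hits: list[tuple[int, str]]) -> str:
    """Number each retrieved chunk so the model can cite it as [n]."""
    context = "\n".join(f"[{i}] {text}" for i, text in hits)
    return f"Answer using the sources below; cite them as [n].\n{context}\n\nQ: {query}"
```

The inline citations you review in the chat UI correspond to these chunk indices, which is what lets a citation click resolve to a specific retrieved passage.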
Agents & workflows
Use the agent canvas to compose flows from templates or from scratch: Begin, Retrieval, LLM, Message, branches, HTTP tools, and more. Save versions and run them against user queries.
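Conceptually, a linear canvas flow is a sequence of nodes that each read and extend a shared state. A minimal sketch of that execution model, with toy node implementations standing in for the real Begin/Retrieval/LLM/Message components (everything here is illustrative, not the designer's actual runtime):

```python
from typing import Callable

def run_flow(steps: list[Callable[[dict], dict]], state: dict) -> dict:
    """Run each node in order, threading a shared state dict through the flow."""
    for step in steps:
        state = step(state)
    return state

# Hypothetical node implementations for illustration only.
def begin(state):
    return {**state, "query": state["user_input"]}

def retrieval(state):
    return {**state, "chunks": ["retrieved chunk"]}  # would call the KB here

def llm(state):
    return {**state, "answer": f"Answer to: {state['query']}"}  # would call the model

def message(state):
    return {**state, "output": state["answer"]}  # formats the final reply
```

Branch nodes would extend this by choosing the next step from the state instead of always advancing linearly.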
Users & permissions
JWT authentication, dynamic routes, and Casbin-backed authorization keep menu visibility and API access aligned with user roles; reuse this setup for multi-tenant or internal deployments.
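The role-based checks above follow the (subject, object, action) shape that Casbin policies use. A dependency-free sketch of that idea, showing how one policy table can drive both API enforcement and menu visibility (the policy entries, routes, and menu names are made-up examples, not the console's real configuration):

```python
# Role-to-permission policy, in the spirit of a Casbin (sub, obj, act) model.
POLICY = {
    ("admin", "/api/kb", "write"),
    ("admin", "/api/kb", "read"),
    ("viewer", "/api/kb", "read"),
}

def enforce(role: str, resource: str, action: str) -> bool:
    """Allow the request only if an exact policy entry exists."""
    return (role, resource, action) in POLICY

def visible_menus(role: str) -> list[str]:
    """Show a menu only when the role can at least read its backing API."""
    menus = {"Knowledge bases": "/api/kb"}
    return [name for name, api in menus.items() if enforce(role, api, "read")]
```

Driving both the UI and the API from one policy source is what keeps menus and endpoints from drifting apart as roles evolve.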