# Changelog

All notable changes to the SmythOS CORE Runtime Engine will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [1.8.0] 2026-02-23

### Observability

- **New: Observability Subsystem** with a full OpenTelemetry (OTel) connector — agent spans, skill propagation, session/workflow tracing, and error tracking now available out of the box
- Sensitive data redaction for OTel logs — new `enableRedaction` option and `redactHeaders` helper to strip secrets and PII from telemetry
- Agent name, team ID, org tier, and org slot added to OTel spans for richer multi-tenant tracing
- Agent.Skill spans propagated via HTTP headers across service boundaries
- Enhanced context previewing in OTel via `prepareContext` method
- Full input/output context logged in OTel spans, including tool arguments and LLM responses
- Improved OTel error tracking with consolidated error reporting and error event listeners
- OTel span hierarchy for conversation sessions fully implemented
- Graceful handling when OTel endpoint is not configured

### LLM & Model Support

- **Abort Controller**: all LLM connectors now support an `abortSignal` parameter and emit a `TLLMEvent.Abort` event — cancellation is first-class
- **Finish reason normalization**: standardized `TLLMFinishReason` enum across all LLM connectors
- **Event emitter standardization**: LLM connectors never throw — errors are always emitted as events
- **Fallback/proxy pattern**: custom model connectors now support a fallback proxy architecture for resilience
- **Structured output**: implemented structured output extraction for Anthropic's latest models; `_debug` and `_error` fields excluded from structured outputs
- **Anthropic**: handle `model_context_window_exceeded` stop reason gracefully
- **Anthropic**: prefill and JSON instructions now only applied to legacy models
- **Anthropic**: support negative values for `temperature` and `top_p`
- **Opus 4.5 / 4.6**: thinking effort parameter support
- **GPT-5.2**: `xhigh` reasoning effort support
- **Gemini 3**: `reasoningEffort` config and `thoughtSignature` attachment for function calling
- **Google AI**: fixed system instruction propagation, message part extraction, and `functionResponse.response` structure
- **Google AI**: fixed multiple tool call logging, tier/cache handling, and image token usage reporting
- **Claude 4**: enabled streaming in Classifier and LLM Assistant components
- **Perplexity**: provide actual API error messages; allow either `frequency_penalty` or `presence_penalty` (not both)
- Flash model family detected via a more generic pattern
- `modelEntryName` exposed for runtime model identification
- `readyPromise` added to `LLMContext` class for safer initialization sequencing
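
As an illustration of the finish-reason normalization, here is a minimal sketch of mapping provider-specific stop reasons onto a shared enum. The enum members and provider strings below are assumptions for illustration, not the actual `TLLMFinishReason` definition:

```typescript
// Sketch only — members and provider strings are illustrative assumptions.
enum TLLMFinishReason {
    Stop = "stop",
    Length = "length",
    ToolUse = "tool_use",
    ContentFilter = "content_filter",
    Error = "error",
}

// Map provider-specific stop reasons onto the shared enum.
function normalizeFinishReason(providerReason: string): TLLMFinishReason {
    switch (providerReason) {
        case "stop":
        case "end_turn": // Anthropic
        case "STOP": // Google AI
            return TLLMFinishReason.Stop;
        case "length":
        case "max_tokens": // Anthropic
        case "MAX_TOKENS": // Google AI
            return TLLMFinishReason.Length;
        case "tool_calls": // OpenAI
        case "tool_use": // Anthropic
            return TLLMFinishReason.ToolUse;
        case "content_filter":
            return TLLMFinishReason.ContentFilter;
        default:
            return TLLMFinishReason.Error;
    }
}
```

Downstream code can then branch on a single enum instead of per-provider string checks.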

### Connectors & Storage

- **New: SQLite Agent Data Connector** — lightweight local agent data persistence using SQLite
- **RAG v2**: embeddings credentials now resolved from either vault or internal config; metadata handling fixed; namespace parsing corrected
- **DataPools v2**: conditional rollout with updated namespace processing and datasource indexer component
- **Pinecone**: `delete namespace` and `delete datasource` operations fixed; constructor params made optional
- **Milvus**: `delete datasource` operation fixed
- **Secret Manager**: fixed secrets fetching flow and managed vault connector; `smythos` set as default prefix
- Legacy namespace IDs resolved correctly
- Vector embedders: legacy OpenAI embedder entries hidden from selection UI
- OAuth2 credentials manager: `scope` field now supported

### Components & Runtime

- **Chat**: fixed attachments being mixed with text file inputs
- **APIEndpoint**: debug message cleaned up to prevent bloating the debug context
- **Sub-Agent**: JSON response mode now supported
- **WebScrape**: `country` proxy option added
- **Search components**: template variables now supported for search location fields
- **ForEach / LogicAnd / Async**: improved debug logging
- Agent variables now resolved before type inference in all component contexts
- `APIEndpoint` and `ServerlessCode` variable resolution fixed
- ConversationHelper: fixed SSE event handler memory leak; errors from `toolsPromise` now propagated correctly
- Maximum tool call limit per session implemented (defaults to `Infinity`)
- `TemplateString` parser now correctly handles falsy values (`0`, `false`, `""`)
- `SMYTH_PATH` now accepts dot-segments (`.`) for watching models from the default location
- Base64 detection no longer relies on data length heuristic
- Empty LLM response errors now include the field name for easier debugging
- Agent cache support added to the Smyth SDK
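
The falsy-value fix in `TemplateString` guards against the classic `value || ""` default, which silently drops `0`, `false`, and `""`. A minimal sketch of the intended behavior, assuming a `{{name}}` placeholder syntax (the real parser and syntax live in the runtime):

```typescript
// Sketch only — illustrates the falsy-value behavior, not the actual parser.
function renderTemplate(template: string, vars: Record<string, unknown>): string {
    return template.replace(/\{\{(\w+)\}\}/g, (_match, name: string) => {
        const value = vars[name];
        // `value || ""` would wrongly drop 0, false, and "" — check for nullish instead.
        return value === undefined || value === null ? "" : String(value);
    });
}
```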

### Code Quality

- OTel class refactored: removed redundant `lastContext` variable, simplified agent data access, cleaned up unused attributes
- `LLMConnector`: corrected ordering of `structuredOutputs` inside `prepareParams()`
- Fixed HookAsync; added support for hookable classes
- Secrets Manager usage example and documentation added

---

## [1.7.43] 2026-01-22

### LLM

- **Event emitter standardization**: LLM connectors now never throw — all errors are emitted as events instead
- **Fallback proxy pattern**: initial implementation of a fallback architecture for custom LLM connectors

### Conversation & Agent

- ConversationHelper: errors from `toolsPromise` are now correctly propagated (previously swallowed)
- OTel: error handler added to OTel class, consolidated error reporting logic

---

## [1.7.42] 2026-01-20

### LLM

- **Abort Controller**: implemented `abortSignal` support and `TLLMEvent.Abort` event across all LLM connectors
- **Finish reason normalization**: introduced `TLLMFinishReason` enum and standardized finish reason values from all connectors
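
The abort flow can be driven from calling code with a standard `AbortController`. A self-contained sketch, assuming an event-emitting connector shape — the `startRequest` stand-in and emitter wiring are illustrative; only the `TLLMEvent.Abort` name comes from this changelog:

```typescript
import { EventEmitter } from "node:events";

// Illustrative stand-in for the real event name constant.
const TLLMEvent = { Abort: "abort" } as const;

// Minimal stand-in for an LLM connector that honors an AbortSignal:
// cancellation surfaces as an event rather than a thrown error.
function startRequest(emitter: EventEmitter, signal: AbortSignal): void {
    signal.addEventListener("abort", () => {
        emitter.emit(TLLMEvent.Abort, signal.reason);
    });
}

const emitter = new EventEmitter();
const controller = new AbortController();
let aborted = false;
emitter.on(TLLMEvent.Abort, () => {
    aborted = true;
});

startRequest(emitter, controller.signal);
controller.abort("user cancelled"); // abort listener fires synchronously
```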

### Observability

- Agent name added to OTel telemetry logs for improved tracking
- OTel error tracking enhanced: error events captured at conversation-level spans

### SDK

- Agent cache support added to the Smyth SDK

---

## [1.7.41] 2026-01-08

### Connectors

- **New: SQLite Agent Data Connector** — lightweight persistent storage for ephemeral and SDK agents

### LLM — Google AI

- Fixed `functionResponse.response` structure for Google AI requests
- Fixed text part extraction from Google AI responses
- Fixed system instruction propagation for Google AI

### Observability

- OTel spans now include session ID and workflow details for richer tracing
- Improved debug logging for `ForEach`, `LogicAnd`, and `Async` components

---

## [1.7.40] 2025-12-04

### LLM & Model Support

- **GPT-5.2**: `xhigh` reasoning effort level support
- **Claude 4**: streaming enabled for Classifier and LLM Assistant components
- Flash model family (Gemini) now detected via generic pattern — no need for explicit model listing
- **Gemini**: fixed multiple-tool-call logging; fixed infinite tool call loop
- Maximum tool call limit per session (`_maxToolCallsPerSession`), defaults to `Infinity`
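
The per-session tool call limit can be pictured as a simple counter guard. A sketch, assuming a budget object — the class and method names here are illustrative; only `_maxToolCallsPerSession` and the `Infinity` default come from this changelog:

```typescript
// Sketch only — illustrative guard, not the SRE implementation.
class ToolCallBudget {
    private calls = 0;

    constructor(private readonly _maxToolCallsPerSession: number = Infinity) {}

    // Returns true while the session is still under its tool-call budget.
    tryConsume(): boolean {
        if (this.calls >= this._maxToolCallsPerSession) return false;
        this.calls++;
        return true;
    }
}
```

With the default of `Infinity`, the guard never trips; setting a finite limit caps runaway tool loops.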

### Observability

- OTel spans now include `orgTier` and `orgSlot` attributes for multi-tenant tracking
- OTel: Agent.Skill spans now propagated via HTTP headers across service boundaries
- Team ID added to OTel spans
- OTel: graceful handling when no endpoint is configured

### Connectors & Storage

- **Secret Manager**: `smythos` set as default secret prefix
- **RAG v2** (work-in-progress): namespace parsing fixes for NKV, improved embeddings credentials resolution
- Legacy namespace IDs resolved correctly

### Components & Runtime

- **TemplateString** parser: correctly handles falsy values (`0`, `false`, `""`)
- **Sub-Agent component**: JSON response mode now supported
- **WebScrape**: `country` proxy option added
- **Search components**: template variables supported for search location fields
- `modelEntryName` property exposed on LLM connectors for runtime model identification
- LLM response event handling improved

### Documentation

- Secrets Manager example and documentation added

---

## [1.7.20] 2025-11-26

### Runtime

- Agent variables are now resolved before performing type inference (fixes incorrect type coercion)
- Empty LLM response errors now include the field name for easier debugging
- Base64 detection: removed unreliable data-length heuristic

### Configuration

- `SMYTH_PATH` now accepts dot-segments (`.`) to watch models from the default location
- OTel output logging added for LLM responses

---

## [1.7.18] 2025-11-19

### LLM — Google AI / Gemini

- Google AI: tier and cache now handled correctly per-request
- **Gemini 3**: `reasoningEffort` config support
- **Gemini 3**: `thoughtSignature` attachment for function calling (required by the Gemini 3 API)

### Connectors & Storage

- **RAG v2** (WIP): embeddings credentials resolved from either vault or internal config; metadata fix
- **Pinecone**: constructor parameters made optional
- Vector embedders: legacy OpenAI embedder entries hidden from the selection UI

---

## [1.7.15] 2025-11-13

### Observability

- **New: Observability Subsystem** — OpenTelemetry (OTel) connector added to `@smythos/sre`
- OTel spans cover agent execution, LLM calls, skill invocations, and error events
- OTel connector hotfixes applied shortly after initial rollout

### Connectors

- **Pinecone**: fixed `delete namespace` and `delete datasource` operations
- **Milvus**: fixed `delete datasource` operation
- **DataPools v2**: datasource indexer component work-in-progress

### Runtime

- `APIEndpoint` and `ServerlessCode` component: agent variable resolution fixed
- `HookAsync`: fixed; hookable class support added

---

## [1.7.9] 2025-11-09

- Fixed edge-case issues with SRE core initialization
- Added support for custom `chunkSize` and `chunkOverlap` for VectorDB embeddings
- Normalized the embeddings parameters for VectorDB connectors
- `JSONVaultConnector` now detects a missing vault and prompts the user to create it
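
Custom `chunkSize` and `chunkOverlap` control how text is split before embedding. A character-based sketch of the idea — the splitter below is an illustrative assumption, not the SRE implementation:

```typescript
// Sketch only — parameter names come from the changelog; logic is illustrative.
function chunkText(text: string, chunkSize: number, chunkOverlap: number): string[] {
    if (chunkOverlap >= chunkSize) throw new Error("chunkOverlap must be smaller than chunkSize");
    const chunks: string[] = [];
    const step = chunkSize - chunkOverlap; // each chunk starts `step` chars after the previous
    for (let start = 0; start < text.length; start += step) {
        chunks.push(text.slice(start, start + chunkSize));
        if (start + chunkSize >= text.length) break; // last chunk reached the end
    }
    return chunks;
}
```

Larger overlaps preserve more context across chunk boundaries at the cost of more embeddings.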

---

## [1.7.7] 2025-11-08

### Runtime

- Hotfix: SRE core initialization race condition with `ConnectorService` global instances
- VectorDB connector global instance handling stabilized

---

## [1.7.4] 2025-11-06

### LLM

- Custom models: fixed resolution in the SDK
- Fallback model: parameters are now correctly filtered before the fallback call
- `TLLMParams` split into more granular types for improved readability and type safety

### Runtime

- Global variable fixes across multiple components

---

## [1.7.2] 2025-11-04

### Agent & Conversation

- `agentData` added to Conversation prompt hooks for richer hook context
- `getOpenAPIJSON()` function tweaks

### Components

- `BinaryInput`: handle missing MIME type when asset is loaded from a URL
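
The missing-MIME-type fallback for URL-loaded assets can be sketched as extension-based inference. The mapping table and function below are illustrative assumptions, not `BinaryInput` internals:

```typescript
// Sketch only — illustrative fallback, not the actual BinaryInput logic.
const EXTENSION_MIME: Record<string, string> = {
    ".png": "image/png",
    ".jpg": "image/jpeg",
    ".pdf": "application/pdf",
    ".txt": "text/plain",
};

function guessMimeType(url: string, declared?: string): string {
    if (declared) return declared; // trust an explicitly declared MIME type
    const match = new URL(url).pathname.match(/\.[a-z0-9]+$/i);
    const ext = match ? match[0].toLowerCase() : "";
    return EXTENSION_MIME[ext] ?? "application/octet-stream"; // safe generic default
}
```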

### Connectors

- **Pinecone**: fallback to default metadata when retrieving a datasource that lacks metadata

### Documentation

- LocalCache connector documentation added

---

## [1.7.1] 2025-10-30

- Core structures for triggers
- Added Scheduler connector
- Added advanced SRE hooks (Aspect-Oriented Programming pattern to monitor and interact with internal SRE calls from outside)
- SRE core is now accessible from the SDK package through `@smythos/sdk/core`
- Implemented OAuth2 credentials manager for SRE (this will become the standard wrapper for handling all OAuth2 credentials)
- `APIEndpoint` now supports a custom code process (harmonizing SDK and SRE)
- Multiple fixes for the conversation manager
- Fixed Gemini LLM infinite tool call loop
- Added support for local LLM credentials
- Added a `killReason` message when an SRE agent is killed
- `AgentDataConnector` now handles ephemeral agent data (for SDK agents)
- Updated Milvus data format to match the latest Milvus SDK release

---

## [1.6.13] 2025-10-17

### LLM & Models

- **Google AI**: fixed content structure for requests to prevent infinite function call loops
- **GPT-5 family**: PDF attachment support added
- Custom LLM credential resolution from vault keys
- Token limit validation now applies to legacy models only (lifted for newer models)
- `@google/generative-ai` dependency removed; fully migrated to `@google/genai`

### Connectors & Runtime

- **Electron**: enhanced support; fixed incorrect vault search directory display
- **OAuth**: vault key resolution for OAuth flows
- SDK: ability to programmatically enable and disable planner mode

### Triggers (experimental)

- Gmail and WhatsApp trigger improvements
- Trigger processing aligned with normal component execution (no input mapping required)
- Scheduler: support for suspending job runs in local mode

---

## [1.6.11] 2025-10-11

### Hooks & Configuration

- **Advanced SRE Hooks** introduced (Aspect-Oriented Programming pattern): monitor and intercept internal SRE calls from outside the runtime
- Hooks added to `Agent` class and `ModelsProviderConnector`
- JSON vault connector improvements and documentation

### Models

- JSON models provider: sanity checks for invalid JSON paths and automatic path auto-search
- Default models path support (`SMYTH_PATH` env variable)
- Models provider hotfix for invalid JSON model resolve conditions

### Triggers (experimental)

- Gmail trigger: experimental email fetch support
- WhatsApp trigger updates
- Conversation manager: `addTool` function tool parser fixed

---

## [1.6.6] 2025-10-02

### Connectors

- **AWS Lambda**: retry logic added for IAM role propagation on first run
- **AWS Lambda**: retry logic added for Lambda function deployment
- User custom models: fetched and resolved from external source

---

## [1.6.1] 2025-09-30

### LLM

- **Ollama**: native connector added with text completion and tool use support
- Fallback model execution implemented for user-configured custom LLMs
- Increased fallback token budget for custom LLM connectors

### Triggers (experimental)

- Initial trigger infrastructure; Gmail trigger experiments

---

## [1.6.0] 2025-09-29

### Features

- Added Memory components
- Added Milvus VectorDB connector
- Added support for Google embeddings
- Added support for the OpenAI Responses API
- Agent runtime optimizations: better memory management and stability fixes

### Code and tooling

- Added multiple unit tests for better code coverage
- Updated dependencies
- Updated `.cursor/rules/sre-ai-rules.mdc` to enhance the quality of AI-based contributions

---

## [1.5.79] 2025-09-22

### Runtime & Configuration

- `SMYTH_PATH` environment variable: define the default `.smyth` directory location
- Default models path support added
- Memory components fixes

### LLM / Models

- `JSONModelProvider`: fixed race condition on model loading; fixed resolve condition for invalid JSON
- SDK Chat: fixed race condition leading to undefined agent team

### VectorDB

- Fixed `vectorDBInstance` not returning texts properly
- Additional embedding models supported for Google Gemini
- VectorDB documentation added

### MCP

- `MCPClient`: deprecated settings marked as optional
- Sanity check added for duplicate tool definitions in the Conversation manager
- MCP logs improved

### Fixes

- `APICall`: oAuth hotfix
- `OpenAI` LLM: fixed non-streaming requests via Responses API
- Debug data no longer missing in certain edge cases

---

## [v1.5.60]

### Features

- Fixed memory leak in Agent context manager
- Optimized performances and resolved a rare case causing CPU usage spikes

---

## [v1.5.50]

### Features

- Added support for the OpenAI Responses API
- Added support for GPT-5 family models with reasoning capabilities
- MCP Client component: support for Streamable HTTP transport

---

## [v1.5.31]

### LLM & Model Support

- Added support for new models (Claude 4, xAI/Grok 4, and more)
- Improved model configuration, including support for unlisted/custom models
- Better handling of Anthropic tool calling
- Enhanced multimodal and streaming capabilities for LLMs

### Components & Connectors

- Introduced AWS Lambda code component and connector
- Added serverless code component
- Enhanced and unified connectors for S3, Redis, LocalStorage, and JSON vault
- Added support for local storage cache and improved NKV (key-value) handling

### Fixes

- Numerous bug fixes for LLM connectors, model selection, and streaming
- Fixed issues with S3 connector initialization, the serverless code component, and vault key fetching
- Improved error handling for binary input, file uploads, and API calls
- Fixed issues with usage reporting, especially for user-managed keys and custom models

### Improvements

- Optimized build processes
- Improved strong typing and code auto-completion

---

## [v1.5.0] SmythOS becomes open source!

### Features

- Moved to a monorepo structure
- Implemented an SDK that provides an abstracted interface for all SmythOS components
- Implemented a CLI to help run agents and scaffold SDK and SRE projects

---

## [v1.4.0]

### Features

- New connectors: JSON Account connector, RAMVec VectorDB, and LocalStorage
- Conversation manager: better handling of agent chats
- Logger is now a connector
- Added support for usage reporting
- LLM: new models provider connector allows loading custom models, including local models

---

## [v1.2.0]

### Features

- New connectors: AWS Secrets Manager vault, Redis, and RAM Cache
- Conversation manager: better handling of agent chats
- All connectors inherit from SecureConnector using a common security layer
- LLM: support for Anthropic, Groq, and Gemini

---

## [v1.1.0]

### Features

- New connectors: S3, Pinecone, and local vault
- LLM: implemented a common LLM interface to support more providers

---

## [v1.0.0]

### Features

- Initial release
- LLM: support for the OpenAI API
- Smyth Runtime Core
- Connectors Service
- Subsystems architecture
- Security & ACL helpers
- Implemented services: AgentData, Storage, Account, VectorDB
