vibe.assistant.structures¶
Assistant dataclasses and structures.
This module contains all dataclasses and other code-light structures used by the assistant subsystem, kept in a separate module to avoid circular imports.
This includes:

- Assistant service structures (StreamingResponse, ProviderWithConfig, AssistantTurn)
- Message types (SystemMessage, UserMessage, AssistantMessage, ToolResult)
- Tool definitions (Tool, ToolParameter, ToolCall)
- Message serialization/deserialization helpers
StreamingResponse ¶
Response object for streaming assistant responses.
ProviderWithConfig ¶
Provider instance with its endpoint configuration.
AssistantTurn ¶
Represents a complete assistant turn with context.
ToolParameter ¶
Parameter definition for a tool.
Represents a single parameter in a tool's schema using JSON Schema concepts. Conversion to provider-specific formats happens in provider adaptors.
Tool ¶
Provider-agnostic tool definition.
Represents a function that can be called by the LLM. This is converted to provider-specific formats by each LLMProvider implementation.
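The shape of these two definitions can be sketched as plain dataclasses. The field names below are illustrative assumptions for the sketch, not the module's actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class ToolParameter:
    """One parameter in a tool's schema, in JSON Schema terms (fields assumed)."""
    name: str
    type: str            # JSON Schema type, e.g. "string", "integer"
    description: str
    required: bool = True


@dataclass
class Tool:
    """Provider-agnostic function definition (fields assumed)."""
    name: str
    description: str
    parameters: list[ToolParameter] = field(default_factory=list)


# A provider adaptor would convert this into its own wire format
# (e.g. OpenAI's nested "function" object).
weather = Tool(
    name="get_weather",
    description="Look up current weather for a city.",
    parameters=[ToolParameter("city", "string", "City name")],
)
```

Keeping the definition provider-agnostic means only the adaptors need to know each provider's wire format.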
ToolCall ¶
A tool invocation from the LLM.
This is the internal representation of a tool call. Providers may use different formats in their APIs (e.g., OpenAI has nested function objects).
The thought_signature field is used by Gemini 3 Pro to preserve reasoning context across multi-turn function calling. When present in a response, it must be passed back exactly as received in subsequent requests.
ToolResult ¶
Output from tool execution.
Stores the result of executing a tool that was called by the LLM.
SystemMessage ¶
System instructions for the LLM.
Contains template context, current draft state, available questions, etc. Built by ModelContextManager from template_data and draft_blocks.
UserMessage ¶
User input (text or structured form data).
Can contain either plain text or structured form data from user input. May be marked as auto_reply when the system automatically generates corrective messages.
AssistantMessage ¶
LLM response with optional tool calls.
Contains the text content generated by the LLM and any tool calls it made. The content field can be an empty string when the LLM only makes tool calls.
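A sketch of the tool-call-only case, with assumed field names; the empty content string is intentional, not an error:

```python
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    id: str
    name: str
    arguments: dict


@dataclass
class AssistantMessage:
    """LLM response (fields assumed for this sketch)."""
    content: str = ""
    tool_calls: list[ToolCall] = field(default_factory=list)


# A turn where the LLM only calls a tool: content stays "".
msg = AssistantMessage(
    tool_calls=[ToolCall("c1", "get_weather", {"city": "Oslo"})],
)
```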
message_to_dict ¶
message_to_dict(message: Message) -> dict[str, Any]
Convert any message to dict for storage using asdict().
Runtime type checking ensures message is a valid Message type. Bytes values (e.g., thought_signature) are encoded as base64 for JSON compatibility.
dict_to_message ¶
dict_to_message(data: dict[str, Any]) -> Message
Convert dict (from session storage) back to typed Message object.
Validates the dict structure with Pydantic before constructing dataclass. This ensures session data is well-formed and catches corruption early.
Generic deserialization that reconstructs nested dataclass instances (like ToolCall objects within AssistantMessage.tool_calls).
message_from_dict ¶
message_from_dict(data: dict[str, Any]) -> Message
Deserialize message from dict.
Uses the 'type' discriminator field to determine which message class to use.
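The discriminator pattern can be sketched as a registry keyed on the 'type' field; the registry name, keys, and message fields here are assumptions:

```python
from dataclasses import dataclass


@dataclass
class UserMessage:
    content: str


@dataclass
class AssistantMessage:
    content: str


# Hypothetical registry mapping the 'type' discriminator to a class.
MESSAGE_TYPES = {"user": UserMessage, "assistant": AssistantMessage}


def message_from_dict_sketch(data: dict):
    """Pick the message class via the 'type' field, then construct it."""
    payload = {k: v for k, v in data.items() if k != "type"}
    return MESSAGE_TYPES[data["type"]](**payload)


msg = message_from_dict_sketch({"type": "user", "content": "hi"})
```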
ensure_message ¶
ensure_message(item: object) -> Message
Ensure item is a Message object, converting from dict if needed.
Uses singledispatch for type-based dispatch instead of an isinstance chain. This replaces the common pattern: if the item is a Message, return it; if it is a dict, return dict_to_message(item); otherwise raise TypeError.
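The singledispatch version of that pattern can be sketched as follows; UserMessage stands in for the module's Message union, and the function name is hypothetical:

```python
from dataclasses import dataclass
from functools import singledispatch


@dataclass
class UserMessage:
    """Stand-in for the module's Message union in this sketch."""
    content: str


@singledispatch
def ensure_message_sketch(item: object):
    # Fallback for unregistered types, replacing the trailing `else` branch.
    raise TypeError(f"Cannot convert {type(item).__name__} to Message")


@ensure_message_sketch.register
def _(item: UserMessage):
    return item  # already a Message: pass through unchanged


@ensure_message_sketch.register
def _(item: dict):
    # The real module would call dict_to_message(item) here.
    return UserMessage(**item)


m = ensure_message_sketch({"content": "hi"})
```

Registering a new message type then only requires adding a handler, not editing a central isinstance chain.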
Parameters: item – the object to convert (a Message or a dict).

Returns: the corresponding Message.

Raises: TypeError – if item is neither a Message nor a dict.
ensure_messages ¶
serialize_messages ¶
Serialize list of messages to dicts for session storage.
Runtime type checking ensures messages list contains valid Message types.
deserialize_messages ¶
Deserialize list of messages from session storage.
Each message dict is validated with Pydantic before constructing dataclasses. Runtime type checking ensures input is a list of dicts.
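A round trip through the two helpers can be sketched like this; the runtime checks are shown as plain assertions, and UserMessage with a 'type' field is an assumed stand-in for the real message classes:

```python
from dataclasses import asdict, dataclass


@dataclass
class UserMessage:
    type: str
    content: str


def serialize_messages_sketch(messages: list) -> list[dict]:
    """Sketch: runtime-check the input, then convert each message to a dict."""
    assert all(isinstance(m, UserMessage) for m in messages)
    return [asdict(m) for m in messages]


def deserialize_messages_sketch(data: list[dict]) -> list:
    """Sketch: runtime-check the stored dicts, then rebuild dataclasses.

    The real module additionally validates each dict with Pydantic.
    """
    assert isinstance(data, list) and all(isinstance(d, dict) for d in data)
    return [UserMessage(**d) for d in data]


history = [UserMessage("user", "hello")]
restored = deserialize_messages_sketch(serialize_messages_sketch(history))
```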