vibe.llm_providers.anthropic

Anthropic LLM provider integration.

AnthropicConverter

Converter for Anthropic Claude API format.

Anthropic has specific requirements:

  • System messages passed separately (not in messages array)
  • Messages must alternate between user and assistant roles
  • Tool calls are content blocks within assistant messages
  • Tool results are content blocks within user messages

This converter returns (system_prompt, messages) tuple from convert_all.
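
As a quick illustration of the shape convert_all produces, the tuple elements below are plain dicts matching Anthropic's Messages API; the conversation content itself is invented:

    # The system prompt travels outside the messages array;
    # the remaining messages alternate between user and assistant roles.
    system_prompt = "You are a helpful assistant."
    anthropic_messages = [
        {"role": "user", "content": "Hello"},
        {"role": "assistant", "content": "Hi! How can I help?"},
        {"role": "user", "content": "Summarize this file for me."},
    ]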

convert_system

convert_system(msg: SystemMessage) -> str

Return system content as string (accumulated separately).

convert_user

convert_user(msg: UserMessage) -> dict[str, Any]

Convert user message to Anthropic user role format.

convert_assistant

convert_assistant(msg: AssistantMessage) -> dict[str, Any]

Convert assistant message with optional tool_use content blocks.
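
When the assistant message carries tool calls, they appear as tool_use content blocks alongside any text, per Anthropic's Messages API; the tool name, id, and input below are invented:

    assistant_message = {
        "role": "assistant",
        "content": [
            {"type": "text", "text": "Let me look that up."},
            {
                "type": "tool_use",
                "id": "toolu_01",               # invented id
                "name": "get_weather",          # hypothetical tool
                "input": {"city": "Paris"},
            },
        ],
    }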

convert_tool_result

convert_tool_result(msg: ToolResult) -> dict[str, Any]

Convert tool result to Anthropic tool_result content block in user message.
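
The corresponding tool result becomes a tool_result content block inside a user-role message, referencing the originating tool_use id; the values continue the invented example above:

    tool_result_message = {
        "role": "user",
        "content": [
            {
                "type": "tool_result",
                "tool_use_id": "toolu_01",      # must match the tool_use id
                "content": "18°C, partly cloudy",
            },
        ],
    }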

convert_all

convert_all(messages: list[Message]) -> tuple[str | None, list[dict[str, Any]]]

Convert all messages to Anthropic format.

Returns:
  • tuple[str | None, list[dict[str, Any]]] –

    Tuple of (system_prompt, anthropic_messages) where:

    • system_prompt: Combined system message content or None
    • anthropic_messages: List of messages with alternating roles
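
A minimal usage sketch; it assumes SystemMessage and UserMessage take a content argument and that convert_all is called on a converter instance (their import paths are not documented here, so they are omitted):

    from vibe.llm_providers.anthropic import AnthropicConverter

    converter = AnthropicConverter()
    system_prompt, anthropic_messages = converter.convert_all([
        SystemMessage(content="You are a helpful assistant."),  # assumed constructor
        UserMessage(content="Hello"),                            # assumed constructor
    ])
    # system_prompt       -> "You are a helpful assistant." (None if no system messages)
    # anthropic_messages  -> [{"role": "user", "content": "Hello"}]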

AnthropicProvider

LLMProvider implementation for Anthropic's Claude 3 models.

Configuration options (common - via ProviderConfig):

  • api_key: Your Anthropic API key (required)
  • model: The model name (e.g., "claude-3-sonnet-20240229")
  • max_tokens: The maximum number of tokens to generate (default: 4096)
  • temperature: Controls randomness (0.0 to 1.0, default: 0.7)
  • tools: Enable tool calling (default: True)
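
A configuration sketch; only the option names above are documented here, so the ProviderConfig and AnthropicProvider construction below is an assumption:

    # Hypothetical construction; the real ProviderConfig signature may differ.
    config = ProviderConfig(
        api_key="sk-ant-...",                   # required
        model="claude-3-sonnet-20240229",
        max_tokens=4096,
        temperature=0.7,
        tools=True,
    )
    provider = AnthropicProvider(config)        # assumed constructor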

convert_messages_to_provider_format

convert_messages_to_provider_format(messages: list[Message], tools: list[Tool] | None = None) -> tuple[str | None, list[dict[str, Any]]]

Convert internal messages to Anthropic native format.

Uses AnthropicConverter for clean separation of conversion logic.

Parameters:
  • messages (list[Message]) –

    List of typed Message objects

  • tools (list[Tool] | None, default: None ) –

    Optional list of Tool objects (not used in conversion)

Returns:
  • tuple[str | None, list[dict[str, Any]]] –

    Tuple of (system_prompt, anthropic_messages) where:

    • system_prompt: Combined system message content or None
    • anthropic_messages: List of messages in Anthropic format
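
The returned tuple maps directly onto a call to the official anthropic SDK; a sketch, assuming an already-constructed provider, a messages list, and an installed anthropic package:

    import anthropic

    system_prompt, anthropic_messages = provider.convert_messages_to_provider_format(messages)

    client = anthropic.Anthropic(api_key="sk-ant-...")
    kwargs = {"system": system_prompt} if system_prompt else {}
    response = client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=4096,
        messages=anthropic_messages,
        **kwargs,
    )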

stream_generate

stream_generate(messages: list[Message], sequence_number: int, *, session_id: str, assistant_name: str, endpoint_name: str, turn_id: str, previous_response_id: str | None = None, tool_outputs: list[ToolOutput] | None = None, unanswered_predefined_questions: list[dict[str, Any]] | None = None) -> Generator[StreamChunk, None, None]

Stream LLM responses with tool call support.

Parameters:
  • messages (list[Message]) –

    Conversation history as typed Message objects

  • sequence_number (int) –

    Turn sequence number for conversation ordering

  • session_id (str) –

    Session identifier for logging and correlation

  • assistant_name (str) –

    Assistant display name for logging

  • endpoint_name (str) –

    LLM endpoint identifier for logging and metrics

  • turn_id (str) –

    Unique turn identifier for request/response correlation

  • previous_response_id (str | None, default: None ) –

    Not used by Anthropic provider

  • tool_outputs (list[ToolOutput] | None, default: None ) –

    Not used by Anthropic provider (tool results in message history)

  • unanswered_predefined_questions (list[dict[str, Any]] | None, default: None ) –

    Not used by Anthropic provider

Yields:
  • StreamChunk

    StreamChunk objects containing text deltas or tool call invocations
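
A consumption sketch; the identifier values are invented and StreamChunk's fields are not documented here, so the chunk handling is only illustrative:

    for chunk in provider.stream_generate(
        messages,
        sequence_number=1,
        session_id="session-123",
        assistant_name="assistant",
        endpoint_name="anthropic-default",
        turn_id="turn-001",
    ):
        # Each StreamChunk carries either a text delta or a tool call
        # invocation; inspect it according to its actual fields.
        print(chunk)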

get_capabilities

get_capabilities() -> ProviderCapabilities

Return Anthropic provider capability flags for feature gating.
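
A feature-gating sketch; the flag name checked below is an assumption, not a documented ProviderCapabilities field:

    caps = provider.get_capabilities()
    # `supports_tools` is a hypothetical flag name used for illustration.
    if getattr(caps, "supports_tools", False):
        print("tool calling enabled for this provider")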

get_usage_stats

get_usage_stats() -> UsageStats | None

Return usage statistics from the last API call.
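
A brief usage sketch; UsageStats fields are not documented here, and None is a possible return value:

    stats = provider.get_usage_stats()
    if stats is not None:
        print(stats)    # usage from the last API call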