vibe.llm_providers.gemini

Google Gemini LLM provider integration.

GeminiConverter

Converter for Google Gemini API format.

Uses google.genai.types for SDK-specific Content objects. Defined here rather than in message_converter.py to keep that module free of the google.genai dependency.

convert_system

convert_system(msg: SystemMessage) -> Content | None

Convert system message to Gemini user content (no native system role).

convert_user

convert_user(msg: UserMessage) -> Content | None

Convert user message to Gemini user content with text part.

convert_assistant

convert_assistant(msg: AssistantMessage) -> Content | None

Convert assistant message to Gemini model content with optional FunctionCall parts.

convert_tool_result

convert_tool_result(msg: ToolResult) -> Part

Convert tool result to Gemini FunctionResponse Part.

Note: Returns a Part, not Content. These are batched in convert_all.

convert_all

convert_all(messages: list[Message]) -> list[Content]

Convert all messages to Gemini format with tool result batching.

Tool results are accumulated and flushed as a single user Content.
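The batching behavior can be sketched as follows. This is an illustrative sketch only: the stand-in `Content` dataclass and the dict-shaped messages are assumptions, not the real `Message`/`Part` types from the module or from google.genai.

```python
from dataclasses import dataclass, field

@dataclass
class Content:  # stand-in for google.genai types.Content
    role: str
    parts: list = field(default_factory=list)

def convert_all(messages: list[dict]) -> list[Content]:
    """Convert messages, batching consecutive tool results into one user Content."""
    contents: list[Content] = []
    pending_tool_parts: list[dict] = []

    def flush() -> None:
        # Gemini expects function responses grouped under a single user turn.
        if pending_tool_parts:
            contents.append(Content(role="user", parts=list(pending_tool_parts)))
            pending_tool_parts.clear()

    for msg in messages:
        if msg["type"] == "tool_result":
            # convert_tool_result returns a Part, so accumulate instead of appending
            pending_tool_parts.append({"function_response": msg["payload"]})
        else:
            flush()  # any non-tool message ends the current batch
            role = "model" if msg["type"] == "assistant" else "user"
            contents.append(Content(role=role, parts=[{"text": msg["payload"]}]))
    flush()  # flush trailing tool results
    return contents
```

Two consecutive tool results thus become a single user `Content` with two parts rather than two separate entries in the history.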

GeminiProvider

LLMProvider implementation for Google's Gemini API using the modern google.genai SDK.

Configuration options (common, via ProviderConfig):
  • api_key: Google API key (required)
  • model: Model name (default: "gemini-2.0-flash-exp")
  • temperature: Controls randomness
  • max_tokens: Maximum tokens to generate
  • tools: Enable tool calling (default: True)

Configuration options (Gemini-specific):
  • thinking_budget: Token budget for extended thinking (Gemini 2.0+)
  • thinking_level: Thinking intensity ("low", "medium", "high") for Gemini 3.x
  • include_thoughts: Include thinking content in response (default: True)
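Combining both groups, a provider configuration might look like the sketch below. The key names follow the options listed above, but representing the configuration as a plain dict is an assumption; how ProviderConfig is actually constructed is not shown in this reference.

```python
# Hypothetical configuration values for GeminiProvider; key names match
# the documented options, the concrete values are examples only.
gemini_config = {
    # common options (via ProviderConfig)
    "api_key": "YOUR_GOOGLE_API_KEY",
    "model": "gemini-2.0-flash-exp",
    "temperature": 0.7,
    "max_tokens": 2048,
    "tools": True,
    # Gemini-specific options
    "thinking_budget": 8192,   # token budget for extended thinking (Gemini 2.0+)
    "include_thoughts": True,  # surface thinking content in the response
}
```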

convert_messages_to_provider_format

convert_messages_to_provider_format(messages: list[Message], tools: list[Tool] | None = None) -> list[Any]

Convert internal messages and tools to Google Gemini types.Content format.

Uses GeminiConverter for clean separation of conversion logic.

Parameters:
  • messages (list[Message]) –

    List of typed Message objects

  • tools (list[Tool] | None, default: None ) –

    Optional list of Tool objects (uses self.tools if not provided)

Returns:
  • list[Any]

    List of types.Content objects for Gemini API

stream_generate

stream_generate(messages: list[Message], sequence_number: int, *, session_id: str, assistant_name: str, endpoint_name: str, turn_id: str, previous_response_id: str | None = None, tool_outputs: list[ToolOutput] | None = None, unanswered_predefined_questions: list[dict[str, Any]] | None = None) -> Generator[StreamChunk, None, None]

Stream LLM responses with tool call support.

Parameters:
  • messages (list[Message]) –

    Conversation history as typed Message objects

  • sequence_number (int) –

    Turn sequence number for conversation ordering

  • session_id (str) –

    Session identifier for logging and correlation

  • assistant_name (str) –

    Assistant display name for logging

  • endpoint_name (str) –

    LLM endpoint identifier for logging and metrics

  • turn_id (str) –

    Unique turn identifier for request/response correlation

  • previous_response_id (str | None, default: None ) –

    Not used by Google provider

  • tool_outputs (list[ToolOutput] | None, default: None ) –

    Not used by Google provider (tool results in message history)

  • unanswered_predefined_questions (list[dict[str, Any]] | None, default: None ) –

    Not used by Google provider

Yields:
  • StreamChunk

    StreamChunk objects containing text deltas or tool call invocations
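A caller typically drains the generator, accumulating text deltas and collecting tool invocations. The sketch below models a chunk as a `(kind, value)` tuple; the real StreamChunk fields are internal and not documented here, so this shape is an assumption.

```python
# Sketch of consuming a stream of chunks. fake_stream stands in for
# GeminiProvider.stream_generate; its output format is hypothetical.
def fake_stream():
    yield ("text", "Hello, ")
    yield ("text", "world")
    yield ("tool_call", {"name": "get_weather", "args": {"city": "Oslo"}})

def consume(stream):
    text_parts, tool_calls = [], []
    for kind, value in stream:
        if kind == "text":
            text_parts.append(value)   # accumulate streamed text deltas
        elif kind == "tool_call":
            tool_calls.append(value)   # collect tool call invocations
    return "".join(text_parts), tool_calls

reply, calls = consume(fake_stream())
```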

get_capabilities

get_capabilities() -> ProviderCapabilities

Return Google provider capability flags for feature gating.

get_ui_config_schema

get_ui_config_schema() -> dict[str, ConfigOption]

Return config schema filtered by model family.

Gemini models have different thinking configurations:
  • Gemini 3.x: Use thinking_level (low/high); thinking is mandatory
  • Gemini 2.5 Pro: Use thinking_budget (budget-based); thinking is mandatory
  • Gemini Flash: Use thinking_enabled toggle + thinking_budget (optional)
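The family-based gating above could be sketched as a helper that maps a model name to the thinking options exposed in the schema. Matching on substrings of the model name is an assumption; the real implementation may inspect the model differently.

```python
# Hypothetical model-family check; substring matching is an assumption.
def thinking_options_for(model: str) -> set[str]:
    if "gemini-3" in model:
        return {"thinking_level"}                       # mandatory thinking, level-based
    if "2.5-pro" in model:
        return {"thinking_budget"}                      # mandatory thinking, budget-based
    if "flash" in model:
        return {"thinking_enabled", "thinking_budget"}  # optional toggle + budget
    return set()
```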

get_usage_stats

get_usage_stats() -> UsageStats | None

Return usage statistics from the last API call.