vibe.llm_providers.gemini¶
Google Gemini LLM provider integration.
GeminiConverter ¶
Converter for Google Gemini API format.
Uses google.genai.types for SDK-specific Content objects. Defined here (not in message_converter.py) because it depends on google.genai.types.
convert_system ¶
convert_system(msg: SystemMessage) -> Content | None
Convert system message to Gemini user content (no native system role).
convert_user ¶
convert_user(msg: UserMessage) -> Content | None
Convert user message to Gemini user content with text part.
convert_assistant ¶
convert_assistant(msg: AssistantMessage) -> Content | None
Convert assistant message to Gemini model content with optional FunctionCall parts.
convert_tool_result ¶
convert_tool_result(msg: ToolResult) -> Part
Convert tool result to Gemini FunctionResponse Part.
Note: Returns a Part, not Content. These are batched in convert_all.
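The role mapping these converters perform can be sketched as follows. This is a minimal illustration using plain dicts in place of google.genai.types objects; the function names mirror the converter methods above, but the dict shapes are assumptions, not the SDK's real types.

```python
# Sketch of GeminiConverter's role mapping, using plain dicts as stand-ins
# for google.genai.types.Content / Part.

def convert_system(text):
    # Gemini has no native system role, so system text becomes user content.
    return {"role": "user", "parts": [{"text": text}]}

def convert_assistant(text, function_calls=None):
    # Assistant messages map to "model" content, with optional
    # function-call parts appended after the text.
    parts = [{"text": text}] if text else []
    for call in function_calls or []:
        parts.append({"function_call": call})
    return {"role": "model", "parts": parts}

def batch_tool_results(results):
    # convert_tool_result yields individual Parts; consecutive results are
    # batched into a single user Content, as convert_all does.
    parts = [{"function_response": {"name": r["name"], "response": r["output"]}}
             for r in results]
    return {"role": "user", "parts": parts}
```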
GeminiProvider ¶
LLMProvider implementation for Google's Gemini API using the modern google.genai SDK.
Configuration options (common, via ProviderConfig):

- api_key: Google API key (required)
- model: Model name (default: "gemini-2.0-flash-exp")
- temperature: Controls randomness
- max_tokens: Maximum tokens to generate
- tools: Enable tool calling (default: True)
Configuration options (Gemini-specific):

- thinking_budget: Token budget for extended thinking (Gemini 2.0+)
- thinking_level: Thinking intensity ("low", "medium", "high") for Gemini 3.x
- include_thoughts: Include thinking content in response (default: True)
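A hypothetical ProviderConfig-style dict combining the common and Gemini-specific options above (the dict layout is an assumption for illustration; only the option names come from this page):

```python
# Illustrative provider configuration; values here are example choices.
config = {
    "api_key": "YOUR_GOOGLE_API_KEY",   # required
    "model": "gemini-2.0-flash-exp",    # default model
    "temperature": 0.7,
    "max_tokens": 2048,
    "tools": True,                      # enable tool calling
    # Gemini-specific thinking options
    "thinking_budget": 1024,            # extended thinking (Gemini 2.0+)
    "include_thoughts": True,           # surface thinking content
}
```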
convert_messages_to_provider_format ¶
convert_messages_to_provider_format(messages: list[Message], tools: list[Tool] | None = None) -> list[Any]
Convert internal messages and tools to Google Gemini types.Content format.
Uses GeminiConverter for clean separation of conversion logic.
stream_generate ¶
stream_generate(messages: list[Message], sequence_number: int, *, session_id: str, assistant_name: str, endpoint_name: str, turn_id: str, previous_response_id: str | None = None, tool_outputs: list[ToolOutput] | None = None, unanswered_predefined_questions: list[dict[str, Any]] | None = None) -> Generator[StreamChunk, None, None]
Stream LLM responses with tool call support.
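Consuming the generator might look like the sketch below. The chunk shape (a dict with "type" and "content" keys) is an assumption standing in for the real StreamChunk type; the point is the control flow: text chunks accumulate, while a tool call interrupts the stream so the caller can execute the tool and resume with tool_outputs.

```python
# Hypothetical consumer of a StreamChunk generator. Dict chunks are a
# stand-in for the real StreamChunk objects.

def drain_stream(chunks):
    """Split a chunk stream into accumulated text and pending tool calls."""
    text, tool_calls = [], []
    for chunk in chunks:
        if chunk["type"] == "text":
            text.append(chunk["content"])
        elif chunk["type"] == "tool_call":
            # Caller executes the tool, then calls stream_generate again
            # with the results passed via tool_outputs.
            tool_calls.append(chunk["content"])
    return "".join(text), tool_calls
```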
get_capabilities ¶
get_capabilities() -> ProviderCapabilities
Return Google provider capability flags for feature gating.
get_ui_config_schema ¶
get_ui_config_schema() -> dict[str, ConfigOption]
Return config schema filtered by model family.
Gemini models have different thinking configurations:

- Gemini 3.x: Use thinking_level (low/high); thinking is mandatory
- Gemini 2.5 Pro: Use thinking_budget (budget-based); thinking is mandatory
- Gemini Flash: Use thinking_enabled toggle + thinking_budget (optional)
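The model-family branching can be sketched as below. The function name and schema dict shapes are hypothetical; only the three-way split between Gemini 3.x, 2.5 Pro, and Flash comes from this page.

```python
# Sketch of filtering the thinking config schema by model family.

def thinking_schema_for(model: str) -> dict:
    if model.startswith("gemini-3"):
        # Gemini 3.x: thinking is mandatory, controlled by a level
        return {"thinking_level": {"choices": ["low", "high"]}}
    if "2.5-pro" in model:
        # Gemini 2.5 Pro: mandatory thinking with a token budget
        return {"thinking_budget": {"type": "int"}}
    if "flash" in model:
        # Flash models: optional thinking toggle plus a budget
        return {"thinking_enabled": {"type": "bool"},
                "thinking_budget": {"type": "int"}}
    return {}
```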
get_usage_stats ¶
get_usage_stats() -> UsageStats | None
Return usage statistics from the last API call.
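Since the return type is `UsageStats | None`, callers should handle the pre-first-call case. A small sketch, assuming a stats record with prompt/completion/total token counts (field names are an assumption, not the real UsageStats attributes):

```python
# Hypothetical consumer of get_usage_stats(); None means no API call yet.

def summarize_usage(stats):
    if stats is None:
        return "no calls yet"
    return (f"{stats['prompt_tokens']}+{stats['completion_tokens']}"
            f"={stats['total_tokens']} tokens")
```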