vibe.llm_providers.ollama¶
Ollama LLM provider integration.
OllamaProvider ¶
LLM provider for locally hosted Ollama models.
Configuration options (common, via ProviderConfig):

- model: Model name (e.g., "llama3.2", "mistral") - required
- base_url: Ollama server URL (default: "http://localhost:11434")
- temperature: Controls randomness (0.0 to 1.0, default: 0.7)
- timeout: Request timeout in seconds (default: 30)
- tools: Enable native tool calling (default: False for JSON mode)
- api_key: Not used by Ollama (local models don't need authentication)

Configuration options (Ollama-specific):

- system: System message override
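A minimal configuration sketch follows. The option names mirror the list above; exactly how they are wrapped into a ProviderConfig instance and handed to OllamaProvider is an assumption and may differ from the actual constructor.

```python
# Minimal configuration sketch. Option names follow the documented list above;
# how this dict is passed into ProviderConfig/OllamaProvider is an assumption.
ollama_options = {
    "model": "llama3.2",                   # required
    "base_url": "http://localhost:11434",  # default Ollama server URL
    "temperature": 0.7,                    # randomness, 0.0 to 1.0
    "timeout": 30,                         # request timeout in seconds
    "tools": False,                        # False keeps JSON mode; True enables native tool calling
    "system": "You are a concise assistant.",  # Ollama-specific system message override
}
```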
convert_messages_to_provider_format ¶
convert_messages_to_provider_format(messages: list[Message], tools: list[Tool] | None = None) -> list[dict[str, Any]]
Convert internal messages and tools to Ollama-compatible format.
Uses InternalFormatConverter (message_to_dict) for conversion.
Parameters:

| Name | Type | Description |
|---|---|---|
| messages | list[Message] | Internal messages to convert. |
| tools | list[Tool] \| None | Tools to include in the request, if any. |

Returns:

| Type | Description |
|---|---|
| list[dict[str, Any]] | Messages in Ollama-compatible format. |
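Ollama's chat endpoint consumes messages as plain role/content dictionaries, so the converted output has the shape sketched below. The literal is an illustration of the target format, not captured converter output.

```python
# Illustration of the Ollama-compatible message shape produced by the
# converter; the content values are made up for the example.
converted = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarise the last meeting."},
]
```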
stream_generate ¶
stream_generate(messages: list[Message], sequence_number: int, *, session_id: str, assistant_name: str, endpoint_name: str, turn_id: str, previous_response_id: str | None = None, tool_outputs: list[ToolOutput] | None = None, unanswered_predefined_questions: list[dict[str, Any]] | None = None) -> Generator[StreamChunk, None, None]
Stream LLM responses with tool call support.
Uses native tool calling if enabled; otherwise falls back to basic string parsing.
Parameters:

| Name | Type | Description |
|---|---|---|
| messages | list[Message] | Conversation messages to send to the model. |
| sequence_number | int | Sequence number of this generation request. |
| session_id | str | Identifier of the current session. |
| assistant_name | str | Name of the assistant. |
| endpoint_name | str | Name of the endpoint handling the request. |
| turn_id | str | Identifier of the current turn. |
| previous_response_id | str \| None | Identifier of the previous response, if any. |
| tool_outputs | list[ToolOutput] \| None | Outputs from prior tool calls, if any. |
| unanswered_predefined_questions | list[dict[str, Any]] \| None | Predefined questions that have not yet been answered, if any. |

Yields:

| Type | Description |
|---|---|
| StreamChunk | Chunks of the streamed LLM response. |
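The sketch below drains the generator into a list. The session and turn identifiers are placeholder values, and StreamChunk's fields are not documented on this page.

```python
def collect_stream(provider, messages):
    """Drain stream_generate into a list of StreamChunk objects.

    Sketch only: the identifiers below are placeholders, and the optional
    keyword arguments (previous_response_id, tool_outputs, ...) are left
    at their defaults.
    """
    return list(
        provider.stream_generate(
            messages,
            sequence_number=0,
            session_id="session-123",
            assistant_name="assistant",
            endpoint_name="chat",
            turn_id="turn-1",
        )
    )
```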
get_capabilities ¶
get_capabilities() -> ProviderCapabilities
Return Ollama (local) provider capabilities.
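Since ProviderCapabilities' fields are not listed on this page, the sketch below only reports whatever the provider advertises, for example before deciding whether to enable native tool calling.

```python
def print_capabilities(provider):
    # Sketch: ProviderCapabilities' exact fields are not documented here,
    # so simply report what the local provider advertises.
    caps = provider.get_capabilities()
    print(caps)
```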
get_usage_stats ¶
get_usage_stats() -> UsageStats | None
Return usage statistics from the last API call.
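Because the return type is UsageStats | None, callers should handle the case where no API call has been made yet. A short sketch:

```python
def report_usage(provider):
    # Sketch: the method returns None when there is no previous API call,
    # so guard before reading the statistics.
    stats = provider.get_usage_stats()
    if stats is None:
        print("No usage recorded yet.")
    else:
        print(stats)
```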