vibe.assistant.services.assistant_service

Core assistant service providing business logic without Flask dependencies.

This service handles all assistant functionality, including message processing, streaming responses, and security protection, while maintaining a clean separation from HTTP concerns.

AssistantService

Core assistant service handling business logic using Flask session.

This service provides clean, testable business logic for assistant functionality using standard Flask session management.

__init__

__init__(assistant_name: str, config: AssistantConfig, template_data: TemplateData, all_question_definitions: dict[str, Any], auto_reply: bool = True) -> None

Initialize assistant service with explicit dependencies.

Parameters:
  • assistant_name (str) –

Name of the assistant

  • config (AssistantConfig) –

    Assistant configuration

  • template_data (TemplateData) –

    Template data for rendering

  • all_question_definitions (dict[str, Any]) –

    Complete question definitions

  • auto_reply (bool, default: True ) –

    If True, automatically handle NoFollowUpException and PrematureFinalizeException by adding corrective messages and retrying. If False, reraise these exceptions to let the caller handle them.
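
With auto_reply=False, the two exceptions propagate to the caller. A minimal sketch of handling them, assuming the exceptions live in a vibe.assistant exceptions module (the import path, assistant name, and surrounding variables are hypothetical):

    from vibe.assistant.services.assistant_service import AssistantServiceFactory
    # Assumed import path; adjust to wherever the exceptions are defined.
    from vibe.assistant.exceptions import (
        NoFollowUpException,
        PrematureFinalizeException,
    )

    service = AssistantServiceFactory.from_flask_context(
        assistant_name="writer",          # hypothetical name
        template_data=template_data,      # TemplateData prepared by the caller
        all_question_definitions=question_defs,
        auto_reply=False,                 # reraise instead of auto-correcting
    )

    turn = service.process_user_input(form_data)
    try:
        response = service.prepare_streaming_response(turn)
    except NoFollowUpException:
        # Caller-supplied recovery: add a corrective message and retry,
        # or surface the problem to the user.
        ...
    except PrematureFinalizeException:
        ...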

get_current_draft

get_current_draft() -> str

Get the current draft content from the draft_blocks state.

Returns:
  • str

Current draft content as a string, with semantic block IDs for the model

get_assistant_provider

get_assistant_provider(assistant_config: dict[str, Any]) -> LLMProvider

Get the appropriate assistant provider based on configuration.

Parameters:
  • assistant_config (dict[str, Any]) –

Assistant configuration

Returns:
  • LLMProvider

    Configured assistant provider instance

Raises:
  • ValueError

    If configuration is invalid
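
A usage sketch; the configuration keys shown are hypothetical, since the expected dict layout is not documented here:

    import logging

    logger = logging.getLogger(__name__)

    # Hypothetical keys; the docs only state that an invalid
    # configuration raises ValueError.
    assistant_config = {"provider": "openai", "model": "gpt-4o"}

    try:
        provider = service.get_assistant_provider(assistant_config)
    except ValueError:
        logger.exception("Invalid assistant configuration")
        raise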

process_user_input

process_user_input(form_data: dict[str, Any]) -> AssistantTurn

Process user input and prepare for assistant response.

Parameters:
  • form_data (dict[str, Any]) –

    Form data from user submission

Returns:
  • AssistantTurn

    Assistant turn with processed user input
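
A brief sketch, given a constructed service; the form field names are hypothetical:

    # Field names are assumed; pass whatever the submission form contains.
    form_data = {"message": "Please tighten the introduction."}

    turn = service.process_user_input(form_data)
    # turn is an AssistantTurn that feeds the streaming steps below.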

prepare_provider

prepare_provider(turn: AssistantTurn) -> ProviderWithConfig

Step 1: Get provider and endpoint config for streaming.

This method can be called separately to allow inspection/modification of provider configuration before streaming begins.

Parameters:
  • turn (AssistantTurn) –

    Assistant turn with processed user input

Returns:
  • ProviderWithConfig

ProviderWithConfig bundling the provider instance and endpoint configuration

Raises:
  • Exception

    If provider initialization fails

get_unanswered_predefined_questions

get_unanswered_predefined_questions(provider: LLMProvider) -> list[dict[str, Any]]

Get list of unanswered predefined questions for current session.

Parameters:
  • provider (LLMProvider) –

    Provider instance for context manager

Returns:
  • list[dict[str, Any]]

    List of predefined question dicts with 'id', 'label', and 'type' keys
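
A short iteration sketch over the documented dict keys:

    unanswered = service.get_unanswered_predefined_questions(provider)

    for question in unanswered:
        # Each entry carries 'id', 'label', and 'type' keys.
        print(f"{question['id']}: {question['label']} ({question['type']})")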

build_messages

build_messages(turn: AssistantTurn, provider: LLMProvider) -> list[Message]

Step 2: Build prompt messages from current session state.

This method can be called separately to allow inspection/modification of messages before streaming begins.

Parameters:
  • turn (AssistantTurn) –

    Assistant turn with processed user input

  • provider (LLMProvider) –

    Provider instance for getting capabilities and config

Returns:
  • list[Message]

    List of typed Message objects (not converted to provider format)
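
Because this step is exposed separately, the message list can be inspected before streaming. A minimal sketch, assuming Message exposes role and content attributes (not specified here):

    messages = service.build_messages(turn, provider)

    # Assumption: Message has `role` and `content`; adjust to the actual type.
    for message in messages:
        print(message.role, len(message.content))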

start_streaming

start_streaming(turn: AssistantTurn, provider: LLMProvider, messages: list[Message], unanswered_questions: list[dict[str, Any]], emit_initial_bubbles: bool = True) -> StreamingResponse

Step 3: Start the streaming response with prepared provider and messages.

Parameters:
  • turn (AssistantTurn) –

    Assistant turn with processed user input

  • provider (LLMProvider) –

    Configured provider instance

  • messages (list[Message]) –

    Typed Message objects for LLM

  • unanswered_questions (list[dict[str, Any]]) –

    Pending questions for this turn

  • emit_initial_bubbles (bool, default: True ) –

    Whether to emit initial UI bubbles before streaming

Returns:
  • StreamingResponse

    Streaming response built from the prepared provider and messages
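
The three steps compose as follows. A sketch; the .provider attribute on ProviderWithConfig is an assumption:

    turn = service.process_user_input(form_data)

    provider_with_config = service.prepare_provider(turn)      # step 1
    # Assumption: the provider instance is exposed as `.provider`.
    provider = provider_with_config.provider

    messages = service.build_messages(turn, provider)          # step 2
    unanswered = service.get_unanswered_predefined_questions(provider)

    response = service.start_streaming(                        # step 3
        turn,
        provider,
        messages,
        unanswered,
        emit_initial_bubbles=True,
    )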

prepare_streaming_response

prepare_streaming_response(turn: AssistantTurn) -> StreamingResponse

Prepare all components needed for streaming assistant response.

High-level convenience method that calls prepare_provider, build_messages, and start_streaming in sequence. Use this for the standard workflow.

For custom workflows (inspection, modification), use the step-by-step methods instead.

This method supports dual-mode operation:

  • If the template cannot render: stream system questions
  • If the template can render: stream the LLM response

Parameters:
  • turn (AssistantTurn) –

    Assistant turn with processed user input

Returns:
  • StreamingResponse

    Prepared streaming response for the assistant turn
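
For the standard workflow, the single convenience call replaces the three steps above:

    turn = service.process_user_input(form_data)
    # Wraps prepare_provider, build_messages, and start_streaming.
    response = service.prepare_streaming_response(turn)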

reset_conversation

reset_conversation() -> None

Reset the conversation by deleting all state.

finalize_draft

finalize_draft() -> str

Finalize the current draft and return it.

Returns:
  • str

    Final draft content

AssistantServiceFactory

Factory for creating AssistantService instances from various contexts.

from_flask_context

from_flask_context(assistant_name: str, template_data: TemplateData, all_question_definitions: dict[str, Any], auto_reply: bool = True) -> AssistantService

Create AssistantService from current Flask context.

Parameters:
  • assistant_name (str) –

Name of the assistant

  • template_data (TemplateData) –

    Template data for rendering

  • all_question_definitions (dict[str, Any]) –

    Complete question definitions

  • auto_reply (bool, default: True ) –

    If True, automatically handle NoFollowUpException and PrematureFinalizeException

Returns:
  • AssistantService

    AssistantService bound to the current Flask context
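
A sketch of typical use inside a Flask view; the route, variable names, and returning the StreamingResponse directly are assumptions:

    from flask import Flask, request

    from vibe.assistant.services.assistant_service import AssistantServiceFactory

    app = Flask(__name__)

    @app.post("/assistant/respond")  # hypothetical route
    def respond():
        service = AssistantServiceFactory.from_flask_context(
            assistant_name="writer",         # hypothetical name
            template_data=template_data,     # prepared at app setup
            all_question_definitions=question_defs,
        )
        turn = service.process_user_input(request.form.to_dict())
        return service.prepare_streaming_response(turn)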

for_testing

for_testing(assistant_name: str, template_data: TemplateData, all_question_definitions: dict[str, Any], session_id: str = 'test-session', template_id: str = 'test-template') -> AssistantService

Create AssistantService for testing.

Parameters:
  • assistant_name (str) –

Name of the assistant

  • template_data (TemplateData) –

    Template data for rendering

  • all_question_definitions (dict[str, Any]) –

    Complete question definitions

  • session_id (str, default: 'test-session' ) –

    Test session ID

  • template_id (str, default: 'test-template' ) –

    Test template ID

Returns:
  • AssistantService

    AssistantService configured for testing
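
A pytest-style sketch; the fixture helper and the post-reset assertion are assumptions:

    def test_reset_conversation_clears_draft():
        service = AssistantServiceFactory.for_testing(
            assistant_name="writer",             # hypothetical name
            template_data=make_template_data(),  # hypothetical test fixture
            all_question_definitions={},
        )
        service.reset_conversation()
        # Assumption: a freshly reset conversation yields an empty draft.
        assert service.get_current_draft() == ""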