vibe.assistant.services.assistant_service¶
Core assistant service providing business logic without Flask dependencies.
This service handles all assistant functionality including message processing, streaming responses, and security protection while maintaining clean separation from HTTP concerns.
AssistantService ¶
Core assistant service handling business logic using Flask session.
This service provides clean, testable business logic for assistant functionality using standard Flask session management.
__init__ ¶
__init__(assistant_name: str, config: AssistantConfig, template_data: TemplateData, all_question_definitions: dict[str, Any], auto_reply: bool = True) -> None
Initialize assistant service with explicit dependencies.
Parameters:
get_current_draft ¶
get_current_draft() -> str
Get the current draft content from the draft_blocks state.
Returns:
get_assistant_provider ¶
get_assistant_provider(assistant_config: dict[str, Any]) -> LLMProvider
Get the appropriate assistant provider based on configuration.
Parameters:
Returns:
Raises:
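The configuration-based dispatch implied by the signature and the Raises entry above can be sketched as follows. Everything here except the `get_assistant_provider` name and its `assistant_config` parameter is a hypothetical stand-in: the provider classes and the registry dict are assumptions, not the real vibe implementation.

```python
from typing import Any

# Hypothetical stand-ins for concrete LLMProvider implementations;
# the real classes live elsewhere in the vibe codebase.
class OpenAIProvider:
    pass

class AnthropicProvider:
    pass

# Assumed registry mapping a config key to a provider class.
_PROVIDERS = {
    "openai": OpenAIProvider,
    "anthropic": AnthropicProvider,
}

def get_assistant_provider(assistant_config: dict[str, Any]):
    """Return a provider instance for the configured backend (sketch)."""
    name = assistant_config.get("provider", "")
    try:
        return _PROVIDERS[name]()
    except KeyError:
        # An unknown provider name is treated as a configuration error,
        # mirroring the documented Raises entry.
        raise ValueError(f"unknown assistant provider: {name!r}") from None
```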
process_user_input ¶
process_user_input(form_data: dict[str, Any]) -> AssistantTurn
Process user input and prepare for assistant response.
Parameters:
Returns:
prepare_provider ¶
prepare_provider(turn: AssistantTurn) -> ProviderWithConfig
Step 1: Get provider and endpoint config for streaming.
This method can be called separately to allow inspection/modification of provider configuration before streaming begins.
Parameters:
Returns:
Raises:
get_unanswered_predefined_questions ¶
get_unanswered_predefined_questions(provider: LLMProvider) -> list[dict[str, Any]]
Get the list of unanswered predefined questions for the current session.
Parameters:
Returns:
build_messages ¶
build_messages(turn: AssistantTurn, provider: LLMProvider) -> list[Message]
Step 2: Build prompt messages from current session state.
This method can be called separately to allow inspection/modification of messages before streaming begins.
Parameters:
Returns:
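Because message building is a separate step, the returned list can be inspected or modified before streaming begins. A minimal sketch, assuming messages are plain role/content mappings (an assumption; the real `Message` type is defined in the vibe codebase):

```python
# Sketch: messages as plain role/content dicts (an assumption; the real
# Message type is defined elsewhere in vibe).
messages = [
    {"role": "system", "content": "You are a drafting assistant."},
    {"role": "user", "content": "Summarise the draft."},
]

# Inspect: confirm a system prompt is present before streaming.
has_system = any(m["role"] == "system" for m in messages)

# Modify: inject extra context as an additional system message.
if has_system:
    messages.append({"role": "system", "content": "Answer in English."})
```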
start_streaming ¶
start_streaming(turn: AssistantTurn, provider: LLMProvider, messages: list[Message], unanswered_questions: list[dict[str, Any]], emit_initial_bubbles: bool = True) -> StreamingResponse
Step 3: Start the streaming response with prepared provider and messages.
Parameters:
Returns:
prepare_streaming_response ¶
prepare_streaming_response(turn: AssistantTurn) -> StreamingResponse
Prepare all components needed for streaming assistant response.
High-level convenience method that calls prepare_provider, build_messages, and start_streaming in sequence. Use this for the standard workflow.
For custom workflows (inspection or modification of intermediate state), use the step-by-step methods instead.
This method supports dual-mode operation:
- If the template cannot render: stream system questions.
- If the template can render: stream the LLM response.
Parameters:
Returns:
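The relationship between the convenience method and the step-by-step methods can be sketched with stub types. Only the method names and their call order come from this page; `AssistantTurn`, the stub service, and the return values are hypothetical stand-ins, and the unanswered-questions lookup is assumed to sit between steps 1 and 2 because `start_streaming` requires it.

```python
from dataclasses import dataclass, field

# Minimal stand-ins so the call sequence is runnable here; the real
# AssistantTurn, LLMProvider, and StreamingResponse types live in vibe.
@dataclass
class AssistantTurn:
    user_input: str

@dataclass
class StubService:
    calls: list = field(default_factory=list)

    def prepare_provider(self, turn):          # Step 1
        self.calls.append("prepare_provider")
        return object()

    def get_unanswered_predefined_questions(self, provider):
        self.calls.append("get_unanswered_predefined_questions")
        return []

    def build_messages(self, turn, provider):  # Step 2
        self.calls.append("build_messages")
        return [{"role": "user", "content": turn.user_input}]

    def start_streaming(self, turn, provider, messages,
                        unanswered_questions, emit_initial_bubbles=True):  # Step 3
        self.calls.append("start_streaming")
        return iter(["chunk"])  # stand-in for a StreamingResponse

    def prepare_streaming_response(self, turn):
        # Convenience wrapper: the documented standard workflow.
        provider = self.prepare_provider(turn)
        questions = self.get_unanswered_predefined_questions(provider)
        messages = self.build_messages(turn, provider)
        return self.start_streaming(turn, provider, messages, questions)

service = StubService()
response = service.prepare_streaming_response(AssistantTurn("hello"))
```

Calling the step methods individually gives the same result while leaving room to inspect or adjust the provider and messages between steps.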
finalize_draft ¶
finalize_draft() -> str
Finalize the current draft and return it.
Returns:
AssistantServiceFactory ¶
Factory for creating AssistantService instances from various contexts.
from_flask_context ¶
from_flask_context(assistant_name: str, template_data: TemplateData, all_question_definitions: dict[str, Any], auto_reply: bool = True) -> AssistantService
Create AssistantService from current Flask context.
Parameters:
Returns:
for_testing ¶
for_testing(assistant_name: str, template_data: TemplateData, all_question_definitions: dict[str, Any], session_id: str = 'test-session', template_id: str = 'test-template') -> AssistantService
Create AssistantService for testing.
Parameters:
Returns:
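A test might construct the service roughly as below. Only the parameter names and their defaults come from the signature above; the stand-in factory function, the assistant name, and the template/question values are hypothetical, used here so the call shape is runnable.

```python
# Stand-in for AssistantServiceFactory.for_testing so the call shape is
# runnable in isolation; the real factory wires in session and template state.
def for_testing(assistant_name, template_data, all_question_definitions,
                session_id="test-session", template_id="test-template"):
    return {
        "assistant_name": assistant_name,
        "template_data": template_data,
        "questions": all_question_definitions,
        "session_id": session_id,
        "template_id": template_id,
    }

service = for_testing(
    assistant_name="draft-helper",                        # hypothetical name
    template_data={"title": "Demo"},                      # stand-in for TemplateData
    all_question_definitions={"q1": {"prompt": "Audience?"}},
)
```

The session and template identifiers default to `'test-session'` and `'test-template'`, so most tests need only the first three arguments.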