vibe.review.question_answerer

AI-suggested answers for review template questions.

This module provides LLM-based answer suggestions for questions defined in review templates. It reuses core VIBE handlers for option extraction and validation, following the SystemProxyProvider pattern of exposing core functionality to LLM interactions.

Supported question types (mapped to core handlers):

- bool: Yes/No questions (BoolHandler)
- tristate: Yes/No/Unknown questions (TristateHandler)
- select/radio: Single-choice from predefined options (EnumHandler)
- multichoice: Multiple-choice from predefined options (MultiChoiceHandler)
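
For concreteness, template definitions for each type might look like the following sketch (the key names "type", "label", and "options" are illustrative assumptions, not confirmed by this module):

    # Hypothetical template question definitions; key names are illustrative.
    definitions = {
        "q_approved": {"type": "bool", "label": "Is the document approved?"},
        "q_signed": {"type": "tristate", "label": "Is the document signed?"},
        "q_status": {
            "type": "select",
            "label": "Document status",
            "options": ["draft", "review", "final"],
        },
        "q_flags": {
            "type": "multichoice",
            "label": "Applicable flags",
            "options": ["confidential", "external", "archived"],
        },
    }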

Each answer includes:

- The suggested value (validated against handler's allowed_values)
- Confidence level (H/M/L)
- Reasoning explaining the answer
- Supporting document part IDs as evidence

QuestionSpec

Specification of a template question for AI answering.

Uses core VIBE handlers to extract valid options and labels, ensuring consistency between AI suggestions and human input validation.

from_definition

from_definition(question_id: str, definition: dict[str, Any]) -> QuestionSpec

Create QuestionSpec from a template question definition.

Uses core VIBE handlers to normalize options and extract allowed values, ensuring AI suggestions use the same validation as human input.
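
A minimal sketch, reusing the hypothetical definition keys from the module overview above:

    from vibe.review.question_answerer import QuestionSpec

    # Hypothetical definition; key names ("type", "label", "options") are
    # illustrative, not confirmed by this documentation.
    spec = QuestionSpec.from_definition(
        question_id="q_status",
        definition={
            "type": "select",
            "label": "Document status",
            "options": ["draft", "review", "final"],
        },
    )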

QuestionAnswerResult

Result of AI-suggested answer for a question.

confidence_float

confidence_float: float

Convert confidence letter to float (0.0-1.0).
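
The exact letter-to-float mapping is not documented here; one plausible sketch:

    # Illustrative only: one plausible letter-to-float mapping. The
    # module's actual constants are not specified in this documentation.
    CONFIDENCE_MAP = {"H": 0.9, "M": 0.6, "L": 0.3}

    def confidence_float(confidence: str) -> float:
        # Unknown letters fall back to 0.0.
        return CONFIDENCE_MAP.get(confidence.upper(), 0.0)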

to_dict

to_dict() -> dict[str, Any]

Convert to dictionary for JSON serialization.
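
The serialized shape plausibly mirrors the schema fields described under generate_question_schema; a hypothetical example:

    # Hypothetical to_dict() output; field names assumed from the schema
    # fields documented for generate_question_schema.
    serialized = {
        "answer": "review",
        "confidence": "M",
        "reasoning": "Section 2 states the document is under review.",
        "supporting_part_ids": ["part-12"],
        "needs_user_input": False,
    }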

QuestionAnswerer

Answers template questions using LLM analysis of document content.

Uses the same patterns as RequirementClassifier:

1. Build a prompt with question context and document parts
2. Generate a question-type-specific JSON schema
3. Call LLM with structured output
4. Parse and return the result

Usage

    answerer = QuestionAnswerer(llm_client)
    result = answerer.answer(
        question=QuestionSpec(...),
        document_parts=[...],
    )

__init__

__init__(llm_client: BaseLLMClient | None = None, language: str = 'sv') -> None

Initialize the question answerer.

Parameters:
  • llm_client (BaseLLMClient | None, default: None ) –

    LLM client for answering. If None, a default client is created.

  • language (str, default: 'sv' ) –

    Language code for prompts ("sv" or "en").

answer

answer(question: QuestionSpec, document_parts: list[dict[str, Any]], max_parts: int = 10) -> QuestionAnswerResult

Generate an AI-suggested answer for a question.

Parameters:
  • question (QuestionSpec) –

    The question specification.

  • document_parts (list[dict[str, Any]]) –

    Relevant document parts with id, text, section_heading.

  • max_parts (int, default: 10 ) –

    Maximum number of parts to include in prompt.

Returns:
  • QuestionAnswerResult

    The AI-suggested answer with value, confidence, reasoning, and supporting part IDs.

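A sketch of a complete call, assuming spec was built as in the from_definition example above (part contents are invented for illustration):

    from vibe.review.question_answerer import QuestionAnswerer

    # Parts carry the documented keys: id, text, section_heading.
    parts = [
        {
            "id": "part-12",
            "text": "The board approved the document on 2024-03-01.",
            "section_heading": "Approval",
        },
    ]
    answerer = QuestionAnswerer(language="en")
    result = answerer.answer(question=spec, document_parts=parts, max_parts=5)
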
close

close() -> None

Close owned resources.
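
If the answerer created its own default client, close releases it; a try/finally keeps that explicit (sketch, with spec and parts as above):

    answerer = QuestionAnswerer()  # creates a default client it owns
    try:
        result = answerer.answer(question=spec, document_parts=parts)
    finally:
        answerer.close()  # release owned resources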

generate_question_schema

generate_question_schema(question: QuestionSpec) -> dict[str, Any]

Generate a JSON schema for the question answer based on question type.

Uses the handler's allowed_values and normalized_options to ensure the schema matches what the core handlers accept for validation.

The schema always includes:

- answer: The suggested value (type varies by question type)
- confidence: H/M/L confidence level
- reasoning: Explanation citing document evidence
- supporting_part_ids: List of document part IDs that support the answer
- needs_user_input: Boolean flag set when the model cannot determine an answer

Parameters:
  • question (QuestionSpec) –

    The question specification (with handler info).

Returns:
  • dict[str, Any]

    JSON schema dict suitable for LLM structured output.
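
For a bool question, the result might look roughly like this sketch, assembled from the field list above (the exact structure is an assumption):

    # Hypothetical schema for a bool question; shape assumed from the
    # documented fields, not copied from the module.
    {
        "type": "object",
        "properties": {
            "answer": {"type": "boolean"},
            "confidence": {"type": "string", "enum": ["H", "M", "L"]},
            "reasoning": {"type": "string"},
            "supporting_part_ids": {
                "type": "array",
                "items": {"type": "string"},
            },
            "needs_user_input": {"type": "boolean"},
        },
    }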

answer_question

answer_question(question: QuestionSpec, document_parts: list[dict[str, Any]], llm_client: BaseLLMClient | None = None, language: str = 'sv') -> QuestionAnswerResult

Perform one-off question answering.

Creates an answerer, answers the question, and cleans up. For multiple questions, use QuestionAnswerer directly.
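
A one-off call might look like this (spec and parts as in the earlier sketches):

    from vibe.review.question_answerer import answer_question

    # Client creation and cleanup happen internally.
    result = answer_question(
        question=spec,
        document_parts=parts,
        language="en",
    )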