vibe.review.services.review_service¶
ReviewService - business logic for the Review workbench.
This service provides a stable boundary for web routes:

- Loads templates and requirements
- Applies "template is truth" requirement applicability
- Manages review sessions and classifications
- Streams batch classification progress via SSE
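A minimal usage sketch from a web route (the Flask app, the `review_service` instance, the `make_review_service` factory, and the model fields accessed are assumptions for illustration, not part of this API):

```python
# Hypothetical route: the web layer stays thin and delegates to the service.
from flask import Flask, jsonify

app = Flask(__name__)
review_service = make_review_service()  # assumed factory; wiring not shown

@app.get("/review/sessions")
def recent_sessions():
    sessions = review_service.list_sessions(limit=20)
    # s.id and s.template_id are assumed ReviewSessionModel fields.
    return jsonify([{"id": s.id, "template_id": s.template_id} for s in sessions])
```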
TemplateProviderProtocol ¶
Protocol for template providers used by ReviewService.
get_template_data ¶
get_template_data(template_id: str, version_spec: str | None = None, user_context: dict[str, Any] | None = None, session_context_for_check: dict[str, Any] | None = None, refresh: bool = False, skip_static_validation: bool = False) -> TemplateData
Return template data for a given template ID.
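A provider only needs to satisfy this one method. A minimal in-memory sketch, following the documented signature (the return annotation is loosened to `Any` to keep the sketch self-contained):

```python
from typing import Any

class InMemoryTemplateProvider:
    """Serves pre-loaded TemplateData objects; version/refresh options are ignored."""

    def __init__(self, templates: dict[str, Any]) -> None:
        self._templates = templates

    def get_template_data(
        self,
        template_id: str,
        version_spec: str | None = None,
        user_context: dict[str, Any] | None = None,
        session_context_for_check: dict[str, Any] | None = None,
        refresh: bool = False,
        skip_static_validation: bool = False,
    ) -> Any:  # TemplateData in the real protocol
        # Ignore the optional arguments and serve from the in-memory dict.
        return self._templates[template_id]
```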
UploadedDocument ¶
Metadata and content for an uploaded document.
AssessmentItem ¶
Unified representation of an assessment item (question or requirement).
In review mode, questions and requirements are conceptually similar: both are items that need evaluation, can have AI suggestions, and affect relevance of other items via probing.
ReviewService ¶
Facade for the Review subsystem.
list_sessions ¶
list_sessions(limit: int = 50) -> list[ReviewSessionModel]
Return most recent review sessions ordered by last update.
get_session ¶
get_session(session_id: int) -> ReviewSessionModel | None
Load session by ID with context restored from question reviews.
delete_session ¶
delete_session(session_id: int) -> bool
Delete a review session and all associated data.
Cascade deletion handles: documents, document parts, classifications, question reviews.
Returns True if deleted, False if not found.
create_session ¶
create_session(template_id: str, documents: list[UploadedDocument], *, context: dict[str, Any] | None = None) -> ReviewSessionModel
Create new review session with uploaded documents for a template.
build_template_context ¶
build_template_context(session_id: int) -> NestedValue
Build a NestedValue context for rendering the final compliance report.
The context structure mirrors standard VIBE interview context:

- Top-level keys from session.context (question answers)
- review_session: session metadata
- requirements: dict of requirement_id -> classification data
- documents: list of document info
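Illustratively, the built context is shaped like this (only the four documented sections are real; the leaf keys and values are assumptions):

```python
context = {
    # Top-level keys: question answers copied from session.context
    "q_budget": "yes",
    # Session metadata
    "review_session": {"id": 42},
    # requirement_id -> classification data
    "requirements": {"REQ-001": {"classification": "partial"}},
    # Uploaded document info
    "documents": [{"filename": "offer.pdf"}],
}
```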
stream_session_ingestion ¶
stream_session_ingestion(session_id: int, *, redirect_url: str) -> Iterator[str]
Ingest all documents for a session, streaming progress via SSE.
This is request-scoped (no background job runner). For scanned PDFs, OCR is performed page-by-page so progress can be streamed for long runs.
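A sketch of exposing the stream as an SSE endpoint, reusing the `app` and `review_service` assumptions from the earlier sketch:

```python
from flask import Response

@app.get("/review/sessions/<int:session_id>/ingest")
def ingest(session_id: int):
    events = review_service.stream_session_ingestion(
        session_id,
        redirect_url=f"/review/sessions/{session_id}",
    )
    # Each yielded string is already SSE-formatted; stream it through unchanged.
    return Response(events, mimetype="text/event-stream")
```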
get_review_templates ¶
get_review_templates() -> list[dict[str, str]]
Return templates that declare interview_mode: review.
get_template_info ¶
get_template_info(template_id: str) -> TemplateInfo | None
Return metadata for a specific review template.
Returns None only if the template is not a review template. Raises exceptions for other errors (template not found, invalid config, etc.).
get_template_requirement_set ¶
get_template_requirement_set(template_id: str) -> tuple[RequirementSet, Any]
Load and cache requirements from template config, returning set and template data.
load_session_requirements ¶
load_session_requirements(review_session: ReviewSessionModel) -> list[Requirement]
Load applicable requirements for a session using template probing.
Implements "template is truth": req() calls executed during template render determine which requirement IDs are applicable for the session context.
get_requirement ¶
get_requirement(template_id: str, req_id: str) -> Requirement | None
Look up a single requirement by ID from template's requirement set.
get_requirement_groups ¶
get_requirement_groups(template_id: str) -> dict[str, RequirementGroup]
Get requirement groups for a template.
get_reference_texts ¶
get_reference_texts(reference_ids: list[str] | str | None, language: str = 'sv') -> list[dict[str, str | None]]
Look up reference texts by their IDs.
get_template_questions ¶
get_template_questions(template_id: str) -> dict[str, Any]
Get template questions for a review template.
Review uses ordinary template questions as the source of truth for context keys that drive requirement relevance via template probing.
Questions can be defined at:

- Top level: questions: { q_id: { ... } }
- Within groups: groups: { g_id: { questions: { q_id: { ... } } } }
Questions from groups include a group_id field indicating their parent group.
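Both definition sites, shown as the parsed config (question bodies are illustrative; the fields inside each question are assumptions):

```python
template_config = {
    # Top-level questions
    "questions": {
        "q_budget": {"title": "Total budget?"},
    },
    # Questions nested in a group
    "groups": {
        "g_finance": {
            "questions": {
                "q_funding": {"title": "Funding source?"},
            },
        },
    },
}
# get_template_questions would return q_funding with group_id == "g_finance".
```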
get_assessment_stream ¶
get_assessment_stream(review_session: ReviewSessionModel) -> list[AssessmentItem]
Get the unified assessment stream for a session.
Returns an ordered list of AssessmentItem objects representing both questions and requirements. Questions come first (from template config), followed by requirements (from template probing).
This is the core data structure for unified navigation in the review UI.
get_assessment_navigation ¶
get_assessment_navigation(review_session: ReviewSessionModel, current_type: Literal['question', 'requirement'], current_id: str) -> AssessmentNavigation
Get navigation info for the current assessment item.
Returns an AssessmentNavigation with:

- position: 1-based position in stream
- total: total items in stream
- prev_type, prev_id: previous item (None if at start)
- next_type, next_id: next item (None if at end)
- current_item: the current AssessmentItem
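A sketch of walking the stream using only the documented navigation fields (`review_service`, the loaded `session`, and the starting question id `"q_budget"` are assumptions):

```python
nav = review_service.get_assessment_navigation(session, "question", "q_budget")
print(f"{nav.position}/{nav.total}")
# Follow next_type/next_id until the end of the stream.
while nav.next_id is not None:
    nav = review_service.get_assessment_navigation(session, nav.next_type, nav.next_id)
    print(f"{nav.position}/{nav.total}")
```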
get_first_assessment_item ¶
get_first_assessment_item(review_session: ReviewSessionModel) -> AssessmentItem | None
Get the first item in the assessment stream.
suggest_question_answer ¶
suggest_question_answer(session_id: int, question_id: str, *, max_parts: int = 10) -> QuestionAnswerResult | None
Generate an AI-suggested answer for a template question.
Uses document content from the session to suggest an answer, following the same retrieval patterns as requirement classification.
calculate_progress ¶
calculate_progress(review_session: ReviewSessionModel) -> ReviewProgress
Compute review completion as human-verified count vs applicable requirements.
get_classification_stats ¶
get_classification_stats(session_id: int) -> ClassificationStats
Get classification statistics for a session.
Returns counts by result type (yes/no/partial/not_applicable/pending), AI vs human verification counts, and percentages.
get_accuracy_stats ¶
get_accuracy_stats(template_id: str | None = None, requirement_id: str | None = None) -> AccuracyStats
Calculate AI classification accuracy based on human overrides.
Delegates to ReviewAnalyticsService.
get_problematic_requirements ¶
get_problematic_requirements(template_id: str, min_override_rate: float = 0.2, min_samples: int = 5) -> list[RequirementAccuracyStats]
Find requirements where AI frequently gets it wrong.
Delegates to ReviewAnalyticsService.
get_reviews_by_requirement ¶
get_reviews_by_requirement(session_id: int) -> dict[str, RequirementReviewModel]
Return dict mapping requirement_id to its review record for a session.
get_or_create_review ¶
get_or_create_review(session_id: int, requirement_id: str) -> RequirementReviewModel
Fetch existing review or create new pending review for a requirement.
save_human_classification ¶
save_human_classification(*, session_id: int, requirement_id: str, classification: ClassificationResult, confidence: float, notes: str) -> RequirementReviewModel
Record human classification, setting override if AI suggestion existed.
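A sketch of recording a reviewer's verdict (the enum member spelling is assumed from the result types listed under get_classification_stats; imports not shown):

```python
review = review_service.save_human_classification(
    session_id=42,
    requirement_id="REQ-001",
    classification=ClassificationResult.PARTIAL,  # member name assumed
    confidence=0.9,
    notes="Meets the requirement except for audit logging.",
)
# If an AI suggestion existed for REQ-001, this records it as overridden.
```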
get_question_review ¶
get_question_review(session_id: int, question_id: str) -> QuestionReviewModel | None
Get a QuestionReviewModel for a question, or None if not found.
get_or_create_question_review ¶
get_or_create_question_review(session_id: int, question_id: str) -> QuestionReviewModel
Get or create a QuestionReviewModel for a question.
save_human_question_answer ¶
save_human_question_answer(*, session_id: int, question_id: str, answer: object, notes: str | None = None) -> QuestionReviewModel
Save a human answer for a template question.
Creates or updates a QuestionReviewModel and syncs to session context.
save_ai_question_suggestion ¶
save_ai_question_suggestion(session_id: int, result: QuestionAnswerResult) -> QuestionReviewModel
Save an AI-suggested answer for a template question.
Stores the suggestion in QuestionReviewModel and automatically applies it to session.context (pre-selecting the answer). The user can still change the answer via the form.
get_matched_parts ¶
get_matched_parts(review_session: ReviewSessionModel, requirement_id: str, *, per_document: int = 5, total_limit: int = 10, skip_reranking: bool = True) -> list[MatchedPart]
Get matched document parts for a requirement.
Only returns parts if there are stored supporting_part_ids from a previous AI classification. Does NOT do live retrieval: that is expensive, and the matched sections list is not useful until the AI has evaluated relevance.
get_question_matched_parts ¶
get_question_matched_parts(review_session: ReviewSessionModel, question_id: str) -> list[MatchedPart]
Get matched document parts for a question.
Only returns parts if there are stored supporting_part_ids from a previous AI assessment.
remove_matched_part ¶
remove_matched_part(session_id: int, item_type: Literal['question', 'requirement'], item_id: str, part_db_id: int) -> bool
Remove a document part from an assessment item's matched parts.
Sets is_parts_curated=True to indicate the user has manually curated the list. Returns True if the part was removed, False if not found or on error.
add_matched_part ¶
add_matched_part(session_id: int, item_type: Literal['question', 'requirement'], item_id: str, part_db_id: int) -> bool
Add a document part to an assessment item's matched parts.
Validates that the part belongs to the same session. Sets is_parts_curated=True to indicate the user has manually curated the list. Returns True if the part was added, False if invalid or on error.
stream_batch_classification ¶
stream_batch_classification(session_id: int) -> Iterator[str]
Classify all requirements yielding SSE events for progress updates.
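A sketch of draining the iterator outside a web context, e.g. in a test (the exact event payloads are not documented here):

```python
for event in review_service.stream_batch_classification(session_id=42):
    # Each item is a raw, pre-formatted SSE chunk such as "data: {...}\n\n".
    print(event, end="")
```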
classify_single_requirement ¶
classify_single_requirement(session_id: int, requirement_id: str, *, collect_timings: bool = False) -> RequirementReviewModel | None | tuple[RequirementReviewModel | None, dict[str, float]]
Run AI classification for a single requirement.
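With collect_timings=True the documented return type becomes a tuple; a sketch (the timing dict's stage keys are not documented here):

```python
review, timings = review_service.classify_single_requirement(
    42, "REQ-001", collect_timings=True
)
# timings maps stage names to durations (dict[str, float]).
if review is not None:
    print(sorted(timings.items()))
```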
stream_ai_assessment ¶
stream_ai_assessment(session_id: int, item_type: Literal['question', 'requirement'], item_id: str, render_html_callback: Callable[[Any], str]) -> Iterator[str]
Unified SSE streaming for AI assessment of questions and requirements.
This method provides a single entry point for AI-assisted document analysis, handling both requirement compliance checks and question answering with the same search → rerank → assess pipeline.
Questions now get reranking (previously they didn't), improving the quality of document context for the LLM.
Yields SSE events at each stage:

- "progress": stage updates (searching, reranking, assessing)
- "complete": final result with rendered HTML
- "error": if something goes wrong
- "close": stream end marker
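A sketch of wiring the unified stream into a route; the callback renders the final result into the HTML fragment carried by the "complete" event (the template name, `render_template` usage, and route shape are assumptions):

```python
from flask import Response, render_template

def render_result(result) -> str:
    # Invoked with the final assessment result to produce the HTML
    # fragment embedded in the "complete" event.
    return render_template("partials/assessment_result.html", result=result)

@app.get("/review/sessions/<int:session_id>/assess/<item_type>/<item_id>")
def assess(session_id: int, item_type: str, item_id: str):
    events = review_service.stream_ai_assessment(
        session_id, item_type, item_id, render_html_callback=render_result
    )
    return Response(events, mimetype="text/event-stream")
```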
export_results_xlsx ¶
export_results_xlsx(session_id: int) -> bytes | None
Export review results as an Excel file.
Returns bytes of the xlsx file, or None if export failed.
render_document_html ¶
render_document_html(document: DocumentModel, *, highlight_part_id: str | None = None) -> str
Render document content as HTML with data-part-id attributes for navigation.
Each section is wrapped in a div with data-part-id matching the DocumentPartModel.part_id for scroll/highlight functionality.
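The data-part-id wrapper is the documented contract and can be checked directly (the part id "p-7" is hypothetical):

```python
html = review_service.render_document_html(document, highlight_part_id="p-7")
# Each section div carries data-part-id for scroll/highlight targeting.
assert 'data-part-id="p-7"' in html
```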
prepare_documents_for_display ¶
prepare_documents_for_display(documents: list[DocumentModel], *, highlight_part_id: str | None = None) -> None
Prepare documents for display by generating parsed_html for each.
Modifies documents in-place by setting the parsed_html attribute.
open_document_binary_for_download ¶
open_document_binary_for_download(*, session_id: int, document_id: int) -> tuple[Any, str, str]
Open a persisted uploaded binary for download.
Enforces ownership (document must belong to session) and uses the filestore boundary (ReviewFileStore) rather than exposing paths.
render_report ¶
render_report(session_id: int) -> tuple[bytes | str, str, str]
Render the compliance report using the template and review context.
Combines the template with the built context (classifications, questions) to produce the final report document.
list_examples ¶
list_examples(template_id: str, *, requirement_id: str | None = None, classification: ClassificationResult | None = None, min_quality: float | None = None, limit: int = 50, offset: int = 0) -> tuple[list[ExampleModel], int]
List examples for a template with optional filtering.
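A filtered listing sketch (the template id and filter values are hypothetical; the enum member spelling is assumed):

```python
examples, total = review_service.list_examples(
    "tmpl_procurement",
    classification=ClassificationResult.NO,  # member name assumed
    min_quality=0.5,
    limit=10,
)
print(f"showing {len(examples)} of {total}")
```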
update_example ¶
update_example(example_id: int, *, document_excerpt: str | None = None, classification: ClassificationResult | None = None, reasoning: str | None = None, quality_score: float | None = None) -> ExampleModel | None
Update an example's editable fields.
Returns the updated example or None if not found.