**Description:** Requirements for detecting hallucinations and unfounded claims. Use both heuristic and LLM-based validation.

**Acceptance Criteria:**
- [ ] `GroundedInSourceRequirement` - output must be traceable to provided sources
- [ ] `NoFabricatedFactsRequirement` - LLM-as-judge for unsupported claims
- [ ] `CitationRequirement` - requires citations for factual claims
- [ ] `ConsistencyRequirement` - output consistent with prior context
- [ ] `UncertaintyAcknowledgementRequirement` - admits when unsure
- [ ] Integration with existing `mellea/stdlib/components/intrinsic/rag.py`
- [ ] Tests with known hallucination examples
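To make the heuristic side of these criteria concrete, here is a minimal sketch of what a grounding check like `GroundedInSourceRequirement` could look like. This is an illustration only: the function names, the bag-of-words overlap metric, and the `threshold` parameter are assumptions, not the actual mellea `Requirement` API, and a real implementation would likely combine this heuristic with the LLM-as-judge pass described above.

```python
# Hypothetical sketch of a heuristic grounding check. The real
# Requirement base class and validation hooks in mellea may differ.
import re


def _content_words(text: str) -> set[str]:
    """Lowercased content words (4+ letters) as a crude bag-of-words."""
    return {w.lower() for w in re.findall(r"[a-zA-Z]{4,}", text)}


def sentence_overlap_score(sentence: str, sources: list[str]) -> float:
    """Fraction of a sentence's content words that appear in any source."""
    words = _content_words(sentence)
    if not words:
        return 1.0  # nothing to ground, trivially passes
    source_words = set().union(*(_content_words(s) for s in sources))
    return len(words & source_words) / len(words)


def grounded_in_source(output: str, sources: list[str], threshold: float = 0.5) -> bool:
    """Pass only if every output sentence is sufficiently covered by source vocabulary."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", output) if s.strip()]
    return all(sentence_overlap_score(s, sources) >= threshold for s in sentences)
```

A test with a known hallucination example (last acceptance criterion) would then assert that a sentence whose vocabulary is absent from the sources fails the check, while a faithful paraphrase passes. Token overlap is deliberately cheap and high-recall; the LLM-as-judge requirement handles the paraphrases and entailment cases this heuristic cannot.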