Audit
The Audit section defines Undercover.co.id’s evaluation and diagnostic work on how entities are interpreted by generative AI systems.
Audit activities focus on assessment before intervention. They are designed to identify structural, informational, and governance-level risks prior to any remediation or optimization effort.
Audits do not attempt to influence AI outputs directly. Instead, they examine how AI systems currently recognize, summarize, and reference an entity, and where inconsistencies or failures may occur.
Typical audit scopes include:
- entity recognition and classification,
- canonical source integrity,
- reference substitution risk,
- answer stability over time,
- exposure to misinformation or entity spoofing.
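One of the scopes above, answer stability over time, can be made concrete with a minimal sketch: capture how an AI system describes an entity at different points in an audit window, then compare consecutive snapshots for drift. The snapshot texts and the 0.8 similarity threshold below are illustrative assumptions, not part of any specific audit framework.

```python
from difflib import SequenceMatcher

def stability_scores(snapshots):
    """Similarity (0.0-1.0) between each pair of consecutive answer snapshots."""
    return [
        SequenceMatcher(None, a, b).ratio()
        for a, b in zip(snapshots, snapshots[1:])
    ]

# Hypothetical snapshots of an AI system's answer about the same entity,
# captured at successive times during an audit window.
snapshots = [
    "Acme Corp is a logistics provider based in Jakarta.",
    "Acme Corp is a logistics provider headquartered in Jakarta.",
    "Acme Corp is a fintech startup based in Singapore.",
]

scores = stability_scores(snapshots)
# Flag transitions whose similarity falls below an assumed 0.8 threshold.
drift = [i for i, s in enumerate(scores) if s < 0.8]
```

In this sketch the first transition is a harmless rephrasing, while the second changes the entity's industry and location, so it is flagged as drift; a real audit would attach evidence (timestamps, prompts, model versions) to each flagged transition.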
Audit pages within this section describe specific evaluation frameworks and outputs. Each audit is documented with clear boundaries, evidence standards, and reviewable deliverables.
