Hallucination

Definition

When a language model generates text that is fluent but factually incorrect or unsupported by the input.

Hallucination occurs when a language model produces confident-sounding output that is fabricated or contradicts the source material. In speech recognition, hallucination can manifest as the model generating plausible-sounding words or phrases that were never spoken in the audio — especially during silence or background noise.

In text refinement, hallucination might cause the model to add information not present in the original dictation, change the meaning of a sentence, or invent quotes or statistics. Ummless mitigates refinement hallucination with carefully designed prompts that instruct the model to preserve the speaker's original meaning and to modify only style, grammar, and formatting.
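As a concrete illustration, a constraint-first refinement prompt might be structured as in the sketch below. The prompt wording, the refine function, and the call_model stub are assumptions made for this example; they are not Ummless's actual prompts or API.

# Illustrative sketch of a meaning-preserving refinement prompt. The
# prompt wording and the call_model stub are assumptions made for this
# example, not Ummless's actual prompts or internal API.

REFINEMENT_PROMPT = (
    "You are a copy editor. Rewrite the dictated text below, fixing "
    "grammar, punctuation, and formatting only. Preserve the speaker's "
    "meaning exactly: do not add information, quotes, or statistics, "
    "and do not remove or reorder ideas. If a passage is unclear, "
    "leave it unchanged rather than guessing.\n\n"
    "Dictated text:\n{dictation}"
)

def call_model(prompt: str) -> str:
    # Placeholder for whatever LLM API is actually in use.
    raise NotImplementedError

def refine(dictation: str) -> str:
    # The prompt enumerates what the model may change and forbids
    # everything else, narrowing the space in which hallucination
    # can occur.
    return call_model(REFINEMENT_PROMPT.format(dictation=dictation))

In practice, pinning decoding parameters such as a low sampling temperature further reduces the chance of the model drifting from the source text.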

Frequently Asked Questions

What causes AI hallucination?

Language models are trained to produce fluent, statistically likely text rather than verified facts. When the input is ambiguous, underspecified, or poorly covered by the training data, the model fills the gap with content that sounds plausible but is not grounded in the input or in reality.
