-
GROOD - GRadient-aware Out-Of-Distribution detection
Publications · Out-of-Distribution (OOD) detection, the task of identifying inputs that a model has not been trained on, is fundamental for the safe and reliable deployment of deep learning models in real-world applications like autonomous driving and healthcare. Models that perform well on familiar, in-distribution...
-
Decoding LLM Hallucinations: An In-Depth Survey Summary
Paper Review · The rapid advancement of Large Language Models (LLMs) has brought transformative capabilities, yet their tendency to “hallucinate”—generating outputs that are nonsensical, factually incorrect, or unfaithful to provided context—poses significant risks to their reliability, especially in information-critical applications. A comprehensive survey (Huang et al., 2025) systematically explores this phenomenon, offering a detailed taxonomy, analyzing root causes,...
-
Topology of Out-of-Distribution Examples in Deep Neural Networks
Paper Review · As deep neural networks (DNNs) become more common, concerns about their robustness, particularly when facing unfamiliar inputs, are growing. These models often exhibit overconfidence when making incorrect predictions on out-of-distribution (OOD) examples. This...