No Labels, No Problem?
Designing an Explainable Unsupervised Anomaly Detection System for Ambient Assisted Living
Ambient Assisted Living (AAL) technologies are crucial for supporting independent living, yet automatically detecting health-related anomalies in the resulting sensor data remains challenging. The scarcity of labeled data makes supervised machine learning impractical, while standard unsupervised models often lack the interpretability caregivers need to trust their alerts.
To address this, we are developing a web-based system that uses unsupervised algorithms to detect anomalies in sensor data, integrated with Explainable AI (XAI) to contextualize alerts. This research aims to contribute validated design guidelines for creating trust-aware, unsupervised monitoring systems that balance algorithmic precision with human interpretability.
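The pairing of unsupervised detection with interpretable explanations can be sketched as follows. This is a minimal illustration, not the system described above: it assumes scikit-learn's IsolationForest as the unsupervised detector, synthetic daily-activity features (the feature names are invented for the example), and a simple per-feature z-score against the learned-normal baseline standing in for a full XAI component.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic daily-activity features per day: [sleep_hours, kitchen_events, door_openings]
# (hypothetical sensor-derived features, for illustration only)
normal = rng.normal(loc=[7.5, 20.0, 6.0], scale=[0.5, 3.0, 1.0], size=(200, 3))
anomaly = np.array([[3.0, 2.0, 6.0]])  # very little sleep, almost no kitchen activity

# Unsupervised detector trained only on routine (unlabeled) days
model = IsolationForest(random_state=0).fit(normal)
flag = model.predict(anomaly)[0]  # -1 = anomalous, 1 = normal

# Simple explanation: how far each sensor feature deviates from its baseline
mu, sigma = normal.mean(axis=0), normal.std(axis=0)
z = (anomaly - mu) / sigma
for name, dev in zip(["sleep_hours", "kitchen_events", "door_openings"], z[0]):
    print(f"{name}: {dev:+.1f} sd from baseline")
```

An alert contextualized this way ("sleep far below baseline, kitchen activity near zero") gives a caregiver something to verify, rather than an opaque anomaly score.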
Working Paper.