INTERNET OF THINGS, vol. 37, 2026 (SCI-Expanded, Scopus)
Anomaly detection in non-stationary time-series streams is difficult when events are rare, score distributions drift, and operators can inspect only a fixed fraction of alarms. We introduce a deployment-oriented framework built on an unsupervised, train-referenced proximity score computed in a compact Principal Component Analysis (PCA) embedding and stabilized by a causal rolling Median Absolute Deviation (MAD) scale. The score is converted into decision-ready outputs through a strictly leakage-safe protocol: per-series temporal splitting, train-only isotonic calibration on a calibration-fit block, and deterministic tie-safe top-k selection to enforce an explicit alert budget (e.g., q = 1%) on a held-out evaluation block. Across three public labeled benchmarks, the Numenta Anomaly Benchmark (NAB), the Server Machine Dataset (SMD), and the UCR Time Series Anomaly Archive (UCR), we report pooled pointwise area under the receiver operating characteristic curve (ROC-AUC) and area under the precision-recall curve (PR-AUC), together with fixed-budget precision and recall at an alert fraction q = 1%, and series-aware bootstrap confidence intervals. The method achieves the best PR-AUC on SMD (0.500) and NAB (0.359) under the leak-free protocol, while remaining computationally feasible compared with forecasting-residual baselines. On UCR, results indicate an explicit trade-off on shape-driven anomalies, motivating operating-policy tuning. For the unlabeled Intel Berkeley Lab telemetry, we make no detection-accuracy claims; instead, we evaluate budgeted screening utility via a Composite Triage Score that summarizes coverage, stability, concentration, event duration, and runtime.
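The main stages named above (a train-referenced proximity score in a train-only PCA embedding, causal rolling-MAD scaling, and deterministic tie-safe top-k selection under an alert budget q) can be sketched on toy data. This is a minimal illustration under stated assumptions, not the paper's implementation: the isotonic-calibration step is omitted, and the synthetic data, window size, neighbor count, and injected anomaly are all assumptions introduced here.

```python
import numpy as np
from sklearn.decomposition import PCA


def causal_mad_scale(x, window=100):
    """Scale each score by a rolling median/MAD computed over past values only
    (causal: the statistic at time t never uses samples after t)."""
    out = np.empty(len(x))
    for t in range(len(x)):
        hist = x[max(0, t - window + 1):t + 1]
        med = np.median(hist)
        mad = np.median(np.abs(hist - med)) + 1e-9  # guard against zero scale
        out[t] = (x[t] - med) / mad
    return out


# Toy per-series temporal split (assumed data, not from the paper).
rng = np.random.default_rng(0)
train = rng.normal(size=(500, 8))   # train block: the only data PCA sees
evalb = rng.normal(size=(300, 8))   # held-out evaluation block
evalb[50] += 50.0                   # injected anomaly for illustration

pca = PCA(n_components=3).fit(train)  # compact, train-only embedding
ref = pca.transform(train)


def proximity(X, k=10):
    """Train-referenced proximity: distance to the k-th nearest train
    embedding (k is an assumed hyperparameter)."""
    Z = pca.transform(X)
    d = np.linalg.norm(Z[:, None, :] - ref[None, :, :], axis=2)
    return np.sort(d, axis=1)[:, k - 1]


s_ev = causal_mad_scale(proximity(evalb))

# Deterministic tie-safe top-k at alert budget q = 1%:
# primary key = score descending, tie-break = earlier index first,
# so repeated runs and tied scores always yield the same alert set.
q = 0.01
k = max(1, int(np.ceil(q * len(s_ev))))
order = np.lexsort((np.arange(len(s_ev)), -s_ev))
alerts = np.sort(order[:k])  # indices flagged within the budget
```

The `np.lexsort` call is what makes the budget enforcement deterministic: with tied scores, a plain `argsort` may order ties arbitrarily across platforms, whereas the explicit index key always resolves ties toward the earlier timestamp.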