Hussein Fawaz, Omran Ayoub, Davide Andreoletti, Silvia Giordano
SUPSI, Switzerland
IEEE WONS 2026
Links: [[IEEE – forthcoming]] [[Code – forthcoming]]
TL;DR
Making machine learning models more reliable for IoT security improves detection quality, but it also increases energy consumption. This paper studies that trade-off in edge network intrusion detection systems.
Reliability vs. energy at the edge
Edge IoT devices operate under strict energy constraints, yet they are increasingly expected to run reliable ML-based security mechanisms such as network intrusion detection systems (NIDS). Techniques that improve reliability—such as uncertainty-aware models or calibration—often increase computational cost.
This raises a key question: how much energy does reliability actually cost?
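To make the cost concrete, here is a minimal sketch (not taken from the paper) of one common uncertainty-aware technique, Monte Carlo dropout, using an illustrative PyTorch classifier over flow features. Reliability comes from averaging K stochastic forward passes, so inference compute, and with it energy, grows roughly K-fold relative to a single deterministic pass. The model size, dropout rate, and feature dimensions below are assumptions, not values from the paper.

```python
# Illustrative sketch only: Monte Carlo dropout as an uncertainty-aware NIDS model.
# Reliability comes from K stochastic forward passes, so inference cost (and energy)
# grows roughly K-fold compared with a single deterministic pass.
import torch
import torch.nn as nn


class MCDropoutNIDS(nn.Module):
    def __init__(self, n_features: int = 40, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(p=0.3),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)


def predict_with_uncertainty(model: nn.Module, x: torch.Tensor, k: int = 20):
    """Average K stochastic passes; the spread estimates predictive uncertainty."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(k)])
    return probs.mean(dim=0), probs.std(dim=0)  # prediction, uncertainty


model = MCDropoutNIDS()
flows = torch.randn(8, 40)  # 8 dummy flow-feature vectors
mean_probs, uncertainty = predict_with_uncertainty(model, flows, k=20)
# One deterministic pass vs. 20 stochastic passes: roughly 20x the multiply-adds.
```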
Methodology
We evaluate multiple ML-based intrusion detection pipelines deployed at the edge and systematically enhance their reliability using techniques such as calibration-aware learning and uncertainty modeling. We then measure (see the sketch after this list):
- Detection performance
- Reliability metrics
- Energy consumption under realistic edge workloads
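The sketch below is an assumption on our side, not the paper's actual pipeline or metrics. It shows what such a measurement loop can look like: accuracy as the detection metric, expected calibration error (ECE) as the reliability metric, and an energy proxy computed as wall-clock inference time multiplied by an assumed average device power. A real edge measurement would use a hardware power meter or RAPL counters instead of this proxy.

```python
# Minimal evaluation sketch (assumptions, not the paper's pipeline): score a detector
# on accuracy, expected calibration error (ECE), and a time-based energy proxy.
import time
import numpy as np


def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    """Standard ECE: bin-weighted gap between mean confidence and accuracy."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return float(ece)


def evaluate(predict_fn, features, labels, avg_power_watts: float = 5.0):
    """Return detection, reliability, and energy-proxy metrics for one model."""
    start = time.perf_counter()
    probs = predict_fn(features)              # shape: (n_samples, n_classes)
    elapsed_s = time.perf_counter() - start
    preds = probs.argmax(axis=1)
    confidences = probs.max(axis=1)
    correct = (preds == labels).astype(float)
    return {
        "accuracy": float(correct.mean()),
        "ece": expected_calibration_error(confidences, correct),
        "energy_proxy_joules": elapsed_s * avg_power_watts,
    }


# Toy usage with a dummy "detector" over random flow features.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 40)), rng.integers(0, 2, size=1000)
dummy = lambda x: rng.dirichlet((1.0, 1.0), size=len(x))
print(evaluate(dummy, X, y))
```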
Key findings
- Improving reliability consistently increases energy usage, but not always proportionally
- Some reliability techniques offer better energy–reliability trade-offs than others (one way to quantify this is sketched below)
- Feature and model choices strongly influence energy efficiency
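For illustration, one simple way to rank techniques on this trade-off (our assumption, not necessarily the metric used in the paper) is the extra energy spent per unit of calibration improvement, reusing the metric dictionaries returned by the evaluation sketch above:

```python
# One possible trade-off score (an assumption, not necessarily the paper's metric):
# joules of extra energy per point of ECE reduction (lower is better).
def energy_per_reliability_gain(baseline: dict, enhanced: dict) -> float:
    delta_energy = enhanced["energy_proxy_joules"] - baseline["energy_proxy_joules"]
    delta_ece = baseline["ece"] - enhanced["ece"]  # positive if calibration improved
    return float("inf") if delta_ece <= 0 else delta_energy / delta_ece
```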
Implications
Our results highlight the importance of energy-aware trustworthy AI for edge security deployments, where reliability must be balanced against limited resources.
Future work
We plan to release an open-source benchmarking framework for evaluating reliability–energy trade-offs in edge NIDS.