Publications

VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees

Presented at the International Conference on Machine Learning (ICML) 2024

Formal verification of neural networks often struggles to scale, limiting practical robustness guarantees. This work introduces a post-training approach that transforms existing models into verification-friendly networks without sacrificing accuracy. As a result, robustness can be established faster and for significantly more inputs.
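
As a rough illustration only (the linked code contains the paper's actual implementation), the sketch below shows the general shape of a post-training transformation: a hypothetical magnitude-pruning step applied to an already trained toy network, followed by a check that its predictions are preserved. The network, data, and threshold are all invented; this is not the method proposed in the paper.

```python
# Minimal sketch, not the method from the paper: a hypothetical post-training
# step that zeroes near-zero weights so later verification has fewer active
# terms, with a quick check that the model's predictions are unchanged.
# Network, data, and the pruning threshold are all invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(2, 16)), rng.normal(size=2)

def predict(X, W1, b1, W2, b2):
    return np.argmax(np.maximum(X @ W1.T + b1, 0) @ W2.T + b2, axis=1)

# Held-out inputs, labelled with the original model's own predictions,
# stand in for a test set here.
X = rng.normal(size=(500, 8))
y = predict(X, W1, b1, W2, b2)

# Stand-in "verification-friendly" transform: magnitude pruning of tiny weights.
threshold = 0.05
W1_vf = np.where(np.abs(W1) < threshold, 0.0, W1)
W2_vf = np.where(np.abs(W2) < threshold, 0.0, W2)

agreement = np.mean(predict(X, W1_vf, b1, W2_vf, b2) == y)
sparsity = 1 - (np.count_nonzero(W1_vf) + np.count_nonzero(W2_vf)) / (W1.size + W2.size)
print(f"agreement with original model: {agreement:.3f}, weights zeroed: {sparsity:.1%}")
```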

PDF Code

SafeDeep: A Scalable Robustness Verification Framework for Deep Neural Networks

Presented at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2023

Deep neural networks are increasingly used in safety-critical systems, yet their behavior under small perturbations remains difficult to guarantee. This paper introduces SafeDeep, a verification framework designed to deliver tighter bounds for stronger robustness guarantees. By bridging the gap between theory and practice, the work shows how formal verification can become a practical tool for deploying trustworthy deep learning systems.
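
For context, the snippet below sketches the standard verification query such frameworks answer: given an input and an L∞ perturbation budget, propagate interval bounds through a small ReLU network and check that the worst-case margin of the predicted class stays positive. This naive interval baseline is only for illustration; SafeDeep's contribution is computing tighter bounds than this, and the toy network below is invented (the linked code has the real framework).

```python
# Minimal sketch (not SafeDeep itself): naive interval bound propagation for a
# tiny fully connected ReLU network. The architecture and weights are made up.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)   # hidden layer
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)   # 3-class output

def interval_affine(lo, hi, W, b):
    # Exact bounds of W @ x + b when each x_i lies in [lo_i, hi_i].
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def verify_linf(x, eps, label):
    # True if every input in the L_inf ball of radius eps around x provably
    # keeps `label` as the top class under these interval bounds.
    lo, hi = x - eps, x + eps
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)       # ReLU is monotone
    lo, hi = interval_affine(lo, hi, W2, b2)
    others = np.delete(hi, label)
    return lo[label] > others.max()                     # worst-case margin > 0

x = rng.normal(size=4)
label = int(np.argmax(W2 @ np.maximum(W1 @ x + b1, 0) + b2))
print(verify_linf(x, 0.01, label), verify_linf(x, 0.5, label))
```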

PDF Code

Formal Local Implication Between Two Neural Networks

Presented at the European Conference on Artificial Intelligence (ECAI) 2025

Modern AI systems often rely on compact or modified neural networks, yet it remains unclear whether these models behave safely relative to their original versions. This paper introduces formal local implication, a new verification concept that proves one network makes a correct decision whenever another does, across an entire input region. The approach enables rigorous comparison of original and compact models, providing stronger safety assurances for deploying neural networks in resource-constrained and medical applications.
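
To make the property concrete, the sketch below states local implication operationally and tests it by sampling: for points in an input region, whenever the original network is correct, the compact one must be correct too. Random sampling can only falsify the property; the paper's contribution is proving it formally over the whole region (see the linked code for the actual verifier). Both networks, the region, and the label here are invented.

```python
# Minimal sketch of the property being checked, not the paper's verifier:
# local implication here reads "whenever the original network A is correct on
# a point in the region, the compact network B is correct too".
import numpy as np

rng = np.random.default_rng(2)

def make_net(hidden, seed):
    r = np.random.default_rng(seed)
    return (r.normal(size=(hidden, 4)), r.normal(size=hidden),
            r.normal(size=(3, hidden)), r.normal(size=3))

def predict(x, net):
    W1, b1, W2, b2 = net
    return int(np.argmax(W2 @ np.maximum(W1 @ x + b1, 0) + b2))

net_a = make_net(hidden=32, seed=10)   # "original" network
net_b = make_net(hidden=8, seed=11)    # "compact" network

center, eps, true_label = rng.normal(size=4), 0.1, 0

counterexamples = 0
for _ in range(10_000):
    x = center + rng.uniform(-eps, eps, size=4)   # point in the input region
    a_correct = predict(x, net_a) == true_label
    b_correct = predict(x, net_b) == true_label
    if a_correct and not b_correct:               # violates "A correct => B correct"
        counterexamples += 1
print("implication falsified on", counterexamples, "sampled points")
```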

PDF Code

Robustness and Privacy Interplay in Patient Membership Inference

Presented at the IEEE International Joint Conference on Neural Networks (IJCNN) 2025

Modern personalized deep learning models, especially in healthcare, face privacy risks that go beyond standard membership inference attacks: patient identity can be inferred even without access to the training data. This paper reveals a deep connection between a model’s robustness and its vulnerability to patient membership inference, and proposes a framework that exploits this interplay to identify which patient’s data was used to train a model. Evaluations on real biomedical applications, such as epileptic seizure and cardiac arrhythmia detection, show that privacy and safety are fundamentally linked.
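
As a generic illustration of the robustness/privacy link (not the paper's framework), the toy attack below uses a model's score margin, a cheap proxy for how large a perturbation is needed to flip its prediction, as a membership signal: an overfit model is markedly more robust on the points it memorized. The data, model, and threshold are invented.

```python
# Minimal sketch of the general robustness/privacy interplay, not the paper's
# framework: an overfit model has a much larger score margin on points it
# memorized, so a margin threshold acts as a crude membership test.
# Data, model, and bandwidth below are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
d, n = 10, 100
X_train = rng.normal(size=(n, d))
y_train = rng.choice([-1.0, 1.0], size=n)      # random labels: pure memorization
X_out = rng.normal(size=(n, d))                # same distribution, never trained on

def margin(x, bandwidth=1.0):
    # Overfit RBF scorer: each training point contributes a local bump.
    # |margin| is a cheap proxy for the perturbation needed to flip the sign.
    w = np.exp(-np.sum((X_train - x) ** 2, axis=1) / (2 * bandwidth ** 2))
    return abs(y_train @ w)

member_scores = np.array([margin(x) for x in X_train])
outside_scores = np.array([margin(x) for x in X_out])

tau = np.median(np.concatenate([member_scores, outside_scores]))
print(f"training points above threshold:     {np.mean(member_scores > tau):.2f}")
print(f"non-training points above threshold: {np.mean(outside_scores > tau):.2f}")
```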

PDF

A full list of my publications is available on my Google Scholar profile.