- Internet Traffic Analysis and Secure E-voting
- Adversarial Robustness in Machine Learning
- Privacy-Preserving Technologies in Data
- Network Security and Intrusion Detection
- Anomaly Detection Techniques and Applications
- Advanced Malware Detection Techniques
- Cryptography and Data Security
- Privacy, Security, and Data Protection
- Spam and Phishing Detection
- Advanced Neural Network Applications
- Stochastic Gradient Optimization Techniques
- Security and Verification in Computing
- Peer-to-Peer Network Technologies
- Data Quality and Management
- Software System Performance and Reliability
- Information and Cyber Security
- Hate Speech and Cyberbullying Detection
- Caching and Content Delivery
- Explainable Artificial Intelligence (XAI)
- Domain Adaptation and Few-Shot Learning
- Smart Grid Security and Resilience
- User Authentication and Security Systems
- Ethics and Social Impacts of AI
- Mobile Crowdsensing and Crowdsourcing
- Physical Unclonable Functions (PUFs) and Hardware Security
Princeton University
2016-2025
Nanjing University
2020-2021
Center for Information Technology
2017-2020
NEC (United States)
2018
Uttaranchal University
2016
University of Illinois Urbana-Champaign
2007-2012
University of California, Berkeley
2012
Berkeley College
2012
J.C. Bose University of Science & Technology, YMCA
2010
Oil and Natural Gas Corporation (India)
1984
Federated learning distributes model training among a multitude of agents, who, guided by privacy concerns, perform training using their local data but share only parameter updates for iterative aggregation at the server. In this work, we explore the threat of model poisoning attacks on federated learning initiated by a single, non-colluding malicious agent, where the adversarial objective is to cause the model to misclassify a set of chosen inputs with high confidence. We explore a number of strategies to carry out this attack, starting with simple boosting of the malicious agent's update...
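The "boosting" idea above can be shown in a minimal sketch. This is an illustrative toy, not the paper's code: the server here is assumed to use plain federated averaging, so a single malicious agent can scale its adversarial update by the number of agents to survive the averaging step. All values and agent counts are hypothetical.

```python
# Toy sketch of explicit boosting against federated averaging (illustrative
# assumptions: the server averages all updates equally, one malicious agent).

def fedavg(updates):
    """Average a list of parameter-update vectors coordinate-wise."""
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(updates[0]))]

n_agents = 10
benign = [[0.1, -0.2] for _ in range(n_agents - 1)]  # honest updates
malicious_goal = [0.5, 0.8]                          # update the attacker wants applied
boosted = [n_agents * x for x in malicious_goal]     # boost by the number of agents

aggregate = fedavg(benign + [boosted])
# The honest contribution is diluted by 1/n, while the boosted malicious
# direction survives the averaging almost intact: aggregate ~ [0.59, 0.62].
```

The point of the sketch is that averaging divides every update by `n_agents`, so multiplying the malicious update by `n_agents` cancels the dilution exactly.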
A promising approach to mitigating the privacy risks in Online Social Networks (OSNs) is to shift access control enforcement from the OSN provider to the user by means of encryption. However, this creates the challenge of key management to support the complex policies and dynamic groups involved in OSNs. To address this, we propose EASiER, an architecture that supports fine-grained policies and dynamic group membership using attribute-based encryption. A novel feature of our architecture, however, is that it is possible to remove a user without issuing new keys to other users or...
Sign recognition is an integral part of autonomous cars. Any misclassification of traffic signs can potentially lead to a multitude of disastrous consequences, ranging from a life-threatening accident to even a large-scale interruption of transportation services relying on autonomous cars. In this paper, we propose and examine security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed attacks DARTS). In particular, we introduce two novel methods to create these toxic signs. First,...
The arms race between attacks and defenses for machine learning models has come to a forefront in recent years, in both the security community and the privacy community. However, one big limitation of previous research is that the security domain and the privacy domain have typically been considered separately. It is thus unclear whether defense methods in one domain will have any unexpected impact on the other domain. In this paper, we take a step towards resolving this limitation by combining the two domains. In particular, we measure the success of membership inference attacks against six...
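As background for the membership inference attacks measured above, a minimal sketch of the simplest such attack may help. This is an assumed simplification, not the paper's attack: models tend to be more confident on their training members, so thresholding the top-class confidence already leaks membership. All confidences and the threshold below are hypothetical.

```python
# Toy confidence-thresholding membership inference (illustrative values).

def infer_membership(confidences, threshold=0.9):
    """Guess 'member' iff the model's top-class confidence exceeds threshold."""
    return [c > threshold for c in confidences]

# Hypothetical top-class confidences for training members vs. non-members.
member_conf = [0.99, 0.97, 0.95, 0.88]
nonmember_conf = [0.80, 0.92, 0.70, 0.65]

guesses = infer_membership(member_conf + nonmember_conf)
truth = [True] * 4 + [False] * 4
accuracy = sum(g == t for g, t in zip(guesses, truth)) / len(truth)
# Here 3 of 4 members and 3 of 4 non-members are guessed correctly (0.75).
```

A defense that changes the model's confidence distribution (e.g., adversarial training) can therefore shift membership inference risk, which is exactly the cross-domain interaction the paper measures.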
We propose the use of data transformations as a defense against evasion attacks on ML classifiers. We present and investigate strategies for incorporating a variety of data transformations, including dimensionality reduction via Principal Component Analysis, to enhance the resilience of machine learning, targeting both the classification and training phases. We empirically evaluate and demonstrate the feasibility of linear transformations as a defense mechanism using multiple real-world datasets. Our key findings are that the defense is (i) effective against the best known attacks from the literature, resulting in...
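The dimensionality-reduction idea can be sketched briefly. This is a minimal illustration under assumed data and an assumed choice of `k`, not the paper's configuration: inputs are projected onto the top-k principal components before classification, discarding low-variance directions that perturbations can exploit.

```python
import numpy as np

# Sketch of a PCA-projection defense (illustrative data, k is an assumption).

def pca_projection(X, k):
    """Return a function projecting inputs onto the top-k principal components."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:k]  # top-k principal directions (orthonormal rows)
    def project(x):
        # Project into the k-dim subspace, then map back to input space.
        return mean + (x - mean) @ components.T @ components
    return project

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
project = pca_projection(X, k=3)

x = X[0]
x_adv = x + 0.1 * rng.normal(size=10)  # stand-in for an evasion perturbation
# Both inputs pass through the same 3-dim subspace, so the projection can
# only shrink the perturbation (orthogonal projections are contractions).
gap_before = np.linalg.norm(x_adv - x)
gap_after = np.linalg.norm(project(x_adv) - project(x))
```

Because the projection matrix is an orthogonal projector, `gap_after <= gap_before` always holds; the components of the perturbation outside the retained subspace are removed entirely.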
We propose Falcon, an end-to-end 3-party protocol for efficient private training and inference of large machine learning models. Falcon presents four main advantages: (i) it is highly expressive, with support for high-capacity networks such as VGG16; (ii) it supports batch normalization, which is important for complex networks such as AlexNet; (iii) it guarantees security with abort against malicious adversaries, assuming an honest majority; (iv) lastly, new theoretical insights in its design allow it to outperform existing...
Warning: this paper contains data, prompts, and model outputs that are offensive in nature. Recently, there has been a surge of interest in integrating vision into Large Language Models (LLMs), exemplified by Visual Language Models (VLMs) such as Flamingo and GPT-4. This paper sheds light on the security and safety implications of this trend. First, we underscore that the continuous and high-dimensional nature of the visual input makes it a weak link against adversarial attacks, representing an expanded attack surface of vision-integrated LLMs. Second,...
In this work, we investigate whether statistical privacy can enhance the performance of ORAM mechanisms while providing rigorous guarantees. We propose a formal and rigorous framework for developing ORAM protocols with statistical security, viz., differentially private ORAM (DP-ORAM). We present Root ORAM, a family of DP-ORAMs that provide a tunable, multi-dimensional trade-off between the desired bandwidth overhead, local storage, and system security. We theoretically analyze Root ORAM to quantify both its security and performance. We experimentally demonstrate...
Understanding social network structure and evolution has important implications for many aspects of system design, including provisioning, bootstrapping trust and reputation systems via social networks, and defenses against Sybil attacks. Several recent results suggest that augmenting the social network structure with user attributes (e.g., location, employer, communities of interest) can provide a more fine-grained understanding of social networks. However, there have been few studies providing a systematic understanding of these effects at scale.
Sybil attacks are a fundamental threat to the security of distributed systems. Recently, there has been growing interest in leveraging social networks to mitigate Sybil attacks. However, existing approaches suffer from one or more drawbacks, including bootstrapping from only known benign nodes, failing to tolerate noise in their prior knowledge about nodes, and not being scalable. In this paper, we aim to overcome these drawbacks. Toward this goal, we introduce SybilBelief, a semi-supervised learning framework, to detect...
We consider the effect that attackers who disrupt anonymous communications have on the security of traditional high- and low-latency anonymous communication systems, as well as Hydra-Onion and Cashmere, systems that aim to offer reliable mixing, and Salsa, a peer-to-peer anonymous communication network. We show that a denial of service (DoS) attack lowers anonymity, as messages need to get retransmitted to be delivered, presenting more opportunities for attack. We uncover a fundamental limit on the security of mix networks, showing that they cannot tolerate a majority of nodes being malicious. Cashmere,...
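The retransmission effect can be illustrated with a back-of-the-envelope model. This is an assumed simplification, not the paper's analysis: suppose each independently chosen path is fully adversary-controlled with some probability `p_path` (the value below is hypothetical); every DoS-forced retransmission then gives the adversary another chance.

```python
# Toy model: compromise probability compounds with each retransmission,
# assuming independent path selections (an illustrative assumption).

def compromise_probability(p_path, retransmissions):
    """P(at least one of the paths used is adversary-controlled)."""
    return 1 - (1 - p_path) ** (retransmissions + 1)

p = 0.1  # hypothetical chance a single path is fully compromised
print(round(compromise_probability(p, 0), 4))  # 0.1    (no retries)
print(round(compromise_probability(p, 3), 4))  # 0.3439 (three retries)
```

Even a modest per-path compromise probability grows quickly once dropped messages force fresh path selections, which is the intuition behind DoS lowering anonymity.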
Differential privacy (DP) is a widely accepted mathematical framework for protecting data privacy. Simply stated, it guarantees that the distribution of query results changes only slightly due to the modification of any one tuple in the database. This allows protection, even against powerful adversaries, who know the entire database except one tuple. For providing this guarantee, differential privacy mechanisms assume independence of tuples in the database - a vulnerable assumption that can lead to degradation in expected privacy levels, especially when...
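As background for the guarantee described above, a minimal sketch of a standard DP mechanism may help. This is an illustration of the classic Laplace mechanism, not this paper's contribution: a count query has sensitivity 1 (one tuple changes it by at most 1), so adding Laplace(1/epsilon) noise satisfies epsilon-DP, under exactly the tuple-independence assumption the paper questions. The records and predicate are hypothetical.

```python
import math
import random

# Classic Laplace mechanism for a count query (illustrative data).

def laplace_noise(scale):
    """Sample Laplace(0, scale) via inverse-transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon):
    """Count matching records, perturbed for epsilon-DP (sensitivity 1)."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(0)
ages = [23, 35, 41, 29, 52, 38]
noisy = dp_count(ages, lambda a: a >= 30, epsilon=1.0)
# The true count is 4; the noisy answer is close but randomized, so the
# result distribution barely changes if any single record is modified.
```

When tuples are correlated (the paper's concern), modifying one tuple can implicitly change others, so the sensitivity-1 argument, and hence the advertised privacy level, no longer holds.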
The increasingly sophisticated Advanced Persistent Threat (APT) attacks have become a serious challenge for enterprise IT security. Attack causality analysis, which tracks multi-hop causal relationships between files and processes to diagnose attack provenances and consequences, is the first step towards understanding APT attacks and taking appropriate responses. Since attack causality analysis is a time-critical mission, it is essential to design causal tracking systems that extract useful attack information in a timely manner. However, prior work...
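The multi-hop causal tracking described above can be sketched as a graph traversal. This is a toy model, not the paper's system: entity names and causal edges below are hypothetical, and a backward trace from a flagged file walks parent edges to recover the processes and files on its provenance chain.

```python
from collections import deque

# Hypothetical provenance graph: child entity -> entities that influenced it.
parents = {
    "exfil.zip": ["tar_proc"],
    "tar_proc": ["bash_proc", "secrets.db"],
    "bash_proc": ["sshd_proc"],
    "secrets.db": [],
    "sshd_proc": [],
}

def backward_trace(start):
    """BFS over reversed causal edges; returns all ancestors of `start`."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for p in parents.get(node, []):
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return seen

print(sorted(backward_trace("exfil.zip")))
# ['bash_proc', 'secrets.db', 'sshd_proc', 'tar_proc']
```

Real systems must do this over millions of system-call-derived edges, which is why the timeliness of extraction matters so much.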
As a research community, we are still lacking a systematic understanding of the progress on adversarial robustness, which often makes it hard to identify the most promising ideas in training robust models. A key challenge in benchmarking robustness is that its evaluation is often error-prone, leading to robustness overestimation. Our goal is to establish a standardized benchmark of adversarial robustness, which as accurately as possible reflects the robustness of the considered models within a reasonable computational budget. To this end, we start by considering the image classification task and...
A multitude of privacy breaches, both accidental and malicious, have prompted users to distrust centralized providers of online social networks (OSNs) and to investigate decentralized solutions. We examine the design of a fully decentralized (peer-to-peer) OSN, with a special focus on security. In particular, we wish to protect the confidentiality, integrity, and availability of user content and relationships. We propose DECENT, an architecture for OSNs that uses a distributed hash table to store user data, and features cryptographic protections...
Nowadays, many computer and communication systems generate graph data. Graph data span many different domains, ranging from online social network data for networks like Facebook to epidemiological data used to study the spread of infectious diseases. Graph data are shared regularly for purposes including academic research and business collaborations. Since graph data may be sensitive, data owners often use various anonymization techniques that can compromise the resulting utility of the anonymized data. To make matters worse, there exist several state-of-the-art...
The Tor network is a widely used system for anonymous communication. However, Tor is known to be vulnerable to attackers who can observe traffic at both ends of the communication path. In this paper, we show that prior attacks are just the tip of the iceberg. We present a suite of new attacks, called Raptor, that can be launched by Autonomous Systems (ASes) to compromise user anonymity. First, AS-level adversaries can exploit the asymmetric nature of Internet routing to increase the chance of observing at least one direction of user traffic. Second, natural churn in Internet routing means that, over time, more ASes lie...
Online social networks (OSNs) such as Facebook and Google+ have transformed the way our society communicates. However, this success has come at the cost of user privacy; in today's OSNs, users are not in control of their own data, and depend on OSN operators to enforce access control policies. A multitude of privacy breaches has spurred research into privacy-preserving alternatives for social networking, exploring a number of techniques for storing, disseminating, and controlling user data in a decentralized fashion. In this paper, we argue that...