Z. Berkay Celik

ORCID: 0000-0001-7362-8905
Research Areas
  • Advanced Malware Detection Techniques
  • Network Security and Intrusion Detection
  • Adversarial Robustness in Machine Learning
  • Anomaly Detection Techniques and Applications
  • Security and Verification in Computing
  • Privacy-Preserving Technologies in Data
  • Digital and Cyber Forensics
  • IoT and Edge/Fog Computing
  • User Authentication and Security Systems
  • Internet Traffic Analysis and Secure E-voting
  • Software Testing and Debugging Techniques
  • Vehicular Ad Hoc Networks (VANETs)
  • Context-Aware Activity Recognition Systems
  • Information and Cyber Security
  • Privacy, Security, and Data Protection
  • Real-Time Systems Scheduling
  • Cryptography and Data Security
  • Smart Grid Security and Resilience
  • Software System Performance and Reliability
  • Formal Methods in Verification
  • Green IT and Sustainability
  • Advanced Neural Network Applications
  • Education Practices and Challenges
  • Autonomous Vehicle Technology and Safety
  • Software Reliability and Analysis Research

Purdue University West Lafayette
2019-2025

University of Arizona
2022-2023

Virginia Tech
2023

Indiana University – Purdue University Indianapolis
2022

Pennsylvania State University
2011-2019

Institute of Electrical and Electronics Engineers
2019

Regional Municipality of Niagara
2019

IEEE Computer Society
2019

Adnan Menderes University
2016-2018

Istanbul Technical University
2013

Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision,...

10.1109/eurosp.2016.36 article EN 2016-03-01
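
A minimal sketch of the saliency-guided crafting idea from the abstract above, using a toy linear-softmax classifier with random weights as a stand-in for a trained DNN; the model, step size, and greedy loop are illustrative assumptions, not the paper's algorithm or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-softmax model as a stand-in for a trained DNN (weights are
# random here; the paper attacks real networks).
n_features, n_classes = 10, 3
W, b = rng.normal(size=(n_classes, n_features)), rng.normal(size=n_classes)

def predict_proba(x):
    z = W @ x + b
    e = np.exp(z - z.max())
    return e / e.sum()

def jacobian(x):
    # Closed-form d p_c / d x_j for softmax(Wx + b).
    p = predict_proba(x)
    return p[:, None] * (W - p @ W)

def saliency_attack(x, target, eps=0.05, max_steps=300):
    """Greedy, feature-at-a-time perturbation: at each step, nudge the single
    input feature whose Jacobian entry most moves the target class (the
    saliency-map intuition: a few high-impact features suffice)."""
    x = x.copy()
    for _ in range(max_steps):
        if predict_proba(x).argmax() == target:
            break  # model now misclassifies into the adversarial target
        s = jacobian(x)[target]
        j = int(np.argmax(np.abs(s)))
        x[j] += eps * np.sign(s[j])
    return x

x = rng.normal(size=n_features)
src = int(predict_proba(x).argmax())
adv = saliency_attack(x, target=(src + 1) % n_classes)
print(f"class {src} -> {int(predict_proba(adv).argmax())}, "
      f"L1 distortion {np.abs(adv - x).sum():.2f}")
```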

Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such...

10.1145/3052973.3053009 article EN Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security 2017-03-31
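
A minimal sketch of the black-box idea above under simplified assumptions: the "remote" model is a hidden linear classifier that only answers label queries, the substitute is a local logistic regression, and the perturbation budget is arbitrary. None of these stand-ins are the paper's setup; they only illustrate the query-label-train-transfer loop.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- The "remote" oracle: the attacker can only query labels, not internals. ---
w_secret = rng.normal(size=5)
def oracle_label(x):            # stands in for a remotely hosted model's API
    return int(x @ w_secret > 0)

# --- Step 1: label synthetic inputs by querying the oracle. ---
X = rng.normal(size=(500, 5))
y = np.array([oracle_label(x) for x in X])

# --- Step 2: train a local substitute model (logistic regression via GD). ---
w_sub = np.zeros(5)
for _ in range(300):
    p = 1 / (1 + np.exp(-X @ w_sub))
    w_sub -= 0.1 * X.T @ (p - y) / len(X)

# --- Step 3: craft an adversarial example on the substitute (FGSM-style)
#     and check whether it transfers to the oracle. ---
x0 = rng.normal(size=5)
y0 = oracle_label(x0)
grad = (1 - 2 * y0) * w_sub         # direction that flips the substitute's score
x_adv = x0 + 1.0 * np.sign(grad)    # bounded perturbation (budget is illustrative)
print(f"oracle label: {y0} -> {oracle_label(x_adv)} "
      f"(transferred: {y0 != oracle_label(x_adv)})")
```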

Broadly defined as the Internet of Things (IoT), the growth of commodity devices that integrate physical processes with digital connectivity has changed the way we live, play, and work. To date, the traditional approach to securing IoT has treated devices individually. However, in practice, it has been recently shown that interactions among devices are often the real cause of safety and security violations. In this paper, we present IoTGuard, a dynamic, policy-based enforcement system for IoT, which protects users from unsafe and insecure device states by...

10.14722/ndss.2019.23326 article EN 2019-01-01
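
A minimal sketch of dynamic, policy-based enforcement in the spirit of the abstract above: device actions are intercepted and checked against safety policies over the joint state of all devices before being applied. The devices and policies below are illustrative assumptions, not the paper's policy language or enforcement mechanism.

```python
state = {"mode": "away", "door": "locked", "heater": "off", "window": "closed"}

# Each policy: (human-readable rule, predicate over the post-action state).
POLICIES = [
    ("door must stay locked while home is in away mode",
     lambda s: not (s["mode"] == "away" and s["door"] == "unlocked")),
    ("heater must not run while a window is open",
     lambda s: not (s["heater"] == "on" and s["window"] == "open")),
]

def enforce(device, value):
    """Apply a device action only if no safety policy would be violated."""
    proposed = {**state, device: value}   # hypothetical post-action state
    for rule, holds in POLICIES:
        if not holds(proposed):
            print(f"BLOCKED  {device}={value}  (violates: {rule})")
            return False
    state.update(proposed)
    print(f"ALLOWED  {device}={value}")
    return True

# An app (or chain of interacting apps) attempts a sequence of actions:
enforce("window", "open")     # allowed
enforce("heater", "on")       # blocked: window is open
enforce("door", "unlocked")   # blocked: home is in away mode
enforce("mode", "home")       # allowed
enforce("door", "unlocked")   # now allowed
```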

Broadly defined as the Internet of Things (IoT), the growth of commodity devices that integrate physical processes with digital systems has changed the way we live, play, and work. Yet existing IoT platforms cannot evaluate whether an IoT app or environment is safe, secure, and operates correctly. In this paper, we present Soteria, a static analysis system for validating whether an IoT app or IoT environment (collection of apps working in concert) adheres to identified safety, security, and functional properties. Soteria operates in three phases; (a) translation...

10.48550/arxiv.1805.08876 preprint EN other-oa arXiv (Cornell University) 2018-01-01
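
A minimal sketch of the property-checking idea above: extract each app's trigger-action rules into a finite state model, then exhaustively explore reachable joint states for a safety violation. The rules, events, and property are illustrative stand-ins for the paper's intermediate representation and model checker.

```python
from collections import deque

# Rules: (trigger attribute, value) -> (actuated attribute, new value)
RULES = [
    (("motion", "active"),   ("light", "on")),
    (("motion", "inactive"), ("light", "off")),
    (("smoke", "detected"),  ("door", "unlocked")),   # fire-safety app
    (("mode", "away"),       ("door", "locked")),     # security app
]
EVENTS = [("motion", "active"), ("motion", "inactive"),
          ("smoke", "detected"), ("mode", "away")]

def step(state, event):
    s = dict(state)
    s[event[0]] = event[1]
    for trigger, (attr, val) in RULES:
        if trigger == event:
            s[attr] = val
    return frozenset(s.items())

def violates(state):
    s = dict(state)  # property: door must never be locked while smoke is detected
    return s.get("smoke") == "detected" and s.get("door") == "locked"

init = frozenset({"mode": "home", "door": "locked", "light": "off"}.items())
seen, queue = {init}, deque([(init, [])])
while queue:
    state, trace = queue.popleft()
    if violates(state):
        print("Property violated via events:", [f"{a}={v}" for a, v in trace])
        break
    for ev in EVENTS:
        nxt = step(state, ev)
        if nxt not in seen:
            seen.add(nxt)
            queue.append((nxt, trace + [ev]))
else:
    print("Property holds in all reachable states")
```

Here two individually reasonable apps interact badly: the fire-safety app unlocks the door on smoke, and the security app re-locks it when the home switches to away mode, reaching the unsafe joint state the property forbids.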

Recent advances in machine learning have led to innovative applications and services that use computational structures to reason about complex phenomena. Over the past several years, the security and machine-learning communities have developed novel techniques for constructing adversarial samples--malicious inputs crafted to mislead (and therefore corrupt the integrity of) systems built on computationally learned models. The authors consider the underlying causes of adversarial samples and the future countermeasures that might mitigate them.

10.1109/msp.2016.51 article EN IEEE Security & Privacy 2016-05-01

Broadly defined as the Internet of Things (IoT), the growth of commodity devices that integrate physical processes with digital connectivity has had profound effects on society--smart homes, personal monitoring devices, enhanced manufacturing, and other IoT apps have changed the way we live, play, and work. Yet extant platforms provide few means of evaluating the use (and potential avenues for misuse) of sensitive information. Thus, consumers and organizations have little information with which to assess the security and privacy risks these...

10.48550/arxiv.1802.08307 preprint EN other-oa arXiv (Cornell University) 2018-01-01
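
A minimal sketch of source-to-sink information-flow tracking in the spirit of the abstract above: mark sensitive data sources, propagate taint through assignments, and report flows that reach an external sink. The tiny three-address "app", source set, and sink set are illustrative stand-ins for the paper's analysis.

```python
SOURCES = {"location", "camera_image", "door_state"}   # sensitive data sources
SINKS = {"http_post", "send_sms"}                      # external channels

# (op, target, args): app statements in evaluation order.
APP = [
    ("assign", "loc",  ["location"]),        # loc = device.location
    ("assign", "msg",  ["loc", "'home'"]),   # msg = loc + 'home'
    ("assign", "temp", ["thermostat"]),      # untainted reading
    ("call",   "http_post", ["temp"]),       # benign flow
    ("call",   "send_sms",  ["msg"]),        # leaks location-derived data
]

tainted = set(SOURCES)
for op, target, args in APP:
    if op == "assign":
        if any(a in tainted for a in args):  # taint propagates through data flow
            tainted.add(target)
    elif op == "call" and target in SINKS:
        leaked = [a for a in args if a in tainted]
        if leaked:
            print(f"SENSITIVE FLOW: {leaked} reaches sink '{target}'")
```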

EXplainable AI (XAI) methods have been proposed to interpret how a deep neural network predicts inputs through model saliency explanations that highlight the input parts deemed important for arriving at a decision for a specific target. However, it remains challenging to quantify the correctness of their interpretability, as current evaluation approaches either require subjective input from humans or incur high computation cost with automated evaluation. In this paper, we propose backdoor trigger patterns--hidden...

10.1145/3447548.3467213 article EN 2021-08-12
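
A minimal sketch of the evaluation intuition above: if a model is backdoored so that a known trigger patch controls its prediction, a faithful saliency explanation should concentrate on that patch, and the overlap between the top saliency pixels and the trigger region gives an objective score with no human judgment. The synthetic saliency map and IoU scoring below are illustrative assumptions, not the paper's metric.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8
saliency = rng.random((H, W))          # pretend output of some XAI method
trigger = np.zeros((H, W), dtype=bool)
trigger[5:8, 5:8] = True               # known 3x3 trigger patch location
saliency[5:8, 5:8] += 1.0              # a faithful explanation highlights the patch

def trigger_iou(saliency, trigger_mask):
    """IoU between the top-k saliency pixels and the ground-truth trigger."""
    k = trigger_mask.sum()
    top_k = np.zeros_like(trigger_mask)
    idx = np.unravel_index(np.argsort(saliency, axis=None)[-k:], saliency.shape)
    top_k[idx] = True
    inter = (top_k & trigger_mask).sum()
    union = (top_k | trigger_mask).sum()
    return inter / union

print(f"trigger recovery IoU: {trigger_iou(saliency, trigger):.2f}")  # ~1.0 here
```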

Users seek security & privacy (S&P) advice from online resources, including trusted websites and content-sharing platforms. These resources help users understand S&P technologies and tools and suggest actionable strategies. Large Language Models (LLMs) have recently emerged as trusted information sources. However, their accuracy and correctness have been called into question. Prior research has outlined the shortcomings of LLMs in answering multiple-choice questions and the ability of users to inadvertently circumvent model...

10.1145/3627106.3627196 article EN cc-by Annual Computer Security Applications Conference 2023-12-02

In a smart home system, multiple users have access to multiple devices, typically through a dedicated app installed on a mobile device. Traditional access control mechanisms consider one unique trusted user that controls access to the devices. However, multi-user multi-device smart home settings pose fundamentally different challenges to traditional single-user systems. For instance, in a multi-user environment, users have conflicting, complex, and dynamically changing demands on multiple devices, which cannot be handled by traditional access control techniques. To address these challenges, in this paper, we...

10.1145/3395351.3399358 article EN 2020-07-08
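
A minimal sketch of the multi-user conflict problem described above: each user submits per-device demands, and the system surfaces contradictory policies instead of silently letting the last writer win. The users, devices, and priority-based resolution rule are illustrative assumptions, not the paper's mechanism.

```python
from collections import defaultdict

# Higher-priority users listed first (one possible, illustrative policy).
PRIORITY = ["parent", "teen", "guest"]
DEMANDS = [
    ("parent", "thermostat", "68F"),
    ("teen",   "thermostat", "75F"),       # conflicts with parent
    ("guest",  "tv",         "on"),
    ("parent", "front_door", "locked"),
    ("guest",  "front_door", "unlocked"),  # conflicts with parent
]

by_device = defaultdict(list)
for user, device, desired in DEMANDS:
    by_device[device].append((user, desired))

for device, wants in by_device.items():
    if len({s for _, s in wants}) > 1:
        # Conflict detected: apply the highest-priority user's demand.
        winner = min(wants, key=lambda w: PRIORITY.index(w[0]))
        print(f"CONFLICT on {device}: {wants} -> applying {winner}")
    else:
        print(f"OK {device}: {wants[0]}")
```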

Users trust IoT apps to control and automate their smart devices. These apps necessarily have access to sensitive data to implement their functionality. However, users lack visibility into how their sensitive data is used, and often blindly trust the app developers. In this paper, we present IoTWatcH, a dynamic analysis tool that uncovers the privacy risks of IoT apps in real time. We designed and built IoTWatcH through a comprehensive survey addressing the privacy needs of users. IoTWatcH operates in four phases: (a) it provides users with an interface to specify...

10.2478/popets-2021-0009 article EN cc-by-nc-nd Proceedings on Privacy Enhancing Technologies 2020-11-09
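
A minimal sketch of the runtime privacy-labeling step described above: classify the content an app transmits into privacy labels and alert the user when a flow matches labels they opted into. The real tool uses an NLP classifier; the keyword table, labels, and flows below are illustrative stand-ins.

```python
PRIVACY_LABELS = {
    "device-info":   {"thermostat", "switch", "light", "battery"},
    "device-state":  {"on", "off", "locked", "unlocked", "open"},
    "user-behavior": {"sleep", "home", "away", "arrived", "left"},
    "location":      {"address", "latitude", "longitude", "zip"},
}
USER_OPTS_IN = {"location", "user-behavior"}  # labels the user wants alerts for

def classify_flow(message: str) -> set:
    words = set(message.lower().replace(",", " ").split())
    return {label for label, kws in PRIVACY_LABELS.items() if words & kws}

# Runtime flows intercepted from an app: (recipient, message content).
FLOWS = [
    ("https://analytics.example.com", "thermostat battery 80"),
    ("+1-555-0100", "user arrived home, door unlocked"),
]

for recipient, message in FLOWS:
    labels = classify_flow(message)
    if labels & USER_OPTS_IN:
        print(f"ALERT: flow to {recipient} leaks {sorted(labels & USER_OPTS_IN)}")
    else:
        print(f"info: flow to {recipient} labeled {sorted(labels) or ['none']}")
```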

Multiple users have access to multiple devices in a smart home system, typically through a dedicated app installed on a mobile device. Traditional access control mechanisms consider one unique, trusted user that controls access to the devices. However, multi-user multi-device smart home settings pose fundamentally different challenges to traditional single-user systems. For instance, in a multi-user environment, users have conflicting, complex, and dynamically changing demands that cannot be handled by traditional access control techniques. Moreover, devices from different platforms/vendors can share...

10.1145/3543513 article EN ACM Transactions on Internet of Things 2022-06-15

This paper presents a framework for evaluating the transport-layer feature space of malware heartbeat traffic. We utilize these features in a prototype detection system to distinguish malware traffic from traffic generated by legitimate applications. In contrast to previous work, we eliminate features at risk of producing overly optimistic detection results, detect previously unobserved anomalous behavior, and rely only on tamper-resistant features, making it difficult for sophisticated malware to avoid detection. Further, we characterize the evolution of malware evasion...

10.1109/milcom.2015.7357464 article EN 2015-10-01
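
A minimal sketch of the intuition above: malware heartbeat (beaconing) traffic tends to show highly regular transport-layer timing and near-constant sizes, which are hard to tamper with without breaking the command-and-control channel. The traces, features, and thresholds below are synthetic illustrations, not the paper's feature set or classifier.

```python
import statistics

def heartbeat_features(timestamps, sizes):
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        # Coefficient of variation of inter-arrival times: near 0 for beacons.
        "gap_cv": statistics.stdev(gaps) / statistics.mean(gaps),
        "size_spread": max(sizes) - min(sizes),  # beacons are near-constant size
    }

# Synthetic flows: (packet timestamps in seconds, packet sizes in bytes).
beacon   = ([10.0, 40.1, 70.0, 99.9, 130.1], [64, 64, 66, 64, 64])  # ~30 s period
browsing = ([10.0, 10.4, 25.9, 26.0, 91.3], [1500, 420, 64, 980, 310])

for name, (ts, sz) in [("beacon", beacon), ("browsing", browsing)]:
    f = heartbeat_features(ts, sz)
    verdict = ("suspicious" if f["gap_cv"] < 0.1 and f["size_spread"] < 16
               else "benign-looking")
    print(f"{name}: gap_cv={f['gap_cv']:.3f}, "
          f"size_spread={f['size_spread']} -> {verdict}")
```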

Concerns about safety and security have led to questions about the risk of embracing the Internet of Things (IoT). We consider the needs and techniques for verifying the correct operation of IoT devices and environments within the physical spaces they inhabit.

10.1109/msec.2019.2911511 article EN IEEE Security & Privacy 2019-06-10

...improperly allowed them to activate the anti-stall system [17]. Unfortunately, previous fuzzing approaches cannot discover this type of violation for the following two reasons. First, they do not consider the entire input space of the RV's control software, including user commands, configuration parameters, and environmental factors. Second, they only focus on finding memory corruption bugs or stability issues. Therefore, they cannot detect safety policy violations, e.g., a drone deploying a parachute at a too-low altitude. We...

10.14722/ndss.2021.24096 article EN 2021-01-01
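
A minimal sketch of policy-guided fuzzing as motivated above: sample from the entire input space (user commands, configuration, environment) of a toy vehicle model and report inputs that violate a safety policy, rather than only hunting memory bugs. The vehicle model, input ranges, and policy are simplified illustrations, not the paper's fuzzer or any real RV firmware.

```python
import random

random.seed(7)
MIN_DEPLOY_ALT = 20.0   # safety policy: never deploy the parachute below 20 m

def simulate(command, config_gain, wind):
    """Tiny stand-in for control software: returns (altitude, parachute_out)."""
    altitude = max(0.0, 50.0 - config_gain * wind)  # environment affects altitude
    parachute = command == "deploy_parachute"       # firmware obeys the command
    return altitude, parachute

violations = []
for _ in range(10_000):
    cmd = random.choice(["climb", "land", "deploy_parachute"])  # user commands
    gain = random.uniform(0.5, 3.0)                             # configuration
    wind = random.uniform(0.0, 20.0)                            # environment
    alt, chute = simulate(cmd, gain, wind)
    if chute and alt < MIN_DEPLOY_ALT:                          # policy check
        violations.append((cmd, round(gain, 2), round(wind, 2), round(alt, 1)))

print(f"{len(violations)} policy violations found; e.g. {violations[:2]}")
```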

In smart homes, when an actuator's state changes, it sends an event notification to the IoT hub to report this change (e.g., the door is unlocked). Prior works have shown that event notifications are vulnerable to spoofing and masking attacks. In spoofing, an adversary reports a fake event that did not physically occur. In masking, an adversary suppresses the notification of an event that occurred. These attacks create inconsistencies between the physical and cyber states of actuators, enabling an adversary to indirectly gain control over safety-critical devices by triggering IoT apps. To mitigate...

10.14722/ndss.2023.23070 article EN 2023-01-01
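
A minimal sketch of physical event verification as motivated above: compare each actuator's event notifications against an independent physical signal (e.g., a power meter or vibration sensor), so that spoofed events (reported but never happened) and masked events (happened but never reported) surface as cyber/physical inconsistencies. The traces below are illustrative, not the paper's verification system.

```python
# (time, event) notifications the hub received from the lock actuator:
reported = [(1, "unlocked"), (5, "locked"), (9, "unlocked")]
# Physical transitions observed by an independent side channel:
physical = [(1, "unlocked"), (5, "locked"), (12, "locked")]

rep, phy = set(reported), set(physical)
for t, ev in sorted(rep - phy):
    print(f"SPOOFED event: '{ev}' reported at t={t} but never physically occurred")
for t, ev in sorted(phy - rep):
    print(f"MASKED event:  '{ev}' occurred at t={t} but was never reported")
```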

EXplainable AI (XAI) methods have been proposed to interpret how a deep neural network predicts inputs through model saliency explanations that highlight the input parts deemed important for arriving at a decision for a specific target. However, it remains challenging to quantify the correctness of their interpretability, as current evaluation approaches either require subjective input from humans or incur high computation cost with automated evaluation. In this paper, we propose backdoor trigger patterns--hidden...

10.48550/arxiv.2009.10639 preprint EN other-oa arXiv (Cornell University) 2020-01-01