Erick Galinkin

ORCID: 0000-0003-1268-9258
Research Areas
  • Network Security and Intrusion Detection
  • Advanced Malware Detection Techniques
  • Information and Cyber Security
  • Adversarial Robustness in Machine Learning
  • Ethics and Social Impacts of AI
  • Anomaly Detection Techniques and Applications
  • Cybercrime and Law Enforcement Studies
  • Explainable Artificial Intelligence (XAI)
  • Privacy-Preserving Technologies in Data
  • Crime Patterns and Interventions
  • Smart Grid Security and Resilience
  • Computability, Logic, AI Algorithms
  • Bayesian Modeling and Causal Inference
  • Spam and Phishing Detection
  • Cognitive Science and Education Research
  • Web Application Security Vulnerabilities
  • Military Defense Systems Analysis
  • Opinion Dynamics and Social Influence
  • Digital and Cyber Forensics
  • Stochastic Gradient Optimization Techniques
  • Sexual Assault and Victimization Studies
  • Topic Modeling
  • Machine Learning and Data Classification
  • Auction Theory and Applications

Drexel University
2021-2024

Boston University
2021

Rapid City Regional Hospital
2021

Anomaly detection is a method for identifying malware and other anomalies, such as memory leaks, on computing hosts and, more recently, Internet of Things (IoT) devices. Due to its lightweight resource use and efficacy, anomaly detection is a promising way to protect small, resource-constrained hosts. Using Principal Component Analysis (PCA) to reduce the number of features, and hence the dimensionality of the detector, is common during the feature engineering process for classic machine learning methods such as Support Vector Machines (SVM). However, Neural...

10.1145/3477314.3508377 article EN Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing 2022-04-25
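The pipeline this abstract describes lends itself to a short illustration. Below is a minimal sketch, assuming scikit-learn and synthetic stand-in data (not the paper's dataset or parameters), of PCA-based dimensionality reduction feeding a one-class SVM anomaly detector fit on benign data only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_benign = rng.normal(0.0, 1.0, size=(500, 40))    # stand-in for benign host telemetry
X_anomalous = rng.normal(4.0, 1.0, size=(25, 40))  # stand-in for anomalous observations

# Scale, project to a low-dimensional space with PCA, then fit the
# detector on benign data only -- the usual one-class setup.
detector = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),
    OneClassSVM(nu=0.05, kernel="rbf", gamma="scale"),
)
detector.fit(X_benign)

# predict() returns +1 for inliers and -1 for anomalies.
print("detection rate:", (detector.predict(X_anomalous) == -1).mean())
```

Projecting onto a handful of principal components is what keeps a detector like this cheap enough for resource-constrained hosts.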

10.24251/hicss.2023.910 article EN Proceedings of the Annual Hawaii International Conference on System Sciences 2024-01-01

Security and ethics are both core to ensuring that a machine learning system can be trusted. In production machine learning, there is generally a hand-off from those who build the model to those who deploy the model. In this hand-off, the engineers responsible for deployment are often not privy to the details of the model and, thus, the potential vulnerabilities associated with its usage, exposure, or compromise. Techniques such as model theft, model inversion, and model misuse may not be considered in deployment, so it is incumbent upon data scientists to understand these risks so they...

10.48550/arxiv.2007.04693 preprint EN cc-by arXiv (Cornell University) 2020-01-01
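One of the risks this abstract names, model theft (extraction), is easy to demonstrate in miniature. The sketch below, with an invented victim model and attacker query data rather than anything from the paper, trains a surrogate purely from a deployed model's predictions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X, y)  # the deployed model

# The attacker queries the victim on inputs it controls and learns
# from the returned labels -- no access to training data is needed.
X_query = np.random.default_rng(1).normal(size=(2000, 10))
y_stolen = victim.predict(X_query)
surrogate = DecisionTreeClassifier(max_depth=5).fit(X_query, y_stolen)

# Agreement between surrogate and victim approximates extraction fidelity.
print("fidelity:", (surrogate.predict(X) == victim.predict(X)).mean())
```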

These past few months have been especially challenging, and the deployment of technology in ways hitherto untested, at an unrivalled pace, has left internet watchers aghast. Artificial intelligence has become the byword for technological progress and is being used in everything from helping us combat the COVID-19 pandemic to nudging our attention in different directions as we all spend increasingly larger amounts of time online. It has never been more important that we keep a sharp eye out on the development of this field and how it is shaping...

10.48550/arxiv.2006.14662 preprint EN cc-by arXiv (Cornell University) 2020-01-01

10.24251/hicss.2023.911 article EN Proceedings of the Annual Hawaii International Conference on System Sciences 2024-01-01

The adoption of large language models (LLMs) in many applications, from customer service chatbots and software development assistants to more capable agentic systems, necessitates research into how to secure these systems. Attacks like prompt injection and jailbreaking attempt to elicit responses or actions that are not compliant with the safety, privacy, or content policies of organizations using the model in their application. In order to counter abuse of LLMs for generating potentially harmful replies or taking...

10.48550/arxiv.2412.01547 preprint EN arXiv (Cornell University) 2024-12-02
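The defensive pattern this abstract points toward, screening both inputs and outputs against policy, can be sketched as follows. The `safety_model` scoring function and the 0.5 threshold are hypothetical placeholders for illustration, not an interface from the paper.

```python
from typing import Callable

def guarded_reply(prompt: str,
                  llm: Callable[[str], str],
                  safety_model: Callable[[str], float],
                  threshold: float = 0.5) -> str:
    """Route a prompt through an LLM with input and output safety checks."""
    # Screen the incoming prompt (prompt injection / jailbreak attempts).
    if safety_model(prompt) >= threshold:
        return "Request refused by policy."
    draft = llm(prompt)
    # Screen the outgoing response against the same policy.
    if safety_model(draft) >= threshold:
        return "Response withheld by policy."
    return draft

# Toy usage with stand-in callables; a real deployment would plug in an
# actual model and a trained safety classifier here.
print(guarded_reply("hello", llm=lambda p: "hi there",
                    safety_model=lambda text: 0.0))
```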

With computing now ubiquitous across government, industry, and education, cybersecurity has become a critical component for every organization on the planet. Due to this ubiquity of computing, cyber threats have continued to grow year over year, leading to labor shortages and a skills gap in cybersecurity. As a result, many product vendors and security organizations have looked to artificial intelligence to shore up their defenses. This work considers how to characterize attackers and defenders in one approach to the automation of cyber defense...

10.48550/arxiv.2412.01542 preprint EN arXiv (Cornell University) 2024-12-02

As Large Language Models (LLMs) and generative AI become more widespread, the content safety risks associated with their use also increase. We find a notable deficiency in high-quality content safety datasets and benchmarks that comprehensively cover a wide range of critical safety areas. To address this, we define a broad content safety risk taxonomy, comprising 13 critical risk and 9 sparse risk categories. Additionally, we curate AEGISSAFETYDATASET, a new dataset of approximately 26,000 human-LLM interaction instances, complete with human annotations adhering to...

10.48550/arxiv.2404.05993 preprint EN arXiv (Cornell University) 2024-04-08

The well-worn George Box aphorism ``all models are wrong, but some are useful'' is particularly salient in the cybersecurity domain, where the assumptions built into a model can have substantial financial or even national security impacts. Computer scientists are often asked to optimize for worst-case outcomes, and since the field is largely focused on risk mitigation, preparing for the worst-case scenario appears rational. In this work, we demonstrate that preparing for the worst case rather than the most probable case may yield suboptimal outcomes for learning...

10.48550/arxiv.2409.19237 preprint EN arXiv (Cornell University) 2024-09-28
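The paper's core observation can be made concrete with a small worked example. In the sketch below, with an invented loss matrix and attacker distribution, the minimax (worst-case) criterion and the expected-loss (most probable case) criterion select different defender actions.

```python
import numpy as np

# Rows: defender actions. Columns: attacker behaviors (loss to defender).
losses = np.array([
    [1.0, 10.0],   # action A: cheap usually, terrible in the rare worst case
    [4.0,  5.0],   # action B: mediocre everywhere
])
p = np.array([0.95, 0.05])  # probability of each attacker behavior

# Minimax picks B (bounded worst case); expected loss picks A (1.45 vs 4.05).
print("minimax choice:", losses.max(axis=1).argmin())     # -> 1 (action B)
print("expected-loss choice:", (losses @ p).argmin())     # -> 0 (action A)
```

When the worst case is rare, the minimax defender pays a steady premium that the expected-loss defender avoids, which is the suboptimality the abstract describes.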

10.24251/hicss.2024.910 article EN Proceedings of the Annual Hawaii International Conference on System Sciences 2024-01-01

10.24251/hicss.2024.911 article EN Proceedings of the Annual Hawaii International Conference on System Sciences 2024-01-01

Explainability in machine learning has become incredibly important as learning-powered systems become ubiquitous and both regulation and public sentiment begin to demand an understanding of how these systems make decisions. As a result, a number of explanation methods have begun to receive widespread adoption. This work summarizes, compares, and contrasts three popular explanation methods: LIME, SmoothGrad, and SHAP. We evaluate these methods with respect to: robustness, in the sense of sample complexity and stability; understandability, in the sense that provided...

10.48550/arxiv.2203.03729 preprint EN cc-by arXiv (Cornell University) 2022-01-01
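As one illustration of the kind of method being compared, here is a minimal sketch of producing SHAP attributions for a small tabular model, assuming the `shap` library is installed; the model and dataset are stand-ins, not the paper's experimental setup.

```python
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-feature Shapley value attributions:
# one attribution per feature per sample (per class for classifiers).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:50])

# The attribution tensor's shape reflects samples x features (x classes);
# averaging absolute values over samples yields global feature importance.
print(np.asarray(shap_values).shape)
```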

Legislation and public sentiment throughout the world have promoted fairness metrics, explainability, and interpretability as prescriptions for the responsible development of ethical artificial intelligence systems. Despite the importance of these three pillars in the foundation of the field, they can be challenging to operationalize, and attempts to solve problems in production environments often feel Sisyphean. This difficulty stems from a number of factors: fairness metrics are computationally difficult to incorporate into training and rarely...

10.48550/arxiv.2203.02958 preprint EN cc-by arXiv (Cornell University) 2022-01-01
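Since fairness metrics are one of the three pillars discussed, a small example of what operationalizing one looks like may help. This sketch computes the demographic parity difference, the gap in positive prediction rates across groups, on invented data; it is illustrative, not a metric implementation from the paper.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray,
                                  group: np.ndarray) -> float:
    """Largest gap in positive prediction rate between any two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # model decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute

# Group 0 positive rate is 0.75, group 1 is 0.25, so the gap is 0.5.
print(demographic_parity_difference(y_pred, group))
```

Even this simple metric hints at the operational difficulty the abstract describes: it is computed on hard decisions after training, so folding it into a differentiable training objective takes extra work.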

The 2nd edition of the Montreal AI Ethics Institute's State of AI Ethics report captures the most relevant developments in the field since July 2020. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing field. Through research and article summaries, as well as expert commentary, this report distills the reporting surrounding various domains related to the ethics of AI, including: AI and society, bias and algorithmic justice, disinformation, humans and AI, labor impacts,...

10.48550/arxiv.2011.02787 preprint EN cc-by arXiv (Cornell University) 2020-01-01

Differentially private models seek to protect the privacy of the data a model is trained on, making differential privacy an important component of model security and privacy. At the same time, data scientists and machine learning engineers use uncertainty quantification methods to ensure models are as useful and actionable as possible. We explore the tension between the two via dropout by conducting membership inference attacks against models with and without differential privacy. We find that large dropout slightly increases a model's risk of succumbing to membership inference attacks in all cases, including in differentially private models.

10.48550/arxiv.2103.09008 preprint EN cc-by-sa arXiv (Cornell University) 2021-01-01
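The two ingredients in tension here, dropout-based uncertainty estimation and membership inference, can both be sketched briefly. Below is a minimal PyTorch illustration of Monte Carlo dropout plus a crude confidence-threshold membership guess; the architecture, dropout rate, and 0.9 threshold are invented for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                      nn.Dropout(p=0.5),   # kept active at inference time
                      nn.Linear(64, 2))

def mc_dropout_predict(x: torch.Tensor, n_samples: int = 30):
    """Predictive mean and uncertainty from stochastic forward passes."""
    model.train()  # keep dropout enabled so each pass differs
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)

# A crude membership signal: samples the model is very confident on
# are guessed to have been training members.
x = torch.randn(8, 20)
mean, std = mc_dropout_predict(x)
is_member_guess = mean.max(dim=-1).values > 0.9
print(is_member_guess)
```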

The 3rd edition of the Montreal AI Ethics Institute's State of AI Ethics report captures the most relevant developments in the field since October 2020. It aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the field's ever-changing developments. Through research and article summaries, as well as expert commentary, this report distills the reporting surrounding various domains related to the ethics of AI, including: algorithmic injustice, discrimination, ethical AI, labor impacts,...

10.48550/arxiv.2105.09059 preprint EN cc-by arXiv (Cornell University) 2021-01-01

The attention that deep learning has garnered from the academic community and industry continues to grow year over year, and it has been said that we are in a new golden age of artificial intelligence research. However, neural networks are still often seen as a "black box" where learning occurs but cannot be understood in a human-interpretable way. Since these machine learning systems are increasingly being adopted in security contexts, it is important to explore their interpretations. We consider an Android malware traffic dataset for approaching...

10.48550/arxiv.2009.07753 preprint EN cc-by arXiv (Cornell University) 2020-01-01
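One simple way to probe a black-box traffic classifier of the kind this abstract studies is permutation feature importance, which measures how much performance drops when a feature's values are shuffled. This sketch uses scikit-learn with synthetic stand-in data, not the Android malware traffic dataset itself, and permutation importance is a generic technique rather than the paper's specific method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=15, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X, y)

# Shuffle each feature n_repeats times and record the accuracy drop;
# large drops mark features the black box actually relies on.
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
top = np.argsort(result.importances_mean)[::-1][:5]
print("most influential features:", top)
```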

The Geometry of Uncertainty is unlike any book on mathematics and computer science I've ever read. It's certainly not a textbook in the traditional sense - there are no exercises, and very little is presented as the "right way" to do something. In many ways, it is like a survey paper: it critically analyzes decades of research, and while some of the author's preferences are reflected, it presents perspectives without direct guidance about which approach ought to be favored. People have long used the phrase "The Bible of X" to describe the most well-known or...

10.1145/3544979.3544983 article EN ACM SIGACT News 2022-06-10

In the cybersecurity setting, defenders are often at the mercy of their detection technologies and subject to the information and experiences that individual analysts have. In order to give defenders an advantage, it is important to understand an attacker's motivation and likely next best action. As a first step in modeling this behavior, we introduce a security game framework that simulates the interplay between attackers and defenders in a noisy environment, focusing on the factors that drive decision making for attackers and defenders in variants of the game with full knowledge and observability,...

10.48550/arxiv.2212.04281 preprint EN cc-by arXiv (Cornell University) 2022-01-01
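The interplay such a framework simulates can be miniaturized into a payoff matrix. The sketch below, with invented payoffs, computes best responses for a full-knowledge, full-observability variant: the defender minimizes its worst-case exposure and the attacker best-responds to the defender's choice. This is a toy zero-sum reduction, not the paper's game.

```python
import numpy as np

# attacker_payoff[i, j]: attacker's gain when playing action i against a
# defender monitoring channel j (zero-sum: the defender's loss).
attacker_payoff = np.array([
    [0.1, 0.9],   # noisy attack: caught if channel 0 is monitored
    [0.6, 0.2],   # stealthy attack: better against channel 1 monitoring
])

# With full observability, each side best-responds to the other:
# the defender caps the attacker's best gain, the attacker exploits
# whatever channel is left unmonitored.
defender_choice = attacker_payoff.max(axis=0).argmin()
attacker_choice = attacker_payoff[:, defender_choice].argmax()
print(f"defender monitors channel {defender_choice}, "
      f"attacker plays action {attacker_choice}")
```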