Chris Clifton

ORCID: 0000-0001-7274-1471
Research Areas
  • Privacy-Preserving Technologies in Data
  • Cryptography and Data Security
  • Privacy, Security, and Data Protection
  • Internet Traffic Analysis and Secure E-voting
  • Data Mining Algorithms and Applications
  • Advanced Database Systems and Queries
  • Data Management and Algorithms
  • Imbalanced Data Classification Techniques
  • Semantic Web and Ontologies
  • Data Quality and Management
  • Complexity and Algorithms in Graphs
  • Mobile Crowdsensing and Crowdsourcing
  • Blockchain Technology Applications and Security
  • Adversarial Robustness in Machine Learning
  • Access Control and Trust
  • Network Security and Intrusion Detection
  • Ethics and Social Impacts of AI
  • Machine Learning and Data Classification
  • Algorithms and Data Compression
  • Machine Learning and Algorithms
  • Millimeter-Wave Propagation and Modeling
  • Microwave Engineering and Waveguides
  • Web Data Mining and Analysis
  • Service-Oriented Architecture and Web Services
  • Anomaly Detection Techniques and Applications

IEEE Computer Society
2024

Purdue University West Lafayette
2013-2022

Mitre (United States)
1996-2005

Rutgers, The State University of New Jersey
2005

University of Tennessee System
2004

University of Tennessee at Knoxville
2004

Hewlett-Packard (United States)
2004

Lawrence Livermore National Laboratory
2004

Queen's University
2004

University City Science Center
2004

Privacy considerations often constrain data mining projects. This paper addresses the problem of association rule mining where transactions are distributed across sources. Each site holds some attributes of each transaction, and the sites wish to collaborate to identify globally valid association rules. However, the sites must not reveal individual transaction data. We present a two-party algorithm for efficiently discovering frequent itemsets with minimum support levels, without either site revealing individual transaction values.

10.1145/775047.775142 article EN 2002-07-23
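
In the vertically partitioned setting described above, the support count of an itemset whose items are split across two sites is exactly the scalar product of each site's boolean "my part of the itemset is present" vectors; the paper's protocol computes this product securely. The toy sketch below (hypothetical data, no security machinery) only illustrates the quantity being computed.

```python
# Toy illustration (not the paper's secure protocol): for vertically
# partitioned data, the support of an itemset split across two sites is
# the scalar product of the sites' boolean presence vectors.
site_a = [1, 0, 1, 1, 0, 1]  # transaction contains A's part of the itemset
site_b = [1, 1, 1, 0, 0, 1]  # transaction contains B's part of the itemset

support_count = sum(a * b for a, b in zip(site_a, site_b))
support = support_count / len(site_a)
print(support_count, support)  # 3 transactions, support 0.5
```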

Data mining can extract important knowledge from large data collections, but sometimes these collections are split among various parties. Privacy concerns may prevent the parties from directly sharing the data and some types of information about the data. We address secure mining of association rules over horizontally partitioned data. The methods incorporate cryptographic techniques to minimize the information shared, while adding little overhead to the mining task.

10.1109/tkde.2004.45 article EN IEEE Transactions on Knowledge and Data Engineering 2004-07-27
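
In the horizontally partitioned setting, each site can count support locally and only the aggregated counts need to be combined. One standard building block for that aggregation is a randomized secure sum; the sketch below is a simplification of that idea with hypothetical counts, not the paper's full cryptographic construction.

```python
# Minimal secure-sum sketch: site 0 masks its count with a random value,
# each site adds its own count, and site 0 removes the mask at the end.
# Intermediate running sums reveal nothing about any single site's count.
import random

FIELD = 2**32                 # all arithmetic modulo a large value
local_counts = [40, 25, 35]   # hypothetical per-site support counts

mask = random.randrange(FIELD)
running = (mask + local_counts[0]) % FIELD
for count in local_counts[1:]:
    running = (running + count) % FIELD
global_support_count = (running - mask) % FIELD
print(global_support_count)   # 100
```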

Privacy preserving mining of distributed data has numerous applications. Each application poses different constraints: what is meant by privacy, what are the desired results, how is the data distributed, what are the constraints on collaboration and cooperative computing, and so on. We suggest that the solution to this is a toolkit of components that can be combined for specific privacy-preserving data mining applications. This paper presents some components of such a toolkit, and shows how they can be used to solve several privacy-preserving data mining problems.

10.1145/772862.772867 article EN ACM SIGKDD Explorations Newsletter 2002-12-01
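
As a hedged illustration of the "toolkit of components" idea, the sketch below shows one kind of reusable primitive such a toolkit might contain: additive secret sharing, which splits a private value into random shares so that sums can be computed without any party seeing another's input. The specific primitive and data here are illustrative, not a claim about the paper's exact contents.

```python
# Additive secret sharing as a reusable privacy-preserving component.
import random

MOD = 2**32

def share(value, n_parties):
    """Split `value` into n random additive shares modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

# Three parties each share a private count; summing the shares
# column-wise yields shares of the total, revealing only the total.
private_counts = [12, 7, 30]
all_shares = [share(c, 3) for c in private_counts]
total_shares = [sum(col) % MOD for col in zip(*all_shares)]
print(reconstruct(total_shares))  # 49
```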

Privacy and security concerns can prevent sharing of data, derailing data mining projects. Distributed knowledge discovery, if done correctly, can alleviate this problem. The key is to obtain valid results, while providing guarantees on the (non)disclosure of data. We present a method for k-means clustering when different sites contain different attributes for a common set of entities. Each site learns the cluster of each entity, but learns nothing about the attributes at other sites.

10.1145/956750.956776 article EN 2003-08-24
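
The reason k-means is amenable to this setting is that the squared Euclidean distance to a centroid decomposes additively over the attributes held at each site, so each site can compute a partial distance locally and only the combination (or the closest-cluster decision) needs to be computed jointly. A small illustration on hypothetical data, with the secure combination step omitted:

```python
# Squared distance decomposes across the vertical attribute partition.
point_site_a = [2.0, 1.0]        # attributes held by site A
point_site_b = [4.0]             # attribute held by site B
centroid_site_a = [1.0, 1.0]
centroid_site_b = [2.0]

def partial_sq_dist(x, c):
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

d_a = partial_sq_dist(point_site_a, centroid_site_a)   # 1.0, local to A
d_b = partial_sq_dist(point_site_b, centroid_site_b)   # 4.0, local to B
print(d_a + d_b)  # 5.0, the full squared distance
```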

Advances in information technology, and its use in research, are increasing both the need for anonymized data and the risks of poor anonymization. We present a metric, δ-presence, that clearly links the quality of anonymization to the risk posed by inadequate anonymization. We show that existing anonymization techniques are inappropriate for situations where δ-presence is a good metric (specifically, where knowing that an individual is in the database poses a privacy risk), and present algorithms for effectively anonymizing data to meet δ-presence. The algorithms are evaluated in the context of a real-world scenario, demonstrating...

10.1145/1247480.1247554 article EN 2007-06-11
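
A rough sketch of the intuition behind a presence metric of this kind, under the simplifying assumption that the adversary knows a public table P and sees a generalized release of the private table T ⊆ P: the adversary's belief that a given individual is in T is bounded by the fraction of matching public records that appear in the private table. The tables and generalization below are hypothetical.

```python
# Presence probability per generalized quasi-identifier group.
from collections import Counter

public_records  = ["20-30/NE", "20-30/NE", "20-30/NE", "30-40/SW", "30-40/SW"]
private_records = ["20-30/NE", "30-40/SW", "30-40/SW"]   # subset of public

pub = Counter(public_records)
priv = Counter(private_records)

# For each group: P(individual is in the private table | published data).
presence = {g: priv.get(g, 0) / n for g, n in pub.items()}
delta_min, delta_max = min(presence.values()), max(presence.values())
print(presence)              # {'20-30/NE': 0.33..., '30-40/SW': 1.0}
print(delta_min, delta_max)  # acceptable only if within the required bounds
```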

Data mining technology has given us new capabilities to identify correlations in large data sets. This introduces risks when the data is to be made public, but the correlations are private. We introduce a method for selectively removing individual values from a database to prevent the discovery of a set of rules, while preserving the data for other applications. The efficacy and complexity of this method are discussed. We also present an experiment showing an example of this methodology.

10.1145/604264.604271 article EN ACM SIGMOD Record 2001-12-01
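
The core idea is sanitization: drive the support of a sensitive itemset below the mining threshold by altering as few individual values as possible while leaving the rest of the data intact. The toy sketch below uses a hypothetical database and a greedy strategy for illustration only; it is not the paper's specific algorithm.

```python
# Blank (here: remove) items from supporting transactions until the
# sensitive itemset's support falls below the mining threshold.
transactions = [
    {"bread", "milk", "beer"},
    {"bread", "beer"},
    {"bread", "milk"},
    {"milk", "beer"},
]
sensitive = {"bread", "beer"}
min_support = 0.5  # itemsets below this threshold stay hidden from mining

def support(itemset, db):
    return sum(itemset <= t for t in db) / len(db)

while support(sensitive, transactions) >= min_support:
    victim = next(t for t in transactions if sensitive <= t)
    victim.discard(next(iter(sensitive)))

print(support(sensitive, transactions))  # now below 0.5
```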

The problem of secure distributed classification is an important one. In many situations, data is split between multiple organizations. These organizations may want to utilize all of the data to create more accurate predictive models while revealing neither their training data/databases nor the instances to be classified. The Naive Bayes classifier is a simple but efficient baseline classifier. In this paper, we present a privacy preserving Naive Bayes classifier for horizontally partitioned data.

10.1137/1.9781611972740.59 article EN 2004-04-22
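
Naive Bayes suits the horizontally partitioned setting because the model only needs per-class and per-attribute-value counts: each party counts locally, and only the aggregated counts are required to build the global classifier. The sketch below uses hypothetical data and abstracts the secure aggregation step away as plain addition.

```python
# Build a Naive Bayes model from summed local counts only.
from collections import Counter

party1 = [({"outlook": "sunny"}, "yes"), ({"outlook": "rain"}, "no")]
party2 = [({"outlook": "sunny"}, "yes"), ({"outlook": "sunny"}, "no")]

def local_counts(data):
    class_c, attr_c = Counter(), Counter()
    for x, y in data:
        class_c[y] += 1
        for a, v in x.items():
            attr_c[(a, v, y)] += 1
    return class_c, attr_c

c1, a1 = local_counts(party1)
c2, a2 = local_counts(party2)
class_counts, attr_counts = c1 + c2, a1 + a2   # stands in for secure sum

# P(yes | outlook=sunny) is proportional to P(sunny | yes) * P(yes),
# computed from the global counts alone.
p_yes = class_counts["yes"] / sum(class_counts.values())
p_sunny_given_yes = attr_counts[("outlook", "sunny", "yes")] / class_counts["yes"]
print(p_sunny_given_yes * p_yes)  # 0.5
```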

There has been concern over the apparent conflict between privacy and data mining. There is no inherent conflict, as most types of data mining produce summary results that do not reveal information about individuals. The mining process may use private data, however, leading to the potential for privacy breaches. Secure Multiparty Computation shows that results can be produced without revealing the data used to generate them. The problem is that general techniques for secure multiparty computation do not scale to data-mining-size computations. This paper presents an...

10.3233/jcs-2005-13401 article EN Journal of Computer Security 2005-10-04

Integrating data from multiple sources has been a longstanding challenge in the database community. Techniques such as privacy-preserving data mining promise privacy, but assume that data integration has already been accomplished. Data integration methods are themselves seriously hampered by an inability to share the data to be integrated. This paper lays out a privacy framework for data integration. Challenges in the context of this framework are discussed, along with existing accomplishments. Many of these challenges are opportunities for the data mining community.

10.1145/1008694.1008698 article EN 2004-06-13

Privacy and security concerns can prevent sharing of data, derailing data-mining projects. Distributed knowledge discovery, if done correctly, can alleviate this problem. We introduce a generalized privacy-preserving variant of the ID3 algorithm for vertically partitioned data distributed over two or more parties. Along with a proof of security, we discuss what would be necessary to make the protocols completely secure. We also provide experimental results, giving a first demonstration of the practical complexity...

10.1145/1409620.1409624 article EN ACM Transactions on Knowledge Discovery from Data 2008-10-01
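
At the heart of ID3 is the information-gain computation used to choose each split; in the vertically partitioned setting each party can evaluate the gain of its own attributes locally, and only the best-attribute decision needs to be made jointly. The sketch below shows the gain computation itself on hypothetical data, with the secure machinery omitted.

```python
# Information gain for candidate split attributes.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(values, labels):
    n = len(labels)
    split = {}
    for v, y in zip(values, labels):
        split.setdefault(v, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder

labels = ["yes", "yes", "no", "no"]
attr_at_party_a = ["hot", "hot", "cold", "cold"]   # perfectly predictive
attr_at_party_b = ["x", "y", "x", "y"]             # uninformative
print(info_gain(attr_at_party_a, labels))  # 1.0
print(info_gain(attr_at_party_b, labels))  # 0.0
```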

Recently, there has been a growing debate over approaches for handling and analyzing private data. Research has identified issues with syntactic anonymity models. Differential privacy has been promoted as the answer to privacy-preserving data mining. We discuss here the issues involved and criticisms of both approaches, and conclude that both have their place. We identify research directions that will enable greater access to data while improving privacy guarantees.

10.1109/icdew.2013.6547433 article EN 2013-04-01
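
For the differential-privacy side of this debate, the canonical mechanism for a counting query adds Laplace noise scaled to sensitivity divided by ε. The sketch below is a minimal, self-contained illustration of that mechanism; the data, predicate, and ε value are hypothetical.

```python
# Laplace mechanism for a counting query (sensitivity 1).
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon):
    true_count = sum(1 for r in records if predicate(r))
    # A counting query changes by at most 1 when one record changes,
    # so the noise scale is 1 / epsilon.
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 52, 29, 61]
print(private_count(ages, lambda a: a > 40, epsilon=0.5))
```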

Association-rule mining has proved a highly successful technique for extracting useful information from very large databases. This success is attributed not only to the appropriateness of the objectives, but to the fact that a number of new query-optimization ideas, such as the “a-priori” trick, make association-rule mining run much faster than might be expected. In this paper we see that the same tricks can be extended to a much more general context, allowing efficient mining of databases for many different kinds of patterns. The idea, called “query...

10.1145/276304.276306 article EN 1998-06-01
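
The “a-priori” trick the paper generalizes rests on monotonicity: a k-itemset can only be frequent if every (k-1)-subset is frequent, so candidates are generated from the previous frequent level and pruned before any counting is done. A small self-contained sketch on hypothetical transactions:

```python
# Level-wise frequent itemset mining with a-priori candidate pruning.
from itertools import combinations

transactions = [
    {"a", "b", "c"},
    {"a", "b"},
    {"a", "c"},
    {"b", "c"},
]
min_count = 2

def frequent(candidates):
    return {c for c in candidates
            if sum(c <= t for t in transactions) >= min_count}

level = frequent({frozenset([i]) for t in transactions for i in t})
while level:
    print(sorted(sorted(s) for s in level))
    # Join step: unions of pairs from the previous level, then prune any
    # candidate with an infrequent (k-1)-subset before counting.
    candidates = {a | b for a in level for b in level if len(a | b) == len(a) + 1}
    candidates = {c for c in candidates
                  if all(frozenset(s) in level for s in combinations(c, len(c) - 1))}
    level = frequent(candidates)
```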

10.1016/j.datak.2007.03.009 article EN Data & Knowledge Engineering 2007-04-03

Privacy-preserving data mining has concentrated on obtaining valid results when the input data is private. An extreme example is Secure Multiparty Computation-based methods, where only the results are revealed. However, this still leaves a potential privacy breach: do the results themselves violate privacy? This paper explores this issue, developing a framework under which the question can be addressed. Metrics are proposed, along with an analysis showing that those metrics are consistent in the face of apparent problems.

10.1145/1014052.1014126 article EN 2004-08-22

Data mining is under attack from privacy advocates because of a misunderstanding about what it actually is and a valid concern about how it is generally done. This article shows how technology from the security community can change data mining for the better, providing all of its benefits while still maintaining privacy.

10.1109/msp.2004.108 article EN IEEE Security & Privacy 2004-11-01