Christopher G. Brinton

ORCID: 0000-0003-2771-3521
Research Areas
  • Privacy-Preserving Technologies in Data
  • Stochastic Gradient Optimization Techniques
  • Online Learning and Analytics
  • Advanced MIMO Systems Optimization
  • Cooperative Communication and Network Coding
  • Advanced Wireless Communication Technologies
  • Age of Information Optimization
  • Wireless Signal Modulation Classification
  • Indoor and Outdoor Localization Technologies
  • Adversarial Robustness in Machine Learning
  • Wireless Communication Security Techniques
  • IoT and Edge/Fog Computing
  • Mobile Crowdsensing and Crowdsourcing
  • Topic Modeling
  • Intelligent Tutoring Systems and Adaptive Learning
  • Energy Harvesting in Wireless Networks
  • Distributed Sensor Networks and Detection Algorithms
  • Advanced Graph Neural Networks
  • Recommender Systems and Techniques
  • Cryptography and Data Security
  • Energy Efficient Wireless Sensor Networks
  • Speech and Audio Processing
  • Data Stream Mining Techniques
  • Privacy, Security, and Data Protection
  • Blockchain Technology Applications and Security

Purdue University West Lafayette
2019-2025

Zhejiang University
2023-2024

Lunenfeld-Tanenbaum Research Institute
2023-2024

McMaster University
2024

Rensselaer Polytechnic Institute
2024

Swarthmore College
2024

Indian Institute of Technology Kharagpur
2024

Texas A&M University
2024

American Society for Engineering Education
2024

Universidad Nacional de Colombia
2024

We study user behavior in the courses offered by a major massive open online course (MOOC) provider during the summer of 2013. Since social learning is a key element of scalable education on MOOCs and is done via discussion forums, our main focus is on understanding forum activities. Two salient features of these activities drive our research: (1) a high decline rate: for each course studied, forum volume declined continuously throughout the duration of the course; (2) high-volume, noisy discussions: at least 30 percent produced new threads...

10.1109/tlt.2014.2337900 article EN IEEE Transactions on Learning Technologies 2014-07-10

Federated learning has emerged recently as a promising solution for distributing machine learning tasks through modern networks of mobile devices. Recent studies have obtained lower bounds on the expected decrease in model loss that is achieved in each round of federated learning. However, convergence generally requires a large number of communication rounds, which induces delay in training and is costly in terms of network resources. In this paper, we propose a fast-convergent federated learning algorithm, called...

10.1109/jsac.2020.3036952 article EN publisher-specific-oa IEEE Journal on Selected Areas in Communications 2020-11-09
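The federated averaging baseline that fast-convergent variants like the one above improve upon can be sketched as follows. This is a minimal illustration on synthetic least-squares data, not the paper's algorithm; all names, learning rates, and data are hypothetical.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One device's local gradient steps on a least-squares objective (illustrative)."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

def federated_round(w_global, devices):
    """One communication round: local training, then dataset-size-weighted averaging."""
    updates, sizes = [], []
    for X, y in devices:
        updates.append(local_update(w_global, X, y))
        sizes.append(len(y))
    # Server aggregates local models weighted by local dataset size (FedAvg-style)
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
devices = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    devices.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(30):
    w = federated_round(w, devices)
```

With noiseless, identically distributed device data, repeated rounds drive the global model to the shared optimum; the convergence-speed question the paper addresses is exactly how many such rounds are needed.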

Federated learning has generated significant interest, with nearly all works focused on a "star" topology where nodes/devices are each connected to a central server. We migrate away from this architecture and extend it through the network dimension to the case where there are multiple layers of nodes between the end devices and the server. Specifically, we develop multi-stage hybrid federated learning (MH-FL), a hybrid of intra- and inter-layer model learning that considers the network as a multi-layer cluster-based structure. MH-FL structures the learning among the nodes in the clusters, including...

10.1109/tnet.2022.3143495 article EN publisher-specific-oa IEEE/ACM Transactions on Networking 2022-02-04
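A two-layer version of the cluster-based aggregation idea can be sketched as follows: cluster heads average their own devices' models, and the server then averages the cluster models. The cluster names, model vectors, and dataset sizes below are hypothetical; this is not the MH-FL algorithm itself, only the basic hierarchical-averaging building block.

```python
import numpy as np

def aggregate(models, weights):
    """Weighted average of model parameter vectors."""
    return np.average(models, axis=0, weights=weights)

# Devices grouped into clusters: each entry is (model_vector, local_dataset_size)
clusters = {
    "c1": [(np.array([1.0, 0.0]), 10), (np.array([3.0, 2.0]), 30)],
    "c2": [(np.array([0.0, 4.0]), 20)],
}

# Layer 1: each cluster head aggregates its member devices
cluster_models, cluster_sizes = [], []
for head, members in clusters.items():
    models = [m for m, _ in members]
    sizes = [n for _, n in members]
    cluster_models.append(aggregate(models, sizes))
    cluster_sizes.append(sum(sizes))

# Layer 2: the server aggregates cluster models, weighted by total cluster data
global_model = aggregate(cluster_models, cluster_sizes)
```

Because every stage weights by dataset size, the two-stage result equals the flat one-shot weighted average over all devices, which is what makes hierarchical aggregation attractive: it changes the communication topology without changing the aggregate.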

Machine learning (ML) is widely used for key tasks in Connected and Automated Vehicles (CAV), including perception, planning, and control. However, its reliance on vehicular data for model training presents significant challenges related to in-vehicle user privacy and the communication overhead generated by massive data volumes. Federated learning (FL) is a decentralized ML approach that enables multiple vehicles to collaboratively develop models, broadening learning from various driving environments, enhancing overall performance,...

10.1109/tiv.2023.3332675 article EN IEEE Transactions on Intelligent Vehicles 2023-11-14

Federated learning (FL) has been gaining attention for its ability to share knowledge while keeping user data local, protecting privacy, increasing learning efficiency, and reducing communication overhead. Decentralized FL (DFL) is a decentralized network architecture that eliminates the need for a central server, in contrast to centralized FL (CFL). DFL enables direct communication between clients, resulting in significant savings of communication resources. In this paper, a comprehensive survey and profound perspective are provided for DFL. First, a review...

10.1109/jiot.2024.3407584 article EN IEEE Internet of Things Journal 2024-05-30

We study student performance prediction in Massive Open Online Courses (MOOCs), where the objective is to predict whether a user will be Correct on First Attempt (CFA) in answering a question. In doing so, we develop novel techniques that leverage behavioral data collected by MOOC platforms. Using video-watching clickstream data from one of our MOOCs, we first extract summary quantities (e.g., fraction played, number of pauses) for each user-video pair, and show how certain intervals/sets of values of these...

10.1109/infocom.2015.7218617 article EN 2015-04-01
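Summary quantities of the kind mentioned above (fraction of the video played, number of pauses) can be computed from a raw event log roughly as sketched below. The event names, log schema, and values are hypothetical; real MOOC clickstreams carry more fields and edge cases.

```python
# Hypothetical event log for one user-video pair:
# (event, video_position_sec, wall_clock_sec)
events = [
    ("play", 0, 0),
    ("pause", 40, 40),
    ("play", 40, 55),
    ("skip_back", 30, 60),   # jump back 10 seconds
    ("play", 30, 60),
    ("pause", 90, 120),
]
VIDEO_LENGTH = 120  # seconds

# Summary quantity 1: number of pauses
num_pauses = sum(1 for e, _, _ in events if e == "pause")

# Summary quantity 2: fraction of the video played, computed as the union
# of second-granularity positions covered by play -> pause spans
covered = set()
play_pos = None
for e, pos, _ in events:
    if e == "play":
        play_pos = pos
    elif e == "pause" and play_pos is not None:
        covered.update(range(play_pos, pos))
        play_pos = None
fraction_played = len(covered) / VIDEO_LENGTH
```

Taking the union of covered positions (rather than summing span lengths) keeps re-watched segments from being double-counted, which matters when learners skip back and replay material.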

We present a novel method for predicting the evolution of a student's grade in massive open online courses (MOOCs). Performance prediction is particularly challenging in MOOC settings due to per-student assessment response sparsity and the need for personalized models. Our method overcomes these challenges by incorporating another, richer form of data collected from each student (lecture video-watching clickstreams) into the machine learning feature set, using that set to train a time series neural network that learns both prior...

10.1109/jstsp.2017.2700227 article EN publisher-specific-oa IEEE Journal of Selected Topics in Signal Processing 2017-01-01

Machine learning (ML) tasks are becoming ubiquitous in today's network applications. Federated learning has emerged recently as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data. There are several challenges with employing conventional federated learning in contemporary networks, due to the significant heterogeneity in compute and communication capabilities that exist across devices. To address this, we advocate a new learning paradigm called fog learning, which will intelligently distribute...

10.1109/mcom.001.2000410 article EN IEEE Communications Magazine 2020-12-01

Student video-watching behavior and quiz performance are studied in two Massive Open Online Courses (MOOCs). In doing so, two frameworks are presented by which video-watching clickstreams can be represented: one based on the sequence of events created, and another based on the sequence of positions visited. With the event-based framework, recurring subsequences of student behavior are extracted, which contain fundamental characteristics such as reflecting (i.e., repeatedly playing and pausing) and revising (i.e., plays following skip backs). It is found that some of these behaviors are significantly...

10.1109/tsp.2016.2546228 article EN IEEE Transactions on Signal Processing 2016-03-24
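Extracting recurring subsequences from an event-based clickstream representation can be sketched as simple n-gram counting over the event sequence. The event encoding below ("P" for play, "Pa" for pause, "Sb" for skip back) and the recurrence threshold are hypothetical; the paper's extraction is more sophisticated than this.

```python
from collections import Counter

def recurring_subsequences(seq, n=2, min_count=2):
    """Count length-n subsequences of an event sequence; keep those that recur."""
    counts = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    return {k: c for k, c in counts.items() if c >= min_count}

# Hypothetical encoded clickstream: 'P' = play, 'Pa' = pause, 'Sb' = skip back
seq = ["P", "Pa", "P", "Pa", "P", "Sb", "P", "Pa"]
motifs = recurring_subsequences(seq, n=2)
# The recurring ("P", "Pa") pair is the kind of motif that would be read as
# a reflecting (repeated play/pause) behavior
```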

The conventional federated learning (FedL) architecture distributes machine learning (ML) across worker devices by having them train local models that are periodically aggregated by a server. FedL ignores two important characteristics of contemporary wireless networks, however: (i) the network may contain heterogeneous communication/computation resources, while (ii) there may be significant overlaps in devices' data distributions. In this work, we develop a novel optimization methodology that jointly accounts for...

10.1109/infocom42981.2021.9488906 article EN IEEE INFOCOM 2022 - IEEE Conference on Computer Communications 2021-05-10

Federated learning has emerged as a popular technique for distributing machine learning (ML) model training across the wireless edge. In this paper, we propose two timescale hybrid federated learning (TT-HF), a semi-decentralized learning architecture that combines the conventional device-to-server communication paradigm with device-to-device (D2D)...

10.1109/jsac.2021.3118344 article EN publisher-specific-oa IEEE Journal on Selected Areas in Communications 2021-10-06
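The D2D side of a semi-decentralized architecture like TT-HF relies on local consensus averaging between server aggregations. A standard building block (not the paper's specific scheme) is synchronous averaging with Metropolis weights, sketched below on a hypothetical three-device path graph with scalar parameters.

```python
import numpy as np

def metropolis_round(x, adj):
    """One synchronous D2D consensus step with Metropolis weights.
    The implied weight matrix is doubly stochastic, so the network
    average of the parameters is preserved at every step."""
    deg = {i: len(nbrs) for i, nbrs in adj.items()}
    new = x.copy()
    for i, nbrs in adj.items():
        for j in nbrs:
            w_ij = 1.0 / (1 + max(deg[i], deg[j]))
            new[i] += w_ij * (x[j] - x[i])
    return new

# Hypothetical cluster of 3 devices on a path graph 0-1-2,
# each holding a scalar local model parameter
adj = {0: [1], 1: [0, 2], 2: [1]}
x = np.array([0.0, 3.0, 6.0])
for _ in range(60):
    x = metropolis_round(x, adj)
# Devices approach the cluster average (3.0) using only neighbor exchanges,
# without any server involvement between global aggregations
```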

Federated learning (FL) is vulnerable to backdoor attacks, where adversaries alter model behavior on target classification labels by embedding triggers into data samples. While these attacks have received considerable attention in horizontal FL, they are less understood for vertical FL (VFL), in which devices hold different features of the samples, and only the server holds the labels. In this work, we propose a novel backdoor attack on VFL which (i) does not rely on gradient information from the server and (ii) considers potential...

10.48550/arxiv.2501.09320 preprint EN arXiv (Cornell University) 2025-01-16

We present the design, implementation, and preliminary evaluation of our Adaptive Educational System (AES): the Mobile Integrated Individualized Course (MIIC). MIIC is a platform for personalized course delivery which integrates lecture videos, text, assessments, and social learning into a mobile native app, and collects clickstream-level behavioral measurements about each student as they interact with the material. These measurements can subsequently be used to update the student's user model, which in turn determines the content...

10.1109/tlt.2014.2370635 article EN IEEE Transactions on Learning Technologies 2014-11-13

Fog computing promises to enable machine learning tasks to scale to large amounts of data by distributing processing across connected devices. Two key challenges to achieving this are (i) heterogeneity in devices' compute resources and (ii) topology constraints on which devices can communicate. We are the first to address these challenges by developing a network-aware distributed learning optimization methodology where devices process data for a task locally and send their learnt parameters to a server for aggregation at certain time intervals. Unlike...

10.1109/infocom41043.2020.9155372 article EN IEEE INFOCOM 2022 - IEEE Conference on Computer Communications 2020-07-01

Fog computing promises to enable machine learning tasks to scale to large amounts of data by distributing processing across connected devices. Two key challenges to achieving this goal are (i) heterogeneity in devices' compute resources and (ii) topology constraints on which devices can communicate with each other. We address these challenges by developing a novel network-aware distributed learning methodology where devices optimally share local data processing and send their learnt parameters to a server for periodic aggregation. Unlike traditional federated...

10.1109/tnet.2021.3075432 article EN publisher-specific-oa IEEE/ACM Transactions on Networking 2021-05-20

In this paper, we study a new latency optimization problem for blockchain-based federated learning (BFL) in multi-server edge computing. In our system model, distributed mobile devices (MDs) communicate with a set of edge servers (ESs) to handle both machine learning (ML) model training and block mining simultaneously. To assist the ML model training for resource-constrained MDs, we develop an offloading strategy that enables MDs to transmit their data to one of the associated ESs. We then propose a decentralized ML model aggregation solution at the edge layer based...

10.1109/jsac.2022.3213344 article EN IEEE Journal on Selected Areas in Communications 2022-10-12
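The offloading decision underlying such a strategy can be sketched as each device picking the edge server that minimizes its upload-plus-compute latency. The latency model, device/server names, and all numbers below are hypothetical; the paper solves a joint optimization rather than this greedy per-device choice.

```python
# Hypothetical latency model: a mobile device's cost at an edge server is
# upload latency (data_size / rate) plus compute latency (workload / speed).
data_size = {"md1": 8.0, "md2": 4.0}            # Mb of local training data
workload = {"md1": 2.0, "md2": 1.0}             # normalized compute units
rate = {("md1", "es1"): 4.0, ("md1", "es2"): 2.0,
        ("md2", "es1"): 1.0, ("md2", "es2"): 4.0}  # uplink rates, Mb/s
speed = {"es1": 2.0, "es2": 1.0}                # server compute units/s

def latency(md, es):
    return data_size[md] / rate[(md, es)] + workload[md] / speed[es]

# Greedy per-device assignment to the latency-minimizing edge server
assignment = {md: min(speed, key=lambda es: latency(md, es)) for md in data_size}
```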

We investigate training machine learning (ML) models across a set of geo-distributed, resource-constrained clusters of devices through unmanned aerial vehicle (UAV) swarms. The presence of time-varying data heterogeneity and computational resource inadequacy among device clusters motivates four key parts of our methodology: (i) stratified UAV swarms with leader, worker, and coordinator UAVs, (ii)...

10.1109/tnsm.2022.3216326 article EN IEEE Transactions on Network and Service Management 2022-11-03

Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices, via iterative local updates (at the devices) and global aggregations (at the server). In this paper, we develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions: (i) Network, allowing decentralized cooperation among the devices via device-to-device (D2D) communications. (ii) Heterogeneity, interpreted at multiple levels: (ii-a) Learning: PSL considers...

10.1109/tnet.2023.3286987 article EN IEEE/ACM Transactions on Networking 2023-07-10

To improve the efficiency of reinforcement learning, we propose a novel asynchronous federated learning framework termed AFedPG, which constructs a global model through collaboration among $N$ agents using policy gradient (PG) updates. To handle the challenge of lagged policies in asynchronous settings, we design delay-adaptive lookahead and normalized update techniques that can effectively handle heterogeneous arrival times of policy gradients. We analyze the theoretical convergence bound and characterize the advantage of the proposed algorithm in terms...

10.48550/arxiv.2404.08003 preprint EN arXiv (Cornell University) 2024-04-09
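The basic problem of lagged updates in asynchronous aggregation can be illustrated with a generic staleness-decayed mixing rule, sketched below. This is a FedAsync-style illustration, not AFedPG's delay-adaptive lookahead; the mixing rate and arrival pattern are hypothetical.

```python
import numpy as np

def async_apply(w_global, w_local, staleness, eta=0.5):
    """Mix one asynchronously-arriving local model into the global model,
    shrinking its weight as its staleness (in server steps) grows."""
    alpha = eta / (1.0 + staleness)  # older updates get smaller weight
    return (1 - alpha) * w_global + alpha * w_local

w = np.zeros(2)
# (local model, staleness) pairs arriving out of order from different agents
arrivals = [
    (np.array([1.0, 1.0]), 0),
    (np.array([1.0, 1.0]), 3),
    (np.array([1.0, 1.0]), 1),
]
for w_loc, s in arrivals:
    w = async_apply(w, w_loc, s)
# The stale second arrival (staleness 3) moves the global model far less
# than the fresh first one, which is the point of staleness-aware weighting
```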

While network coverage maps continue to expand, many devices located in remote areas remain unconnected to terrestrial communication infrastructures, preventing them from getting access to the associated data-driven services. In this paper, we propose a ground-to-satellite cooperative federated learning (FL) methodology to facilitate machine learning service management over remote regions. Our methodology orchestrates satellite constellations to provide the following key functions during FL: (i) processing data offloaded from ground...

10.1109/jsac.2024.3365901 article EN IEEE Journal on Selected Areas in Communications 2024-02-13

We study learning outcome prediction for online courses. Whereas prior work has focused on semester-long courses with frequent student assessments, we focus on short-courses that have single outcomes assigned by instructors at the end. The lack of performance data and generally small enrollments makes the behavior of learners, captured as they interact with course content and with one another in Social Learning Networks (SLN), essential for prediction. Our method defines several machine learning features based on processing...

10.1109/tlt.2018.2793193 article EN IEEE Transactions on Learning Technologies 2018-01-15

This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization. In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local model parameters to a coordinating server, which, in turn, aggregates them into a quantized global model and synchronizes the devices. The goal is to jointly determine the bitwidths employed for local model quantization and the set of devices participating in training at each iteration. We pose this as an optimization problem that aims...

10.1109/twc.2023.3297790 article EN IEEE Transactions on Wireless Communications 2023-08-04
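The parameter quantization that bitwidth FL trades against accuracy can be sketched with a standard stochastic uniform quantizer, shown below. The clipping range and bitwidth are hypothetical, and this is a generic building block rather than the paper's scheme; its useful property is that stochastic rounding keeps the quantizer unbiased in expectation.

```python
import numpy as np

def stochastic_quantize(w, bits, rng, w_min=-1.0, w_max=1.0):
    """Quantize parameters onto 2**bits uniform levels in [w_min, w_max]
    using stochastic rounding (unbiased in expectation)."""
    levels = 2 ** bits - 1  # number of quantization steps
    scaled = (np.clip(w, w_min, w_max) - w_min) / (w_max - w_min) * levels
    low = np.floor(scaled)
    up = rng.random(w.shape) < (scaled - low)  # round up with prob = fractional part
    return (low + up) / levels * (w_max - w_min) + w_min

rng = np.random.default_rng(0)
w = rng.uniform(-1, 1, size=10000)
q = stochastic_quantize(w, bits=4, rng=rng)
step = 2.0 / (2 ** 4 - 1)  # quantization step size
# Per-coordinate error is bounded by one step, and the average error is near
# zero, so aggregating many quantized models tends to cancel the noise
```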

A fundamental challenge to providing edge-AI services is the need for a machine learning (ML) model that achieves personalization (i.e., to individual clients) and generalization (i.e., to unseen data) properties concurrently. Existing techniques in federated learning (FL) have encountered a steep tradeoff between these objectives and impose large computational requirements on edge devices during training and inference. In this paper, we propose SplitGP, a new split learning solution that can simultaneously capture both capabilities for efficient...

10.1109/infocom53939.2023.10229027 article EN IEEE INFOCOM 2022 - IEEE Conference on Computer Communications 2023-05-17

Federated learning (FL) has emerged as a key technique for distributed machine learning (ML). Most literature on FL has focused on ML model training for (i) a single task/model, with (ii) a synchronous scheme for updating model parameters, and (iii) a static data distribution setting across devices, which is often not realistic in practical wireless environments. To address this, we develop DMA-FL, considering dynamic model training for multiple downstream tasks/models over an asynchronous model update architecture. We first characterize convergence...

10.1109/tccn.2024.3391329 article EN IEEE Transactions on Cognitive Communications and Networking 2024-04-19