Chenguang Xi

ORCID: 0000-0003-0504-1015
About
Research Areas
  • Distributed Control Multi-Agent Systems
  • Cooperative Communication and Network Coding
  • Stochastic Gradient Optimization Techniques
  • Sparse and Compressive Sensing Techniques
  • Neural Networks Stability and Synchronization
  • Topic Modeling
  • Advanced Memory and Neural Computing
  • Energy Efficient Wireless Sensor Networks
  • Indoor and Outdoor Localization Technologies
  • Advanced Database Systems and Queries
  • Semiconductor Lasers and Optical Devices
  • Optical Wireless Communication Technologies
  • Optimization and Search Problems
  • Image Retrieval and Classification Techniques
  • Visual Attention and Saliency Detection
  • Natural Language Processing Techniques
  • Advanced Image and Video Retrieval Techniques
  • Speech and Audio Processing
  • Geophysical Methods and Applications
  • Data Stream Mining Techniques
  • Complexity and Algorithms in Graphs
  • Distributed Sensor Networks and Detection Algorithms
  • Semantic Web and Ontologies
  • Logic, programming, and type systems
  • Game Theory and Applications

Tufts University
2011-2019

Indian Statistical Institute
2016

Massachusetts Institute of Technology
2016

Odessa National O.S.Popov Academy of Telecommunication
2016

IBM (United States)
2007

In this paper, we consider distributed optimization problems where the goal is to minimize a sum of objective functions over a multi-agent network. We focus on the case when inter-agent communication is described by a strongly-connected, \emph{directed} graph. The proposed algorithm, ADD-OPT (Accelerated Distributed Directed Optimization), achieves the best known convergence rate for this class of problems,~$O(\mu^{k})$, $0<\mu<1$, given strongly-convex objective functions with globally Lipschitz-continuous gradients, where~$k$ is the number...

10.1109/tac.2017.2737582 article EN IEEE Transactions on Automatic Control 2017-08-08
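
A minimal sketch of the push-sum gradient-tracking recursion that ADD-OPT is built on, assuming unit-Hessian quadratic objectives on a 3-agent directed cycle; the update form is paraphrased from the gradient-tracking literature, and the weights, step-size, and iteration count are illustrative choices, not the paper's.

```python
import numpy as np

# Toy problem: 3 agents on the directed cycle 0 -> 1 -> 2 -> 0, each
# holding a private quadratic f_i(x) = 0.5*(x - b_i)^2, so the global
# minimizer of the sum is mean(b) = 3.0.
b = np.array([1.0, 2.0, 6.0])
n = len(b)

def grad(i, x):
    return x - b[i]

# Column-stochastic weights: each agent splits its mass between itself
# and its single out-neighbor on the cycle.
A = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])

alpha = 0.1                                # illustrative step-size
x = np.zeros(n)                            # primal iterates
y = np.ones(n)                             # push-sum scaling variables
z = x / y                                  # de-biased estimates
w = np.array([grad(i, z[i]) for i in range(n)])   # gradient tracker

for k in range(200):
    x = A @ x - alpha * w                  # descend along the tracker
    y = A @ y                              # propagate push-sum weights
    z_new = x / y                          # correct the directed bias
    w = A @ w + np.array([grad(i, z_new[i]) for i in range(n)]) \
              - np.array([grad(i, z[i]) for i in range(n)])
    z = z_new

print(z)   # every agent's estimate approaches mean(b) = 3.0
```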

This paper considers a distributed optimization problem over a multiagent network, in which the objective function is a sum of individual cost functions at the agents. We focus on the case when communication between the agents is described by a directed graph. Existing algorithms for directed graphs require at least the knowledge of the neighbors' out-degree at each agent (due to the requirement of column-stochastic matrices). In contrast, our algorithm requires no such knowledge. Moreover, the proposed algorithm achieves the best known rate of convergence for this...

10.1109/tac.2018.2797164 article EN publisher-specific-oa IEEE Transactions on Automatic Control 2018-01-23
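
The out-degree-free property comes from using row-stochastic rather than column-stochastic weights: an agent can normalize over the links it receives, which it observes directly. A small sketch of that weight construction, on a hypothetical 3-node digraph:

```python
import numpy as np

# Each agent only knows who it *receives* from (its in-neighbors), so
# it can normalize over incoming links locally -- no out-degree needed.
in_neighbors = {0: [2], 1: [0], 2: [0, 1]}   # directed edges j -> i
n = 3
A = np.zeros((n, n))
for i in range(n):
    links = in_neighbors[i] + [i]            # include a self-loop
    for j in links:
        A[i, j] = 1.0 / len(links)

print(A)                  # each *row* sums to 1 (row-stochastic)
print(A.sum(axis=1))      # [1. 1. 1.]
```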

We propose the Directed-Distributed Projected Subgradient (D-DPS) algorithm to solve a constrained optimization problem over a multi-agent network, where the goal of the agents is to collectively minimize the sum of locally known convex functions. Each agent in the network owns only its local objective function, while the constraint set is common to all agents. We focus on the circumstance when the communications between the agents are described by a directed network. D-DPS uses surplus consensus to overcome the asymmetry caused by directed communication; the analysis shows the convergence rate to be O(ln k / √k).

10.1109/tac.2016.2615066 article EN publisher-specific-oa IEEE Transactions on Automatic Control 2016-10-04
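
As a rough illustration of where the O(ln k/√k) behavior comes from, here is a generic distributed projected subgradient loop with a diminishing 1/√k step. It uses doubly-stochastic mixing and a box constraint purely for brevity, whereas D-DPS itself handles directed graphs via surplus consensus; treat this as a baseline sketch, not the paper's recursion.

```python
import numpy as np

b = np.array([1.0, 4.0, 10.0])           # f_i(x) = 0.5*(x - b_i)^2
lo, hi = 0.0, 3.0                        # common constraint set [0, 3]
W = np.array([[0.5, 0.25, 0.25],         # doubly-stochastic mixing,
              [0.25, 0.5, 0.25],         # used here only to keep the
              [0.25, 0.25, 0.5]])        # sketch short
x = np.zeros(3)

for k in range(1, 2001):
    alpha = 1.0 / np.sqrt(k)             # diminishing step-size
    x = W @ x - alpha * (x - b)          # mix, then subgradient step
    x = np.clip(x, lo, hi)               # project onto the constraint set

print(x)   # approaches the constrained optimum min(mean(b), 3) = 3.0
```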

This paper develops a fast distributed algorithm, termed DEXTRA, to solve the optimization problem where n agents reach agreement and collaboratively minimize the sum of their local objective functions over a network, with communication between the agents described by a directed graph. Existing algorithms restricted to directed graphs have convergence rates of O(ln k/√k) for general convex objective functions and O(ln k/k) when they are strongly convex, where k is the number of iterations. We show that, with an appropriate step-size, DEXTRA converges at a linear rate O(τ^k)...

10.1109/tac.2017.2672698 article EN publisher-specific-oa IEEE Transactions on Automatic Control 2017-02-22
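
DEXTRA extends the (undirected) EXTRA recursion to directed graphs via push-sum. The EXTRA building block is compact enough to sketch; everything below (weights, step-size, quadratic objectives) is an illustrative setup, and note this is EXTRA, not DEXTRA itself.

```python
import numpy as np

b = np.array([1.0, 2.0, 6.0])           # f_i(x) = 0.5*(x - b_i)^2
W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])       # doubly-stochastic weights
W_tilde = (np.eye(3) + W) / 2
alpha = 0.2                              # illustrative step-size

def grad(x):
    return x - b

x_prev = np.zeros(3)
x = W @ x_prev - alpha * grad(x_prev)    # first EXTRA step
for k in range(200):
    # EXTRA: x_{k+1} = (I+W)x_k - W~ x_{k-1} - a*(grad(x_k) - grad(x_{k-1}))
    x_next = (np.eye(3) + W) @ x - W_tilde @ x_prev \
             - alpha * (grad(x) - grad(x_prev))
    x_prev, x = x, x_next

print(x)   # converges linearly to mean(b) = 3.0 for strongly convex f_i
```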

In this paper, we discuss distributed optimization over directed graphs, where doubly-stochastic weights cannot be constructed. Most of the existing algorithms overcome this issue by applying push-sum consensus, which utilizes column-stochastic weights. That formulation requires each agent to know (at least) its out-degree, which may be impractical in, e.g., broadcast-based communication protocols. In contrast, we describe FROST (Fast Row-stochastic-Optimization with uncoordinated STep-sizes), an algorithm...

10.1186/s13634-018-0596-y article EN cc-by EURASIP Journal on Advances in Signal Processing 2019-01-14
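
FROST's distinguishing feature is that agents need not agree on a common step-size. Below is a hedged sketch of gradient tracking with heterogeneous per-agent steps; it uses doubly-stochastic mixing to stay short, whereas FROST itself works with row-stochastic weights, so this illustrates only the uncoordinated-step-size idea.

```python
import numpy as np

b = np.array([1.0, 2.0, 6.0])            # f_i(x) = 0.5*(x - b_i)^2
W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])
alphas = np.array([0.05, 0.12, 0.08])    # each agent picks its own step

def grad(x):
    return x - b

x = np.zeros(3)
y = grad(x)                               # gradient tracker
for k in range(400):
    x_new = W @ x - alphas * y            # per-agent (uncoordinated) steps
    y = W @ y + grad(x_new) - grad(x)     # track the average gradient
    x = x_new

print(x)   # all agents reach mean(b) = 3.0 despite different step-sizes
```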

This paper develops a fast distributed algorithm, termed \emph{DEXTRA}, to solve the optimization problem where~$n$ agents reach agreement and collaboratively minimize the sum of their local objective functions over a network, with communication between the agents described by a~\emph{directed} graph. Existing algorithms restricted to directed graphs have convergence rates of $O(\ln k/\sqrt{k})$ for general convex objective functions and $O(\ln k/k)$ when they are strongly-convex, where~$k$ is the number of iterations. We show that, with an appropriate step-size,...

10.48550/arxiv.1510.02149 preprint EN other-oa arXiv (Cornell University) 2015-01-01

Recent advancements in large language models (LLMs) have spurred growing interest in automatic theorem proving using Lean4, where effective tree search methods are crucial for navigating proof spaces. While existing approaches primarily rely on value functions and Monte Carlo Tree Search (MCTS), the potential of simpler methods like Best-First Search (BFS) remains underexplored. This paper investigates whether BFS can achieve competitive performance in large-scale theorem-proving tasks. We present \texttt{BFS-Prover}, a...

10.48550/arxiv.2502.03438 preprint EN arXiv (Cornell University) 2025-02-05
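
Best-First Search itself is a small algorithm: keep a priority queue of open nodes and always expand the highest-scoring one. A generic skeleton is below; the expand, score, and is_goal callables are hypothetical stand-ins for a tactic generator, a language-model score, and a Lean4 goal check, not the paper's interfaces.

```python
import heapq

def best_first_search(root, expand, score, is_goal, budget=10_000):
    """Expand the highest-scoring open node first until a goal is found
    or the expansion budget runs out."""
    heap = [(-score(root), 0, root)]       # max-heap via negated scores
    counter = 1                            # tie-breaker for equal scores
    while heap and budget > 0:
        _, _, node = heapq.heappop(heap)
        if is_goal(node):
            return node
        for child in expand(node):
            heapq.heappush(heap, (-score(child), counter, child))
            counter += 1
        budget -= 1
    return None                            # budget exhausted, no proof
```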

This paper considers distributed convex optimization problems over a multi-agent network, with each agent possessing a dynamic objective function. The agents aim to collectively track the minimum of the sum of locally known time-varying functions by exchanging information with their neighbors. We focus on scenarios where the communication among the agents is described by a directed network. We devise an algorithm with a discrete time-sampling scheme such that the distance between any estimate and the optimal solutions converges to a steady-state error...

10.1109/cdc.2016.7798277 article EN 2016-12-01
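
A toy version of the tracking setting helps fix intuition: when the local minimizers drift while agents take one mix-and-descend step per sampling instant, the estimates lag the moving optimum by a bounded steady-state error. The sketch below is an illustrative discrete-time simulation under that assumption, not the paper's algorithm.

```python
import numpy as np

n, alpha = 3, 0.3
W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])
x = np.zeros(n)
for t in range(500):
    b = np.array([1.0, 2.0, 6.0]) + 0.01 * t   # slowly drifting optima
    x = W @ x - alpha * (x - b)                # one step per sample

err = np.abs(x - b.mean())   # tracking error stays bounded, not vanishing
print(err)
```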

In recent advancements in Conversational Large Language Models (LLMs), a concerning trend has emerged, showing that many new base LLMs experience a knowledge reduction in their foundational capabilities following Supervised Fine-Tuning (SFT). This process often leads to issues such as forgetting or a decrease in the model's abilities. Moreover, fine-tuned models often struggle to align with user preferences, inadvertently increasing the generation of toxic outputs when specifically prompted. To overcome these...

10.48550/arxiv.2403.02513 preprint EN arXiv (Cornell University) 2024-03-04

Distributed Gradient Descent (DGD) is a well-established algorithm for minimizing a sum of multi-agent objective functions over a network, under the assumption that the network is undirected, i.e., requiring the weight matrices to be doubly-stochastic. In this paper, we present a distributed algorithm, called Directed-Distributed Gradient Descent (D-DGD), for the same problem over directed graphs. In each iteration of D-DGD, we augment an additional variable at each agent to record the change in the state evolution. The algorithm simultaneously constructs...

10.1109/allerton.2015.7447120 article EN 2015-09-01

In this paper, we propose a Distributed Mirror Descent (DMD) algorithm for constrained convex optimization problems on a (strongly-)connected multi-agent network. We assume that each agent has a private objective function and a constraint set. The proposed DMD employs a locally designed Bregman distance at each agent, and thus can be viewed as a generalization of the well-known Distributed Projected Subgradient (DPS) methods, which use identical Euclidean distances at all agents. At each iteration of DMD, every agent optimizes its own objective adjusted with...

10.48550/arxiv.1412.5526 preprint EN other-oa arXiv (Cornell University) 2014-01-01
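
The Bregman generalization is easiest to see in one concrete instance: with the entropic Bregman distance, the mirror step becomes a multiplicative (exponentiated-gradient) update on the probability simplex. A minimal sketch, assuming linear local costs and doubly-stochastic mixing for brevity; the Euclidean special case would recover a projected subgradient step.

```python
import numpy as np

def md_step(x, g, alpha):
    # argmin_y <alpha*g, y> + KL(y || x) over the simplex:
    # multiplicative update followed by renormalization.
    y = x * np.exp(-alpha * g)
    return y / y.sum()

np.random.seed(0)
n, d = 3, 4
W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])
X = np.full((n, d), 1.0 / d)           # each agent starts at the center
C = np.random.rand(n, d)               # linear costs f_i(x) = <c_i, x>

for k in range(1, 501):
    V = W @ X                           # consensus on neighbors' iterates
    X = np.array([md_step(V[i], C[i], 1.0 / np.sqrt(k)) for i in range(n)])

print(X)  # rows roughly agree, concentrating on argmin of the average cost
```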

We develop a fast distributed algorithm, termed DEXTRA, to solve optimization problems in which N agents reach agreement and collaboratively minimize the sum of their local objectives over a network, where communications between the agents are described by a directed graph. Existing algorithms, including Gradient-Push (GP) and Directed-Distributed Gradient Descent (D-DGD), solve this problem restricted to directed graphs at a convergence rate of O(ln k/√k). Our analysis shows that DEXTRA converges at a linear rate O(ϵ^k) for some constant ϵ <...

10.1109/acc.2016.7526694 article EN 2016 American Control Conference (ACC) 2016-07-01
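
For contrast with DEXTRA's linear rate, here is a hedged sketch of the Gradient-Push (GP) baseline named above: push-sum averaging paired with a diminishing 1/√k subgradient step, which is what yields the sublinear O(ln k/√k) behavior. The recursion follows the generic GP form from the literature; the graph, weights, and objectives are illustrative.

```python
import numpy as np

b = np.array([1.0, 2.0, 6.0])             # f_i(x) = 0.5*(x - b_i)^2
A = np.array([[0.5, 0.0, 0.5],            # column-stochastic weights for
              [0.5, 0.5, 0.0],            # the cycle 0 -> 1 -> 2 -> 0
              [0.0, 0.5, 0.5]])
x = np.zeros(3)
y = np.ones(3)                             # push-sum scaling variables

for k in range(1, 3001):
    w = A @ x                              # mix the primal variables
    y = A @ y                              # mix the scaling variables
    z = w / y                              # de-biased estimates
    x = w - (1.0 / np.sqrt(k)) * (z - b)  # diminishing subgradient step

print(x / y)   # slowly approaches mean(b) = 3.0 (sublinear rate)
```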

The click-through rate (CTR) prediction task is to predict whether a user will click on a recommended item. As mind-boggling amounts of data are produced online daily, accelerating CTR model training is critical to keeping models up-to-date and reducing training cost. One approach to increasing training speed is large-batch training. However, as shown in computer vision and natural language processing tasks, training with a large batch easily suffers from a loss of accuracy. Our experiments show that previous scaling rules fail for the training of CTR prediction neural networks. To...

10.1609/aaai.v37i9.26347 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2023-06-26
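
To make the baseline concrete, the "previous scaling rules" in question are the classical linear and square-root learning-rate scaling heuristics from the large-batch literature. The helper below simply computes them; the function and its defaults are illustrative, not from the paper.

```python
def scaled_lr(base_lr: float, base_bs: int, bs: int, rule: str = "linear") -> float:
    """Classical large-batch heuristics: grow the batch by kappa, scale
    the learning rate by kappa (linear) or sqrt(kappa) (square-root)."""
    kappa = bs / base_bs
    if rule == "linear":
        return base_lr * kappa
    if rule == "sqrt":
        return base_lr * kappa ** 0.5
    raise ValueError(f"unknown rule: {rule}")

print(scaled_lr(0.01, 256, 8192))          # 0.32  under the linear rule
print(scaled_lr(0.01, 256, 8192, "sqrt"))  # ~0.057 under the sqrt rule
```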

Visible light communication (VLC) technology, which enables data transmission by modulating solid-state light-emitting diode (LED) devices, has attracted much attention recently. However, the data rate of VLC remains low due to the bandwidth performance of LEDs. We present a transmitter consisting of an LED array that enables the implementation of a 16-level pulse amplitude modulation scheme, which theoretically increases the data rate from 10 Mbps to 40 Mbps. In addition, it provides sufficient brightness for the illumination of office-sized locations....

10.1109/sopo.2012.6271071 article EN Symposium on Photonics and Optoelectronics 2012-05-01
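
The jump from 10 Mbps to 40 Mbps follows directly from the modulation order: a 16-level PAM symbol carries log2(16) = 4 bits, versus 1 bit per symbol for on-off keying at the same symbol rate. A two-line check:

```python
import math

M = 16
bits_per_symbol = math.log2(M)            # 4.0 bits per 16-PAM symbol
symbol_rate = 10e6                        # 10 Msymbols/s (the OOK baseline)
print(symbol_rate * bits_per_symbol)      # 4e7 bits/s = 40 Mbps
```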

In this paper, we propose a distributed algorithm, called Directed-Distributed Gradient Descent (D-DGD), to solve multi-agent optimization problems over directed graphs. Existing algorithms mostly deal with similar problems under the assumption of undirected networks, i.e., requiring the weight matrices to be doubly-stochastic. The row-stochasticity of the weight matrix guarantees that all agents reach consensus, while the column-stochasticity ensures that each agent's local gradient contributes equally to the global objective. In a directed graph,...

10.48550/arxiv.1510.02146 preprint EN other-oa arXiv (Cornell University) 2015-01-01

We present a failure recovery framework for System S, a large-scale stream data analysis environment. System S is intended to support multiple sites, each of which has its own local administration and goals. However, it is beneficial for these sites to cooperate with each other, especially in the presence of various failures. Our ultimate goal is automatic, timely recovery through cooperation among sites. We identify the unique challenges in the context of System S and present our initial design work. In particular, we consider the backup selection problem,...

10.1109/ares.2007.87 article EN 2007-04-01

We propose a distributed algorithm, termed the Directed-Distributed Projected Subgradient (D-DPS), to solve a constrained optimization problem over a multi-agent network, where the goal of the agents is to collectively minimize the sum of locally known convex functions. Each agent in the network owns only its local objective function, while the constraint set is common to all agents. We focus on the circumstance when the communications between the agents are described by a directed network. The D-DPS augments an additional variable at each agent to overcome the asymmetry caused...

10.48550/arxiv.1602.00653 preprint EN other-oa arXiv (Cornell University) 2016-01-01

This paper considers a distributed optimization problem over a multi-agent network, in which the objective function is a sum of individual cost functions at the agents. We focus on the case when communication between the agents is described by a \emph{directed} graph. Existing algorithms for directed graphs require at least the knowledge of the neighbors' out-degree at each agent (due to the requirement of column-stochastic matrices). In contrast, our algorithm requires no such knowledge. Moreover, the proposed algorithm achieves the best known rate...

10.48550/arxiv.1611.06160 preprint EN other-oa arXiv (Cornell University) 2016-01-01

With the rapid advancement of large language models (LLMs), there is a pressing need for a comprehensive evaluation suite to assess their capabilities and limitations. Existing LLM leaderboards often reference scores reported in other papers without consistent settings and prompts, which may inadvertently encourage cherry-picking favored settings and prompts for better results. In this work, we introduce GPT-Fathom, an open-source and reproducible evaluation suite built on top of OpenAI Evals. We systematically evaluate 10+ leading LLMs...

10.48550/arxiv.2309.16583 preprint EN other-oa arXiv (Cornell University) 2023-01-01

Large Language Models (LLMs) such as ChatGPT and LLaMA encounter limitations in domain-specific tasks: these models often lack depth and accuracy in specialized areas and exhibit a decrease in general capabilities when fine-tuned, particularly in the analysis ability of small-sized models. To address these gaps, we introduce ICE-GRT, which utilizes Reinforcement Learning from Human Feedback (RLHF) grounded in Proximal Policy Optimization (PPO), demonstrating remarkable ability in in-domain scenarios without...

10.48550/arxiv.2401.02072 preprint EN other-oa arXiv (Cornell University) 2024-01-01
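
Since ICE-GRT is grounded in PPO, the one formula worth recalling is PPO's clipped surrogate objective, which caps how far a single update can push the policy ratio. A minimal sketch of that loss follows; this is a generic PPO ingredient from the literature, not ICE-GRT's full training stack.

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate loss: take the pessimistic minimum of the
    unclipped and clipped objectives, negated for minimization."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -np.minimum(unclipped, clipped).mean()

# A ratio far above 1+eps earns no extra credit for positive advantage:
print(ppo_clip_loss(np.array([1.5]), np.array([2.0])))  # -(1.2 * 2.0) = -2.4
```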

Recent developments in visible light communication (VLC) technology have solidified the use of light-emitting diodes (LEDs) for not only illumination but also optical wireless communication. This paper presents a novel access point transceiver that features addressable arrays of LEDs and photodetectors. The transmitter array enables combined illumination control and serial data transmission, with its 16 elements producing an aggregate rate of 100 Mb/s. The receiver consists of a 16-element array of broadband channels. Designed in a 0.5 μm...

10.1117/12.910159 article EN Proceedings of SPIE, the International Society for Optical Engineering/Proceedings of SPIE 2011-12-01