- Recommender Systems and Techniques
- Advanced Graph Neural Networks
- Topic Modeling
- Explainable Artificial Intelligence (XAI)
- Privacy-Preserving Technologies in Data
- Natural Language Processing Techniques
- Advanced Bandit Algorithms Research
- Bayesian Modeling and Causal Inference
- Semantic Web and Ontologies
- Multi-Agent Systems and Negotiation
- Software Engineering Research
- Machine Learning in Healthcare
- Ethics and Social Impacts of AI
- Adversarial Robustness in Machine Learning
- Software Testing and Debugging Techniques
- Consumer Market Behavior and Pricing
- Visual Attention and Saliency Detection
- Software Reliability and Analysis Research
- Neural and Behavioral Psychology Studies
- CO2 Reduction Techniques and Catalysts
- Neural Dynamics and Brain Function
- Radiomics and Machine Learning in Medical Imaging
- AI and HR Technologies
- Stochastic Gradient Optimization Techniques
- Privacy, Security, and Data Protection
Rutgers, The State University of New Jersey
2019-2025
China University of Geosciences (Beijing)
2024
Teikoku Pharma (United States)
2024
Shaanxi Normal University
2021-2023
Shanghai Jiao Tong University
2021
Nantong University
2021
Beijing Information Science & Technology University
2021
Robert Bosch (Germany)
2021
Xiangtan University
2011
There has been growing attention on fairness considerations recently, especially in the context of intelligent decision-making systems. For example, explainable recommendation systems may suffer from both explanation bias and performance disparity. We show that inactive users may be more susceptible to receiving unsatisfactory recommendations due to their insufficient training data, and that their recommendations may be biased by the records of active users due to the nature of collaborative filtering, which leads to unfair treatment by the system. In this paper, we...
Human Intelligence (HI) excels at combining basic skills to solve complex tasks. This capability is vital for Artificial Intelligence (AI) and should be embedded in comprehensive AI Agents, enabling them to harness expert models for complex task-solving towards Artificial General Intelligence (AGI). Large Language Models (LLMs) show promising learning and reasoning abilities, and can effectively use external models, tools, plugins, or APIs to tackle complex problems. In this work, we introduce OpenAGI, an open-source AGI research and development platform designed...
Recommender systems are having an increasing and critical impact on human society, since a growing number of users rely on them for information seeking and decision making. Therefore, it is crucial to address the potential unfairness problems in recommendations. Just like users have personalized preferences over items, users' demands for fairness are also personalized in many scenarios. It is therefore important to provide personalized fair recommendations that satisfy users' fairness demands. Besides, previous works on fair recommendation mainly focus on association-based fairness. However,...
By providing explanations for users and system designers to facilitate better understanding and decision making, explainable recommendation has been an important research problem. In this paper, we propose Counterfactual Explainable Recommendation (CountER), which takes the insights of counterfactual reasoning from causal inference into explainable recommendation. CountER is able to formulate the complexity and the strength of explanations, and it adopts a counterfactual learning framework to seek simple (low complexity) and effective (high strength)...
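The complexity/strength trade-off mentioned above can be illustrated with a toy search for a counterfactual explanation. This is a minimal sketch, not the CountER model itself: the linear aspect-based scorer, the aspect names, and the score threshold are all assumptions invented for illustration.

```python
import numpy as np

# Toy aspect-based scorer (hypothetical): s(u, i) = user_pref . item_quality.
user_pref = np.array([0.9, 0.2, 0.7])   # user's attention to [price, style, quality]
item_qual = np.array([0.8, 0.5, 0.6])   # the item's performance on the same aspects
aspects = ["price", "style", "quality"]

def score(item_vec):
    return float(user_pref @ item_vec)

base = score(item_qual)
threshold = base - 0.3  # the item "leaves the top-K" once its score drops this far

# Seek the smallest single-aspect change (low complexity) whose score drop is
# large enough (high strength) to flip the recommendation: a counterfactual
# explanation of the form "if the item were slightly worse on X, it would not
# be recommended".
best = None
for k, name in enumerate(aspects):
    for delta in np.linspace(0.05, item_qual[k], 20):
        changed = item_qual.copy()
        changed[k] -= delta
        if score(changed) < threshold:
            if best is None or delta < best[1]:
                best = (name, float(delta))
            break  # smallest sufficient delta for this aspect found

print(best)  # the aspect the user weighs most needs the smallest change
```

Because the user weighs "price" most heavily, the smallest perturbation that flips the decision is on that aspect, which is exactly the item-side explanation a counterfactual framework surfaces.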
Structural data widely exists in Web applications, such as social networks in social media, citation networks on academic websites, and threads in online forums. Due to its complex topology, it is difficult to process and make use of the rich information within such data. Graph Neural Networks (GNNs) have shown great advantages in learning representations for structural data. However, the non-transparency of deep models makes it non-trivial to explain and interpret the predictions made by GNNs. Meanwhile, it is also a big challenge to evaluate GNN explanations, since...
Social networks have been widely studied over the last century from multiple disciplines to understand societal issues such as inequality in employment rates, managerial performance, and epidemic spread. Today, these issues and many more can be studied at global scale thanks to the digital footprints that we generate when browsing the Web or using social media platforms. Unfortunately, scientists often struggle to access such data, primarily because it is proprietary, and even when it is shared with privacy guarantees, it is either not representative...
Many of the traditional recommendation algorithms are designed based on the fundamental idea of mining or learning correlative patterns from data to estimate user-item preference. However, pure correlative learning may lead to Simpson's paradox in predictions, and thus result in sacrificed recommendation performance. Simpson's paradox is a well-known statistical phenomenon that causes confusion in statistical conclusions, and ignoring the paradox may result in inaccurate decisions. Fortunately, causal and counterfactual modeling can help us think outside of the observational data for user personalization so...
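Simpson's paradox is easy to reproduce with a few numbers. The sketch below uses the classic kidney-stone-style figures (illustrative only, not from the paper): the same option wins within every subgroup yet loses on the pooled data, which is exactly the trap purely correlative learning can fall into.

```python
# (successes, trials) per subgroup for two options A and B.
groups = {
    "small": {"A": (81, 87),   "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# Within each subgroup, A beats B...
for name, arms in groups.items():
    assert rate(*arms["A"]) > rate(*arms["B"]), name

# ...but pooled over subgroups, B beats A: Simpson's paradox.
pooled = {
    arm: (sum(g[arm][0] for g in groups.values()),
          sum(g[arm][1] for g in groups.values()))
    for arm in ("A", "B")
}
print(rate(*pooled["A"]), rate(*pooled["B"]))  # 0.78 vs ~0.826
```

The reversal happens because the subgroup (a confounder) is correlated with both the chosen option and the outcome; a model that only learns the pooled correlation draws the wrong conclusion.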
As one of the most pervasive applications of machine learning, recommender systems are playing an important role in assisting human decision-making. The satisfaction of users and the interests of platforms are closely related to the quality of the generated recommendation results. However, as a highly data-driven system, a recommender system could be affected by data or algorithmic bias and thus generate unfair results, which weaken users' reliance on the systems. As a result, it is crucial to address the potential unfairness problems in recommendation settings. Recently,...
Recommender systems (RS), serving at the forefront of Human-centered AI, are widely deployed in almost every corner of the web and facilitate the human decision-making process. However, despite their enormous capabilities and potential, RS may also lead to undesired effects on users, items, producers, platforms, or even society at large, such as compromised user trust due to non-transparency, unfair treatment of different consumers, and privacy concerns due to the extensive use of users' private data for personalization, just to name a...
Recommender systems may be confounded by various types of confounding factors (also called confounders) that lead to inaccurate recommendations and sacrificed recommendation performance. Current approaches to solving the problem usually design a specific model for each specific confounder. However, real-world systems may include a huge number of confounders, and thus designing a specific model for each confounder could be unrealistic. More importantly, except for those ``explicit confounders'' that experts can manually identify and process, such as an item's position in...
Foundation Models such as Large Language Models (LLMs) have significantly advanced many research areas. In particular, LLMs offer significant advantages for recommender systems, making them valuable tools for personalized recommendations. For example, by formulating various recommendation tasks such as rating prediction, sequential recommendation, straightforward recommendation, and explanation generation into language instructions, they make it possible to build universal recommendation engines that can handle different tasks. Additionally, a...
When a user starts exploring items from a new area of an e-commerce system, cross-domain recommendation techniques come to help by transferring the abundant knowledge from the user's familiar domains to this new domain. However, this solution usually requires direct information sharing between service providers on the cloud, which may not always be available and brings privacy concerns. In this paper, we show that one can overcome these concerns through learning on edge devices such as smartphones and laptops. The problem is...
Achieving fairness over different user groups in recommender systems is an important problem. The majority of existing works achieve fairness through constrained optimization that combines the recommendation loss and the fairness constraint. To achieve fairness, the algorithm usually needs to know each user's group affiliation feature, such as gender or race. However, the sensitive feature involved requires protection. In this work, we seek a federated learning solution for the fair recommendation problem and identify its main challenge as an algorithmic conflict...
A recommendation foundation model utilizes large language models (LLMs) for recommendation by converting recommendation tasks into natural language tasks. It enables generative recommendation, which directly generates the item(s) to recommend rather than calculating a ranking score for each and every candidate item as in traditional recommendation models, simplifying the recommendation pipeline from multi-stage filtering to single-stage filtering. To avoid generating excessively long text and hallucinated recommendations when deciding which item(s) to recommend, creating LLM-compatible item IDs...
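One common way to keep generated recommendations grounded in the catalog is to constrain decoding so the model can only emit token sequences that spell a real item ID. The sketch below (hypothetical IDs and tokenization, not the paper's indexing method) uses a prefix trie over valid ID tokens.

```python
# Valid item IDs as token sequences (hypothetical tokenization).
valid_ids = [("item", "10", "24"), ("item", "10", "25"), ("item", "33")]

def build_trie(sequences):
    root = {}
    for seq in sequences:
        node = root
        for tok in seq:
            node = node.setdefault(tok, {})
        node["<end>"] = {}  # marks a complete, real item ID
    return root

trie = build_trie(valid_ids)

def allowed_next(prefix):
    """Tokens the decoder may emit after `prefix` while staying on a real ID.

    Masking every other token's logit at each decoding step makes hallucinated
    item IDs impossible by construction, and IDs stay short because they are a
    few tokens rather than a full item title."""
    node = trie
    for tok in prefix:
        node = node[tok]
    return set(node)

print(allowed_next(("item",)))       # {'10', '33'}
print(allowed_next(("item", "10")))  # {'24', '25'}
```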
Accurate groundwater level (GWL) prediction is crucial for water resource management. Currently, it relies mainly on physics-based models for prediction and quantitative analysis. However, the models used often have errors in structure, parameters, and data, resulting in inaccurate GWL predictions. In this study, machine learning algorithms were used to correct the errors of physics-based models. First, a MODFLOW groundwater flow model was created for the Hutuo River alluvial fan in the North China Plain. Then, using observed GWLs from 10 monitoring wells located in the upper, middle,...
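The error-correction idea (train a learner on the gap between the physics model's output and the observations, then add the predicted residual back) can be sketched in a few lines. Everything below is synthetic and illustrative: a polynomial regression stands in for the ML algorithm, and random numbers stand in for MODFLOW output and well observations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: `sim` plays the role of MODFLOW-simulated GWLs (m),
# `obs` the observed levels, with a structured model bias plus noise.
sim = rng.uniform(20.0, 30.0, size=200)
obs = sim + 0.5 * np.sin(sim) + rng.normal(0.0, 0.05, size=200)

# Learn the residual as a function of the simulated value (centered for
# numerical stability), here with a simple polynomial regression.
x = sim - sim.mean()
residual = obs - sim
coef = np.polyfit(x, residual, deg=5)

corrected = sim + np.polyval(coef, x)

rmse_raw = float(np.sqrt(np.mean((obs - sim) ** 2)))
rmse_cor = float(np.sqrt(np.mean((obs - corrected) ** 2)))
print(rmse_raw, rmse_cor)  # the learned correction shrinks the in-sample error
```

In practice the correction model would be fit per well (or with well location as a feature) and validated on held-out time periods rather than in-sample.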
Recommender systems are important and powerful tools for various personalized services. Traditionally, these systems use data mining and machine learning techniques to make recommendations based on correlations found in the data. However, relying solely on correlation without considering the underlying causal mechanism may lead to practical issues such as fairness, explainability, robustness, bias, echo chamber, and controllability problems. Therefore, researchers in related areas have begun incorporating causality into...
There is increasing recognition of the need for human-centered AI that learns from human feedback. However, most current systems focus more on the model design, but less on human participation as a part of the pipeline. In this work, we propose a Human-in-the-Loop (HitL) graph reasoning paradigm and develop a corresponding dataset named HOOPS for the task of KG-driven conversational recommendation. Specifically, we first construct a KG interpreting diverse user behaviors and identify pertinent attribute entities for each user-item pair....
A Knowledge Graph (KG) is a flexible structure that is able to describe the complex relationships between data entities. Currently, most KG embedding models are trained based on negative sampling, i.e., the model aims to maximize some similarity between connected entities in the KG, while minimizing the similarity of sampled disconnected entities. Negative sampling helps reduce the time complexity of model learning by only considering a subset of negative instances, but it may fail to deliver stable model performance due to the uncertainty in the sampling procedure. To avoid such deficiency, we...
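To make the negative-sampling setup concrete, here is a minimal TransE-style margin loss with a randomly corrupted tail entity (a generic illustration, not the model proposed in the paper). The random draw of the corrupting entity is exactly the source of variance that motivates sampling-free training: rerunning the same step with a different draw yields a different loss.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 50, 5, 16
E = 0.1 * rng.normal(size=(n_entities, dim))   # entity embeddings
R = 0.1 * rng.normal(size=(n_relations, dim))  # relation embeddings

def transe_score(h, r, t):
    # TransE: plausible triples satisfy h + r ~ t, so smaller distance = better.
    return -float(np.linalg.norm(E[h] + R[r] - E[t]))

def margin_loss(triple, margin=1.0):
    h, r, t = triple
    # Negative sampling: corrupt the tail with a random entity. A different
    # draw gives a different loss, which is the instability described above.
    t_neg = int(rng.integers(n_entities))
    return max(0.0, margin + transe_score(h, r, t_neg) - transe_score(h, r, t))

loss = margin_loss((3, 1, 7))
print(loss)  # non-negative hinge loss whose value depends on the sampled negative
```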
A causal graph, as an effective and powerful tool for causal modeling, is usually assumed to be a Directed Acyclic Graph (DAG). However, recommender systems involve feedback loops, defined as the cyclic process of recommending items, incorporating user feedback in model updates, and repeating the procedure. As a result, it is important to incorporate feedback loops into causal graphs to accurately model the dynamic and iterative data generation process of recommender systems. However, feedback loops are not always beneficial, since over time they may encourage more and more narrowed content exposure, which, if...
Causal reasoning and logical reasoning are two important types of reasoning abilities for human intelligence. However, their relationship has not been extensively explored under the machine intelligence context. In this paper, we explore how the two reasoning abilities can be jointly modeled to enhance both the accuracy and the explainability of machine learning models. More specifically, by integrating two important types of reasoning ability -- counterfactual reasoning and (neural) logical reasoning -- we propose Counterfactual Collaborative Reasoning (CCR), which conducts counterfactual logic reasoning to improve the performance. In particular, we use recommender...