Deepak Ramachandran

ORCID: 0000-0001-5412-6133
Research Areas
  • Topic Modeling
  • Natural Language Processing Techniques
  • Speech and dialogue systems
  • Video Analysis and Summarization
  • Semantic Web and Ontologies
  • Multimodal Machine Learning Applications
  • Reinforcement Learning in Robotics
  • Logic, Reasoning, and Knowledge
  • Multi-Agent Systems and Negotiation
  • Explainable Artificial Intelligence (XAI)
  • Software Engineering Research
  • Advanced Graph Neural Networks
  • Service-Oriented Architecture and Web Services
  • Bayesian Modeling and Causal Inference
  • Multimedia Communication and Technology
  • Recommender Systems and Techniques
  • Data Stream Mining Techniques
  • Data Management and Algorithms
  • Machine Learning and Data Classification
  • Advanced Text Analysis Techniques
  • Image Retrieval and Classification Techniques
  • Machine Learning and Algorithms
  • Computer Graphics and Visualization Techniques
  • Text Readability and Simplification
  • Text and Document Classification Technologies

Affiliations

James Paget University Hospital
2023-2024

Google (United States)
2019-2024

James Paget University Hospitals NHS Foundation Trust
2024

University of California, Santa Cruz
2023

Harvard University Press
2023

Brown University
2021

Johns Hopkins University
2021

Honda (United States)
2021

Stanford University
2020

California University of Pennsylvania
2019-2020

Publications

Remarkable progress has been made on automated reasoning with natural text, by using Large Language Models (LLMs) and methods such as Chain-of-Thought prompting and Selection-Inference. These techniques search for proofs in the forward direction from axioms to the conclusion, which suffers from a combinatorial explosion of the search space and thus high failure rates on problems requiring longer chains of reasoning. The classical literature has shown that reasoning in the backward direction (i.e. from the intended conclusion to the supporting axioms) is significantly...

10.18653/v1/2023.acl-long.361 article EN cc-by 2023-01-01
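
The forward-vs-backward contrast above can be illustrated with a small symbolic sketch (the paper itself chains LLM modules over natural-language rules; the hand-written facts and rules below are purely illustrative). Backward chaining starts from the goal and only explores rules whose conclusions are relevant to it:

```python
# A minimal symbolic backward-chaining sketch, not the paper's LLM-based
# method: rules and facts are hand-written tuples for illustration.

FACTS = {"bird(tweety)"}
RULES = [
    # (conclusion, [premises])
    ("can_fly(tweety)", ["bird(tweety)", "has_wings(tweety)"]),
    ("has_wings(tweety)", ["bird(tweety)"]),
]

def prove(goal, depth=0, max_depth=10):
    """Search backward from `goal` toward known facts."""
    if depth > max_depth:          # guard against runaway recursion
        return False
    if goal in FACTS:              # base case: goal is an axiom
        return True
    for conclusion, premises in RULES:
        if conclusion == goal and all(prove(p, depth + 1) for p in premises):
            return True
    return False

print(prove("can_fly(tweety)"))  # True: proved via has_wings <- bird
```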

In spoken dialog systems, state tracking refers to the task of correctly inferring the user's goal at a given turn, given all of the dialog history up to that turn. The Dialog State Tracking Challenge is a research community challenge that has been run for three rounds. The challenge has given rise to a host of new methods and also to a deeper understanding of the problem itself, including its evaluation.

10.1609/aimag.v35i4.2558 article EN AI Magazine 2014-12-01
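
As a minimal illustration of what state tracking computes, the sketch below maintains a distribution over candidate user goals and renormalizes it after each turn's noisy understanding hypotheses. The goal labels and confidences are made up, and actual challenge entries used far richer discriminative and generative trackers:

```python
# Toy belief update for dialog state tracking: reweight the current belief
# over goals by this turn's spoken-language-understanding confidences.

def update_belief(belief, slu_hypotheses):
    """belief: {goal: prob}; slu_hypotheses: {goal: confidence this turn}."""
    new_belief = {g: p * slu_hypotheses.get(g, 0.05) for g, p in belief.items()}
    z = sum(new_belief.values())
    return {g: p / z for g, p in new_belief.items()}

belief = {"restaurant=thai": 0.5, "restaurant=italian": 0.5}
belief = update_belief(belief, {"restaurant=thai": 0.8, "restaurant=italian": 0.2})
print(belief)  # probability mass shifts toward the Thai-restaurant goal
```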

Most current NLP systems have little knowledge about quantitative attributes of objects and events. We propose an unsupervised method for collecting quantitative information from large amounts of web data, and use it to create a new, very large resource consisting of distributions over physical quantities associated with objects, adjectives, and verbs, which we call Distributions over Quantities (DoQ). This contrasts with recent work in this area, which has focused on making only relative comparisons such as “Is a lion bigger than a wolf?”...

10.18653/v1/p19-1388 preprint EN cc-by 2019-01-01
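
A toy version of the collection idea: scan text for object/number/unit co-occurrences and accumulate an empirical distribution of values per object. The regex, corpus, and units below are stand-ins; the released resource was built from web-scale data with much more careful extraction and normalization:

```python
# Sketch of building per-object value distributions from raw sentences.
import re
from collections import defaultdict

PATTERN = re.compile(r"(lion|wolf)\D{0,40}?(\d+(?:\.\d+)?)\s*(kg)")

corpus = [
    "An adult lion can weigh 190 kg.",
    "The lion weighed about 180 kg at capture.",
    "A gray wolf typically weighs 40 kg.",
]

distributions = defaultdict(list)
for sentence in corpus:
    for obj, value, unit in PATTERN.findall(sentence):
        distributions[(obj, unit)].append(float(value))

print(dict(distributions))
# {('lion', 'kg'): [190.0, 180.0], ('wolf', 'kg'): [40.0]}
```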

Deciding what mix of engine and battery power to use is critical to hybrid vehicles' fuel efficiency. Current solutions consider several factors such as the battery's charge and how efficiently the engine operates at a given speed. Previous research has shown that by taking into account the future power requirements of the vehicle, a more efficient balance of engine vs. battery power can be attained. In this paper, we utilize a probabilistic driving route prediction system, trained using Inverse Reinforcement Learning, to optimize the control policy. Our approach considers routes...

10.1609/aaai.v26i1.8175 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2021-09-20
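
To make the power-split optimization concrete, here is a toy dynamic program: given a predicted sequence of power demands along a likely route, it chooses how much of each segment's demand the battery covers so as to minimize a made-up fuel-cost curve. The numbers and cost model are invented; the paper predicts routes with Inverse Reinforcement Learning and optimizes a much richer vehicle model:

```python
# Toy DP over discretized battery usage along a predicted route.
from functools import lru_cache

demands = (20, 60, 30)          # predicted power demand per segment (kW)

def fuel(power):                # invented cost: engine is inefficient at high load
    return power + 0.02 * power ** 2

@lru_cache(maxsize=None)
def best_cost(seg, charge):
    if seg == len(demands):
        return 0.0
    options = []
    # Try covering 0, 10, 20, ... units of this segment's demand from battery.
    for battery in range(0, min(charge, demands[seg]) + 1, 10):
        engine = demands[seg] - battery
        options.append(fuel(engine) + best_cost(seg + 1, charge - battery))
    return min(options)

print(best_cost(0, 50))  # cheapest total fuel cost given 50 units of charge
```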

Concept erasure techniques have recently gained significant attention for their potential to remove unwanted concepts from text-to-image models. While these methods often demonstrate success in controlled scenarios, their robustness in real-world applications and their readiness for deployment remain uncertain. In this work, we identify a critical gap in evaluating sanitized models, particularly in terms of their performance across various concept dimensions. We systematically investigate the failure modes of current...

10.48550/arxiv.2501.09833 preprint EN arXiv (Cornell University) 2025-01-16

We address the problem of best policy identification in preference-based reinforcement learning (PbRL), where learning occurs from noisy binary preferences over trajectory pairs rather than explicit numerical rewards. This approach is useful for the post-training optimization of generative AI models during multi-turn user interactions, where preference feedback can be more robust than handcrafted reward models. In this setting, learning is driven by both an offline preference dataset -- collected from a rater of unknown 'competence' -- and online data collected with...

10.48550/arxiv.2501.18873 preprint EN arXiv (Cornell University) 2025-01-30
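
The core machinery of learning from binary preferences can be sketched with a Bradley-Terry model: fit one scalar score per trajectory so that sigmoid(score_i - score_j) matches the observed preference labels. The simulated data and hyperparameters below are assumptions, and this shows only the likelihood component; the paper's contribution concerns best-policy identification with offline data from a rater of unknown competence:

```python
# Fit Bradley-Terry trajectory scores from noisy pairwise preferences.
import numpy as np

rng = np.random.default_rng(0)
true_scores = np.array([0.0, 1.0, 2.0])   # hidden trajectory qualities

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Simulate noisy binary preferences over trajectory pairs.
data = []
for _ in range(500):
    i, j = rng.choice(3, size=2, replace=False)
    label = rng.random() < sigmoid(true_scores[i] - true_scores[j])
    data.append((i, j, float(label)))

# Gradient ascent on the Bradley-Terry log-likelihood.
scores = np.zeros(3)
for _ in range(200):
    grad = np.zeros(3)
    for i, j, y in data:
        p = sigmoid(scores[i] - scores[j])
        grad[i] += y - p
        grad[j] -= y - p
    scores += 0.5 * grad / len(data)

print(np.argsort(scores))  # with enough data, matches the true quality ordering
```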

A major challenge in aligning large language models (LLMs) with human preferences is the issue of distribution shift. LLM alignment algorithms rely on static preference datasets, assuming that they accurately represent real-world user preferences. However, user preferences vary significantly across geographical regions, demographics, linguistic patterns, and evolving cultural trends. This preference distribution shift leads to catastrophic failures in many applications. We address this problem using a principled framework...

10.48550/arxiv.2502.01930 preprint EN arXiv (Cornell University) 2025-02-03

Pretrained Language Models (LMs) have been shown to possess significant linguistic, common sense and factual knowledge. One form of knowledge that has not been studied yet in this context is information about the scalar magnitudes of objects. We show that pretrained language models capture a significant amount of this information but are short of the capability required for general common-sense reasoning. We identify contextual information in pre-training and numeracy as two key factors affecting their performance, and show that a simple method of canonicalizing numbers can have a significant effect...

10.18653/v1/2020.findings-emnlp.439 article EN cc-by 2020-01-01
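
The "canonicalizing numbers" idea mentioned above can be sketched as rewriting surface numerals into a uniform mantissa/exponent token, so a model need not reason over arbitrary digit strings. The exact scheme in the paper may differ; this regex-based version is illustrative only:

```python
# Rewrite numbers like "5,400" into a canonical exponent form like "5.4e3".
import re

def canonicalize(text):
    def repl(match):
        value = float(match.group(0).replace(",", ""))
        mantissa, exponent = f"{value:e}".split("e")
        return f"{float(mantissa):.1f}e{int(exponent)}"
    return re.sub(r"\d[\d,]*(?:\.\d+)?", repl, text)

print(canonicalize("A lion weighs 190 kg; an elephant weighs 5,400 kg."))
# -> "A lion weighs 1.9e2 kg; an elephant weighs 5.4e3 kg."
```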

Najoung Kim, Ellie Pavlick, Burcu Karagol Ayan, Deepak Ramachandran. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2021.

10.18653/v1/2021.acl-long.304 article EN cc-by 2021-01-01

Pretrained Language Models (LMs) have been shown to possess significant linguistic, common sense and factual knowledge. One form of knowledge that has not been studied yet in this context is information about the scalar magnitudes of objects. We show that pretrained language models capture a significant amount of this information but are short of the capability required for general common-sense reasoning. We identify contextual information in pre-training and numeracy as two key factors affecting their performance, and show that a simple method of canonicalizing numbers can have a significant effect...

10.18653/v1/2020.blackboxnlp-1.27 article EN cc-by 2020-01-01

Automated reasoning with unstructured natural text is a key requirement for many potential applications of NLP and for developing robust AI systems. Recently, Language Models (LMs) have demonstrated complex reasoning capacities even without any finetuning. However, existing evaluation of automated reasoning assumes access to a consistent and coherent set of information over which models reason. When reasoning in the real world, the available information is frequently inconsistent or contradictory, and therefore models need to be equipped with a strategy to resolve such conflicts...

10.48550/arxiv.2306.07934 preprint EN cc-by arXiv (Cornell University) 2023-01-01

Structured Complex Task Decomposition (SCTD) is the problem of breaking down a complex real-world task (such as planning a wedding) into a directed acyclic graph over individual steps that contribute to achieving the task, with edges specifying temporal dependencies between steps. SCTD is an important component of assistive planning tools, and a challenge for commonsense reasoning systems. We probe how accurately SCTD can be done with the knowledge extracted from pre-trained Large Language Models (LLMs). We introduce a new...

10.1609/aaai.v38i17.29918 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2024-03-24
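
The output structure of SCTD is just a DAG over steps, which standard library tooling can linearize. In the sketch below the steps and dependency edges are hand-written stand-ins for what would be extracted from an LLM:

```python
# Represent a task decomposition as a DAG and linearize it.
from graphlib import TopologicalSorter

# Each step maps to the set of steps that must happen before it.
steps = {
    "book venue": set(),
    "send invitations": {"book venue"},
    "order cake": {"book venue"},
    "hold wedding": {"send invitations", "order cake"},
}

print(list(TopologicalSorter(steps).static_order()))
# e.g. ['book venue', 'send invitations', 'order cake', 'hold wedding']
```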

We describe a manifold learning framework that naturally accommodates supervised learning, partially supervised learning, and unsupervised clustering as particular cases. Our method chooses a function by minimizing loss subject to a manifold regularization penalty. This augmented cost is minimized using a greedy, stagewise, functional minimization procedure, as in Gradientboost. Each stage of boosting is fast and efficient. We demonstrate our approach using both radial basis function approximations and trees. The performance is at the state of the art on many standard...

10.1145/1390156.1390232 article EN 2008-01-01
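
A compact sketch of the stagewise idea: at every boosting round, add the candidate basis function that most decreases (labeled squared loss + lambda * f^T L f), where L is a graph Laplacian built over all points, labeled or not. The toy data, RBF candidate pool, and hyperparameters below are invented for illustration:

```python
# Greedy stagewise fitting with a graph-Laplacian smoothness penalty.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(40, 1))
labeled = np.arange(10)                      # only 10 points carry labels
y = np.sign(X[labeled, 0])

# Dense RBF affinities over all points define the graph Laplacian.
W = np.exp(-((X - X.T) ** 2) / 0.1)
L = np.diag(W.sum(axis=1)) - W

def rbf(center, width=0.3):
    return np.exp(-((X[:, 0] - center) ** 2) / width)

f, lam, step = np.zeros(len(X)), 0.1, 0.5
for _ in range(20):                          # boosting rounds
    candidates = [rbf(c) * s for c in np.linspace(-1, 1, 9) for s in (-1, 1)]
    def cost(g):
        h = f + step * g
        return ((h[labeled] - y) ** 2).sum() + lam * h @ L @ h
    f = f + step * min(candidates, key=cost)

print((np.sign(f[labeled]) == y).mean())     # training accuracy of the boosted fit
```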

Our goal in this work is to make high-level decisions for mobile robots. In particular, given a queue of prioritized object delivery tasks, we wish to find a sequence of actions in real time that accomplishes these tasks efficiently. We introduce a novel reinforcement learning algorithm called Smoothed Sarsa that learns a good policy by delaying the backup step until the uncertainty in the state estimate improves. The state space is modeled by a Dynamic Bayesian Network and updated using a Region-based Particle Filter. We take advantage of the fact...

10.1109/robot.2009.5152707 article EN 2009-05-01
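
The delayed-backup idea can be sketched as follows: buffer each transition and apply the TD update only once the entropy of the belief over the successor state falls below a threshold, so backups are computed from confident state estimates. The beliefs, rewards, and threshold here are made up; the paper maintains beliefs with a DBN and a region-based particle filter:

```python
# Toy Sarsa variant that defers backups until the state belief is confident.
import math

Q = {}                                    # Q[(state, action)] -> value
ALPHA, GAMMA, ENTROPY_THRESHOLD = 0.1, 0.95, 0.5

def entropy(belief):
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

pending = []                              # transitions awaiting a confident estimate

def observe(belief, action, reward, next_belief, next_action):
    pending.append((belief, action, reward, next_belief, next_action))
    done = []
    for (b, a, r, nb, na) in pending:
        if entropy(nb) < ENTROPY_THRESHOLD:   # estimate is confident: back up now
            s = max(b, key=b.get)             # MAP state under each belief
            ns = max(nb, key=nb.get)
            td = r + GAMMA * Q.get((ns, na), 0.0) - Q.get((s, a), 0.0)
            Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * td
            done.append((b, a, r, nb, na))
    for t in done:
        pending.remove(t)

observe({"at_door": 0.9, "at_desk": 0.1}, "deliver", 1.0,
        {"at_desk": 0.95, "at_door": 0.05}, "return")
print(Q)  # backup applied because the next-state belief was confident
```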

Language models (LMs) pretrained on large corpora of text from the web have been observed to contain large amounts of various types of knowledge about the world. This observation has led to a new and exciting paradigm in knowledge graph construction where, instead of manual curation or text mining, one extracts knowledge from the parameters of an LM. Recently, it has been shown that finetuning LMs on a set of factual knowledge makes them produce better answers to queries from a different set, thus making finetuned LMs a good candidate for knowledge extraction and, consequently, knowledge graph construction. In this...

10.48550/arxiv.2301.11293 preprint EN cc-by arXiv (Cornell University) 2023-01-01
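
The extraction recipe can be sketched in a few lines: prompt a language model with a subject and relation template, and parse the completion into a triple. `query_lm` below is a hypothetical stub standing in for a real (finetuned) LM call:

```python
# Sketch of turning LM completions into knowledge-graph triples.

def query_lm(prompt):
    # Hypothetical stand-in for a real finetuned LM call.
    canned = {"The capital of France is": "Paris"}
    return canned.get(prompt, "unknown")

def extract_triple(subject, relation, template):
    obj = query_lm(template.format(subject=subject))
    return (subject, relation, obj)

kg = [extract_triple("France", "capital", "The capital of {subject} is")]
print(kg)  # [('France', 'capital', 'Paris')]
```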

In this paper, we present a speech-driven second screen application for TV program discovery. We give an overview of the application and its architecture. We also report on a user study along with a failure analysis. The results from the user study are encouraging, and demonstrate our application’s effectiveness in the target domain. We conclude with a discussion of follow-on efforts to further enhance the application.

10.1609/aaai.v28i2.19026 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2014-07-27