Martin Renqiang Min

ORCID: 0000-0002-8563-6133
Research Areas
  • Generative Adversarial Networks and Image Synthesis
  • Multimodal Machine Learning Applications
  • Natural Language Processing Techniques
  • Topic Modeling
  • Domain Adaptation and Few-Shot Learning
  • Video Analysis and Summarization
  • Human Pose and Action Recognition
  • Vaccines and Immunoinformatics Approaches
  • T-cell and B-cell Immunology
  • Advanced Neural Network Applications
  • Sentiment Analysis and Opinion Mining
  • Adversarial Robustness in Machine Learning
  • Human Motion and Animation
  • Gene expression and cancer classification
  • Machine Learning in Bioinformatics
  • CAR-T cell therapy research
  • AI in cancer detection
  • Anomaly Detection Techniques and Applications
  • RNA and protein synthesis mechanisms
  • Face and Expression Recognition
  • Immunotherapy and Immune Responses
  • Advanced Image and Video Retrieval Techniques
  • Text and Document Classification Technologies
  • Machine Learning and Data Classification
  • Machine Learning in Materials Science

Princeton University
2013-2024

NEC (United States)
2014-2024

The Ohio State University
2021

NEC (Japan)
2019

University of Houston
2019

New Jersey Institute of Technology
2019

Texas Southern University
2019

University of Toronto
2010

Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Chunyuan Li, Ricardo Henao, Lawrence Carin. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2018.

10.18653/v1/p18-1041 article EN cc-by Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2018-01-01

Conditional image-to-video (cI2V) generation aims to synthesize a new plausible video starting from an image (e.g., a person's face) and a condition (e.g., an action class label like smile). The key challenge of the cI2V task lies in the simultaneous generation of realistic spatial appearance and temporal dynamics corresponding to the given image and condition. In this paper, we propose an approach for cI2V using novel latent flow diffusion models (LFDM) that synthesize an optical flow sequence in the latent space based on the given condition to warp the given image. Compared to previous direct-synthesis-based works,...

10.1109/cvpr52729.2023.01769 article EN 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023-06-01
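The core operation the LFDM abstract describes — warping an encoded image with a generated flow field in latent space — can be illustrated with a minimal sketch. This is not the paper's implementation: real models use a differentiable bilinear sampler (e.g. `torch.nn.functional.grid_sample`), while this toy version uses nearest-neighbor sampling in NumPy; all names and sizes are illustrative.

```python
import numpy as np

def warp(latent, flow):
    """Warp a latent feature map (H, W, C) with a backward flow field (H, W, 2).
    Nearest-neighbor sampling for brevity; diffusion-based video models would
    use a differentiable bilinear sampler instead."""
    H, W, _ = latent.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, H - 1)
    src_x = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, W - 1)
    return latent[src_y, src_x]

# A zero flow leaves the map unchanged; a generated flow *sequence* would warp
# the encoded first frame into each subsequent latent frame of the video.
lat = np.arange(2 * 3 * 1, dtype=float).reshape(2, 3, 1)
out = warp(lat, np.zeros((2, 3, 2)))
```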

In mission-critical IT services, system failure prediction becomes increasingly important; it prevents unexpected system downtime and assures service reliability for end users. While operational console logs record rich descriptive information on the health status of those systems, existing system management technologies mostly use them in a labor-intensive forensics approach, i.e., identifying what went wrong after the fact. Recent efforts on log-based failure prediction take an automation approach with text mining techniques,...

10.1109/bigdata.2016.7840733 article EN 2016 IEEE International Conference on Big Data (Big Data) 2016-12-01

Zero-shot learning (ZSL) aims to recognize instances of unseen classes solely based on the semantic descriptions of the classes. Existing algorithms usually formulate it as a semantic-visual correspondence problem, by learning mappings from one feature space to the other. Despite being reasonable, previous approaches essentially discard the highly precious discriminative power of visual features in an implicit way, and thus produce undesirable results. We instead reformulate ZSL as a conditioned visual classification problem, i.e.,...

10.1109/iccv.2019.00368 article EN 2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019-10-01

Generating videos from text has proven to be a significant challenge for existing generative models. We tackle this problem by training a conditional generative model to extract both static and dynamic information from text. This is manifested in a hybrid framework, employing a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN). The static features, called the "gist," are used to sketch the text-conditioned background color and object layout structure. Dynamic features are considered by transforming the input text into an image filter....

10.1609/aaai.v32i1.12233 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2018-04-27

Developing conditional generative models for text-to-video synthesis is an extremely challenging yet important topic of research in machine learning. In this work, we address this problem by introducing the Text-Filter conditioning Generative Adversarial Network (TFGAN), a conditional GAN model with a novel multi-scale text-conditioning scheme that improves text-video associations. By combining the proposed conditioning scheme with a deep GAN architecture, TFGAN generates high-quality videos from text on challenging real-world video datasets. In addition,...

10.24963/ijcai.2019/276 article EN 2019-07-28

Several recent studies investigate TCR-peptide/-pMHC binding prediction using machine learning or deep learning approaches. Many of these methods achieve impressive results on test sets, which include peptide sequences that are also included in the training set. In this work, we examine how state-of-the-art models for TCR-peptide binding generalize to unseen peptides. We create a dataset including positive samples from IEDB, VDJdb, McPAS-TCR, and the MIRA set, as well as negative samples from both randomization and 10X Genomics assays. We name this collection...

10.3389/fimmu.2022.1014256 article EN cc-by Frontiers in Immunology 2022-10-21

We propose a sequential variational autoencoder to learn disentangled representations of sequential data (e.g., videos and audios) under self-supervision. Specifically, we exploit the benefits of some readily accessible supervision signals from the input itself or from off-the-shelf functional models, and accordingly design auxiliary tasks for our model to utilize these signals. With the supervision signals, our model can easily disentangle the representation of an input sequence into static factors and dynamic factors (i.e., time-invariant and time-varying parts)....

10.1109/cvpr42600.2020.00657 article EN 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020-06-01
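The static/dynamic factorization described in this abstract can be sketched in a few lines: one time-invariant code for the whole sequence and one time-varying code per frame. This is a toy illustration only — the random projections stand in for the paper's learned sequential encoder, and all sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

def split_latents(frames, static_dim=3, dynamic_dim=2):
    """Toy static/dynamic split: a single code f shared across all frames
    (time-invariant) and a per-frame code z_t (time-varying). Random matrices
    are placeholders for a trained encoder."""
    T, feat = frames.shape
    Wf = rng.normal(size=(feat, static_dim))
    Wz = rng.normal(size=(feat, dynamic_dim))
    f = frames.mean(axis=0) @ Wf   # one vector for the whole sequence
    z = frames @ Wz                # one vector per time step
    return f, z

video = rng.normal(size=(8, 5))    # 8 frames, 5 features each (illustrative)
f, z = split_latents(video)
```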

Pengyu Cheng, Martin Renqiang Min, Dinghan Shen, Christopher Malon, Yizhe Zhang, Yitong Li, Lawrence Carin. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020.

10.18653/v1/2020.acl-main.673 preprint EN cc-by 2020-01-01

Neural network based sequence-to-sequence models in an encoder-decoder framework have been successfully applied to solve Question Answering (QA) problems, predicting answers from statements and questions. However, almost all previous models have failed to consider detailed context information and unknown states under which systems do not have enough information to answer given questions. These scenarios with incomplete or ambiguous information are very common in the setting of Interactive Question Answering (IQA). To address this challenge, we develop a novel model,...

10.1145/3097983.3098115 preprint EN 2017-08-04

Although progress has been made for text-to-image synthesis, previous methods fall short of generalizing to unseen or underrepresented attribute compositions in the input text. Lacking compositionality could have severe implications for robustness and fairness, e.g., the inability to synthesize face images of underrepresented demographic groups. In this paper, we introduce a new framework, StyleT2I, to improve the compositionality of text-to-image synthesis. Specifically, we propose a CLIP-guided Contrastive Loss to better distinguish different compositions among different sentences. To...

10.1109/cvpr52688.2022.01766 article EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022-06-01

Conventional embedding methods directly associate each symbol with a continuous embedding vector, which is equivalent to applying a linear transformation based on a "one-hot" encoding of the discrete symbols. Despite its simplicity, such an approach yields a number of parameters that grows linearly with the vocabulary size and can lead to overfitting. In this work, we propose a much more compact K-way D-dimensional discrete encoding scheme to replace the "one-hot" encoding. In the proposed "KD encoding", each symbol is represented by a $D$-dimensional code with a cardinality of $K$, and the final...

10.48550/arxiv.1806.09464 preprint EN other-oa arXiv (Cornell University) 2018-01-01
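The parameter saving behind KD encoding is easy to see in a sketch: instead of one row per vocabulary symbol, keep D small tables of K rows and compose a symbol's embedding from its D-dimensional code. The composition below (summing per-dimension embeddings) is a minimal assumption for illustration; the paper explores learned code assignment and other composition functions.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D, dim = 16, 4, 8      # K-way, D-dimensional codes; embedding width (illustrative)
vocab = 1000              # the code space K**D = 65536 easily covers this vocabulary

# One embedding table per code dimension, instead of one row per vocabulary symbol.
tables = rng.normal(size=(D, K, dim))

def kd_embed(code):
    """code: length-D sequence of integers in [0, K). The symbol's embedding is
    the sum of its D per-dimension code embeddings (one simple choice of
    composition function)."""
    assert len(code) == D and all(0 <= c < K for c in code)
    return sum(tables[d, c] for d, c in enumerate(code))

v = kd_embed([3, 0, 15, 7])
# Parameter count: D * K * dim = 512 floats here, vs. vocab * dim = 8000 for a
# conventional one-hot embedding table of the same width.
```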

Abstract Motivation: Effective computational methods for peptide-protein binding prediction can greatly help clinical peptide vaccine search and design. However, previous computational methods fail to capture key nonlinear high-order dependencies between different amino acid positions. As a result, they often produce low-quality rankings of strong binding peptides. To solve this problem, we propose nonlinear high-order machine learning methods, including high-order neural networks (HONNs) with possible deep extensions and high-order kernel support vector machines, to predict...

10.1093/bioinformatics/btv371 article EN Bioinformatics 2015-07-23

Spatial transcriptomics technologies, which generate a spatial map of gene activity, can deepen the understanding of tissue architecture and its molecular underpinnings in health and disease. However, the high cost makes these technologies difficult to use in practice. Histological images co-registered with targeted tissues are more affordable and routinely generated in many research and clinical studies. Hence, predicting spatial gene expression from the morphological clues embedded in tissue histological images provides a scalable alternative...

10.1093/bioinformatics/btae383 article EN cc-by Bioinformatics 2024-09-01

MHC Class I protein plays an important role in immunotherapy by presenting immunogenic peptides to anti-tumor immune cells. The repertoires of peptides for various MHC Class I proteins are distinct, which can be reflected by their diverse binding motifs. To characterize the binding motifs of MHC Class I proteins, in vitro experiments have been conducted to screen peptides with high binding affinities to hundreds of given MHC Class I proteins. However, considering the tens of thousands of known MHC Class I proteins, conducting such extensive experiments is infeasible, and thus a more efficient and scalable way is needed.

10.1093/bioinformatics/btad055 article EN cc-by Bioinformatics 2023-01-23

Previous models for video captioning often use the output from a specific layer of a Convolutional Neural Network (CNN) as video features. However, the variable, context-dependent semantics in videos may make it more appropriate to adaptively select features from multiple CNN layers. We propose a new approach for generating adaptive spatiotemporal representations of videos for the captioning task. A novel attention mechanism is developed, one that adaptively and sequentially focuses on different layers of CNN features (levels of feature "abstraction"), as well as on local regions of the feature maps...

10.1609/aaai.v32i1.12245 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2018-04-27
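The layer-selection idea in this abstract reduces to softmax attention over per-layer features. The sketch below shows only that reduced form, under the simplifying assumption that each layer's features are pooled to the same width; the paper's full mechanism also attends over spatial regions within each layer.

```python
import numpy as np

def layer_attention(layer_feats, query):
    """Softmax attention over features from multiple CNN layers: score each
    (pooled) layer feature against a decoder query and return their weighted
    sum. Minimal sketch; spatial attention within layers is omitted."""
    scores = np.array([f @ query for f in layer_feats])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ np.stack(layer_feats), weights

# Three hypothetical CNN layers, each pooled to a 4-d feature for illustration.
feats = [np.ones(4), np.zeros(4), np.full(4, 0.5)]
mixed, w = layer_attention(feats, np.ones(4))
```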

We present a multi-sequence generalization of the Variational Information Bottleneck and call the resulting model Attentive Variational Information Bottleneck (AVIB). Our AVIB model leverages multi-head self-attention to implicitly approximate a posterior distribution over latent encodings conditioned on multiple input sequences. We apply AVIB to a fundamental immuno-oncology problem: predicting the interactions between T-cell receptors (TCRs) and peptides. Experimental results on various datasets show that AVIB significantly outperforms state-of-the-art methods for...

10.1093/bioinformatics/btac820 article EN cc-by Bioinformatics 2022-12-26
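The variational-bottleneck core of a model like the one in this abstract can be sketched as: encode multiple sequences into the parameters of a Gaussian posterior, then sample a latent code with the reparameterization trick. This toy version mean-pools each sequence where AVIB uses multi-head self-attention, and its projection weights are untrained random placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode_pair(seq_a, seq_b, dim=4):
    """Toy multi-sequence encoder: mean-pool each sequence, concatenate, and
    map to the parameters (mu, log_var) of a Gaussian posterior over the
    latent code. Random weights stand in for a trained network."""
    pooled = np.concatenate([seq_a.mean(axis=0), seq_b.mean(axis=0)])
    W = rng.normal(size=(pooled.size, 2 * dim)) / np.sqrt(pooled.size)
    mu, log_var = np.split(pooled @ W, 2)
    return mu, log_var

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps (the standard reparameterization trick)."""
    return mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)

tcr = rng.normal(size=(12, 6))   # e.g. a length-12 TCR embedding, feature dim 6
pep = rng.normal(size=(9, 6))    # a length-9 peptide embedding
mu, log_var = encode_pair(tcr, pep)
z = reparameterize(mu, log_var)
```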

Recent few-shot video classification (FSVC) works achieve promising performance by capturing similarity across support and query samples with different temporal alignment strategies, or by learning discriminative features via a Transformer block within each episode. However, they ignore two important issues: a) it is difficult to capture rich intrinsic action semantics from a limited number of support instances within each task; b) redundant or irrelevant frames in videos easily weaken the positive influence of discriminative frames. To...

10.1109/iccv51070.2023.01769 article EN 2023 IEEE/CVF International Conference on Computer Vision (ICCV) 2023-10-01