Gabriel Huang

ORCID: 0000-0002-1418-6569
Research Areas
  • 3D Modeling in Geospatial Applications
  • Computational Physics and Python Applications
  • Geographic Information Systems Studies
  • Domain Adaptation and Few-Shot Learning
  • Generative Adversarial Networks and Image Synthesis
  • Multimodal Machine Learning Applications
  • Geological Modeling and Analysis
  • Distributed and Parallel Computing Systems
  • Remote Sensing and LiDAR Applications
  • Soil Mechanics and Vehicle Dynamics
  • Industrial Technology and Control Systems
  • Anomaly Detection Techniques and Applications
  • Advanced Computational Techniques and Applications
  • Genetic and phenotypic traits in livestock
  • Image Processing and 3D Reconstruction
  • Biofuel production and bioconversion
  • Regional Development and Policy
  • Power Systems and Technologies
  • Geophysics and Gravity Measurements
  • Model Reduction and Neural Networks
  • Adversarial Robustness in Machine Learning
  • Advanced Neural Network Applications
  • Human Pose and Action Recognition
  • Reinforcement Learning in Robotics
  • Video Analysis and Summarization

ServiceNow (United States)
2023

Université de Montréal
2017-2023

Centre Universitaire de Mila
2019-2022

Computer Research Institute of Montréal
2018

Mila - Quebec Artificial Intelligence Institute
2018

Labeling data is often expensive and time-consuming, especially for tasks such as object detection and instance segmentation, which require dense labeling of the image. While few-shot object detection is about training a model on novel (unseen) classes with little data, it still requires prior training on many labeled examples of base (seen) classes. On the other hand, self-supervised methods aim at learning representations from unlabeled data that transfer well to downstream tasks such as detection. Combining the two is a promising research direction. In this survey,...

10.1109/tpami.2022.3199617 article EN IEEE Transactions on Pattern Analysis and Machine Intelligence 2022-01-01
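
As a concrete illustration of the few-shot detection setting discussed in the survey above, the sketch below fine-tunes a detector pretrained on base classes for a handful of novel classes. It assumes PyTorch with torchvision's Faster R-CNN and shows only a common fine-tuning baseline in this area, not the survey's own method.

# Hypothetical few-shot detection baseline: take a detector pretrained on base
# classes, swap in a new box-prediction head for the novel classes, freeze the
# backbone, and fine-tune on the few labeled novel-class images.
# Assumes torchvision >= 0.13 (weights="DEFAULT" API).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_few_shot_detector(num_novel_classes):
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")    # pretrained detector
    for p in model.backbone.parameters():
        p.requires_grad = False                           # freeze the backbone
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # +1 for the background class expected by torchvision detectors
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_novel_classes + 1)
    return model

# Fine-tune only the trainable parameters on the small novel-class support set:
# optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=1e-3)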

Scattering networks are a class of designed Convolutional Neural Networks (CNNs) with fixed weights. We argue they can serve as generic representations for modelling images. In particular, by working in scattering space, we achieve competitive results on both supervised and unsupervised learning tasks, while making progress towards constructing more interpretable CNNs. For supervised learning, we demonstrate that the early layers of CNNs do not necessarily need to be learned and can be replaced by a scattering network instead. Indeed,...

10.1109/tpami.2018.2855738 article EN IEEE Transactions on Pattern Analysis and Machine Intelligence 2018-07-19
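
The hybrid design described above (fixed early layers, learned later layers) can be sketched as follows, assuming the kymatio library and CIFAR-sized 3x32x32 inputs; this is a minimal illustration, not the paper's exact architecture.

import torch
import torch.nn as nn
from kymatio.torch import Scattering2D

class ScatteringClassifier(nn.Module):
    # A fixed 2D scattering transform stands in for learned early CNN layers,
    # followed by a small learned classifier (illustrative sketch only).
    def __init__(self, num_classes=10, J=2, shape=(32, 32)):
        super().__init__()
        self.scattering = Scattering2D(J=J, shape=shape)  # fixed weights, nothing to learn
        # With J=2 and the default L=8 orientations, each input channel yields
        # 81 scattering channels at resolution (32 / 2**J) x (32 / 2**J) = 8 x 8.
        feat_dim = 3 * 81 * (shape[0] // 2**J) * (shape[1] // 2**J)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):                  # x: (batch, 3, 32, 32)
        with torch.no_grad():              # scattering coefficients are never trained
            s = self.scattering(x)
        return self.classifier(s)

# Example: logits = ScatteringClassifier()(torch.randn(4, 3, 32, 32))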

Recent progress in self-supervision has shown that pre-training large neural networks on vast amounts of unsupervised data can lead to substantial increases in generalization on downstream tasks. Such models, recently coined foundation models, have been transformational for the field of natural language processing. Variants have also been proposed for image data, but their applicability to remote sensing tasks is limited. To stimulate the development of foundation models for Earth monitoring, we propose a benchmark comprised of six classification...

10.48550/arxiv.2306.03831 preprint EN cc-by arXiv (Cornell University) 2023-01-01

Learning specific hands-on skills such as cooking, car maintenance, and home repairs increasingly happens via instructional videos. The user experience with such videos is known to be improved by meta-information such as time-stamped annotations for the main steps involved. Generating such annotations automatically is challenging, and we describe here two relevant contributions. First, we construct and release a new dense video captioning dataset, Video Timeline Tags (ViTT), featuring a variety of instructional videos together with time-stamped annotations. Second, we explore...

10.18653/v1/2020.aacl-main.48 preprint EN 2020-01-01

Learning specific hands-on skills such as cooking, car maintenance, and home repairs increasingly happens via instructional videos. The user experience with such videos is known to be improved by meta-information such as time-stamped annotations for the main steps involved. Generating such annotations automatically is challenging, and we describe here two relevant contributions. First, we construct and release a new dense video captioning dataset, Video Timeline Tags (ViTT), featuring a variety of instructional videos together with time-stamped annotations. Second, we explore...

10.48550/arxiv.2011.11760 preprint EN other-oa arXiv (Cornell University) 2020-01-01

We show that several popular few-shot learning benchmarks can be solved with varying degrees of success without using support set Labels at Test-time (LT). To this end, we introduce a new baseline called Centroid Networks, a modification of Prototypical Networks in which the support labels are hidden from the method at test-time and have to be recovered through clustering. A benchmark that can be solved perfectly without LT does not require proper task adaptation and is therefore inadequate for evaluating few-shot methods. In practice, most cannot be solved without LT,...

10.48550/arxiv.1902.08605 preprint EN other-oa arXiv (Cornell University) 2019-01-01
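
A minimal sketch of the Centroid Networks idea described above, with plain K-means standing in for the clustering step and a hypothetical pretrained encoder supplied by the caller:

import torch

def centroid_network_predict(encoder, support_x, query_x, n_classes, n_iters=10):
    # Embed support and query examples with the (pretrained) encoder.
    z_s = encoder(support_x)                  # (n_support, d)
    z_q = encoder(query_x)                    # (n_query, d)

    # Recover "prototypes" by K-means on the support embeddings;
    # the support labels are never used, unlike Prototypical Networks.
    centroids = z_s[torch.randperm(len(z_s))[:n_classes]].clone()
    for _ in range(n_iters):
        assign = torch.cdist(z_s, centroids).argmin(dim=1)   # nearest centroid
        for k in range(n_classes):
            mask = assign == k
            if mask.any():
                centroids[k] = z_s[mask].mean(dim=0)

    # Queries are assigned to the nearest recovered centroid. The result is a
    # cluster index, not a true class label, since support labels were hidden.
    return torch.cdist(z_q, centroids).argmin(dim=1)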

Games generalize the single-objective optimization paradigm by introducing different objective functions for different players. Differentiable games often proceed by simultaneous or alternating gradient updates. In machine learning, games are gaining new importance through formulations like generative adversarial networks (GANs) and actor-critic systems. However, compared to single-objective optimization, game dynamics are more complex and less understood. In this paper, we analyze gradient-based methods with momentum on simple games. We...

10.48550/arxiv.1807.04740 preprint EN other-oa arXiv (Cornell University) 2018-01-01
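
The setting analyzed above can be reproduced on a toy bilinear game, min_x max_y x*y, with simultaneous or alternating heavy-ball momentum updates; the coefficients below are illustrative choices, not values from the paper.

def momentum_game(lr=0.1, beta=-0.3, steps=200, alternating=True):
    # Two players with opposed objectives on f(x, y) = x * y:
    # player x runs gradient descent, player y runs gradient ascent,
    # both with heavy-ball momentum of coefficient beta (possibly negative).
    x, y = 1.0, 1.0
    vx, vy = 0.0, 0.0
    for _ in range(steps):
        x_for_y = x                      # simultaneous: y's update sees the old x
        vx = beta * vx - lr * y          # player x: descent step on x * y
        x = x + vx
        if alternating:
            x_for_y = x                  # alternating: y reacts to the updated x
        vy = beta * vy + lr * x_for_y    # player y: ascent step on x * y
        y = y + vy
    return (x**2 + y**2) ** 0.5          # distance to the equilibrium (0, 0)

# Example: compare update schemes and momentum signs
# print(momentum_game(beta=-0.3, alternating=True), momentum_game(beta=0.3, alternating=False))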

Model-agnostic meta-learning (MAML) is a popular method for few-shot learning but assumes that we have access to the meta-training set. In practice, training on the meta-training set may not always be an option due to data privacy concerns, intellectual property issues, or merely lack of computing resources. In this paper, we consider the novel problem of repurposing pretrained MAML checkpoints to solve new few-shot classification tasks. Because of the potential distribution mismatch, the original adaptation steps may no longer be optimal. Therefore, we propose...

10.48550/arxiv.2103.09027 preprint EN other-oa arXiv (Cornell University) 2021-01-01
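
For reference, the vanilla MAML adaptation step that the paper argues may be suboptimal under distribution mismatch looks roughly as follows in PyTorch; the model and task tensors are assumed to be provided by the caller, and the paper's proposed modification is not shown here.

import copy
import torch
import torch.nn.functional as F

def maml_adapt(model, support_x, support_y, inner_lr=0.01, inner_steps=5):
    # Take a few inner-loop gradient steps on the new task's support set,
    # starting from the pretrained checkpoint (which is left untouched).
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        opt.zero_grad()
        loss = F.cross_entropy(adapted(support_x), support_y)
        loss.backward()
        opt.step()
    return adapted

# Example: classify the new task's queries with the adapted model
# adapted = maml_adapt(pretrained_model, support_x, support_y)
# query_logits = adapted(query_x)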

Labeling data is often expensive and time-consuming, especially for tasks such as object detection and instance segmentation, which require dense labeling of the image. While few-shot object detection is about training a model on novel (unseen) classes with little data, it still requires prior training on many labeled examples of base (seen) classes. On the other hand, self-supervised methods aim at learning representations from unlabeled data that transfer well to downstream tasks such as detection. Combining the two is a promising research direction. In this survey,...

10.48550/arxiv.2110.14711 preprint EN other-oa arXiv (Cornell University) 2021-01-01