Panoptic Scene Graph Generation with Semantics-Prototype Learning
DOI:
10.1609/aaai.v38i4.28098
Publication Date:
2024-03-25T09:28:34Z
AUTHORS (7)
ABSTRACT
Panoptic Scene Graph Generation (PSG) parses objects and predicts their relationships (predicates) to connect human language with visual scenes. However, differing language preferences among annotators and semantic overlaps between predicates lead to biased predicate annotations in the dataset, i.e. different predicates for the same object pairs. Biased annotations make PSG models struggle to construct a clear decision plane among predicates, which greatly hinders the real-world application of PSG models. To address the intrinsic bias above, we propose a novel framework named ADTrans to adaptively transfer biased predicate annotations into informative and unified ones. To promise consistency and accuracy during the transfer process, we observe the invariance degree of representations in each predicate class and learn unbiased prototypes of predicates with different intensities. Meanwhile, we continuously measure the distribution changes between each representation and its prototype, and constantly screen potentially biased data. Finally, with the unbiased predicate-prototype representation embedding space, biased annotations are easily identified. Experiments show that ADTrans significantly improves the performance of benchmark models, achieves new state-of-the-art performance, and shows great generalization and effectiveness on multiple datasets. Our code is released at https://github.com/lili0415/PSG-biased-annotation.
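The prototype-based screening described in the abstract can be illustrated with a minimal sketch: build one prototype per predicate class from the relation embeddings, then flag samples whose embedding lies closer to another class's prototype than to the prototype of its annotated label. This is a simplified, hypothetical illustration using cosine similarity and mean prototypes, not the paper's actual ADTrans implementation; the function name and signature are assumptions for demonstration.

```python
import numpy as np

def screen_biased_annotations(embeddings, labels, num_classes):
    """Flag samples whose relation embedding is more similar to another
    predicate's prototype than to the prototype of its annotated class.

    This is a toy sketch of prototype-based bias screening, not the
    paper's method: prototypes are plain class means and similarity is
    cosine similarity.
    """
    # L2-normalize embeddings so dot products become cosine similarities
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    # Prototype of each predicate class = mean of its member embeddings
    prototypes = np.stack(
        [emb[labels == c].mean(axis=0) for c in range(num_classes)]
    )
    prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = emb @ prototypes.T          # (N, C) cosine similarities
    nearest = sims.argmax(axis=1)      # most similar prototype per sample
    flagged = nearest != labels        # mismatch => potentially biased label
    return flagged, nearest
```

On synthetic data with two well-separated clusters, a sample deliberately given the wrong predicate label is the one flagged, and `nearest` suggests the class it should be transferred to.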