PME: pruning-based multi-size embedding for recommender systems
DOI: 10.3389/fdata.2023.1195742
Publication Date: 2023-06-15T05:48:29Z
AUTHORS (6)
ABSTRACT
Embedding is widely used in recommendation models to learn feature representations. However, the traditional embedding technique that assigns a fixed size to all categorical features may be suboptimal for the following reasons. In the recommendation domain, the majority of features' embeddings can be trained with less capacity without impacting model performance; storing all embeddings at equal length therefore incurs unnecessary memory usage. Existing work that tries to allocate customized sizes for each feature usually either simply scales the embedding size with the feature's popularity or formulates this size allocation problem as an architecture selection problem. Unfortunately, most of these methods either suffer a large performance drop or incur significant extra time cost in searching for proper sizes. In this article, instead of formulating size allocation as an architecture selection problem, we approach it from a pruning perspective and propose the Pruning-based Multi-size Embedding (PME) framework. During the search phase, we prune the dimensions that have the least impact on model performance in each embedding to reduce its capacity. Then, we show that the customized size of each token can be obtained by transferring the capacity of its pruned embedding, with significantly reduced search cost. Experimental results validate that PME can efficiently find proper sizes and hence achieve strong model performance while significantly reducing the number of parameters in the embedding layer.
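The core idea in the abstract — prune the least important dimensions of each token's embedding, then read off a customized per-token size from what survives — can be sketched as below. This is a minimal illustration, not the paper's method: the function name, the fixed magnitude threshold, and the use of weight magnitude as the importance score are all assumptions (the paper prunes by impact on model performance, which requires the trained model).

```python
import numpy as np

def prune_embedding_sizes(table, threshold=0.1):
    """Illustrative per-token multi-size embedding via pruning.

    table: (num_tokens, dim) embedding matrix.
    Dimensions whose absolute weight falls below `threshold` are zeroed
    (magnitude is used here as a stand-in importance score); each token's
    customized embedding size is the number of surviving dimensions.
    """
    importance = np.abs(table)
    mask = importance >= threshold
    # Ensure at least one dimension survives for every token.
    for t in range(table.shape[0]):
        if not mask[t].any():
            mask[t, importance[t].argmax()] = True
    pruned = np.where(mask, table, 0.0)
    sizes = mask.sum(axis=1)  # customized size per token
    return pruned, sizes

# Toy example: a popular token keeps more dimensions than a rare one.
table = np.array([[0.5, 0.01, 0.3],
                  [0.05, 0.02, 0.01]])
pruned, sizes = prune_embedding_sizes(table, threshold=0.1)
print(sizes)  # per-token embedding sizes after pruning
```

In this toy run the first token retains two dimensions while the second is reduced to one, mirroring the paper's observation that most features can be served by smaller embeddings without storing every token at the full fixed length.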