Logic-Q: Improving Deep Reinforcement Learning-based Quantitative Trading via Program Sketch-based Tuning
DOI:
10.1609/aaai.v39i17.34045
Publication Date:
2025-04-11T12:50:36Z
AUTHORS (8)
ABSTRACT
Deep reinforcement learning (DRL) has revolutionized quantitative trading (Q-trading) by achieving decent performance without significant human expert knowledge. Despite its achievements, we observe that current state-of-the-art DRL models are still ineffective at identifying market trends, causing them to miss good trading opportunities or suffer large drawdowns when encountering market crashes. To address this limitation, a natural approach is to incorporate human expert knowledge about market trends. However, such knowledge is abstract and hard to quantify. To effectively leverage this knowledge, in this paper we propose a universal logic-guided deep reinforcement learning framework for Q-trading, called Logic-Q. In particular, Logic-Q adopts the program-synthesis-by-sketching paradigm and introduces a logic-guided model design that leverages a lightweight, plug-and-play trend-aware program sketch to determine the market trend and correspondingly adjusts the trading policy in a post-hoc manner. Extensive evaluations on two popular quantitative trading tasks demonstrate that Logic-Q can significantly improve the performance of previous state-of-the-art trading strategies.
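The abstract describes Logic-Q's core mechanism: a lightweight, plug-and-play trend-aware program sketch that classifies the current market trend and adjusts the trading policy's decisions post hoc. The paper's actual sketch and policy code are not reproduced on this page, so the snippet below is only a minimal illustrative sketch of that idea under simple assumptions; the class `TrendSketch`, the function `adjust_action`, the moving-average trend rule, and the scaling factors are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np


class TrendSketch:
    """Hypothetical plug-and-play trend detector (not the authors' code).

    Classifies the recent market regime from a window of closing prices
    using a simple moving-average crossover rule as a stand-in for the
    paper's program sketch."""

    def __init__(self, short_window: int = 5, long_window: int = 20):
        self.short_window = short_window
        self.long_window = long_window

    def detect(self, prices: np.ndarray) -> str:
        """Return 'up', 'down', or 'sideways' for the given price history."""
        short_ma = prices[-self.short_window:].mean()
        long_ma = prices[-self.long_window:].mean()
        if short_ma > 1.01 * long_ma:
            return "up"
        if short_ma < 0.99 * long_ma:
            return "down"
        return "sideways"


def adjust_action(action: float, trend: str) -> float:
    """Post-hoc adjustment of a DRL policy's raw action in [-1, 1]
    (negative = sell, positive = buy), conditioned on the detected trend."""
    if trend == "down":
        return min(action, 0.0)   # block new long exposure in a downtrend
    if trend == "sideways":
        return 0.5 * action       # trade more conservatively without a clear trend
    return action                 # keep the policy's action in an uptrend


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prices = np.cumsum(rng.normal(0.1, 1.0, size=60)) + 100.0  # synthetic price path
    raw_action = 0.8              # e.g., output of an already-trained DRL policy
    trend = TrendSketch().detect(prices)
    print(trend, adjust_action(raw_action, trend))
```

The point illustrated is the post-hoc, plug-and-play structure highlighted in the abstract: the trend-aware sketch wraps the output of an already-trained policy rather than modifying its training procedure.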