Benchmark of four popular virtual screening programs: construction of the active/decoy dataset remains a major determinant of measured performance
Benchmarking
Decoy
Docking
DOI:
10.1186/s13321-016-0167-x
Publication Date:
2016-10-17
AUTHORS (4)
ABSTRACT
In structure-based virtual screening, the choice of the docking program is essential for successful hit identification. Benchmarks are meant to help in guiding this choice, especially when undertaken on a large variety of protein targets. Here, the performances of four popular virtual screening programs, Gold, Glide, Surflex and FlexX, are compared using the Directory of Useful Decoys-Enhanced database (DUD-E), which includes 102 targets with an average of 224 ligands per target and 50 decoys per ligand, generated to avoid biases in the benchmarking. Then, the relationship between these performances and the properties of the targets or of the small molecules was investigated.

The comparison was based on two metrics, with three different parameters each. The BEDROC scores with α = 80.5 indicated that, on the overall database, Glide succeeded (score > 0.5) for 30 targets, Gold for 27, FlexX for 14 and Surflex for 11. The performances depended neither on the hydrophobicity nor on the openness of the protein cavities, nor on the families to which the proteins belong. However, despite the care taken in the construction of DUD-E, differences that remain between the actives and the decoys likely explain the successes of FlexX. Moreover, the similarity between the actives of a target and the ligand of its crystal structure seems to be at the basis of the good performances of Glide. When all targets with significant biases are removed from the benchmarking, a subset of 47 targets remains, for which Glide succeeded for only 5 targets, Gold for 4, and FlexX and Surflex for 2.

The dramatic drop in the performances of the programs shows that we should beware of benchmarks, because good results may be obtained for the wrong reasons. Therefore, benchmarking would hardly provide guidelines for virtual screening experiments, even though the overall tendency is maintained, i.e., Glide and Gold display better performances than FlexX and Surflex. We recommend always using several programs and combining their results.

Graphical Abstract: summary of the results obtained by the four programs on the DUD-E database. The percentage of successful results, i.e., BEDROC(α = 80.5) > 0.5, is shown in blue when the entire database is considered, and in red when the biased chemical libraries are removed.
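The success criterion used throughout, BEDROC(α = 80.5) > 0.5, is the Boltzmann-enhanced discrimination of ROC metric of Truchon and Bayly (J Chem Inf Model, 2007), which weights early-ranked actives exponentially; with α = 80.5, roughly 80 % of the score comes from the top 2 % of the ranked library. Below is a minimal Python sketch of the metric; the ranks and library size in the usage line are invented for illustration, and RDKit ships an equivalent implementation as rdkit.ML.Scoring.Scoring.CalcBEDROC.

```python
import math

def bedroc(active_ranks, n_total, alpha=80.5):
    """BEDROC score (Truchon & Bayly, J Chem Inf Model 47:488, 2007).

    active_ranks -- 1-based ranks of the actives in the score-sorted library
    n_total      -- total number of compounds screened (actives + decoys)
    alpha        -- early-recognition weight; 80.5 emphasises the top ~2 %
    """
    n_actives = len(active_ranks)
    ra = n_actives / n_total  # ratio of actives in the library
    # RIE: exponentially decaying weights summed over the actives' ranks,
    # normalised by the expectation of that sum for a random ordering
    weighted_sum = sum(math.exp(-alpha * r / n_total) for r in active_ranks)
    random_sum = ra * (1.0 - math.exp(-alpha)) / (math.exp(alpha / n_total) - 1.0)
    rie = weighted_sum / random_sum
    # Rescale RIE onto [0, 1], so that 1 means all actives ranked first
    return (rie * ra * math.sinh(alpha / 2.0)
            / (math.cosh(alpha / 2.0) - math.cosh(alpha / 2.0 - alpha * ra))
            + 1.0 / (1.0 - math.exp(alpha * (1.0 - ra))))

# Toy usage: 10 actives among 510 compounds, mostly ranked near the top
print(bedroc([1, 2, 3, 5, 8, 13, 40, 120, 300, 480], 510))  # success if > 0.5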
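The paper's central caveat, residual differences between actives and decoys, can be probed directly: DUD-E decoys are matched to the actives on simple physicochemical properties, so any remaining gap in those descriptors is a candidate bias. The sketch below, assuming RDKit and hypothetical input files actives.smi and decoys.smi (one SMILES per line), compares descriptor distributions for one target.

```python
from statistics import mean, stdev
from rdkit import Chem
from rdkit.Chem import Descriptors

DESCRIPTORS = {  # simple 2D properties of the kind DUD-E matches decoys on
    "MW": Descriptors.MolWt,
    "logP": Descriptors.MolLogP,
    "HBD": Descriptors.NumHDonors,
    "HBA": Descriptors.NumHAcceptors,
}

def profile(smiles_file):
    """Mean and standard deviation of each descriptor over a SMILES file."""
    mols = []
    with open(smiles_file) as fh:
        for line in fh:
            if line.strip():
                mol = Chem.MolFromSmiles(line.split()[0])
                if mol is not None:
                    mols.append(mol)
    return {name: (mean(map(f, mols)), stdev(map(f, mols)))
            for name, f in DESCRIPTORS.items()}

# Hypothetical file names for the actives and decoys of one target
actives, decoys = profile("actives.smi"), profile("decoys.smi")
for name in DESCRIPTORS:
    (ma, sa), (md, sd) = actives[name], decoys[name]
    print(f"{name:5s} actives {ma:7.2f} +/- {sa:5.2f} | decoys {md:7.2f} +/- {sd:5.2f}")
```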
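The closing recommendation, to use several programs and combine their results, leaves the combination scheme open; averaging per-program ranks is one of the simplest consensus options and is used here purely as an illustration, not as the paper's protocol. Program names, compound identifiers and scores below are invented.

```python
def consensus_by_average_rank(program_scores):
    """Merge hit lists from several docking programs by average rank.

    program_scores maps program name -> {compound_id: score}; scores are
    assumed 'higher is better' (flip the sign first for programs such as
    Glide, whose native scores are better when more negative).
    """
    ranks = {}
    for scores in program_scores.values():
        ordered = sorted(scores, key=scores.get, reverse=True)
        for rank, cid in enumerate(ordered, start=1):
            ranks.setdefault(cid, []).append(rank)
    # Lower average rank = scored well consistently across programs
    return sorted(ranks, key=lambda cid: sum(ranks[cid]) / len(ranks[cid]))

# Invented scores from two programs for three compounds
hits = consensus_by_average_rank({
    "programA": {"cpd1": 9.1, "cpd2": 7.4, "cpd3": 6.0},
    "programB": {"cpd1": 61.8, "cpd2": 55.2, "cpd3": 40.1},
})
print(hits)  # -> ['cpd1', 'cpd2', 'cpd3']
```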
SUPPLEMENTAL MATERIAL
Coming soon ....
REFERENCES (37)
CITATIONS (75)