Revisiting Prompt Engineering via Declarative Crowdsourcing
Crowdsourcing
DOI:
10.48550/arxiv.2308.03854
Publication Date:
2023-01-01
AUTHORS (5)
ABSTRACT
Large language models (LLMs) are incredibly powerful at comprehending and generating data in the form of text, but are brittle and error-prone. There has been an advent of toolkits and recipes centered around so-called prompt engineering, the process of asking an LLM to do something via a series of prompts. However, for LLM-powered data processing workflows in particular, optimizing for quality while keeping cost bounded is a tedious, manual process. We put forth a vision for declarative prompt engineering. We view LLMs like crowd workers and leverage ideas from the crowdsourcing literature, including leveraging multiple prompting strategies, ensuring internal consistency, and exploring hybrid LLM/non-LLM approaches, to make prompt engineering a more principled process. Preliminary case studies on sorting, entity resolution, and imputation demonstrate the promise of our approach.
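To make the crowdsourcing analogy concrete: one of the abstract's ideas is to treat each prompting strategy like an independent crowd worker and aggregate their answers. The sketch below is not the paper's implementation; it is a minimal illustration, assuming a hypothetical call_llm helper standing in for whatever LLM client is used, of majority voting over three prompt phrasings for an entity-resolution question.

    # Minimal sketch (assumed, not from the paper): crowdsourcing-style
    # aggregation over multiple prompting strategies for entity resolution.
    from collections import Counter

    def call_llm(prompt: str) -> str:
        """Hypothetical LLM call; replace with an actual model client."""
        raise NotImplementedError("plug in an LLM API here")

    # Three prompt strategies phrasing the same entity-resolution question,
    # analogous to assigning the same task to three crowd workers.
    STRATEGIES = [
        "Do these two records refer to the same real-world entity? "
        "Answer yes or no.\nRecord A: {a}\nRecord B: {b}",
        "Record A: {a}\nRecord B: {b}\n"
        "Are A and B duplicates of each other? Reply with yes or no.",
        "You are a careful data cleaner. Compare the records below and say "
        "'yes' if they describe the same entity, otherwise 'no'.\n"
        "A: {a}\nB: {b}",
    ]

    def resolve_pair(record_a: str, record_b: str) -> bool:
        """Ask every strategy, then aggregate the answers like crowd votes."""
        votes = []
        for template in STRATEGIES:
            answer = call_llm(template.format(a=record_a, b=record_b))
            votes.append(answer.strip().lower().startswith("yes"))
        # Majority vote over the ensemble of prompt strategies.
        tally = Counter(votes)
        return tally[True] > tally[False]

Each extra strategy adds LLM calls (cost) in exchange for more reliable answers (quality), which is exactly the cost/quality trade-off the abstract argues should be optimized declaratively rather than by hand.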