Prompt Engineering

Prodigy’s prompt engineering workflows help you
find and evaluate the best Large Language Model
prompts for any custom use case.

```
prodigy ab.llm.tournament prompt_eval ./data ./prompts ./configs

========= Current winner: [prompt 1 + GPT-3] =========

COMPARISON                                 PROB  TRIALS
[prompt 1 + GPT-3] > [prompt 1 + GPT-4]    0.50       0
[prompt 1 + GPT-3] > [prompt 1 + GPT-4]    0.71       1
```

Find the best prompts, quickly and empirically

Prodigy’s novel prompt tournament workflow gives you a reliable, quantifiable way to evaluate your prompts on real-world data and determine which prompts perform best for your specific use case. Collaborate on prompt engineering and see the results in real time.
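The tournament reads its example texts from Prodigy’s standard JSONL input format: one JSON object per line with a "text" field. Below is a minimal sketch of preparing such a file with Explosion’s srsly library; the file path and example texts are illustrative, not part of the demo above.

```python
# Prepare example inputs for a prompt tournament in Prodigy's standard
# JSONL input format: one JSON object per line with a "text" field.
# The path and texts below are purely illustrative.
import srsly

examples = [
    {"text": "Refund request: my order arrived damaged."},
    {"text": "How do I change the email address on my account?"},
    {"text": "The app crashes every time I open the settings page."},
]

srsly.write_jsonl("./data/inputs.jsonl", examples)
```

With the inputs in place, you point the recipe at your data, prompt and config directories, as in the demo above, and Prodigy updates the estimated win probabilities as you annotate more trials.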


Build fully custom solutions with your model in the loop

Prodigy lets you build entirely custom workflows and interfaces with any model in the loop, transforming the model’s responses into consistent, structured data. Develop your prompts in Prodigy’s intuitive web interface and query the model live as you make changes to see instant results.
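As a sketch of what such a workflow can look like, the custom recipe below queries a model for every incoming example and attaches the response as structured data on the annotation task. This is an illustration rather than Prodigy’s built-in LLM integration: `call_model` is a hypothetical stand-in for whatever API or local model you use, and the `get_stream` loader assumes Prodigy v1.12+.

```python
import prodigy
from prodigy.components.stream import get_stream


def call_model(prompt: str) -> str:
    """Hypothetical stand-in for your LLM call (hosted API, local model, ...)."""
    return f"(model response for: {prompt[:40]})"


@prodigy.recipe(
    "llm.custom",
    dataset=("Dataset to save annotations to", "positional", None, str),
    source=("Path to a JSONL file of input texts", "positional", None, str),
)
def llm_custom(dataset: str, source: str):
    stream = get_stream(source)

    def with_model_response(stream):
        for eg in stream:
            # Query the model live and store its response on the task, so
            # annotators review the model output alongside the input text.
            eg["model_response"] = call_model(eg["text"])
            yield eg

    return {
        "dataset": dataset,                     # where annotations are saved
        "stream": with_model_response(stream),  # examples with model output
        "view_id": "text",                      # swap in a custom interface as needed
    }
```

Saved as e.g. recipe.py, this would be started with prodigy llm.custom my_dataset ./data/inputs.jsonl -F recipe.py.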

Documentation

  • Downloadable developer tool and library
  • Create, review and train from your annotations
  • Runs entirely on your own machines
  • Powerful built-in workflows

Pricing

  • Lifetime license, pay once, use forever
  • Flexible options for individuals and teams
  • Full privacy, no data leaves your servers
  • Download and install like any other library