Prompt Engineering
Definition
The practice of designing, testing, and iterating on the instructions given to language models to achieve reliable, high-quality outputs. In the agent context, prompt engineering encompasses: system prompt design (defining the agent's role, capabilities, and constraints), tool descriptions (explaining when and how to use each tool), few-shot examples (demonstrating expected behavior), and dynamic prompt assembly (constructing prompts from templates, retrieved context, and conversation history). Effective prompt engineering is empirical — it requires systematic testing against representative inputs, not just intuition.
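The dynamic prompt assembly mentioned above can be sketched as a small function that combines a role template, retrieved context, and recent conversation history. This is a minimal illustration, not a real library API; the names `SYSTEM_TEMPLATE` and `assemble_prompt` and the trimming heuristic are assumptions made for the example.

```python
# Hypothetical sketch of dynamic prompt assembly: the final prompt is
# built from a system-prompt template, retrieved context snippets, and
# the most recent conversation turns.

SYSTEM_TEMPLATE = """You are {role}.
Capabilities: {capabilities}
Constraints: {constraints}"""

def assemble_prompt(role, capabilities, constraints,
                    retrieved_context, history, max_history_turns=6):
    """Construct the full prompt from template, context, and history."""
    system = SYSTEM_TEMPLATE.format(
        role=role,
        capabilities=", ".join(capabilities),
        constraints="; ".join(constraints),
    )
    context_block = "\n".join(f"[context] {c}" for c in retrieved_context)
    # Keep only the most recent turns so the prompt stays focused.
    recent = history[-max_history_turns:]
    history_block = "\n".join(f"{speaker}: {text}" for speaker, text in recent)
    # Skip empty sections (e.g. no retrieved context on the first turn).
    return "\n\n".join(p for p in (system, context_block, history_block) if p)
```

Because assembly is just a function of its inputs, it can be unit-tested against representative inputs, which supports the empirical testing loop described above.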
Builder Context
For agents, prompt engineering is primarily about tool selection accuracy and error recovery behavior. The most impactful improvements come from: (1) clear, specific tool descriptions with examples of when to use and when NOT to use each tool; (2) explicit failure-handling instructions ('if the API returns an error, try X before giving up'); (3) output format specifications with examples. Avoid over-prompting: agents with 5,000-word system prompts often perform worse than agents with focused 500-word instructions, because long prompts bury the instructions that matter. Test prompts against adversarial inputs (ambiguous requests, multi-step tasks, conflicting instructions), not just happy-path examples.
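The tool-description pattern in point (1) can be illustrated with a schema that states when to use the tool, when NOT to use it, and what to do on failure, plus a small checker that verifies those elements are present. This is a hedged sketch: the tool `search_orders`, the companion tool `request_refund`, and the helper `lint_tool_description` are hypothetical names invented for the example, not part of any specific agent framework.

```python
# Hypothetical tool definition following the description pattern above:
# state when to use the tool, when NOT to use it, and how to recover
# from failures, all inside the description the model reads.

SEARCH_TOOL = {
    "name": "search_orders",
    "description": (
        "Look up a customer's orders by order ID or email. "
        "Use this when the user asks about a specific order's status. "
        "Do NOT use this for refund requests (use request_refund instead). "
        "If the lookup returns an error, retry once with the email address "
        "before telling the user the order could not be found."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string", "description": "Order ID string"},
            "email": {"type": "string", "description": "Customer email"},
        },
        "required": [],
    },
}

def lint_tool_description(tool):
    """Flag whether a description covers the when / when-not / failure elements."""
    desc = tool["description"].lower()
    return {
        "has_when": "use this when" in desc,
        "has_when_not": "do not use" in desc,
        "has_failure_handling": "error" in desc or "fail" in desc,
    }
```

A lint check like this can run over every tool in the agent's registry as part of the systematic testing the entry recommends, catching descriptions that omit failure-handling or usage boundaries before they reach production.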