Rewrite your prompt to match the strengths of GPT-4o, Claude, or Gemini with model-specific formatting tips.
Getting the best results from AI models isn't just about what you ask; it's about how you structure your request for the specific model you're using. GPT-4o, Claude 3.5, and Gemini 1.5 each have distinct strengths, formatting preferences, and response patterns. A prompt optimized for Claude, using XML tags and structured instructions, will produce better results than a generic prompt, even with the same core request. Our free Model-Specific Prompt Optimizer rewrites your prompt using the formatting and structural conventions each model responds to best.
Anthropic's Claude models have been trained to work especially well with XML-formatted prompts. Wrapping your task in `<task>` tags, your context in `<context>` tags, and your constraints in `<instructions>` tags gives Claude a clearer structure to process. This formatting is recommended in Anthropic's own prompt engineering guide and consistently produces better output for complex tasks.
OpenAI's GPT-4o responds particularly well to few-shot examples: showing the model what good output looks like before asking it to produce its own. It also responds well to explicit persona definitions and numbered step-by-step instructions. The model has been trained extensively on assistant-style interactions, so prompts that frame the request conversationally while staying specific about the output format tend to work well.
Google's Gemini models respond best to clean, simple prompts with explicit input/output specifications. Gemini can handle complex instructions, but it produces more reliable results with clear, plain-English direction than with elaborate prompt-engineering structures. For Gemini, clarity and simplicity of the ask often outperform elaborate formatting.
For Claude: Use XML tags to structure complex prompts. A system-level instruction in `<instructions>` tags, context in `<context>` tags, the actual task in `<task>` tags, and the desired output format in `<output_format>` tags gives Claude maximum clarity. Claude also responds well to being asked to "think step by step" for analytical tasks, as its extended thinking capability produces higher-quality reasoning when explicitly prompted.
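To make the structure concrete, here is a minimal sketch of how you might assemble a Claude-style XML prompt. The tag names follow the conventions above; the helper function, its parameters, and the sample task content are all invented for illustration.

```python
def build_claude_prompt(instructions: str, context: str,
                        task: str, output_format: str) -> str:
    """Wrap each prompt component in the XML tags Claude parses well."""
    return (
        f"<instructions>\n{instructions}\n</instructions>\n\n"
        f"<context>\n{context}\n</context>\n\n"
        f"<task>\n{task}\n</task>\n\n"
        f"<output_format>\n{output_format}\n</output_format>"
    )

# Example usage with made-up content:
prompt = build_claude_prompt(
    instructions="You are a careful financial analyst. Think step by step.",
    context="Q3 revenue was $4.2M, up 12% quarter over quarter.",
    task="Summarize the quarter's performance for the board.",
    output_format="Three bullet points, each under 20 words.",
)
print(prompt)
```

The point of the wrapper is simply that every component lands inside a matched pair of tags, so Claude can tell context apart from the task itself.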
For GPT-4o: Lead with a clear persona definition, then show 1-2 examples of ideal output format before making your request. GPT-4o is highly responsive to few-shot examples and uses them to calibrate both content and format. For creative tasks, specifying stylistic references ("in the style of a Bloomberg Businessweek feature") gives GPT-4o a rich training-data signal to draw on.
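As a sketch, a few-shot GPT-4o prompt in the OpenAI chat format might look like this: persona in the system message, one or two example input/output pairs as prior turns, then the real request. The helper, the headline persona, and the example pair are invented.

```python
def build_fewshot_messages(persona: str,
                           examples: list[tuple[str, str]],
                           request: str) -> list[dict]:
    """Build a chat-format message list: persona, worked examples, request."""
    messages = [{"role": "system", "content": persona}]
    for user_text, ideal_output in examples:
        # Each example pair is replayed as a prior user/assistant exchange,
        # which calibrates both the content and the format of the reply.
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": ideal_output})
    messages.append({"role": "user", "content": request})
    return messages

# Example usage with one made-up demonstration pair:
messages = build_fewshot_messages(
    persona="You are a headline editor who writes punchy, 8-word-max titles.",
    examples=[
        ("Article about rising coffee prices in Brazil",
         "Brazil's Coffee Crunch Hits Your Morning Cup"),
    ],
    request="Article about electric ferries launching in Norway",
)
```

The resulting list is what you would pass as the `messages` argument of a chat-completion request.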
For Gemini: Keep prompts clean and direct. Specify the output format explicitly at the end: "Present as a numbered list," "Format as a Markdown table," "Respond in under 200 words." Gemini's large context window means it handles long documents well, but its best results come from prompts that are direct about what you want rather than elaborate in structure.
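Following the same pattern, a Gemini prompt can be little more than plain-English direction plus the input, with the format spec as the final line. This sketch assumes a simple three-part layout; the function name and sample content are illustrative.

```python
def build_gemini_prompt(direction: str, document: str, format_spec: str) -> str:
    """Join direction, input, and an explicit trailing format instruction."""
    # No tags or examples: just a clear ask, the material, and the format,
    # with the format spec last so it is the final thing the model reads.
    return f"{direction}\n\n{document}\n\n{format_spec}"

# Example usage with placeholder content:
prompt = build_gemini_prompt(
    direction="Summarize the key risks in this contract excerpt.",
    document="[contract text goes here]",
    format_spec="Present as a numbered list. Respond in under 200 words.",
)
print(prompt)
```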
Get 3 free AI enhancements per day, no credit card required. Works inside ChatGPT, Claude, and Gemini.