    Prompt Engineering Best Practices You Should Know

    May 9th, 2025

    Introduction

    Look around yourself.

    We are swimming in a world of data and AI. From students using ChatGPT to complete their assignments to professionals using AI for market research, content creation, or even debugging code, everyone is leveraging the power of large language models (LLMs). Mr. Smith isn’t Googling his tax questions anymore; he’s asking an AI assistant.

    But as widespread as AI has become, one thing remains clear: not every result an AI tool, such as an AI agent or a chatbot, produces is equally useful; the quality depends entirely on how you ask. This is where prompt engineering steps in. Prompt engineering is an essential skill for improving the quality of results generated by LLMs. In this blog, we’ll explore prompt engineering best practices. So, whether you’re building an AI agent or simply trying to get a better response from your chatbot, you’ll know precisely how to speak AI’s language.


    What is Prompt Engineering?

    Prompt engineering is the art of writing instructions for AI tools. Through prompts, AI software knows what task to perform, whether writing an email, scheduling a meeting, or drafting a blog post. Without clear instructions, AI applications won’t know what to do.

    As Anthropic’s CEO stated:

    “It sounds simple, but 30 minutes with a prompt engineer can often make an application work when it wasn’t before.”

    Dario Amodei, CEO and Co-founder of Anthropic.

    Best Practices for Prompt Engineering

    Prompt engineering helps you get more accurate, helpful responses from AI models. This section covers practical tips for writing better prompts. The focus is on avoiding common mistakes and making AI work more effectively for your use case.

    Specify an Audience

    To get the desired output, specify whom the AI is responding to. Naming an audience gives your prompt structure and direction. It ensures the output aligns with your goals, whether you’re crafting emails, summarizing documents, or handling customer inquiries.

    Prompt 1: “Summarize this document.”

    Prompt 2: “Summarize this marketing report for the Head of Sales in 3 bullet points, highlighting key revenue trends.”

    Prompt 2 works better because it specifies the target audience, output format, and focus area (revenue trends).
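    One way to make sure the audience, format, and focus never get dropped is to build prompts from a small template helper. The function name and parameters below are illustrative, not part of any library:

    ```python
    def build_summary_prompt(audience: str, fmt: str, focus: str) -> str:
        """Compose a summarization prompt that always names an audience,
        an output format, and a focus area (illustrative helper)."""
        return (
            f"Summarize this marketing report for the {audience} "
            f"in {fmt}, highlighting {focus}."
        )

    prompt = build_summary_prompt(
        "Head of Sales", "3 bullet points", "key revenue trends"
    )
    ```

    Because the template forces every caller to supply all three details, the vague "Summarize this document" prompt simply can't be produced.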

    Be Clear and Specific

    AI tools perform best when instructions are unambiguous. Avoid generalities and overly wordy instructions. Instead, use precise language, define all terms, and state what you want the LLM to do. Business users should treat a prompt like a mini brief, not a casual chat.

    Prompt 1: “Make this sound better.”

    Prompt 2: “Rewrite this message in a persuasive tone for an enterprise buyer.”

    Prompt 2 works better because it specifies the tone, target audience, and purpose.

    Set a Persona

    One of the most effective ways to guide AI behavior is by assigning it a role or persona. This helps the model tailor tone, vocabulary, and response style based on the intended context, just like briefing a new hire.

    Prompt Example: “You are a customer support agent. Respond to the following complaint in a calm and helpful tone.”
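    In practice, a persona is often supplied as a separate "system" message alongside the user's request, a convention most chat-style LLM APIs share. The helper below is a minimal sketch of that pattern; no specific vendor SDK is assumed:

    ```python
    def with_persona(persona: str, user_message: str) -> list[dict]:
        """Prepend a persona as a system message, following the widely
        used system/user chat-message convention (illustrative sketch)."""
        return [
            {"role": "system", "content": f"You are {persona}."},
            {"role": "user", "content": user_message},
        ]

    messages = with_persona(
        "a customer support agent who responds in a calm and helpful tone",
        "Respond to the following complaint: my order arrived damaged.",
    )
    ```

    Keeping the persona in the system message means it applies consistently across every turn of the conversation, rather than having to be repeated in each user prompt.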

    Comprehend the Task at Hand

    Clearly understanding the task is the first step towards writing effective AI prompts. Study a variety of data points to understand the scope of the task before crafting prompts. This is because LLMs follow instructions literally. If your task is underspecified or misaligned, the output will be too.

    Good Prompt Practice:

    Task: Summarizing sales performance

    Prompt: “Summarize the monthly sales performance of all regional managers. Highlight any region where sales dropped by more than 15% compared to the previous month.”

    Why it works:

    • Defines the type of summary needed
    • Clarifies what to analyze (regional sales)
    • Provides a threshold (15%) to trigger additional attention

    Remove Ambiguity

    Be exact. Avoid vague words, phrases, and terminologies, and strip away assumptions.

    Prompt 1: “Extract relevant data from this form.”

    What’s wrong?

    • What does “relevant” mean?
    • What type of form is it?
    • What fields should we extract?
    • What format should the output be in?

    Prompt 2: “Extract the following fields from the purchase order form: Customer Name, Order ID, Product, Quantity, and Total Price. Return the result in JSON format with field names as keys.”

    Why this works:

    • Specific fields are listed.
    • Document type is identified.
    • Output format is clearly mentioned.

    Tell AI What Not to Do

    Just like humans, AI benefits from clear boundaries. Telling the model what not to do helps avoid irrelevant output, especially in high-stakes business tasks.

    Prompt 1: “Summarize this report.”

    Prompt 2: “Summarize the attached financial report in under 200 words. Do not include introductory context or historical comparisons. Focus only on Q4 revenue figures and cost breakdowns.”

    Prompt 2 works better because it specifies the word count, what to exclude, and the focus area, thus generating better results.

    Break Down Complex Tasks

    Break down the task into smaller, logical steps (step-by-step instructions) and make sure to include all the necessary information. This technique is closely related to chain-of-thought prompting, which encourages the model to work through a problem one step at a time. Avoid overloading the prompt, i.e., trying to do too much in a single prompt.

    Overloaded Prompt: “Read the invoice, clean the data, summarize it, provide a visual chart of monthly spending, and also identify any anomalies you see.”

    What’s wrong?

    • It’s trying to do too many things.
    • No clear separation between tasks.
    • It lacks formatting or structure.

    Step-by-Step Prompt: “Perform the following tasks in order:

    1. Extract the following fields from the invoice: Date, Vendor, Amount, and Category.
    2. Summarize total monthly spending, grouped by category.
    3. Highlight any transaction where the amount exceeds $10,000.
    4. Return the result as a JSON object.”

    Why this works:

    • Broken into 4 manageable steps.
    • Output expectations are defined.
    • Easy for the model (or pipeline) to follow and debug.
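    The four steps above are concrete enough that plain code can mirror them, which is a useful sanity check on whether a prompt is well decomposed. The sketch below runs the same pipeline over illustrative invoice rows (the data and helper are assumptions, not from the article):

    ```python
    import json
    from collections import defaultdict

    # Illustrative invoice rows with the fields named in step 1:
    invoices = [
        {"Date": "2025-01-05", "Vendor": "Acme",   "Amount": 12000.0, "Category": "Hardware"},
        {"Date": "2025-01-12", "Vendor": "Globex", "Amount": 450.0,   "Category": "Software"},
        {"Date": "2025-02-03", "Vendor": "Acme",   "Amount": 900.0,   "Category": "Hardware"},
    ]

    def analyze(rows: list[dict]) -> str:
        """Mirror the four prompt steps: fields are already extracted
        (step 1), summarize monthly spend by category (step 2), flag
        transactions over $10,000 (step 3), return JSON (step 4)."""
        monthly = defaultdict(float)
        for r in rows:
            month = r["Date"][:7]  # e.g. "2025-01"
            monthly[f"{month}/{r['Category']}"] += r["Amount"]
        flagged = [r for r in rows if r["Amount"] > 10_000]
        return json.dumps({"monthly_spend": monthly, "flagged": flagged})

    report = json.loads(analyze(invoices))
    ```

    If a step can't be expressed as a discrete operation like this, that is usually a sign the prompt instruction for it is still too vague.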

    Structure Prompts by Priority

    List the desired actions first, followed by exceptions and edge cases. Then, add instructions on what to avoid.

    Example:

    1. You are a business analyst assistant who reviews monthly sales data in CSV format.
    2. Start by calculating total revenue, number of transactions, and average order value.
    3. Then, group sales by “product category” and compute total sales and revenue for each category.
    4. Highlight any product category where monthly revenue dropped by more than 20% compared to the previous month.
    5. If any entries have missing Product ID or Revenue values, flag those rows separately under “Data Issues.”
    6. Do not include any predictions or forecasting; only analyze historical data.
    7. Return the output in a structured JSON format with clear keys and sub-sections for summary, breakdown, and issues.
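    A priority-ordered prompt like the one above can also be assembled mechanically, which guarantees actions always come before edge cases and prohibitions. The helper below is an illustrative sketch, not an established API:

    ```python
    def prioritized_prompt(actions: list[str],
                           edge_cases: list[str],
                           prohibitions: list[str]) -> str:
        """Assemble a numbered prompt in priority order: desired actions
        first, then edge cases, then what to avoid (illustrative helper)."""
        sections = actions + edge_cases + prohibitions
        return "\n".join(f"{i}. {line}" for i, line in enumerate(sections, start=1))

    prompt = prioritized_prompt(
        actions=[
            "You are a business analyst assistant who reviews monthly sales data in CSV format.",
            "Start by calculating total revenue, number of transactions, and average order value.",
        ],
        edge_cases=[
            'If any entries have missing Product ID or Revenue values, flag those rows under "Data Issues".',
        ],
        prohibitions=[
            "Do not include any predictions or forecasting; only analyze historical data.",
        ],
    )
    ```

    Separating the three tiers at the call site keeps the ordering rule enforced even as instructions are added or removed later.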

    Specify the Output Format

    Give clear instructions on the output format. For example, specify in your prompt that the output should be CSV, and name the delimiter. Without format instructions, AI might return data in unexpected ways: wrapped in quotes, as plain text, or in an incorrect structure.

    Prompt example: “Return only the delimited output with field headers and values. Do not enclose the output in parentheses or quotes.”
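    The payoff of a strict format instruction is that the reply can be parsed by standard tooling. Below is a minimal sketch that parses a pipe-delimited reply with Python's `csv` module; the delimiter choice and sample reply are illustrative assumptions:

    ```python
    import csv
    import io

    def parse_delimited(reply: str, delimiter: str = "|") -> list[dict]:
        """Parse a delimited model reply into records, failing fast
        if the model ignored the requested header row (illustrative)."""
        reader = csv.DictReader(io.StringIO(reply.strip()), delimiter=delimiter)
        if reader.fieldnames is None:
            raise ValueError("Reply has no header row")
        return list(reader)

    # Illustrative reply in the requested shape (headers first, pipe-delimited):
    reply = "Order ID|Product|Quantity\nPO-1042|Widget|12\nPO-1043|Gadget|3"
    rows = parse_delimited(reply)
    ```

    If the model wraps the output in quotes or prose despite the instruction, parsing fails immediately, which is far easier to catch than silently mis-structured data downstream.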

    Final Word

    Prompt engineering is more than just writing clever instructions. It’s about understanding how large language models interpret context, structure, and logic. A well-crafted prompt significantly increases the chances of receiving the correct answer from the language model. As AI’s impact continues to grow in business operations, the ability to write precise and effective prompts has become a critical skill across roles.

    With platforms like Astera, prompt engineering becomes both intuitive and powerful. Users can dynamically integrate prompts into low-code workflows, conditionally execute chains of instructions, and even fine-tune responses using functions and enterprise data.

    Learn more about Astera AI Agent Builder.

    Happy prompting!

    Authors:

    • Tooba Tariq