Five essential tips for creating impactful prompts

Published on 24 June 2025 at 10:54
Updated on 24 June 2025 at 10:54

Prompting is a crucial skill for interacting with advanced language models. Designing impactful prompts rests on a few fundamental principles of precision and creativity: a well-constructed prompt can transform the quality of the generated responses and, with it, the relevance of the results. With recent advances in artificial intelligence, mastering this technique has become essential for anyone looking to get the most out of models like Gemini or other LLMs. The five strategies below make this practice accessible through a methodical, step-by-step approach.

Configuring the three inference variables

Properly configuring the three inference variables is an essential starting point for optimizing LLM results. These variables govern a model's creativity, accuracy, and tendency to hallucinate. The max tokens variable caps the number of tokens generated. Adjusting this value to your use case reduces costs and improves the relevance of responses while avoiding extraneous text at the end; for example, a value below 10 tokens is appropriate for classification tasks.

The temperature controls the level of determinism in the answers. A temperature close to 0 produces predictable, reliable results; a higher value encourages creativity but increases the risk of hallucinations. The top-K and top-P parameters give finer control over word selection: top-K restricts sampling to the K most likely tokens, while top-P (nucleus sampling) keeps only the smallest set of tokens whose cumulative probability reaches the threshold P.
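To make this concrete, here is a minimal sketch using the google-generativeai Python SDK; the API key, the "gemini-1.5-flash" model name and the parameter values are placeholders chosen for a short classification task, not recommendations from the article.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")             # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")   # placeholder model name

config = genai.GenerationConfig(
    max_output_tokens=10,  # a tight cap suits a classification task
    temperature=0.0,       # near-deterministic answers
    top_k=40,              # sample only among the 40 most likely tokens
    top_p=0.95,            # nucleus-sampling threshold
)

response = model.generate_content(
    "Classify the sentiment of this review as POSITIVE, NEGATIVE or NEUTRAL: "
    "'The battery life is disappointing.'",
    generation_config=config,
)
print(response.text)
```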

Using few-shot prompting

Few-shot prompting is an effective alternative to fine-tuning. The method shows the model the expected output through examples. Providing at least one example is vital, but several examples give the model a deeper understanding and more reliable results; experts recommend between three and five representative examples. These examples must be error-free, as a single mistake can disrupt the model's interpretation.

For instance, a prompt to transform a pizza order into JSON could include several output cases, illustrating the expected structure. A good example is: “I want a small pizza with cheese, tomato sauce, and pepperoni.” followed by the response in JSON format, providing a clear reference.
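A minimal sketch of such a few-shot prompt, built as a plain Python string; the example orders and the JSON keys are illustrative choices, not a schema taken from the article.

```python
# Few-shot prompt: three worked examples show the model the exact JSON
# structure expected before the order we actually want converted.
FEW_SHOT_PROMPT = """Convert pizza orders to JSON with the keys "size", "sauce" and "toppings".

Order: I want a small pizza with cheese, tomato sauce, and pepperoni.
JSON: {"size": "small", "sauce": "tomato", "toppings": ["cheese", "pepperoni"]}

Order: Give me a large pizza with barbecue sauce, chicken and onions.
JSON: {"size": "large", "sauce": "barbecue", "toppings": ["chicken", "onions"]}

Order: A medium pizza with just tomato sauce and mushrooms, please.
JSON: {"size": "medium", "sauce": "tomato", "toppings": ["mushrooms"]}

Order: I'd like a large pizza with tomato sauce, ham and extra cheese.
JSON:"""

print(FEW_SHOT_PROMPT)  # send this string to the model of your choice
```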

Utilizing step-back prompting

Step-back prompting is a flexible method for solving complex problems. The approach begins by asking the model which method it would use to tackle a given problem; once that approach is established, the model is asked to apply it to actually solve the problem. This technique stimulates deeper reasoning and lets the model draw on the fundamental concepts behind the question.

By specifying the approach before solving the problem, the model can leverage a broader knowledge base. For complex math problems, a prompt might initially ask: “What approach should be taken to solve the following problem?”. The response obtained then clarifies the steps needed to finalize the solution.
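A sketch of this two-step flow, again assuming the google-generativeai SDK with a placeholder key and model name; the physics problem is just an illustrative example.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")             # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")   # placeholder model name

def ask(prompt: str) -> str:
    """Send a single prompt to the model and return its text answer."""
    return model.generate_content(prompt).text

problem = (
    "A ball is thrown upward at 12 m/s from the top of a 30 m building. "
    "How long does it take to reach the ground?"
)

# Step back: first ask only for the general approach.
approach = ask(f"What approach should be taken to solve the following problem?\n{problem}")

# Then ask the model to apply its own approach to the concrete problem.
solution = ask(
    f"Using the following approach:\n{approach}\n\n"
    f"Now solve this problem step by step:\n{problem}"
)
print(solution)
```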

Reiterating with self-consistency

Self-consistency involves running the same prompt several times to improve accuracy. Submitting the prompt repeatedly with a higher temperature encourages response diversity, from which a consistent answer can be extracted. This method reduces the risk of hallucinations.

To classify the content of a phone transcript, for example, the same prompt can be submitted three times. The responses are then compared using the majority-voting principle: the most frequent answer is considered the most reliable. This majority-vote refinement yields better accuracy.
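A sketch of this majority-vote loop, with the same placeholder Gemini client and an invented set of transcript labels (COMPLAINT / QUESTION / CANCELLATION):

```python
import collections
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")             # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")   # placeholder model name

prompt = (
    "Classify the intent of this call-centre transcript as COMPLAINT, "
    "QUESTION or CANCELLATION. Answer with a single word.\n\n"
    "Transcript: Hello, I was billed twice this month and nobody replies to my emails."
)

# A higher temperature encourages diverse reasoning paths across the runs.
config = genai.GenerationConfig(temperature=0.9, max_output_tokens=5)

answers = [
    model.generate_content(prompt, generation_config=config).text.strip().upper()
    for _ in range(3)
]

# Majority vote: the most frequent answer is kept as the final classification.
final_answer, votes = collections.Counter(answers).most_common(1)[0]
print(f"{final_answer} ({votes}/3 votes)")
```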

Testing with automatic prompt engineering

Automatic prompt engineering automates the creation of effective prompts. The method asks the model itself to produce several variants of a reference prompt; each variant is then tested, and the generated results are compared with reference human responses.

Result analysis is conducted using standardized metrics such as BLEU and ROUGE. By identifying the version of the prompt that yields output closest to human references, users gain a powerful tool to refine their approach.
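As an illustration, here is a sketch that scores two hypothetical prompt variants against a human reference with ROUGE-L, using the rouge-score package; the texts and variant names are invented for the example.

```python
from rouge_score import rouge_scorer  # pip install rouge-score

# Output produced by each prompt variant on the same input, plus a human reference.
reference = "The customer wants to cancel their subscription before the renewal date."
candidate_outputs = {
    "variant_1": "The caller asks to cancel the subscription ahead of the renewal date.",
    "variant_2": "The customer complains about pricing and asks for a discount.",
}

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

# Keep the prompt variant whose output is closest to the human reference.
scores = {
    name: scorer.score(reference, output)["rougeL"].fmeasure
    for name, output in candidate_outputs.items()
}
best = max(scores, key=scores.get)
print(scores)
print("Best prompt variant:", best)
```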

Using a spreadsheet to track the impact of each prompt makes it easier to evaluate their effectiveness. Logging performance alongside the parameters used helps you adjust them over time and keep improving how you create prompts.
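A minimal way to keep such a log is a CSV file that any spreadsheet can open; the column names below simply mirror the fields suggested in the FAQ, and the row is an invented example.

```python
import csv

rows = [
    {
        "prompt_name": "pizza_order_v2",
        "objective": "Convert a pizza order to JSON",
        "parameters": "temperature=0, max_output_tokens=64",
        "output": '{"size": "small", "sauce": "tomato", "toppings": ["cheese"]}',
        "assessment": "OK - valid JSON, correct toppings",
    },
]

with open("prompt_log.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```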

Frequently Asked Questions

What are the three key inference variables for creating effective prompts?
The three essential inference variables are: max tokens (which controls the maximum number of tokens generated), temperature (which influences the determinism of the response), and top-K/top-P (which regulate word selection).

How can I use few-shot prompting to improve my prompts?
Few-shot prompting involves providing several examples of the expected output to guide the model. It is recommended to give between three and five clear and diverse examples.

What is step-back prompting and how can it be used?
Step-back prompting involves first asking for the approach to solve a problem, followed by the resolution of that problem. This helps the model contextualize and think more deeply about the complex question.

How does self-consistency help increase the accuracy of generated responses?
Self-consistency involves submitting the same prompt multiple times and using a higher temperature setting to generate diverse responses, then selecting the most frequent answer, which reduces hallucinations.

What is automatic prompt engineering and how does it work?
Automatic prompt engineering involves generating multiple variations of a reference prompt to test their effectiveness, then evaluating the results in comparison to human responses using standardized metrics.

How can I use a spreadsheet to improve my prompts?
Using a spreadsheet allows you to list prompts and evaluate their effectiveness. You can include elements such as the prompt name, objective, parameters used, generated output, and an assessment of the results.
