How to Speak LLM: Stop Prompting AI the Wrong Way

June 12th, 2025

LLMs are relatively new: the first widely adopted model reached the public through ChatGPT, which launched on November 30, 2022, built on GPT-3.5. ChatGPT has since surpassed 100 million monthly active users. Despite their accessibility, few people understand these models, and even fewer know how to use them effectively.

LLMs are powerful tools built on vast neural networks and probability-based models. While they might feel magical, at their core they predict the next likely word based on patterns in their training data. What you put in has a significant effect on what you get out. That concept is fundamental to what is known today as "Prompt Engineering".

Why Prompt Engineering?

Large Language Models (LLMs) today can perform impressive tasks with a surprising degree of accuracy. They can summarize text, write code, and generate images, and the latest and greatest can even hook into external tools such as Google Drive and Notion.

The problem is that LLMs sometimes get lost, or rather, they become unsure of how to accomplish a task. Maybe you weren't specific enough, or your instructions lacked a logical order. It's possible you didn't provide the LLM with instructions at all.

To introduce the topic, let's take a trip down memory lane. You are back in pre-school, and your teacher gathers your class in a circle. Your job is to write the instructions on how to make a peanut butter and jelly sandwich. That's all you are told.

Once you hand in your instructions, your teacher goes through the pile one by one, following each student's directions to the letter. Before long the students are screaming, laughing, and finally comprehending that their instructions weren't good enough. The teacher has dipped her hand straight into the peanut butter, smeared it all over herself, and skipped the bread entirely because, well, it was never specified in the instructions.

LLMs... they're (sorta) like this.

LLMs can still give you decent results without detailed guidance, but if you want a great response, some effort up front makes all the difference.

Writing Great Prompts

Now you are probably wondering how you can write great prompts, and I'm glad you asked.

Let's start with the basic components of a chat between you and a model such as ChatGPT or Google Gemini. These models have the following to work with whenever you send a message (the short sketch after this list shows how the pieces fit together):

  • The previous chat history
    • Most models today use the conversation you are currently in, and some even draw on your past conversations.
  • The information you provide
    • This can come in various forms depending on what your model supports: text, images, files, etc.
  • The task you give the model
    • This is what you actually want the model to do.
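
If you ever interact with a model from code rather than a chat window, these same three pieces map directly onto the message list you send to the API. Here's a minimal sketch, assuming the official OpenAI Python SDK; the model name and prompt text are just placeholders:

from openai import OpenAI

client = OpenAI()  # assumes your API key is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        # 1. The previous chat history
        {"role": "user", "content": "Our brand sells handmade hiking boots."},
        {"role": "assistant", "content": "Got it. What would you like help with?"},
        # 2. The information you provide, and 3. the task you give the model
        {"role": "user", "content": "Our ideal customer is an outdoorsy 25-40 year old. "
                                    "Write a one-sentence tagline for our spring campaign."},
    ],
)

print(response.choices[0].message.content)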

Context

Above all, the most important thing you can do is give your LLM relevant context to work with. If you are trying to create great brand marketing, for instance, give it a description of your ideal customer, give it the prices of your products, give it your brand colors, and give it your brand's mission statement.

Likewise, if you are a developer trying to debug a faulty piece of code, give the model not just the function you are working on, but also the functions that snippet calls. Tell the model which libraries you are using, and which versions.

The more relevant information your model has, the better the outputs will be.

Questions

The second most important piece of getting great results is not what questions you ask, but how you ask them. To demonstrate how phrasing impacts output quality, here are some direct comparisons of poor versus well-structured prompts:

Bad Question: Get me top performing stocks and run an analysis.
Good Question: I want you to act as a Financial Analyst. Go to Yahoo Finance and find the top 10 stocks by market cap. Once you have curated a list, I want you to go to marketwatch.com and find relevant news on each of them. Provide me a list, in order of market cap, with the stock name, its price, and the overall market sentiment.

Bad Question: How do I cook chicken?
Good Question: I am preparing for dinner and looking to cook chicken. I am planning to grill it on the barbecue, along with some vegetables. The chicken is marinated and ready to go; what steps should I take to cook it?

Bad Question: Write me a blog post on prompt engineering.
Good Question: I am writing a blog about how to use LLMs effectively with prompt engineering. I want it to be beginner-friendly, conversational, and include real-life examples. Start with a short intro, then explain what LLMs are and why prompting matters.

As you can see, the good questions are a lot more descriptive, provide more context, and overall help the LLM get a full sense of what you are trying to accomplish. I encourage you to examine the difference for yourself by pasting these into an LLM of your choice.

Strategies for Amazing Outputs

Let's walk through some well-known strategies for getting effective outputs. These strategies are ever-evolving, but if you understand the basics and why they work, you'll be far better at using LLMs than most people.

Role Assignment

Role assignment can actually be seen in one of our good example questions above:

I want you to act as a Financial Analyst.

In this instance, we ask the LLM to behave as a financial analyst. By doing this, the LLM better understands the context in which you'd like the task done. When it draws on its training to decide what information to use, it does so with the words "Financial Analyst" in mind.

Role assigning lets you tap into the LLM's ability to act as a specialist, producing focused, expert-level responses that help reduce rambling and generic output.
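
If you're calling a model from code, the role usually lives in the system message, which sets the model's persona before any user input arrives. A minimal sketch, again assuming the OpenAI Python SDK (the model name and wording are placeholders):

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        # The system message assigns the role up front.
        {"role": "system", "content": "You are a seasoned Financial Analyst. "
                                      "Answer with concise, data-driven reasoning."},
        {"role": "user", "content": "Summarize the overall market sentiment around large-cap tech stocks."},
    ],
)

print(response.choices[0].message.content)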

One-Shot and Multi-Shot Prompting

Models, like people, perform better when given an example. In this case, we give the model one example of what we want:

Q: Summarize the following: Steven Paul Jobs (February 24, 1955 – October 5, 2011) was an American businessman, inventor, and investor best known for co-founding the technology company Apple Inc. Jobs was also the founder of NeXT and chairman and majority shareholder of Pixar. He was a pioneer of the personal computer revolution of the 1970s and 1980s, along with his early business partner and fellow Apple co-founder Steve Wozniak.
A: Steven Jobs was an American businessman known for co-founding Apple with Steve Wozniak. He also founded NeXT and was chairman of Pixar.

Q: Summarize the following: Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman. He is known for his leadership of Tesla, SpaceX, X (formerly Twitter), and the Department of Government Efficiency (DOGE). Musk has been considered the wealthiest person in the world since 2021; as of May 2025, Forbes estimates his net worth to be US$424.7 billion.
A:

As a result, the model not only performs better than it would with loose instructions, but its response also takes on a consistent structure, resembling the answer we provided in the first example.

This works because of how generative pre-trained transformers (GPTs) operate. In simple terms, a GPT is a prediction model for the next likely word, or set of characters, in a string. By providing an example, you give the model context that makes it far better at predicting the next words you actually want.

Giving multiple examples is called "multi-shot prompting" (you may also see it called few-shot prompting). It is especially useful for classification scenarios, where you want to give an example of each type:

Q: Dog
A: Animal

Q: Rose
A: Plant

Q: Diamond
A: Mineral

Q: Eagle
A: Animal

Q: Oak
A: Plant

Q: Quartz
A: Mineral

Q: Salmon
A: ?

By utilizing this technique, you teach the model to associate each of these words with its classification in concise, natural language. Since words like "Dog", "Eagle", and "Salmon" are more closely related to each other than to "Diamond" or "Oak", the model can infer that Salmon is an animal, and not a mineral or a plant.
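
If you're scripting this, each example becomes an alternating pair of user and assistant messages, and the lone unanswered question at the end is the one the model completes. A minimal sketch, assuming the OpenAI Python SDK (model name is a placeholder):

from openai import OpenAI

client = OpenAI()

# Each pair teaches the model the Q/A classification pattern.
examples = [
    ("Dog", "Animal"), ("Rose", "Plant"), ("Diamond", "Mineral"),
    ("Eagle", "Animal"), ("Oak", "Plant"), ("Quartz", "Mineral"),
]

messages = []
for item, label in examples:
    messages.append({"role": "user", "content": f"Q: {item}"})
    messages.append({"role": "assistant", "content": f"A: {label}"})

# The final, unanswered question is the one we want classified.
messages.append({"role": "user", "content": "Q: Salmon"})

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)  # we expect something like "A: Animal"

The alternating user/assistant pattern is what teaches the model the format; the final lone user turn is the one it completes.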

Chain of Thought

Models get confused as tasks become more complex and multi-step. To combat hallucinations and confidently wrong answers, one technique we can use is called Chain of Thought (CoT).

Chain of Thought is exactly what it sounds like: a step-by-step process we use to show an LLM how to work through a problem. The key to CoT is that we essentially verbalize the little voice of reasoning in our heads, the one we typically use to justify our math:

Q: Sally has 10 apples and hands John half of them. John then gives 3 apples to Bob, and then Bob asks Sally for 2 more. How many apples does each of them have?
A:
1. We know Sally has 10 apples.
2. Sally gives half of her apples to John. 10/2=5. Sally has 5 apples. 0+5=5. John has 5 apples.
3. John gives 3 apples to Bob. 5-3=2. John has 2 apples. 0+3=3. Bob has 3 apples. 5+0=5. Sally has 5 apples.
4. Bob asks Sally for 2 more apples. 5-2=3. Sally has 3 apples. 3+2=5. Bob has 5 apples. 2+0=2. John has 2 apples.

Q: Robert sells goldfish at the store. He has 20 goldfish to give out. Timmy comes in to purchase a quarter of the entire stock, and shortly after Billy buys one third of the remaining stock. Emma also comes in to buy 2 goldfish. At the end of the day, Timmy realizes he bought 2 too many and returns them to Robert. How many goldfish does each person have?
A:

When you run this prompt, the LLM follows the same series of steps, tracking the arithmetic behind each transfer of goldfish. By mapping our sentences onto explicit math and showing our work, we help the LLM keep track of complex word problems.
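
For completeness, here is how the worked-example-plus-new-question pattern might look if you send it from code. This is a rough sketch, assuming the OpenAI Python SDK; the model name is a placeholder and the prompt text mirrors the examples above:

from openai import OpenAI

client = OpenAI()

# The apple problem, worked step by step, shows the reasoning style we want imitated.
worked_example = (
    "Q: Sally has 10 apples and hands John half of them. John then gives 3 apples to Bob, "
    "and then Bob asks Sally for 2 more. How many apples does each of them have?\n"
    "A:\n"
    "1. We know Sally has 10 apples.\n"
    "2. Sally gives half of her apples to John. 10/2=5. Sally has 5 apples. John has 5 apples.\n"
    "3. John gives 3 apples to Bob. 5-3=2. John has 2 apples. Bob has 3 apples. Sally has 5 apples.\n"
    "4. Bob asks Sally for 2 more. 5-2=3. Sally has 3 apples. 3+2=5. Bob has 5 apples. John has 2 apples.\n"
)

# The new question is left open so the model answers it in the same step-by-step style.
new_question = (
    "Q: Robert sells goldfish at the store. He has 20 goldfish to give out. Timmy purchases a quarter "
    "of the entire stock, and shortly after Billy buys one third of the remaining stock. Emma buys 2 "
    "goldfish. At the end of the day, Timmy realizes he bought 2 too many and returns them to Robert. "
    "How many goldfish does each person have?\nA:"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": worked_example + "\n" + new_question}],
)
print(response.choices[0].message.content)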

What Happens When We Don't Write Great Questions?

So what happens when we don't write great questions? Well, you'll typically get average outputs, but there's a real possibility of misleading or completely useless results as well.

When you leave gaps in your context, such as not providing steps or examples, the LLM is left to figure things out for itself, relying on its training data. Although LLMs are trained on a plethora of data, chances are whatever the model falls back on won't be the information you actually want it to use.

This added guesswork often leads LLMs to "hallucinate" and go off on their own tangent of made-up examples and instructions. The techniques above help to mitigate this.

Along with these possible hallucinations, responses simply aren't tailored if you forget to provide context. A model doesn't know how you intend it to frame a response unless you give it something like a role or a series of examples.

By providing relevant context to the model, you get results that better align with the answer you expect, reducing the number of iterations you need and even aligning the model with your intended personality or tone.

Personal Tips

Now that we’ve covered the foundations, here are my favorite personal tactics for refining LLM outputs.

Use the Right Model

Out of all the tips I can give, the most important is to use the right model. What does this mean? Each LLM is trained on a specific set of data. Although there is a large overlap in training data between models, it's not identical. GPT-4, Gemini, and Llama 3 all have somewhat different training sets and prediction architectures, which means that where some models fall short, others excel.

Some models, such as Anthropic's Claude, have even built a reputation for being particularly strong at tasks such as programming.

When picking a model, find one that excels in the specific category you need. OpenAI's GPT-4o is a great Swiss Army knife, but if you want to take your outputs to the highest level, selecting the best model for the job is a must.

Query for Multiple Versions

Asking your model to provide multiple versions is a sure-fire way to get better results. I usually do this when brainstorming things such as presentation titles, blog ideas, or anything where there are multiple ways to complete the task and I'm not sure which I'd like. Ask your LLM to give you 10 options:

I am trying to create a project that gets a user's current location, connects them to their local commute options, and helps them plan an optimal route to work. I am looking to name this app with something sleek and flashy. Ideally, since this will be used on mobile devices, the name should be representable with a logo. Give me 10 name ideas.

From here, you can keep conversing and asking refinement questions until you get an optimal answer. Feedback such as "I really like 1 and 3, can we go more in that direction", or "Give me some options that are more playful" can go a long way and help you find ideas you like.

Clarification Loops

A technique I use frequently, which I call "Clarification Loops", puts the responsibility on the LLM to ask for any details it needs. Although this seems painfully obvious, it's a strategy that goes widely unused.

This tactic is especially good for fleshing out complex ideas, such as code architecture and marketing plans. Let's see an example of a message regarding landing page copy:

I really like this, but it could probably use some work. Feel free to ask any questions to help generate copy for this landing page. Remember, we want to evoke creativity and developer empowerment.

Here the user explicitly invites the model to ask questions. The model responds with a set of relevant questions for you to answer, which provides it with more context, and on the next turn it can generate a far more complete response.
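
If you want to automate a clarification loop, it's just a conversation where you keep appending the model's questions and your answers to the running history. A rough sketch, assuming the OpenAI Python SDK; the number of rounds and the prompt text are arbitrary choices:

from openai import OpenAI

client = OpenAI()

# The history grows every turn, so the model always sees the full exchange.
messages = [
    {"role": "user", "content": (
        "I'm writing landing page copy for a developer tool. Before you write anything, "
        "ask me the questions you need answered to produce great copy. Remember, we want "
        "to evoke creativity and developer empowerment."
    )},
]

for _ in range(2):  # a couple of clarification rounds; adjust as needed
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    questions = reply.choices[0].message.content
    print(questions)
    messages.append({"role": "assistant", "content": questions})
    # Answer the model's questions, then hand the conversation back to it.
    messages.append({"role": "user", "content": input("Your answers: ")})

# Final turn: ask for the finished copy now that the model has its answers.
messages.append({"role": "user", "content": "Great, now write the landing page copy."})
final = client.chat.completions.create(model="gpt-4o", messages=messages)
print(final.choices[0].message.content)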

Self-Induced Context

Sometimes we want to craft great responses, but we don't have loads of information to hand the model, or we just don't feel like typing it all out. With "Self-Induced Context", we use the model to generate its own specific context.

Since models remember the chat history within a conversation, we can set up a series of questions that, in the end, helps us gather a better response.

Let's say we want to craft an amazing marketing plan for the next quarter. We can ask the following questions, one by one, to the LLM:

How do organizations and businesses craft their marketing plans?
What key aspects of marketing plans separate the best of the best from the others?
What are some metrics that marketing plans typically use to track performance?
What is achievable in a quarter?
Craft me a marketing plan for <insert business here>. We <value prop>. Our customers are typically <customer profile>. We want to appeal to them by tapping into <emotions>.

By doing this, your model surfaces a wealth of what it knows about marketing into the context of your conversation. As this context builds, the model becomes primed to better predict the next word, making for relevant, high-quality responses.

Hence, you have just given the model loads of context to perform better.
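
Scripted, self-induced context is just a loop that sends the warm-up questions first and keeps every answer in the conversation history before making the final ask. A rough sketch, assuming the OpenAI Python SDK; the angle-bracket placeholders are the same ones from the prompt above:

from openai import OpenAI

client = OpenAI()

# Warm-up questions coax the model into surfacing relevant marketing knowledge first.
warmup_questions = [
    "How do organizations and businesses craft their marketing plans?",
    "What key aspects of marketing plans separate the best of the best from the others?",
    "What are some metrics that marketing plans typically use to track performance?",
    "What is achievable in a quarter?",
]

messages = []
for question in warmup_questions:
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# The final request now sits on top of all the context the model just generated.
messages.append({"role": "user", "content": (
    "Craft me a marketing plan for <insert business here>. We <value prop>. "
    "Our customers are typically <customer profile>. We want to appeal to them "
    "by tapping into <emotions>."
)})
final = client.chat.completions.create(model="gpt-4o", messages=messages)
print(final.choices[0].message.content)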

Adding Humanity to Writing

ChatGPT has a certain way of writing... usually with lots of emojis. The reality is that even now, tons of people are able to pick out AI-written language from articles, blogs, and messaging apps. If you'd like to help ChatGPT sound more human, and specifically more like you, then this tip does the trick.

Feed it your old writing. Take an old English essay, some chats you sent to friends, or any other substantial piece of writing and supply it to the model. This works with any writing. Here I supplied the writing of a quirky caterpillar enthusiast (generated with the help of ChatGPT) and asked the model to tell me why I should watch Suits in the same style:

Ohhh boy, don’t even get me started on caterpillars—they are, like, the weird squishy noodles of destiny! I mean, think about it: these little fuzz-tubes just munch leaves all day like it’s their full-time job (which, biologically speaking, it is), and then one day—bam!—they wrap themselves up like an anxious burrito and come out as fluttering winged sky fairies! It’s like Pokémon evolution but real! Also, did you know some caterpillars shoot poop to avoid predators? Like little organic cannons of distraction! Nature is wild, man. Wild and gooey and full of LEGITIMATE TRANSFORMATIVE POWER.

I want you to emulate the writing style of the text above. Write me a paragraph convincing me that the TV show Suits is the best TV show for corporate high-achievers to binge after work.

Self-learning Tips

One of the most powerful capabilities of LLMs today is their ability to summarize, explain, and elaborate on a myriad of topics. We can use this to teach ourselves almost any subject; it's just a matter of how we extract the information. Techniques such as self-induced context help a lot, but you can also ask models to explain things as if to a child, to apply the Pareto principle and focus on the most important 20%, and much more.

Reading and fetching information is great, but the real learning power lies in self-assessment and active recall. By asking models to quiz you on a topic, or on files such as your class notes, it becomes incredibly easy to start studying and deepen your understanding of the material. I recommend trying out all of the strategies we have covered in order to achieve this.

Conclusion

I hope you enjoyed this post.

Prompting is a skill that gets better with practice, and the more context and clarity you provide, the more impressive the results.

Once you master the basics, as well as the strategies we discussed in this post, you unlock the full potential of these tools.

I hope this provides clarity on LLMs and how you can integrate them into your own life. Whether you are a developer like me or just trying to cook dinner, there is a benefit in using this new and powerful technology.

Do not get left behind, as these models are quickly affecting the way we work, how we learn, and where our world is going. Stay curious and explore.

