Modern large language models like ChatGPT rely on complex neural networks trained on massive text datasets to generate human-like text. To analyze and process language, these systems break text down into smaller units known as tokens. A token usually corresponds to a single word, part of a word, or a short group of characters.
By segmenting text into discrete tokens, the model can treat language as statistical relationships between tokens. During training, the model is exposed to billions or even trillions of tokens from books, websites, and other text sources, and it learns the probabilities of which tokens are likely to follow others in a sequence.
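As a toy illustration of "learning which tokens follow which" (real models use subword tokenizers and neural networks, not word counts, so this is only a sketch of the statistical idea):

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on billions or trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish"

# Crude word-level tokenization; real systems use subword tokenizers.
tokens = corpus.split()

# Count which token follows which (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

# The most probable token after "the", by observed frequency.
print(follows["the"].most_common(1))  # [('cat', 2)]
```

Even this crude model captures the core mechanic: given the tokens so far, predict a likely next token from observed frequencies.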
However, despite their vast training, AI models have a limited working memory. They can only actively consider a fixed number of tokens at once when generating text. This is known as the context window. The size of the context window varies across AI architectures, ranging from roughly 4,000 tokens to 16,000 or more.
If a sequence exceeds the context window, the model starts to lose track of the tokens that began the text. For example, by the end of a very long chat, the AI may have forgotten words and phrases from earlier sentences. This loss of context can lead to contradictory or disconnected statements.
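A minimal sketch of why early text drops out: if we approximate tokens as words and keep only the most recent N, everything before the window simply never reaches the model. (The numbers and word-level "tokens" here are illustrative; real systems count subword tokens and often trim whole messages.)

```python
def fit_to_window(tokens, window_size):
    """Keep only the most recent tokens that fit in the context window."""
    return tokens[-window_size:]

conversation = "my name is Ada and I love chess ... what is my name ?".split()

# With a generous window, the name from the start of the chat survives.
assert "Ada" in fit_to_window(conversation, 50)

# With a tiny window, the earliest tokens (including the name) are lost.
assert "Ada" not in fit_to_window(conversation, 5)
```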
That's why when interacting with AI, it's helpful to periodically recap important information and reset the context window. By reminding the model of relevant details, you can improve coherence and consistency in the text it generates. Understanding the constraints around tokens and context windows allows us to prompt more effectively.
The prompt is the text you provide to instruct the AI what kind of output you want it to generate. The model pays close attention to the prompt, as well as to earlier prompts and responses in the conversation. When crafting a prompt, explain the context and goals in detail, model the desired tone, incorporate well-known references, use precise technical terminology, set boundaries, and iterate through testing.
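Those elements can be assembled systematically. A sketch of one possible prompt template (the field names and wording are illustrative, not a standard):

```python
def build_prompt(context, goal, tone, boundaries):
    """Assemble a detailed prompt from its parts (illustrative template)."""
    return (
        f"Context: {context}\n"
        f"Goal: {goal}\n"
        f"Tone: {tone}\n"
        f"Constraints: {boundaries}\n"
    )

prompt = build_prompt(
    context="I am writing a newsletter for amateur astronomers.",
    goal="Explain this month's visible planets in under 200 words.",
    tone="Friendly and enthusiastic, like a science communicator.",
    boundaries="No jargon without a one-line definition.",
)
print(prompt)
```

Keeping the parts separate like this makes it easy to iterate: change one field, re-run, and compare outputs.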
Prompting Techniques:
Sample prompts:
"Tell me about 'Pride and Prejudice' | "Write a brief, formal summary of the novel 'Pride and Prejudice', focusing on its major plot points and characters." |
"What is photosynthesis?" | "Explain the process of photosynthesis in simple terms suitable for a 5th-grade science class, including the role of sunlight, water, and carbon dioxide." |
"How to cook lasagna?" | "Generate a recipe for a vegetarian lasagna, including a list of ingredients, quantities, and step-by-step cooking instructions." |
"Talk about the French Revolution." | "Describe the historical significance of the French Revolution, paying particular attention to its impact on democracy and societal structure in Europe." |
"Write a poem." | "Compose a sonnet in the style of Shakespeare, with a theme of love and loss." |
One counterintuitive limitation of large language models like ChatGPT is their tendency to hallucinate or generate plausible-sounding but incorrect or nonsensical information. Once the AI begins generating text that strays from accuracy, it will often continue further down that erroneous path rather than self-correct.
This stubbornness occurs because the model aims to maintain local coherence within the text it is producing. The AI system does not actually have any beliefs, intentions, or understanding; it simply follows patterns learned from its training data. Even if those patterns lead to false or imaginary information, the model will keep extrapolating in the same direction rather than contradict itself.
Another quirk is that most commercial AI tools are specifically trained to be polite, harmless, and supportive. If you ask an AI for feedback on an idea, it tends to respond with praise and encouragement rather than objective critique. To get a more balanced perspective, explicitly prompt the model to highlight pros, cons, and improvements, or request that it "act as" a strict critic or devil's advocate.
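A sketch of how such a role-setting instruction might be attached as a system-style message (the message format mirrors common chat APIs, but the exact schema varies by provider; the wording is illustrative):

```python
def critic_messages(idea):
    """Frame a request so the model critiques rather than cheerleads."""
    return [
        {"role": "system",
         "content": ("Act as a strict, constructive critic. For any idea, "
                     "list concrete pros, cons, and improvements. "
                     "Do not open with praise.")},
        {"role": "user", "content": f"Evaluate this idea: {idea}"},
    ]

msgs = critic_messages("A subscription service for houseplant rentals")
print(msgs[0]["content"])
```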
In both cases, the key is realizing the AI has no inherent sense of truth or falsehood. Its knowledge comes from recognizing patterns, not comprehending meaning. Providing clear, explicit prompts is crucial to steering the model away from potential inaccuracies and towards truthful, comprehensive responses reflecting diverse perspectives. We must guide the AI; it will not correct itself without direction.
AI models can make factual mistakes or logical errors, so it is important to critically evaluate any text generated by the system. Check that facts stated are accurate, reasoning is sound, examples are relevant, and the tone and style match expectations. Look for any internal contradictions or inconsistencies that may indicate the AI has misunderstood the prompt or gone astray during generation. Consider whether the output comprehensively addresses all parts of the original prompt appropriately, or if some elements may have been overlooked or ignored. If there are errors or omissions, it may be necessary to re-prompt the model to try again, providing additional context and clarification needed to produce a satisfactory response. Applying human judgment is crucial when reviewing AI output before utilizing or acting upon it in any way.
When using AI for text generation, ask for pros and cons rather than just accepting everything it says. Request the AI to take on the role of a critic to provide more objective feedback. Use the AI to brainstorm ridiculous quantities of ideas, then pick out and expand on the most promising ones. Start new chat sessions periodically to reset the context window. Employ other tools like scholarly databases and creativity plugins to enhance the AI's capabilities. The key is interacting with care, precision, and human judgment.
Some applications of AI generation appropriately value creative ideation over strict accuracy. For example, brainstorming hypothetical scenarios or composing fiction allows loose associations between concepts and imaginary elements. However, other tasks demand high levels of factual accuracy and fidelity to the real world, such as summarizing scientific papers or news events. Expectations and applications must align accordingly. Tools like scholarly databases can enhance accuracy for precision tasks while creativity plugins generate more unstructured speculative outputs. Both approaches have their place depending on context and use case needs.
It is vital to keep in mind that AI systems have no true comprehension of language or concepts; they work by recognizing patterns in their training data. Therefore, we must vigilantly monitor for biases, misinformation, or potentially harmful content in model outputs. Do not erroneously assume human traits like intentions or free will apply to AI systems, and guard against overtrusting or anthropomorphizing these tools. Use AI ethically: do not misrepresent its capabilities, and do not use it to deceive others or to plagiarize. A conscientious, human-centric approach is imperative as these rapidly evolving technologies become more capable and potentially influential.
Rapid advances in artificial intelligence will likely continue, leading to ever more capable and sophisticated systems. If misused, future AI could become dangerously persuasive and controlling. However, thoughtfully implemented, these technologies also have the potential to enhance knowledge discovery and empower human creativity. As users, we must learn to interact wisely and remain vigilant of the profound influence these tools may exert. Finding humanistic and ethical applications will require great care as our choices shape the path ahead. The promise and perils are both real. By keeping human values central as AI progresses, we can work to maximize benefits while minimizing harm. The trajectory remains uncertain, but our guidance as responsible stewards is critical.
Clearly explain context and goals when prompting AI to get helpful results. Speak conversationally but remember limitations.
AI models break text into tokens and have a limited context window for generating coherent text.
Craft effective prompts by modeling desired tone, incorporating references, setting boundaries, and iterating through testing.
Use techniques like zero-shot (instructions only), multi-shot (several worked examples), and chain-of-thought (step-by-step reasoning) prompting to provide necessary context.
AI can hallucinate false information or be stubbornly inconsistent, so guide it with explicit prompts and don't overtrust output.
Critically evaluate AI text, checking for accuracy, sound reasoning, and comprehensiveness. Re-prompt to address errors.
Useful applications include summarization, explanation, research, and collaboration, but validate quality and fidelity.
Align expectations to use cases: some prioritize creativity, others demand high accuracy. Use tools accordingly.
Monitor for biases and misinformation. Use AI ethically, do not misrepresent capabilities or attempt deception.
Responsible guidance of AI will shape future trajectory. Keep human values central as capabilities advance.
This guide was prepared by Rick Dakan with help from Anthropic’s Claude AI.