Strategic Prompting: My AI Journey


Shaking Android’s Hand

I'll be honest: a couple of years ago, I thought talking to AI was pretty straightforward. You ask a question, you get an answer. Simple, right? Well, after spending considerable time working with these systems and making plenty of mistakes along the way, I've learned that effective AI communication is more like learning a new language than having a casual chat.

Let me share what I've discovered about how prompt engineering has evolved and where I think it's headed, from the perspective of someone still figuring it out but getting better results than when I started.

How Everything Changed

The biggest shift I've noticed is that AI systems can now handle so much more than just text. When I first started experimenting with ChatGPT, it was all about crafting the perfect written prompt. Now I find myself working with models that can process images, generate visuals, and even execute code directly. That last capability is especially useful for someone like me who doesn't know how to code.

Context windows have become massive too. I used to worry about hitting character limits and would break complex requests into smaller pieces. Now I can front-load entire project briefs, background information, and detailed requirements all in one go. This change has made my prompts more effective, though I sometimes still catch myself being overly brief out of old habit.

What really surprised me was watching these models become tool users themselves. Instead of just telling me what code to write, they can actually run it. Instead of describing what an image might look like, they can create it. This evolution has pushed me to think differently about what I'm asking for and how I structure my requests.

What I've Learned About Modern Prompting (Through Trial and Error)

My approach has definitely evolved, though I'm still refining my techniques. Here are the strategies that have made the biggest difference in my results:

  • Being Crystal Clear About What I Want
    I used to think being conversational was the key to good prompting. Now I've learned that being explicit and direct works much better. Instead of "Can you help me write something about marketing?" I'll say "Write a 500-word blog post explaining three specific benefits of email marketing for small businesses, using examples from retail companies." The difference in output quality is remarkable.

  • Loading Context Upfront
    With these expanded context windows, I've started treating my initial prompt like a comprehensive project brief. I include background information, target audience details, desired tone, format specifications, and examples all at the beginning. This front-loading approach has dramatically improved consistency in my results.

  • Asking for Step-by-Step Thinking
    One technique that consistently improves my outputs is explicitly asking the AI to think through problems step by step. I'll add phrases like "Before providing your final answer, work through this systematically" or "Explain your reasoning as you go." This approach has helped me catch potential issues early and understand how the AI arrived at its conclusions.

  • Using Examples to Set Expectations
    I've found that providing 2 or 3 solid examples within my prompts helps establish the style and quality I'm looking for. Instead of trying to describe exactly what I want, I show it. This technique has been particularly helpful for creative tasks where tone and style matter.

  • Understanding Temperature Settings (Still Learning This One)
    This is an area where I'm still developing my skills, but I've started paying attention to temperature settings for different types of tasks. When I need precise, factual information like technical documentation, I use lower temperature settings around 0.2. For creative brainstorming sessions, I'll bump it up to 0.7 or 0.8. The difference is significant, though I'm still experimenting to find the sweet spots for different scenarios.

  • Building in Quality Checks
    I've learned to include validation steps directly in my prompts rather than hoping for good results. I might say something like "After completing this task, review your work for accuracy and highlight any assumptions you made." This self-checking mechanism has reduced the number of iterations I need to get satisfactory results.
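To make the strategies above concrete, here is a minimal sketch of how I think about assembling a prompt: front-loaded context, a couple of examples, an explicit step-by-step instruction, and a built-in quality check, plus a rough temperature choice by task type. Every name, value, and phrase in this snippet is my own illustration, not any official API or recommended setting.

```python
# A minimal, illustrative prompt-assembly helper. Function names, section
# labels, and temperature values are all assumptions for this sketch.

def build_prompt(task, context, examples=None, step_by_step=True, self_check=True):
    """Assemble a structured prompt from its parts, in the order I'd write it by hand."""
    parts = [f"Background:\n{context}", f"Task:\n{task}"]
    if examples:
        shots = "\n\n".join(f"Example {i + 1}:\n{ex}" for i, ex in enumerate(examples))
        parts.append(f"Examples of the style and quality I want:\n{shots}")
    if step_by_step:
        parts.append("Before providing your final answer, work through this systematically.")
    if self_check:
        parts.append("After completing the task, review your work for accuracy "
                     "and highlight any assumptions you made.")
    return "\n\n".join(parts)

def pick_temperature(task_type):
    """Rough temperature choices I've settled on so far (still experimenting)."""
    return {"factual": 0.2, "creative": 0.8}.get(task_type, 0.5)

prompt = build_prompt(
    task="Write a 500-word blog post explaining three specific benefits of "
         "email marketing for small businesses, using examples from retail companies.",
    context="Audience: small-business owners new to digital marketing. Tone: practical.",
    examples=["Short subject lines tend to perform better for busy readers."],
)
```

The point isn't the code itself but the habit it encodes: each strategy becomes a reusable section of the prompt, so nothing gets forgotten when I'm tired or rushing.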

Where I Think This Is All Heading

Looking ahead, I see several developments on the horizon that both excite and intimidate me as someone still building these skills.

  • Everything Becomes Multimedia
    I expect that within a few years, my prompts will need to coordinate across text, images, audio, and video simultaneously. This multimodal future will require me to think more like a creative director than a question asker. I'm already starting to experiment with image-based prompts, though I have a lot to learn in this area.

  • Persistent Conversations
    The idea that AI systems will remember our ongoing conversations across sessions is fascinating. Instead of starting fresh each time, I'll be building long-term working relationships with these systems. This continuity will change how I structure prompts and manage projects, though I'm not entirely sure what best practices will emerge.

  • Specialized AI Partners
    I anticipate working with domain-specific AI agents that have deep expertise in particular fields. This specialization will require me to develop different prompting strategies for different types of AI partners. A legal AI will need different communication approaches than a creative AI or a technical AI.

  • More Autonomous Collaboration
    In the longer term, I expect these systems will start anticipating what I need and suggesting improvements to my prompts before I even realize there's a problem. This predictive capability could make prompting easier, but it might also require new skills around managing and directing autonomous AI behavior.

What I'm Still Working On

I'll admit there are areas where I know I need to improve. My prompting consistency varies depending on the task and my energy level. Sometimes I nail the specificity and get exactly what I need; other times I'm too vague and end up with generic results that require multiple iterations.

I'm also still developing my understanding of advanced prompting techniques. I know there are sophisticated approaches I haven't mastered yet, and I don't always apply the techniques I do know consistently across different scenarios.

Teaching others about prompting has been helpful for my own development, though I sometimes struggle to articulate why certain approaches work better than others. Explaining my thought process forces me to be more intentional about my choices.

The Learning Continues

What strikes me most about this field is how quickly it's evolving and how much there is still to learn. The prompting strategies that work well today might be outdated in six months. This constant evolution keeps me on my toes and reminds me that becoming skilled at AI communication is an ongoing process, not a destination.

The key insight I've gained is that thoughtful, systematic approaches consistently produce better results than casual, off-the-cuff prompting. Even when I don't execute perfectly, having a structured approach to how I communicate with AI systems has improved my outcomes significantly.

As these systems become more sophisticated, I expect the art of prompting will become even more important. The quality of my questions and instructions will continue to directly impact the quality of results I get. It's an exciting challenge, and I'm looking forward to seeing where this journey takes me next.
