
The Elements of Prompt Engineering Live Online Training Course


Repository short link: timw.info/elements

📬 Connect with Tim

🌐 Primary Contact: Visit techtrainertim.com for all my social links and latest content!

Additional Contact Methods

🎯 Essential AI Services

🔧 Vendor-Specific Guides

🛠️ Recommended LLM Services

  • Perplexity - AI-powered search and discovery
  • Cursor - AI-enhanced development environment
  • Kagi Universal Summarizer - Advanced text summarization tool
  • Midjourney - AI image generation and art creation
  • SlidesGPT - AI-powered presentation generation
  • ElevenLabs - AI voice synthesis and text-to-speech
  • HeyGen - AI video generation and avatar creation
  • Poppy AI - AI-powered content creation and automation
  • Gamma AI - AI presentation and document creation
  • Cline - AI-powered coding agent and assistant
  • Windsurf - AI-powered agentic development environment
  • Sora - OpenAI's text-to-video generation model
  • Operator - OpenAI's browser-based agent for automating web tasks

📚 Learning Resources

🏢 Microsoft AI Stack

🖼️ LLM Galleries

🔍 Search & Development Tools

🛡️ Responsible AI Principles

🔗 Model Context Protocol (MCP)

💡 Tim's LLM Prompting Guidance

  • Maintain at least two "daily driver" LLMs at a paid tier for A/B testing (fault tolerance and groundedness)
  • Never provide personal or confidential information to public/free AIs—ensure privacy by understanding chat storage, usage stats, and licensing policies
  • Speak to the LLM in ways most comfortable to you (voice, text, image) and take advantage of its multi-modal capabilities
  • Apply a stream-of-consciousness technique to generate prompts, even with rough spelling/grammar, including key information like who, what, when, where, why, and how
  • Think procedurally and in a step-by-step manner to help the AI break down complex topics
  • Optimize custom instructions and prompts ("meta prompting"), including asking the AI to summarize or focus its responses
  • Use system prompts and meta prompts to direct and focus the LLM's capabilities
  • Be aware of potential signs of amnesia or hallucination in AI responses; have a backup plan (such as testing with multiple LLMs)
  • Accept that you'll never be fully caught up—embrace exploration, questioning, and constant testing
  • Build cognitive "muscle memory" with AI by practicing prompt refinement and cross-model comparisons
  • Remember to attribute AI-enriched content where relevant
  • Understand the unique strengths and behaviors of each LLM and leverage them strategically in multi-chat sessions
  • "LLM Pillar Jumping": Use insights from one LLM session to support or refine another
  • Consider "A/B testing" LLMs against each other for more grounded and reliable answers
  • Get vulnerable with your AI (in trusted, secure sessions) to receive maximally personalized results—the more context you provide about your unique situation, the more tailored and valuable the response
  • Leverage "meta-prompting" by asking the AI to craft system messages, design prompts, and optimize instructions—let the AI help you become better at using AI
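The "A/B testing" and system-prompt ideas above can be sketched in code. This is a minimal illustration only: the two backend functions are stand-in placeholders, not any vendor's actual SDK, and the system prompt text is an example, not a recommendation from the course.

```python
# Minimal sketch of "A/B testing" two LLMs on the same prompt.
# The backend functions below are placeholders; in practice you would
# call two real providers' APIs with the same system prompt and question,
# then compare the answers for agreement (groundedness) and quality.

SYSTEM_PROMPT = (
    "You are a careful technical assistant. "
    "Answer concisely and say 'I am not sure' when uncertain."
)

def ask_llm_a(system: str, user: str) -> str:
    # Placeholder for provider A's chat endpoint.
    return f"[A] answer to: {user}"

def ask_llm_b(system: str, user: str) -> str:
    # Placeholder for provider B's chat endpoint.
    return f"[B] answer to: {user}"

def ab_test(question: str) -> dict:
    """Send the same system prompt and question to both models and
    return the answers side by side for human comparison."""
    return {
        "question": question,
        "model_a": ask_llm_a(SYSTEM_PROMPT, question),
        "model_b": ask_llm_b(SYSTEM_PROMPT, question),
    }

if __name__ == "__main__":
    result = ab_test("What does 'groundedness' mean for an LLM?")
    for key, value in result.items():
        print(f"{key}: {value}")
```

Swapping the placeholder functions for real API calls keeps the comparison logic unchanged, which is the point: the same system prompt and question go to both models, and disagreement between answers is your signal to probe further.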

📖 Tim's Essential Tech Writing Bookshelf
