Vibe Coding with Precision
- Rick Pollick


Defining Vibe Coding in My Daily Work
Vibe coding, as I practice it, means describing software needs in conversational English, letting an AI like Claude or Grok generate the bulk of the code, and then refining through targeted follow ups rather than typing out every function myself.
Andrej Karpathy captured the essence in his viral post: "There's a new kind of coding I call 'vibe coding', where you fully give in to the vibes, embrace exponentials, and forget that the code even exists." (https://x.com/karpathy/status/1886192184808149383)
For me, this is not about abandoning code entirely but reallocating my effort: I spend time on problem definition, system design sketches, edge case enumeration, and integration planning, while the AI handles syntax, boilerplate, and even basic optimizations. In a typical session, I might start with a high level prompt like:
"Design a Python service that ingests claim rejection data from an API, categorizes errors by type using regex patterns tuned for common denial codes, and exposes a FastAPI endpoint for dashboard consumption with pagination and filtering by date range."
The AI then proposes a structure, generates the code, and I iterate on specifics like retry policies or logging formats.
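To make "retry policies" concrete, here is a minimal sketch of the kind of backoff decorator I end up iterating toward. It assumes transient failures surface as ConnectionError or TimeoutError; the attempt counts, delays, and the flaky_fetch example are illustrative, not the AI's actual output.

```python
import functools
import time

def retry(attempts=3, base_delay=0.5, transient=(ConnectionError, TimeoutError)):
    """Retry a function on transient errors with exponential backoff."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except transient:
                    if attempt == attempts - 1:
                        raise  # out of retries: surface the error
                    time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
        return wrapper
    return decorator

@retry(attempts=3, base_delay=0.01)
def flaky_fetch(state={"calls": 0}):
    # Simulates an API that is rate limited twice, then succeeds.
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("429 rate limited")
    return {"status": "ok", "calls": state["calls"]}
```

In a real session, the follow-up prompts would tune exactly these knobs: which exceptions count as transient, how many attempts, and whether to add jitter.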
This approach shines in my corner of pharmacy tech, where I often prototype tools for claim reconciliation, queue monitoring, or productivity tracking, all while juggling agile sprints and product roadmaps.
The Power of Intent: Thinking Through Goals and Outcomes

Vague intentions yield vague code; precise goals produce precise deliverables. I have learned this the hard way through dozens of sessions where a fuzzy "build something useful for operations" prompt led to generic CRUD apps disconnected from actual pharmacist pain points.
My process now breaks clarity into explicit layers, each building on the last.
Problem Articulation
Start with a one paragraph user story that names the persona, their context, and the specific decision or action enabled by the tool. Example: "As a staff pharmacist during peak hours at a high volume retail site, I need a real time view of exception queue items grouped by resolution time under 5 minutes versus over, so I can dispatch technicians to the highest impact items first." This grounds the AI in human realities like shift pressures and call volumes, which it otherwise ignores.
Outcome Definition
Specify measurable success criteria, including inputs, outputs, performance thresholds, and validation steps. For the above, that becomes: "Input: Live data from Snowflake via SQL query on RxClaims table, filtered to last 4 hours. Output: JSON response with queue metrics and top 10 exceptions, served via GET /api/queues with query params for site_id and threshold_minutes. Must execute end to end in under 2 seconds on 10k rows, with 99% uptime under simulated API failures." Without this, AI tends to overengineer or underdeliver on scalability.
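The endpoint contract above can be sanity-checked framework-free. This is a hedged sketch of just the filtering and pagination logic behind GET /api/queues; the dict row shape and the resolution_minutes field are assumptions for illustration, not the real Snowflake schema.

```python
def query_exceptions(rows, site_id, threshold_minutes, page=1, page_size=10):
    """Filter exception rows by site and resolution-time threshold, then paginate.

    Mirrors GET /api/queues?site_id=...&threshold_minutes=... from the spec above.
    """
    matches = [
        r for r in rows
        if r["site_id"] == site_id and r["resolution_minutes"] > threshold_minutes
    ]
    # Worst offenders first, so the dashboard surfaces the highest impact items.
    matches.sort(key=lambda r: r["resolution_minutes"], reverse=True)
    start = (page - 1) * page_size
    return {
        "total": len(matches),
        "page": page,
        "items": matches[start:start + page_size],
    }

sample = [
    {"site_id": "S1", "claim_id": 1, "resolution_minutes": 12},
    {"site_id": "S1", "claim_id": 2, "resolution_minutes": 3},
    {"site_id": "S2", "claim_id": 3, "resolution_minutes": 40},
]
result = query_exceptions(sample, site_id="S1", threshold_minutes=5)
```

Writing the contract this explicitly in the prompt is what lets the AI produce something I can test against, rather than guessing at what "dashboard consumption" means.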
Boundary Setting
List what is explicitly in scope and out: tech stack, non functional requirements, assumptions, and anti goals. "Scope: Python 3.11, FastAPI, SQLAlchemy for Snowflake connector, basic JWT auth via header. Out of scope: Full RBAC, Redis caching, unit tests. Assume 1k concurrent users max, no HIPAA encryption yet—just prototype security." Constraints like these prevent scope creep and hallucinations about irrelevant features like Kubernetes deployment.
This upfront investment, which takes me 5–10 minutes, cuts iteration cycles by 70% in my experience, mirroring product management practices I honed.

Mastering Prompt Language for Superior Outputs
Prompt engineering is not a buzzword; it is the skill of communicating intent so precisely that the AI cannot misinterpret. Splunk nails it: "Providing more context and information about the specific [task] improves the quality of the output." (https://www.splunk.com/en_us/blog/learn/prompt-engineering.html)
I have refined a prompting template that works across projects, drawing from guides like Rand Group's top techniques.
Role and Context Assignment
Begin with "You are a principal engineer at a pharmacy tech company with 10+ years in PBM integrations, Rx workflow automation, and Azure cloud services. Our stack includes Snowflake, Python FastAPI, and Azure Functions." This persona primes the model with my domain: NCPDP standards, prior auth loops, and 340B compliance nuances.
Detailed Context Injection
Paste schema snippets, API docs excerpts, or error logs directly into prompts. Example: "Claims table schema: claim_id INT, ncpdp_response_code VARCHAR(10), pharmacy_npi VARCHAR(10), adjudication_date TIMESTAMP, rejection_reason TEXT. Sample rejection: {'code': 'M3', 'reason': 'Prior auth required'}. Target: Aggregate rejections by code and npi, output Pandas dataframe convertible to CSV." Concrete artifacts anchor abstractions.
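As a dependency-free stand-in for the Pandas aggregation the prompt asks for, grouping rejections by (code, npi) and emitting CSV looks roughly like this; the sample claims are invented, and the real session would swap in a DataFrame.

```python
import csv
import io
from collections import Counter

def rejection_report(claims):
    """Count rejections grouped by (ncpdp_response_code, pharmacy_npi), emit CSV."""
    counts = Counter(
        (c["ncpdp_response_code"], c["pharmacy_npi"]) for c in claims
    )
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["code", "npi", "rejections"])
    for (code, npi), n in sorted(counts.items()):
        writer.writerow([code, npi, n])
    return buf.getvalue()

claims = [
    {"ncpdp_response_code": "M3", "pharmacy_npi": "1234567890"},
    {"ncpdp_response_code": "M3", "pharmacy_npi": "1234567890"},
    {"ncpdp_response_code": "75", "pharmacy_npi": "9876543210"},
]
report = rejection_report(claims)
```

The point of pasting the schema into the prompt is exactly this: the AI can then use real column names like ncpdp_response_code instead of inventing its own.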
Structured Reasoning Request
Demand step by step: "1. Propose data flow diagram in Mermaid syntax. 2. List dependencies and pip installs. 3. Write data ingestion function with error handling for rate limits. 4. Generate API endpoint with Pydantic models. 5. Suggest three test cases including edge of zero results." This chain of thought mimics agile refinement spikes.
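Step 5's test cases, including the zero-results edge, might come back as pytest-style functions like these. The aggregate function here is a toy stand-in for whatever the session actually produced; only the shape of the tests matters.

```python
def aggregate(rows):
    """Toy aggregation: count rows per rejection code (stand-in for the real function)."""
    out = {}
    for r in rows:
        out[r["code"]] = out.get(r["code"], 0) + 1
    return out

def test_zero_results():
    # The edge case the prompt demands explicitly: no rows, empty result.
    assert aggregate([]) == {}

def test_single_code():
    assert aggregate([{"code": "M3"}]) == {"M3": 1}

def test_mixed_codes():
    rows = [{"code": "M3"}, {"code": "75"}, {"code": "M3"}]
    assert aggregate(rows) == {"M3": 2, "75": 1}
```

Asking for tests in the same breath as the code also forces the model to commit to a concrete input and output contract.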
Precision Language Habits
Swap fluff for specifics: "Optimize" becomes "Reduce query time from 45s to under 10s by indexing adjudication_date and partitioning on npi." "User friendly" becomes "Form with progressive disclosure: show only status field first, expand to details on click, with ARIA labels for screen readers." Ambiguity invites mediocrity; metrics demand excellence.
Iterative Feedback Loops
Respond to outputs with surgical edits: "Good structure, but swap SQLAlchemy for native Snowflake connector to cut latency 30%. Add logging with structlog at DEBUG for claim_id on failure. Regenerate only the query function." Treat it like pair programming with a talented but inexperienced dev.
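The logging ask can be sketched with the standard library as a stand-in for structlog, which would attach claim_id as a keyword argument; here the same field rides along via logging's extra parameter. Everything below is illustrative, not the regenerated function itself.

```python
import logging

logger = logging.getLogger("claims")

def process_claim(claim, handler):
    """Run handler on a claim; on failure, log the claim_id at DEBUG and re-raise."""
    try:
        return handler(claim)
    except Exception:
        # structlog equivalent: logger.debug("claim failed", claim_id=...).
        # With stdlib logging, the field travels in `extra` for formatters to use.
        logger.debug("claim failed", extra={"claim_id": claim["claim_id"]})
        raise

# Success path: the handler's result passes straight through.
doubled = process_claim({"claim_id": 1, "amount": 5}, lambda c: c["amount"] * 2)
```

The "regenerate only the query function" instruction is the other half of the trick: scoping the change keeps the rest of the reviewed code stable.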
These habits evolved from trial and error in building prototypes, where vague prompts wasted hours on non starters.
Technical Literacy: The Unsung Multiplier

Vibe coding democratizes building, but it does not eliminate the need for technical grounding. Sources unanimously stress that domain and tech knowledge prevent garbage outputs and enable safe deployment.
In my toolkit, this knowledge manifests across layers.
Problem Framing with Workflow Insight
Schneider Electric emphasizes that domain experts excel at "defining the right problem and success metrics" before AI touches data. In pharmacy, for example, I know refill exceptions spike at month end due to insurance resets, so I prompt for time windowed aggregations preemptively.
Architectural Decision Making
I specify patterns like "Event driven with Azure Service Bus for claim updates, not polling, to handle 50k/day volume without API burnout." Without basics in pub/sub versus REST, or stateless functions, AI defaults to naive loops.
Output Validation and Hardening
I spot issues like missing null handling in SQL or unescaped user inputs because I can read the diff. "Domain knowledge experts analyze if [results] make sense," per Schneider. I add "Handle NULL rejection_reason with 'Unknown' default and log count."
Tradeoff Navigation
LinkedIn's analysis of AI/ML teams notes: "Without domain knowledge, models drift from reality." (LinkedIn) I balance speed versus auditability: "Log all overrides to immutable Snowflake table, but compute aggregates in memory for sub second response."
Basic Coding Hygiene
Enough Python/SQL to tweak imports, fix indentation, or write a pytest stub ensures I own the artifact, not just rent it from the AI.
Machine Learning Mastery sums it up: domain knowledge ensures predictions are "relevant, interpretable, and actionable." (machinelearningmastery.com) Vibe coding amplifies this for code gen.
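The NULL-handling instruction above reduces to a few lines. This sketch assumes rows arrive as dicts and treats a missing key like a NULL, which is a choice the real code would need to confirm against the actual Snowflake output.

```python
def fill_unknown_reasons(rows):
    """Default NULL rejection_reason to 'Unknown' and report how many were filled."""
    filled = 0
    for row in rows:
        if row.get("rejection_reason") is None:
            row["rejection_reason"] = "Unknown"
            filled += 1
    return rows, filled

rows = [
    {"claim_id": 1, "rejection_reason": "Prior auth required"},
    {"claim_id": 2, "rejection_reason": None},
    {"claim_id": 3},  # missing key is treated like NULL in this sketch
]
rows, filled = fill_unknown_reasons(rows)
```

Returning the fill count is what makes the "and log count" half of the instruction cheap to satisfy downstream.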
Expanded Real World Examples

Example 1: Pharmacy Dashboard Evolution
A Replit case shows vibe coding building apps via chat. My version: a V1 prompt of "Ops dashboard" yielded a static HTML table. V2, with "Pharmacist facing, Snowflake RxExceptions last 24h, columns: queue_age_bucket, tech_assigned_count, resolution_rate; interactive filters by site; under 3s load", produced a Streamlit app with SQL caching.
Example 2: Claims Script Refinement
Coursera's prompt examples highlight iteration. Mine: an initial "Reconcile claims" gave a CSV exporter. Refined with schema, rate limits, and "Output discrepancy report with claim_id, expected_paid vs actual, flagged >$5 variance", it delivered a production script handling 100k rows, now in an Azure DevOps pipeline.

These examples illustrate how specificity in prompting directly correlates with the quality and deployability of the resulting code.
More Real World Examples
Example 3: Prior Auth Tool with Guardrails
Bayesian Health stresses clinical domain for AI safety. My prototype: "FastAPI for PA status check, input Rx number, output formatted fax cover with HIPAA note; validate inputs against NPI registry mock; no persistent storage." Domain knowledge added "Reject if PA type is non standard per CMS rules," preventing bad UX.
Example 4: Snowflake Query Optimizer
Generic "Speed up query" failed. Specific: "Rewrite SELECT * FROM fills WHERE date_trunc('day', fill_date) = current_date -1; current 2min on 1M rows; add clustering key on fill_date, limit 10k, output explain plan": halved time, taught me Snowflake best practices.
Results at a glance: a dashboard deployed to Azure from the refined V2 prompt; a production claims reconciliation script handling 100k rows; Snowflake query time halved with precise prompting; and roughly 70% fewer iteration cycles from upfront clarity.


Vibe Coding as a Force Multiplier
Being able to think through and articulate precisely what I want, coupled with defining explicit goals and outcomes, elevates vibe coding from a novelty to a disciplined method for creating production grade tools in my healthcare technology workflow. Quality language and prompting serve as the precision instruments that guide large language models toward outputs that respect real world constraints like API rate limits, HIPAA logging requirements, and pharmacist shift time pressures.
Layering in even intermediate understanding of technology stacks, system architectures, data workflows, and coding patterns multiplies these gains exponentially, enabling me to validate, harden, and deploy artifacts that integrate seamlessly into Azure DevOps pipelines or Snowflake powered analytics rather than languishing as untested prototypes.
A deeper look at a typical session
I do not actually forget the code exists, as Karpathy's quip suggests; I shift my energy away from syntax and toward problem definition, architecture, data flow, and constraints.
In my daily work at the intersection of agile leadership and product strategy, this means investing up front in decomposing complex problems like claim adjudication exceptions or refill queue prioritization into user centric requirements, architectural sketches, and non functional specifications, while the AI generates idiomatic Python, SQL, FastAPI endpoints, and even test stubs. A concrete session might begin with a prompt like:
“Architect a resilient Python microservice for real time pharmacy exception monitoring. Inputs: Snowflake queries on RxExceptions table with columns claim_id, ncpdp_reject_code, queue_age_minutes, tech_id, resolution_status. Outputs: Paginated JSON via FastAPI GET /api/exceptions?site_npi=1234567890&age_bucket=0-5min,5-15min,15plus, with server side filtering and sorting. Constraints: Sub 500ms response on 50k rows, Azure Functions deployment, structlog at INFO for audit trail, Pydantic validation, retry on Snowflake transient 403s up to 3x with exponential backoff.”
The model responds with a modular structure: data layer with connection pooling, service layer for aggregation logic, API layer with OpenAPI docs, and a deployment script. I then iterate on specifics like adding Snowflake clustering key recommendations, refining NCPDP code mappings, or tuning retry policies.
This method aligns perfectly with my background managing multiple Scrum teams and transitioning to product management, where rapid prototyping accelerates roadmap validation without constantly pulling core engineers off platform work.

Prompting as architecture: using language with surgical precision
Prompt engineering is not a gimmick; it is the skill of communicating intent so precisely that the AI has almost no room to misinterpret. My prompting has evolved from “hey, can you refactor this?” to something much closer to a design brief.
Over time I have settled into a template that works across projects.
Role and environment priming
I start by giving the model a job description and context.
That pushes the model into the right mental neighborhood: prior auth loops, claim rejections, audit trails, and healthcare grade logging.
Artifact-rich context injection
I paste real artifacts instead of paraphrasing:
Table schemas.
Sample rows.
Snippets of error logs.
Excerpts from existing APIs or config.
Chained reasoning and structured output
I ask for a specific sequence instead of “just write the code”:
Step 1: Propose a high level architecture and return it in Mermaid.
Step 2: List dependencies and how they are installed.
Step 3: Implement the data access layer.
Step 4: Implement the service / business logic layer.
Step 5: Implement the FastAPI endpoints and Pydantic models.
Step 6: Write three or more tests for the most critical pieces.
This keeps the conversation in design mode long enough to catch issues before code appears.
Metric driven precision
I deliberately avoid vague adjectives and use metrics instead:
“Fast” becomes “P99 latency under 300 ms.”
“Reliable” becomes “graceful handling of Snowflake timeouts with retries and a dead letter queue.”
“Secure” becomes “JWT validation, PII redaction, and no secrets in code.”
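A P99 target is only actionable if everyone computes it the same way. A nearest-rank percentile, the simplest defensible definition, can be sketched as follows; the simulated latencies are invented.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# 100 simulated latencies in ms: 99 fast requests and one slow outlier.
latencies = [50] * 99 + [800]
p99 = percentile(latencies, 99)
```

Note that P99 here ignores the single outlier entirely, while P100 (the max) would not; pinning the definition in the prompt avoids that ambiguity in the generated code.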
Granular, delta based iteration
When I critique the output, I stay extremely specific:
“Good architecture, but replace the ORM with the official Snowflake connector. Add connection pooling. Remove unused imports. Regenerate only ingestion.py and tests/test_ingestion.py.”
I treat the model like a talented but inexperienced colleague: precise feedback, small iterations, continuous checks.
Guardrails against hallucination
I explicitly instruct it to stay grounded:
“Use only the schemas and context provided. If you need something not specified, mark it as TODO for a human to decide. Do not invent new tables or external services.”
Prompting at this level of detail feels like architecture and product design more than it feels like “asking a chatbot for code,” and that is exactly why it works.
Technical and domain literacy as multipliers
Vibe coding democratizes building, but it absolutely does not eliminate the need for technical and domain fundamentals. In fact, the better I understand the system, the more useful the AI becomes.
Here is how that plays out. It helps me spot nonsensical outputs, such as a suggested workflow that would clearly violate a regulatory constraint, and it lets me embed reality into my prompts by acknowledging that not all data is real time and that some events are batched nightly.
Knowing the basics of system design also lets me guide the AI instead of being dragged around by it: I can choose between event driven and batch processing, decide when to cache versus query live, use pub/sub for notifications instead of direct API calls where appropriate, and handle multi tenant data in context without breaking isolation. With that in mind, my prompts can say “use an event driven architecture with Azure Service Bus for claim events and a query optimized read model in Snowflake” instead of hand waving “make it scalable.”
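To illustrate the event-driven choice without Azure dependencies, here is an in-memory stand-in: producers publish claim events and a consumer drains them, rather than anything polling the source system. The EventBus class and event shapes are hypothetical, a sketch of the pattern rather than the Service Bus API.

```python
import queue

class EventBus:
    """In-memory stand-in for a broker like Azure Service Bus: producers publish
    claim events, consumers drain them, and nothing polls the source system."""

    def __init__(self):
        self._q = queue.Queue()

    def publish(self, event):
        self._q.put(event)

    def drain(self, handler):
        """Deliver queued events to handler; return how many were processed."""
        handled = 0
        while True:
            try:
                event = self._q.get_nowait()
            except queue.Empty:
                return handled
            handler(event)
            handled += 1

bus = EventBus()
bus.publish({"claim_id": 1, "event": "adjudicated"})
bus.publish({"claim_id": 2, "event": "rejected"})

seen = []
count = bus.drain(seen.append)
```

The design point is the shape of the interaction: consumers react to published events instead of hammering an API on a timer, which is what keeps 50k events/day from becoming 50k polls/hour.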
I do not need to be a rockstar engineer to benefit; I just need enough coding fundamentals to read what the model wrote, recognize smells like duplicate code, missing validations, or weak error handling, make small deliberate edits, and write or at least understand tests. That is well within reach for someone who has worked closely with engineering for years and has hands on experience with Python, SQL, and cloud tooling.
Because I can reason about architecture, data, and workflows, I can design test cases that mimic real edge conditions, compare outputs to known good baselines, and say “this looks plausible but will fail in production because X.” That validation step is what turns vibe coding from “vibes” into an actual engineering practice.
When those ingredients come together, vibe coding is absolutely a force multiplier, and the key is recognizing that the “force” is not magic; it is leverage. AI handles the tedious parts like wiring endpoints, writing serializers, generating boilerplate, scaffolding modules, and building test shells, while I stay focused on what the system should do, why it matters, how it fits into the overall architecture, and whether the outputs are safe and correct.
For example, building a reasonably robust authentication and role-based access control layer might drop from days of work to a fraction of that time, as long as I still review the critical paths and security decisions. With the same team size, I can spin up more prototypes to validate ideas, modernize more legacy modules, and create more internal tools for operations and support, all of which increases our surface area of value without immediately increasing headcount. The model also lets me explore multiple options quickly—three different query strategies, two alternative API designs, a couple of UI flows—and because generating options is cheap, I am free to test more and converge faster on a good solution.
None of this works if I treat AI output as plug and play, though; the code still needs tests, review, security checks, performance validation, and compliance scrutiny, especially in healthcare. When I do that, vibe coding acts like a multiplier on my existing strengths, and when I do not, it simply amplifies risks.

How organizations can leverage vibe coding
Scaling this up from “how I work” to “how an org works” is where tools like Claude Code really come in. The core idea is to embed AI assistance as a first class citizen of daily development, not as a novelty.
Making AI coding assistance part of the toolchain
Claude Code is built to sit inside the developer’s environment: a CLI and editor integrated assistant that can read the repo, understand the project structure, and operate on code directly.
That matters because:
• Developers do not have to copy paste between tools.
• The assistant sees real context: files, tests, configs, logs.
• Changes can be applied, tested, and committed from the same place.
Organizations can:
• Standardize on one or two assistants that meet their security and privacy requirements.
• Configure access so that the assistant can see relevant code but not sensitive data it should not touch.
• Integrate the assistant into their onboarding docs, internal wikis, and “How we work” guides.
Codifying vibe coding patterns as internal practice
Instead of letting everyone reinvent their own prompting style, organizations can capture and share what works.
For example:
• Prompt templates for common tasks:
  • Refactoring a legacy module.
  • Adding a new endpoint in the house style.
  • Writing tests for an untested component.
  • Generating data migrations.
• Internal guidance for what to include in prompts:
  • Links or snippets from architecture docs.
  • Schema definitions and example data.
  • Existing patterns or libraries that must be reused.
  • Constraints around performance, security, or compliance.
• Checklists for validating AI generated code:
  • Did we add or update tests?
  • Does the code follow our style and patterns?
  • Are error cases handled?
  • Are there any new security exposures?
In practice, this looks like a short “AI coding playbook” that lives next to your engineering handbook.
Weaving AI into daily ceremonies
There are small, pragmatic ways to bake AI into the cadence of work:
Standups and planning
Use AI to quickly assess feasibility of a spike or POC before committing a full sprint.
Generate draft implementation plans for complex items that engineers can critique.
Triage and incident response
Use the assistant to trace error messages through the codebase, identify the likely root cause, and propose a patch.
Generate runbook steps or remediation scripts based on real logs.
Code review
Ask the assistant for a summary of a large pull request, highlighting risky changes.
Use it to suggest missing tests or potential edge cases.
Documentation
Generate or update docs automatically as part of refactors.
Summarize complex modules for new team members.
Flattening the onboarding curve
Large, legacy codebases and data estates are notoriously hard to learn. Claude Code type tools can:
Answer questions like “where is the main entry point for this workflow” or “which services call this API.”
Guide new developers to the relevant files, tests, and docs.
Generate small modifications for practice that are actually consistent with the existing structure.
That can shave weeks off onboarding time, especially in complex domains like healthcare or financial systems.
ROI of the Vibe Approach
When implemented thoughtfully, AI coding assistance and vibe coding can meaningfully change the economics and experience of software development by driving productivity, reducing costs, and simplifying the developer experience. Research and real world practice show that developers complete more tasks per unit time, commit more code, compile and test more frequently, and report a better overall experience, especially on boilerplate and repetitive work. In practice, that looks like speeding up mundane tasks and freeing senior engineers to focus on higher-level design, allowing junior developers to contribute more quickly with good guardrails, and reducing the cognitive load of navigating large codebases.
Studies and case work also point to material reductions in project cost, often in the 30–40 percent range for certain types of work when AI is integrated into the development process, along with compressed timelines on the order of multiple weeks for medium sized projects and measurable improvements in bug density and test coverage when AI is explicitly tasked with generating tests and checking for edge cases. From a leadership perspective, that translates into doing more with the same budget, freeing up capacity to tackle modernization work that was previously too expensive, and being able to say yes to more “nice to have” but high value internal tools that move the needle for operations.
A subtle but important benefit is simplification of the day to day developer experience. Developers can work in fewer tools and fewer contexts, with the assistant acting as a navigator, writer, and reviewer inside the same environment instead of forcing constant context switching. Documentation, refactors, and tests become easier to keep aligned with the code, and that reduction in friction is hard to quantify but very easy to feel on a daily basis.
At the same time, there is a known productivity paradox where developers feel faster because the AI produces code quickly, but organizational throughput does not improve, or sometimes even slows down, because code review becomes a bottleneck, security issues and quality problems create rework, and teams end up shipping more features but not necessarily more value.
The fix is not to use less AI, but to tighten validation and review, align AI assisted work with clear business outcomes, and make sure the rest of the delivery pipeline—testing, security, operations—can scale with the increased volume and velocity of code. When that happens, vibe coding and tools like Claude Code stop being a novelty and become part of the operating model.
Putting it all together, my own workflow has evolved into a continuous, end to end process. I start with discovery: a one page problem statement naming the persona, pain, constraints, and success metrics, plus an architecture sketch, data schemas, and an explicit list of risks and non goals. From there, I move into an initial vibe coding session using a structured prompt template, always focusing on architecture first and code second, and tackling one slice of the system at a time—data layer, then API, then tests.
Once I have a first pass, I shift into validation and hardening, where I run tests and add missing ones, review the code like it came from a junior colleague, check logging, error handling, and compliance concerns, and deploy to a dev environment with realistic data to see how it behaves.
After that, I integrate the work into the team by documenting what worked, folding any reusable patterns back into the codebase (for example, a standard retry decorator or logging pattern), and sharing prompt examples and lessons learned so others can benefit. Finally, I treat this as a loop of continuous improvement: after a few cycles, I refine both the prompts and the governance, and I deliberately iterate on where AI is most helpful and where it should not be used. That is how vibe coding stops being “AI that writes code” and becomes a force multiplier for how I think, design, and deliver solutions, and how a whole organization can leverage the same pattern to move faster, spend smarter, and simplify how software gets built.
My Current Vibe Coding Workflow

Pre-Prompt
Notebook with problem, schemas, wireframes.
Initial Prompt
Role + context + structure.
Review Cycle
Architecture yes/no, then code per module.
Harden
Tests, logs, deploy script.
Retrospective
What prompt tweaks for next time.
This scales from prototypes to backlog items, blending my Scrum Master discipline with product instincts.
References
Cloudflare. "What is vibe coding? | AI coding." https://www.cloudflare.com/learning/ai/ai-vibe-coding/
Coursera. "6 Prompt Engineering Examples." https://www.coursera.org/articles/prompt-engineering-examples
Google Cloud. "Vibe Coding Explained: Tools and Guides." https://cloud.google.com/discover/what-is-vibe-coding
Karpathy, A. X post. https://x.com/karpathy/status/1886192184808149383
Machine Learning Mastery. "The Role of Domain Knowledge in Machine Learning." https://machinelearningmastery.com/role-domain-knowledge-machine-learning/
MIT Technology Review. "What is vibe coding, exactly?" https://www.technologyreview.com/2025/04/16/1115135/what-is-vibe-coding-exactly/
Rand Group. "Top ten practical AI prompt engineering techniques to get better results." https://www.randgroup.com/insights/services/ai-machine-learning/top-ten-practical-ai-prompt-engineering-techniques-to-get-better-results/
Replit Blog. "What is Vibe Coding?" https://blog.replit.com/what-is-vibe-coding
Schneider Electric. "How important is domain knowledge for AI projects?" https://www.se.com/ww/en/work/campaign/digital-transformation/internet-of-things/how-important-the-domain-knowledge-is-for-ai-projects/
Splunk. "The Role of Prompt Engineering in Useful AI." https://www.splunk.com/en_us/blog/learn/prompt-engineering.html
Wikipedia. "Vibe coding." https://en.wikipedia.org/wiki/Vibe_coding
Crumbley, W. "How Important Is Domain Knowledge for an AI/ML Team?" LinkedIn. https://www.linkedin.com/pulse/how-important-domain-knowledge-aiml-team-will-crumbley-qnhoe





