How to Use a Prompt Declaration Language (PDL) with LangChain — and Why It’s a Game Changer for Prompt Engineering

Introduction: The Problem With “Loose” Prompts

Let’s be honest — most of us start our AI journey by typing something like:

“Hey GPT, explain black holes to me like I’m five!”

That’s great for quick fun, but when you’re building serious AI applications, prompts become messy fast.
You have:

  • Hardcoded text inside your code
  • No way to reuse prompts
  • Different team members editing things randomly
  • Chaos when you switch models or contexts

So what’s the solution?
Structure your prompts — just like we structure our code.

And that’s where a Prompt Declaration Language (PDL) comes in.


What Is a Prompt Declaration Language (PDL)?

A Prompt Declaration Language is a fancy term for a structured way to describe prompts — like a “blueprint” for how an AI should behave, what inputs it expects, and how it should respond.

If you’ve ever written YAML or JSON configs, this will feel familiar.

Think of it like a recipe for AI behavior:

prompt:
  name: explain_topic
  model: gpt-5
  description: Explain a topic in a simple way.
  inputs:
    topic: string
    audience: [child, student, expert]
  system: |
    You are a kind teacher.
    Explain things clearly and warmly.
  user: |
    Explain "{{topic}}" to a "{{audience}}" in simple language.

Boom — that’s your PDL!
Now your prompt is reusable, version-controlled, and readable by both humans and machines.


Why YAML?

We use YAML because it’s clean and friendly.
It’s like writing your grocery list — easy to read, easy to edit.

  • ✅ Human-readable
  • ✅ Works great with Python (see the quick example below)
  • ✅ Plays nicely with tools like LangChain, DSPy, and Semantic Kernel
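
For instance, here is what "works great with Python" looks like in practice. A minimal sketch, assuming the blueprint above is saved as explain_topic.yaml (the same file we create in Step 2 below):

import yaml

# The PDL is just data: parse it straight into a Python dict
with open("explain_topic.yaml", "r") as f:
    pdl = yaml.safe_load(f)["prompt"]

print(pdl["name"])    # explain_topic
print(pdl["inputs"])  # {'topic': 'string', 'audience': ['child', 'student', 'expert']}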

Using PDL in LangChain (with Python)

Let’s now bring our YAML prompt to life using LangChain.

Step 1: Install Dependencies

Make sure you have the following:

pip install langchain langchain-openai pyyaml

Step 2: Create Your YAML File

Save this as explain_topic.yaml:

prompt:
  name: explain_topic
  model: gpt-4o-mini
  temperature: 0.4
  description: Explain a topic in a simple way.

  inputs:
    topic: string
    audience: [child, student, expert]

  system: |
    You are a kind and patient teacher.
    Always use fun examples and keep it simple.

  user: |
    Explain "{{topic}}" to a "{{audience}}" in a friendly and easy-to-understand way.

  output:
    format: markdown

Step 3: Load and Run It in LangChain

Here’s the Python magic:

import yaml
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Load the YAML prompt
with open("explain_topic.yaml", "r") as f:
    config = yaml.safe_load(f)["prompt"]

# Prepare the model
llm = ChatOpenAI(model=config["model"], temperature=config["temperature"])

# The PDL uses {{variable}} placeholders; LangChain's default f-string templates
# expect single braces, so convert before building the template
def to_langchain_template(text: str) -> str:
    return text.replace("{{", "{").replace("}}", "}")

# Create the LangChain prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", to_langchain_template(config["system"])),
    ("user", to_langchain_template(config["user"]))
])

# Fill in the dynamic variables
variables = {"topic": "volcanoes", "audience": "child"}

# Combine the prompt and model into a runnable chain
chain = prompt | llm

# Run the chain
result = chain.invoke(variables)

print("🧠 AI says:\n")
print(result.content)
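
By the way, the inputs section in the YAML (topic as a string, audience limited to child/student/expert) is purely declarative; LangChain does not enforce it for you. A few lines of plain Python can, though. A minimal sketch, assuming inputs values are either the string "string" or a list of allowed choices, as in our examples:

def validate_inputs(config: dict, variables: dict) -> None:
    """Check the supplied variables against the PDL's declared inputs."""
    for name, spec in config["inputs"].items():
        if name not in variables:
            raise ValueError(f"Missing input: {name}")
        # A list in the spec is treated as an enum of allowed values
        if isinstance(spec, list) and variables[name] not in spec:
            raise ValueError(f"{name!r} must be one of {spec}, got {variables[name]!r}")

validate_inputs(config, variables)  # fails fast, before any tokens are spent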

Step 4: Output (Example)

When you run this, the model might say:

“A volcano is like a big mountain that sometimes bursts open!
Inside Earth, there’s hot melted rock called magma. When the pressure builds up,
the magma escapes as lava. It’s super hot — but also super cool!”


Why This Approach Rocks

  • Reusable: you can use the same YAML in many apps or models
  • Readable: anyone (even non-coders) can understand it
  • Versionable: keep prompt versions in Git easily
  • Composable: chain multiple YAML prompts together
  • Flexible: swap models or temperatures with no code changes (see the example below)

This makes your prompt design process feel less like trial-and-error and more like engineering.
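
For example, the "Flexible" point above is literally a two-line YAML edit. To try a different model (the model name below is just an illustration), you change explain_topic.yaml and leave the Python from Step 3 untouched:

prompt:
  name: explain_topic
  model: gpt-4o        # was gpt-4o-mini
  temperature: 0.2     # was 0.4
  # ...the rest of the file stays exactly as in Step 2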


Bonus: Chaining Multiple Prompts

Want to build a “teacher + quiz” AI?
Here’s how to chain two YAML prompts:

# Step 1: Explain the topic
explanation = chain.invoke({"topic": "gravity", "audience": "child"}).content

# Step 2: Quiz time!
quiz_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a fun science teacher."),
    ("user", "Create a 3-question quiz about this topic: {{text}}")
])

quiz_chain = quiz_prompt | llm
quiz = quiz_chain.invoke({"text": explanation}).content

print("\n Quiz:\n", quiz)

And voilà — your AI just became a full learning companion.
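
If you’d rather keep the quiz prompt declarative too, it can live in its own file, like the quiz.yaml listed in the Pro Tip below. A minimal sketch, following the same schema as explain_topic.yaml (wire it up exactly as in Step 3, including the placeholder conversion):

prompt:
  name: quiz
  model: gpt-4o-mini
  temperature: 0.7
  description: Turn an explanation into a short quiz.

  inputs:
    text: string

  system: |
    You are a fun science teacher.

  user: |
    Create a 3-question quiz about this topic: {{text}}

  output:
    format: markdown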


Conclusion

A Prompt Declaration Language (PDL) is the missing link between messy one-off prompts and scalable, maintainable AI systems.

By combining YAML + LangChain, you can:

  • Treat prompts like first-class citizens,
  • Keep your AI behavior consistent,
  • And build reusable “prompt libraries” that scale beautifully.

So the next time you’re writing prompts in code, pause —
and ask yourself: should this prompt live in a YAML file?


Pro Tip:

Try organizing your prompts in a folder like this:

prompts/
 ├── explain_topic.yaml
 ├── summarize.yaml
 └── quiz.yaml

Then load them dynamically — boom, instant Prompt Library.
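
A minimal sketch of that dynamic loading, assuming every file in prompts/ follows the same prompt: schema used throughout this post:

from pathlib import Path

import yaml

def load_prompt_library(folder: str = "prompts") -> dict:
    """Load every .yaml file in the folder, keyed by its declared prompt name."""
    library = {}
    for path in sorted(Path(folder).glob("*.yaml")):
        config = yaml.safe_load(path.read_text())["prompt"]
        library[config["name"]] = config
    return library

# Usage: pick a prompt by name and feed it into the Step 3 code
# prompts = load_prompt_library()
# config = prompts["explain_topic"]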


💬 “Prompts are the new source code.
PDL is the language we’ll write them in.”
