Using Prompt Templates and Parameters
:::tip
Having problems? Don't worry. Reach out on Discord and we will help you out.
:::
In this part of the tutorial series, we'll explore how to use prompt templates and parameters with llm-chain. Prompt templates let you define prompts with placeholders, and parameters supply the values that fill those placeholders at run time.
Here's a simple Rust program demonstrating how to use prompt templates and parameters:
```rust
use llm_chain::{executor, parameters, prompt, step::Step};

#[tokio::main(flavor = "current_thread")]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a new ChatGPT executor
    let exec = executor!()?;
    // Create our step containing our prompt template
    let step = Step::for_prompt_template(prompt!(
        "You are a bot for making personalized greetings",
        "Make a personalized greeting tweet for {{text}}" // "text" is the default parameter name, but you can use whatever you want
    ));

    // A greeting for Emil!
    let res = step.run(&parameters!("Emil"), &exec).await?;
    println!("{}", res.to_immediate().await?.as_content());

    // A greeting for you
    let res = step.run(&parameters!("Your Name Here"), &exec).await?;
    println!("{}", res.to_immediate().await?.as_content());

    Ok(())
}
```
Let's break down the different parts of the code:
- We start by importing the necessary items from `llm_chain`, including the macros and structs required for our program.
- The async `main` function is defined, using Tokio as the runtime.
- We create a new `Executor` with the default settings.
- A `Step` is created containing our prompt template, with a placeholder (`{{text}}`) that will be replaced with a specific value later.
- We create a `Parameters` object with the value "Emil" to fill the placeholder in the prompt template.
- We execute the `Step` with the provided parameters and store the result in `res`, then print the response to the console.
- We create another `Parameters` object, this time with the value "Your Name Here", to fill the placeholder.
- We execute the `Step` again with the new parameters, store the result in `res`, and print the response to the console.
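As the comment in the example notes, `text` is just the default parameter name. Below is a minimal sketch of the same step using a custom placeholder name; it assumes the key-value form of the `parameters!` macro (`parameters!("key" => "value")`), and the `{{name}}` placeholder is chosen purely for illustration:

```rust
use llm_chain::{executor, parameters, prompt, step::Step};

#[tokio::main(flavor = "current_thread")]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let exec = executor!()?;
    // The template uses a custom {{name}} placeholder instead of the default {{text}}
    let step = Step::for_prompt_template(prompt!(
        "You are a bot for making personalized greetings",
        "Make a personalized greeting tweet for {{name}}"
    ));
    // Assumption: parameters! accepts "key" => "value" pairs for named placeholders
    let res = step.run(&parameters!("name" => "Emil"), &exec).await?;
    println!("{}", res.to_immediate().await?.as_content());
    Ok(())
}
```

Named parameters become useful once a template has more than one placeholder, since each value is matched by key rather than by position.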
In the next tutorial, we will combine multiple LLM invocations to solve more complicated problems.