LangChain Handbook (JS/TS Edition) 06 Models: Prompts


LangChain provides several utilities to help manage prompts for language models, including chat models.

Prompt Templates

A PromptTemplate allows you to make use of templating to generate a prompt. This is useful when you want to use the same prompt outline in multiple places, but with certain values changed. Prompt templates are supported for both LLMs and chat models, as shown below:

```typescript
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  PromptTemplate,
  SystemMessagePromptTemplate,
} from "langchain/prompts";

export const run = async () => {
  // A `PromptTemplate` consists of a template string and a list of input variables.
  const template = "What is a good name for a company that makes {product}?";
  const promptA = new PromptTemplate({ template, inputVariables: ["product"] });

  // We can use the `format` method to format the template with the given input values.
  const responseA = await promptA.format({ product: "colorful socks" });
  console.log({ responseA });
  /*
  {
    responseA: 'What is a good name for a company that makes colorful socks?'
  }
  */

  // We can also use the `fromTemplate` method to create a `PromptTemplate` object.
  const promptB = PromptTemplate.fromTemplate(
    "What is a good name for a company that makes {product}?"
  );
  const responseB = await promptB.format({ product: "colorful socks" });
  console.log({ responseB });
  /*
  {
    responseB: 'What is a good name for a company that makes colorful socks?'
  }
  */

  // For chat models, we provide a `ChatPromptTemplate` class that can be used to format chat prompts.
  const chatPrompt = ChatPromptTemplate.fromPromptMessages([
    SystemMessagePromptTemplate.fromTemplate(
      "You are a helpful assistant that translates {input_language} to {output_language}."
    ),
    HumanMessagePromptTemplate.fromTemplate("{text}"),
  ]);

  // The result can be formatted as a string using the `format` method.
  const responseC = await chatPrompt.format({
    input_language: "English",
    output_language: "French",
    text: "I love programming.",
  });
  console.log({ responseC });
  /*
  {
    responseC: '[{"text":"You are a helpful assistant that translates English to French."},{"text":"I love programming."}]'
  }
  */

  // The result can also be formatted as a list of `ChatMessage` objects by returning
  // a `PromptValue` object and calling the `toChatMessages` method. More on this below.
  const responseD = await chatPrompt.formatPromptValue({
    input_language: "English",
    output_language: "French",
    text: "I love programming.",
  });
  const messages = responseD.toChatMessages();
  console.log({ messages });
  /*
  {
    messages: [
      SystemChatMessage {
        text: 'You are a helpful assistant that translates English to French.'
      },
      HumanChatMessage { text: 'I love programming.' }
    ]
  }
  */
};
```

API Reference:

  • ChatPromptTemplate from langchain/prompts
  • HumanMessagePromptTemplate from langchain/prompts
  • PromptTemplate from langchain/prompts
  • SystemMessagePromptTemplate from langchain/prompts

Additional Functionality: Prompt Templates

We offer a number of extra features for prompt templates, as shown below:

Prompt Values

A PromptValue is an object returned by the formatPromptValue method of a PromptTemplate. It can be converted to a string or a list of ChatMessage objects.

```typescript
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  PromptTemplate,
  SystemMessagePromptTemplate,
} from "langchain/prompts";

export const run = async () => {
  const template = "What is a good name for a company that makes {product}?";
  const promptA = new PromptTemplate({ template, inputVariables: ["product"] });

  // The `formatPromptValue` method returns a `PromptValue` object that can be used
  // to format the prompt as a string or a list of `ChatMessage` objects.
  const responseA = await promptA.formatPromptValue({
    product: "colorful socks",
  });
  const responseAString = responseA.toString();
  console.log({ responseAString });
  /*
  {
    responseAString: 'What is a good name for a company that makes colorful socks?'
  }
  */

  const responseAMessages = responseA.toChatMessages();
  console.log({ responseAMessages });
  /*
  {
    responseAMessages: [
      HumanChatMessage {
        text: 'What is a good name for a company that makes colorful socks?'
      }
    ]
  }
  */

  const chatPrompt = ChatPromptTemplate.fromPromptMessages([
    SystemMessagePromptTemplate.fromTemplate(
      "You are a helpful assistant that translates {input_language} to {output_language}."
    ),
    HumanMessagePromptTemplate.fromTemplate("{text}"),
  ]);

  // `formatPromptValue` also works with `ChatPromptTemplate`.
  const responseB = await chatPrompt.formatPromptValue({
    input_language: "English",
    output_language: "French",
    text: "I love programming.",
  });
  const responseBString = responseB.toString();
  console.log({ responseBString });
  /*
  {
    responseBString: '[{"text":"You are a helpful assistant that translates English to French."},{"text":"I love programming."}]'
  }
  */

  const responseBMessages = responseB.toChatMessages();
  console.log({ responseBMessages });
  /*
  {
    responseBMessages: [
      SystemChatMessage {
        text: 'You are a helpful assistant that translates English to French.'
      },
      HumanChatMessage { text: 'I love programming.' }
    ]
  }
  */
};
```

API Reference:

  • ChatPromptTemplate from langchain/prompts
  • HumanMessagePromptTemplate from langchain/prompts
  • PromptTemplate from langchain/prompts
  • SystemMessagePromptTemplate from langchain/prompts

Partial Values

Like other methods, it can make sense to "partial" a prompt template - e.g. pass in a subset of the required values, so as to create a new prompt template that expects only the remaining subset of values.

LangChain supports this in two ways:

  1. Partial formatting with string values.
  2. Partial formatting with functions that return string values.

These two different ways support different use cases. In the examples below, we go over the motivations for both use cases as well as how to do it in LangChain.

```typescript
import { PromptTemplate } from "langchain/prompts";

export const run = async () => {
  // The `partial` method returns a new `PromptTemplate` object that can be used
  // to format the prompt with only some of the input variables.
  const promptA = new PromptTemplate({
    template: "{foo}{bar}",
    inputVariables: ["foo", "bar"],
  });
  const partialPromptA = await promptA.partial({ foo: "foo" });
  console.log(await partialPromptA.format({ bar: "bar" }));
  // foobar

  // You can also explicitly specify the partial variables when creating the `PromptTemplate` object.
  const promptB = new PromptTemplate({
    template: "{foo}{bar}",
    inputVariables: ["foo"],
    partialVariables: { bar: "bar" },
  });
  console.log(await promptB.format({ foo: "foo" }));
  // foobar

  // You can also use partial formatting with function inputs instead of string inputs.
  const promptC = new PromptTemplate({
    template: "Tell me a {adjective} joke about the day {date}",
    inputVariables: ["adjective", "date"],
  });
  const partialPromptC = await promptC.partial({
    date: () => new Date().toLocaleDateString(),
  });
  console.log(await partialPromptC.format({ adjective: "funny" }));
  // Tell me a funny joke about the day 3/22/2023

  const promptD = new PromptTemplate({
    template: "Tell me a {adjective} joke about the day {date}",
    inputVariables: ["adjective"],
    partialVariables: { date: () => new Date().toLocaleDateString() },
  });
  console.log(await promptD.format({ adjective: "funny" }));
  // Tell me a funny joke about the day 3/22/2023
};
```

API Reference:

  • PromptTemplate from langchain/prompts

Few-Shot Prompt Templates

A few-shot prompt template is a prompt template you can build with examples.

```typescript
import { FewShotPromptTemplate, PromptTemplate } from "langchain/prompts";

export const run = async () => {
  // First, create a list of few-shot examples.
  const examples = [
    { word: "happy", antonym: "sad" },
    { word: "tall", antonym: "short" },
  ];

  // Next, we specify the template to format the examples we have provided.
  const exampleFormatterTemplate = "Word: {word}\nAntonym: {antonym}\n";
  const examplePrompt = new PromptTemplate({
    inputVariables: ["word", "antonym"],
    template: exampleFormatterTemplate,
  });

  // Finally, we create the `FewShotPromptTemplate`.
  const fewShotPrompt = new FewShotPromptTemplate({
    /* These are the examples we want to insert into the prompt. */
    examples,
    /* This is how we want to format the examples when we insert them into the prompt. */
    examplePrompt,
    /* The prefix is some text that goes before the examples in the prompt. Usually, this consists of instructions. */
    prefix: "Give the antonym of every input",
    /* The suffix is some text that goes after the examples in the prompt. Usually, this is where the user input will go. */
    suffix: "Word: {input}\nAntonym:",
    /* The input variables are the variables that the overall prompt expects. */
    inputVariables: ["input"],
    /* The exampleSeparator is the string we will use to join the prefix, examples, and suffix together. */
    exampleSeparator: "\n\n",
    /* The template format is the formatting method to use for the template. Should usually be f-string. */
    templateFormat: "f-string",
  });

  // We can now generate a prompt using the `format` method.
  console.log(await fewShotPrompt.format({ input: "big" }));
  /*
  Give the antonym of every input

  Word: happy
  Antonym: sad

  Word: tall
  Antonym: short

  Word: big
  Antonym:
  */
};
```

API Reference:

  • FewShotPromptTemplate from langchain/prompts
  • PromptTemplate from langchain/prompts

Output Parsers

INFO

Conceptual Guide

Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.

Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:

  • getFormatInstructions(): str A method which returns a string containing instructions for how the output of a language model should be formatted.
  • parse(raw: string): any A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.

And then one optional one:

  • parseWithPrompt(text: string, prompt: BasePromptValue): any A method which takes in a string (assumed to be the response from a language model) and a formatted prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
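As a minimal, self-contained sketch of this interface (the `BooleanOutputParser` class here is a hypothetical illustration, not a parser shipped with LangChain), a parser that coerces a model's yes/no answer into a boolean might look like this:

```typescript
// A hypothetical parser implementing the two required methods described above:
// it instructs the model to answer YES or NO, then parses that into a boolean.
class BooleanOutputParser {
  // Returns instructions that get embedded into the prompt sent to the model.
  getFormatInstructions(): string {
    return 'Answer with exactly one word: "YES" or "NO".';
  }

  // Takes the raw model response and parses it into a structured value.
  parse(raw: string): boolean {
    const text = raw.trim().toUpperCase();
    if (text.startsWith("YES")) return true;
    if (text.startsWith("NO")) return false;
    throw new Error(`Could not parse boolean from: ${raw}`);
  }
}

const booleanParser = new BooleanOutputParser();
console.log(booleanParser.getFormatInstructions());
console.log(booleanParser.parse("YES, it is.")); // true
```

The format instructions would typically be interpolated into a PromptTemplate as a partial variable, exactly as the built-in parsers in the examples that follow do.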

Below we go over some examples of output parsers.

Structured Output Parser

This output parser can be used when you want to return multiple fields.

````typescript
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { StructuredOutputParser } from "langchain/output_parsers";

// With a `StructuredOutputParser` we can define a schema for the output.
const parser = StructuredOutputParser.fromNamesAndDescriptions({
  answer: "answer to the user's question",
  source: "source used to answer the user's question, should be a website.",
});

const formatInstructions = parser.getFormatInstructions();

const prompt = new PromptTemplate({
  template:
    "Answer the users question as best as possible.\n{format_instructions}\n{question}",
  inputVariables: ["question"],
  partialVariables: { format_instructions: formatInstructions },
});

const model = new OpenAI({ temperature: 0 });

const input = await prompt.format({
  question: "What is the capital of France?",
});
const response = await model.call(input);

console.log(input);
/*
Answer the users question as best as possible.
You must format your output as a JSON value that adheres to a given "JSON Schema" instance.

"JSON Schema" is a declarative language that allows you to annotate and validate JSON documents.

For example, the example "JSON Schema" instance {{"properties": {{"foo": {{"description": "a list of test words", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
would match an object with one required property, "foo". The "type" property specifies "foo" must be an "array", and the "description" property semantically describes it as "a list of test words". The items within "foo" must be strings.
Thus, the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of this example "JSON Schema". The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Your output will be parsed and type-checked according to the provided schema instance, so make sure all fields in your output match the schema exactly and there are no trailing commas!

Here is the JSON Schema instance your output must adhere to. Include the enclosing markdown codeblock:
```json
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website."}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
```

What is the capital of France?
*/

console.log(response);
/*
{"answer": "Paris", "source": "https://en.wikipedia.org/wiki/Paris"}
*/

console.log(await parser.parse(response));
// { answer: 'Paris', source: 'https://en.wikipedia.org/wiki/Paris' }
````

API Reference:

  • OpenAI from langchain/llms/openai
  • PromptTemplate from langchain/prompts
  • StructuredOutputParser from langchain/output_parsers

Structured Output Parser with Zod Schema

This output parser can also be used when you want to define the output schema using Zod, a TypeScript validation library. The Zod schema passed in needs to be parseable from a JSON string, so e.g. z.date() is not allowed, but z.coerce.date() is.

````typescript
import { z } from "zod";
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { StructuredOutputParser } from "langchain/output_parsers";

// We can use zod to define a schema for the output using the `fromZodSchema` method of `StructuredOutputParser`.
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z.string().describe("answer to the user's question"),
    sources: z
      .array(z.string())
      .describe("sources used to answer the question, should be websites."),
  })
);

const formatInstructions = parser.getFormatInstructions();

const prompt = new PromptTemplate({
  template:
    "Answer the users question as best as possible.\n{format_instructions}\n{question}",
  inputVariables: ["question"],
  partialVariables: { format_instructions: formatInstructions },
});

const model = new OpenAI({ temperature: 0 });

const input = await prompt.format({
  question: "What is the capital of France?",
});
const response = await model.call(input);

console.log(input);
/*
Answer the users question as best as possible.
The output should be formatted as a JSON instance that conforms to the JSON schema below.

As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Here is the output schema:
```
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"sources":{"type":"array","items":{"type":"string"},"description":"sources used to answer the question, should be websites."}},"required":["answer","sources"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
```

What is the capital of France?
*/

console.log(response);
/*
{"answer": "Paris", "sources": ["https://en.wikipedia.org/wiki/Paris"]}
*/

console.log(await parser.parse(response));
/*
{ answer: 'Paris', sources: [ 'https://en.wikipedia.org/wiki/Paris' ] }
*/
````

API Reference:

  • OpenAI from langchain/llms/openai
  • PromptTemplate from langchain/prompts
  • StructuredOutputParser from langchain/output_parsers

Output Fixing Parser

This output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors.

````typescript
import { z } from "zod";
import { ChatOpenAI } from "langchain/chat_models/openai";
import {
  StructuredOutputParser,
  OutputFixingParser,
} from "langchain/output_parsers";

export const run = async () => {
  const parser = StructuredOutputParser.fromZodSchema(
    z.object({
      answer: z.string().describe("answer to the user's question"),
      sources: z
        .array(z.string())
        .describe("sources used to answer the question, should be websites."),
    })
  );

  /** This is a bad output because sources is a string, not a list */
  const badOutput = `\`\`\`json
  {
    "answer": "foo",
    "sources": "foo.com"
  }
  \`\`\``;

  try {
    await parser.parse(badOutput);
  } catch (e) {
    console.log("Failed to parse bad output: ", e);
    /*
    Failed to parse bad output:  OutputParserException [Error]: Failed to parse. Text: ```json
      {
        "answer": "foo",
        "sources": "foo.com"
      }
      ```. Error: [
      {
        "code": "invalid_type",
        "expected": "array",
        "received": "string",
        "path": [
          "sources"
        ],
        "message": "Expected array, received string"
      }
    ]
    */
  }

  const fixParser = OutputFixingParser.fromLLM(
    new ChatOpenAI({ temperature: 0 }),
    parser
  );
  const output = await fixParser.parse(badOutput);
  console.log("Fixed output: ", output);
  // Fixed output:  { answer: 'foo', sources: [ 'foo.com' ] }
};
````

API Reference:

  • ChatOpenAI from langchain/chat_models/openai
  • StructuredOutputParser from langchain/output_parsers
  • OutputFixingParser from langchain/output_parsers

Comma-separated List Parser

This output parser can be used when you want to return a list of comma-separated items.

```typescript
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { CommaSeparatedListOutputParser } from "langchain/output_parsers";

export const run = async () => {
  // With a `CommaSeparatedListOutputParser`, we can parse a comma separated list.
  const parser = new CommaSeparatedListOutputParser();
  const formatInstructions = parser.getFormatInstructions();

  const prompt = new PromptTemplate({
    template: "List five {subject}.\n{format_instructions}",
    inputVariables: ["subject"],
    partialVariables: { format_instructions: formatInstructions },
  });

  const model = new OpenAI({ temperature: 0 });

  const input = await prompt.format({ subject: "ice cream flavors" });
  const response = await model.call(input);

  console.log(input);
  /*
  List five ice cream flavors.
  Your response should be a list of comma separated values, eg: `foo, bar, baz`
  */

  console.log(response);
  // Vanilla, Chocolate, Strawberry, Mint Chocolate Chip, Cookies and Cream

  console.log(await parser.parse(response));
  /*
  [
    'Vanilla',
    'Chocolate',
    'Strawberry',
    'Mint Chocolate Chip',
    'Cookies and Cream'
  ]
  */
};
```

API Reference:

  • OpenAI from langchain/llms/openai
  • PromptTemplate from langchain/prompts
  • CommaSeparatedListOutputParser from langchain/output_parsers

Custom List Parser

This output parser can be used when you want to return a list of items, with a specific length and separator.

```typescript
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { CustomListOutputParser } from "langchain/output_parsers";

// With a `CustomListOutputParser`, we can parse a list with a specific length and separator.
const parser = new CustomListOutputParser({ length: 3, separator: "\n" });

const formatInstructions = parser.getFormatInstructions();

const prompt = new PromptTemplate({
  template: "Provide a list of {subject}.\n{format_instructions}",
  inputVariables: ["subject"],
  partialVariables: { format_instructions: formatInstructions },
});

const model = new OpenAI({ temperature: 0 });

const input = await prompt.format({
  subject: "great fiction books (book, author)",
});
const response = await model.call(input);

console.log(input);
/*
Provide a list of great fiction books (book, author).
Your response should be a list of 3 items separated by "\n" (eg: `foo\n bar\n baz`)
*/

console.log(response);
/*
The Catcher in the Rye, J.D. Salinger
To Kill a Mockingbird, Harper Lee
The Great Gatsby, F. Scott Fitzgerald
*/

console.log(await parser.parse(response));
/*
[
  'The Catcher in the Rye, J.D. Salinger',
  'To Kill a Mockingbird, Harper Lee',
  'The Great Gatsby, F. Scott Fitzgerald'
]
*/
```

API Reference:

  • OpenAI from langchain/llms/openai
  • PromptTemplate from langchain/prompts
  • CustomListOutputParser from langchain/output_parsers

Combining Output Parsers

Output parsers can be combined using CombiningOutputParser. This output parser takes in a list of output parsers, and will ask for (and parse) a combined output that contains all the fields of all the parsers.

````typescript
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import {
  StructuredOutputParser,
  RegexParser,
  CombiningOutputParser,
} from "langchain/output_parsers";

const answerParser = StructuredOutputParser.fromNamesAndDescriptions({
  answer: "answer to the user's question",
  source: "source used to answer the user's question, should be a website.",
});

const confidenceParser = new RegexParser(
  /Confidence: (A|B|C), Explanation: (.*)/,
  ["confidence", "explanation"],
  "noConfidence"
);

const parser = new CombiningOutputParser(answerParser, confidenceParser);
const formatInstructions = parser.getFormatInstructions();

const prompt = new PromptTemplate({
  template:
    "Answer the users question as best as possible.\n{format_instructions}\n{question}",
  inputVariables: ["question"],
  partialVariables: { format_instructions: formatInstructions },
});

const model = new OpenAI({ temperature: 0 });

const input = await prompt.format({
  question: "What is the capital of France?",
});
const response = await model.call(input);

console.log(input);
/*
Answer the users question as best as possible.
Return the following outputs, each formatted as described below:

Output 1:
The output should be formatted as a JSON instance that conforms to the JSON schema below.

As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Here is the output schema:
```
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website."}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
```

Output 2:
Your response should match the following regex: /Confidence: (A|B|C), Explanation: (.*)/

What is the capital of France?
*/

console.log(response);
/*
Output 1:
{"answer":"Paris","source":"https://www.worldatlas.com/articles/what-is-the-capital-of-france.html"}

Output 2:
Confidence: A, Explanation: The capital of France is Paris.
*/

console.log(await parser.parse(response));
/*
{
  answer: 'Paris',
  source: 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html',
  confidence: 'A',
  explanation: 'The capital of France is Paris.'
}
*/
````

API Reference:

  • OpenAI from langchain/llms/openai
  • PromptTemplate from langchain/prompts
  • StructuredOutputParser from langchain/output_parsers
  • RegexParser from langchain/output_parsers
  • CombiningOutputParser from langchain/output_parsers

Example Selectors

INFO

Conceptual Guide

If you have a large number of examples, you may need to programmatically select which ones to include in the prompt. The ExampleSelector is the class responsible for doing so. The base interface is defined as below.

```typescript
class BaseExampleSelector {
  addExample(example: Example): Promise<void | string>;
  selectExamples(input_variables: Example): Promise<Example[]>;
}
```

It needs to expose a selectExamples method - which takes in the input variables and returns a list of examples - and an addExample method, which saves an example for later selection. It is up to each specific implementation how those examples are saved and selected. Let's take a look at some below.
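To make the interface concrete, here is a minimal standalone implementation (the `RecentExampleSelector` class is a hypothetical illustration, not a selector shipped with LangChain) that simply returns the k most recently added examples:

```typescript
type Example = Record<string, string>;

// A hypothetical selector implementing the BaseExampleSelector shape above.
class RecentExampleSelector {
  private examples: Example[] = [];

  constructor(private k: number) {}

  // Saves an example for later selection.
  async addExample(example: Example): Promise<void> {
    this.examples.push(example);
  }

  // Selects the k most recently added examples; a real selector would
  // usually inspect the input variables rather than ignore them.
  async selectExamples(inputVariables: Example): Promise<Example[]> {
    return this.examples.slice(-this.k);
  }
}

const selector = new RecentExampleSelector(2);
await selector.addExample({ input: "happy", output: "sad" });
await selector.addExample({ input: "tall", output: "short" });
await selector.addExample({ input: "windy", output: "calm" });
console.log(await selector.selectExamples({ adjective: "big" }));
// → the two most recently added examples (tall/short and windy/calm)
```

A FewShotPromptTemplate accepts any object with this shape via its exampleSelector field, which is how the built-in selectors below are wired in.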

Select by Length

This ExampleSelector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.

```typescript
import {
  LengthBasedExampleSelector,
  PromptTemplate,
  FewShotPromptTemplate,
} from "langchain/prompts";

export async function run() {
  // Create a prompt template that will be used to format the examples.
  const examplePrompt = new PromptTemplate({
    inputVariables: ["input", "output"],
    template: "Input: {input}\nOutput: {output}",
  });

  // Create a LengthBasedExampleSelector that will be used to select the examples.
  const exampleSelector = await LengthBasedExampleSelector.fromExamples(
    [
      { input: "happy", output: "sad" },
      { input: "tall", output: "short" },
      { input: "energetic", output: "lethargic" },
      { input: "sunny", output: "gloomy" },
      { input: "windy", output: "calm" },
    ],
    {
      examplePrompt,
      maxLength: 25,
    }
  );

  // Create a FewShotPromptTemplate that will use the example selector.
  const dynamicPrompt = new FewShotPromptTemplate({
    // We provide an ExampleSelector instead of examples.
    exampleSelector,
    examplePrompt,
    prefix: "Give the antonym of every input",
    suffix: "Input: {adjective}\nOutput:",
    inputVariables: ["adjective"],
  });

  // An example with small input, so it selects all examples.
  console.log(await dynamicPrompt.format({ adjective: "big" }));
  /*
  Give the antonym of every input

  Input: happy
  Output: sad

  Input: tall
  Output: short

  Input: energetic
  Output: lethargic

  Input: sunny
  Output: gloomy

  Input: windy
  Output: calm

  Input: big
  Output:
  */

  // An example with long input, so it selects only one example.
  const longString =
    "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else";
  console.log(await dynamicPrompt.format({ adjective: longString }));
  /*
  Give the antonym of every input

  Input: happy
  Output: sad

  Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else
  Output:
  */
}
```

API Reference:

  • LengthBasedExampleSelector from langchain/prompts
  • PromptTemplate from langchain/prompts
  • FewShotPromptTemplate from langchain/prompts

Select by Similarity

The SemanticSimilarityExampleSelector selects examples based on which examples are most similar to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs.

```typescript
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import {
  SemanticSimilarityExampleSelector,
  PromptTemplate,
  FewShotPromptTemplate,
} from "langchain/prompts";
import { HNSWLib } from "langchain/vectorstores/hnswlib";

export async function run() {
  // Create a prompt template that will be used to format the examples.
  const examplePrompt = new PromptTemplate({
    inputVariables: ["input", "output"],
    template: "Input: {input}\nOutput: {output}",
  });

  // Create a SemanticSimilarityExampleSelector that will be used to select the examples.
  const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples(
    [
      { input: "happy", output: "sad" },
      { input: "tall", output: "short" },
      { input: "energetic", output: "lethargic" },
      { input: "sunny", output: "gloomy" },
      { input: "windy", output: "calm" },
    ],
    new OpenAIEmbeddings(),
    HNSWLib,
    { k: 1 }
  );

  // Create a FewShotPromptTemplate that will use the example selector.
  const dynamicPrompt = new FewShotPromptTemplate({
    // We provide an ExampleSelector instead of examples.
    exampleSelector,
    examplePrompt,
    prefix: "Give the antonym of every input",
    suffix: "Input: {adjective}\nOutput:",
    inputVariables: ["adjective"],
  });

  // Input is about the weather, so it should select e.g. the sunny/gloomy example.
  console.log(await dynamicPrompt.format({ adjective: "rainy" }));
  /*
  Give the antonym of every input

  Input: sunny
  Output: gloomy

  Input: rainy
  Output:
  */

  // Input is a measurement, so it should select the tall/short example.
  console.log(await dynamicPrompt.format({ adjective: "large" }));
  /*
  Give the antonym of every input

  Input: tall
  Output: short

  Input: large
  Output:
  */
}
```

API Reference:

  • OpenAIEmbeddings from langchain/embeddings/openai
  • SemanticSimilarityExampleSelector from langchain/prompts
  • PromptTemplate from langchain/prompts
  • FewShotPromptTemplate from langchain/prompts
  • HNSWLib from langchain/vectorstores/hnswlib
