Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than exposing a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.
The chat model API is fairly new, so we are still figuring out the right abstractions.
The following sections of documentation are provided:
- Getting Started: an overview of all the functionality the LangChain LLM class provides.
- How-To Guides: a collection of how-to guides. These highlight how to accomplish various objectives with our LLM class (streaming, async, etc.).
- Integrations: a collection of examples of how to integrate different LLM providers with LangChain (OpenAI, Hugging Face, etc.).
Getting Started
This notebook covers how to get started with chat models. The interface is based around messages rather than raw text.
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)
chat = ChatOpenAI(temperature=0)
You can get chat completions by passing one or more messages to the chat model. The response will be a message. The message types currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage – ChatMessage takes an arbitrary role parameter. Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
AIMessage(content="J'aime programmer.", additional_kwargs={})
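Under the hood, these message classes map onto the role-tagged dictionaries that chat APIs such as OpenAI's expect. The sketch below illustrates that mapping in plain Python; the dataclasses and the `to_openai_dict` helper are illustrative stand-ins, not LangChain's actual implementation.

```python
# Sketch: how message objects map to the {'role', 'content'} dicts used by
# chat APIs. Illustrative only -- not LangChain's real classes.
from dataclasses import dataclass

@dataclass
class HumanMessage:
    content: str

@dataclass
class AIMessage:
    content: str

@dataclass
class SystemMessage:
    content: str

@dataclass
class ChatMessage:
    role: str       # ChatMessage takes an arbitrary role parameter
    content: str

def to_openai_dict(message):
    """Convert a message object to an OpenAI-style role-tagged dict."""
    roles = {HumanMessage: "user", AIMessage: "assistant", SystemMessage: "system"}
    role = getattr(message, "role", None) or roles[type(message)]
    return {"role": role, "content": message.content}

msgs = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="I love programming."),
]
print([to_openai_dict(m) for m in msgs])
# [{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role': 'user', 'content': 'I love programming.'}]
```

This is why HumanMessage, AIMessage, and SystemMessage cover most cases: they correspond directly to the three standard roles, while ChatMessage exists for providers that accept other role strings.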
OpenAI's chat model supports multiple messages as input. See here for more information. Here is an example of sending a system and a user message to the chat model:
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming.")
]
chat(messages)
AIMessage(content="J'aime programmer.", additional_kwargs={})
You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter.
batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
result
LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}})
You can recover things like token usage from this LLMResult:
result.llm_output
{'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}}
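Because llm_output is a plain dictionary, you can aggregate usage across several results yourself, for example to meter cost over a batch of calls. A minimal sketch, using only built-in types; the `sum_token_usage` helper is hypothetical, not part of LangChain:

```python
# Hypothetical helper: aggregate the 'token_usage' dicts found in several
# llm_output values into one running total.
def sum_token_usage(outputs):
    totals = {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0}
    for out in outputs:
        usage = out.get("token_usage", {})
        for key in totals:
            totals[key] += usage.get(key, 0)
    return totals

print(sum_token_usage([
    {"token_usage": {"prompt_tokens": 57, "completion_tokens": 20, "total_tokens": 77}},
    {"token_usage": {"prompt_tokens": 10, "completion_tokens": 5, "total_tokens": 15}},
]))
# {'prompt_tokens': 67, 'completion_tokens': 25, 'total_tokens': 92}
```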
Prompt Templates
You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt – this returns a PromptValue, which you can convert to a string or Message objects, depending on whether you want to use the formatted value as input to an LLM or a chat model.
For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:
template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
AIMessage(content="J'adore la programmation.", additional_kwargs={})
If you want to construct the MessagePromptTemplate more directly, you can create a PromptTemplate outside and then pass it in, e.g.:
prompt = PromptTemplate(
    template="You are a helpful assistant that translates {input_language} to {output_language}.",
    input_variables=["input_language", "output_language"],
)
system_message_prompt = SystemMessagePromptTemplate(prompt=prompt)
LLMChain
You can use the existing LLMChain in a very similar way as before – provide a prompt and a model.
chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
"J'adore la programmation."
Streaming
Streaming is supported for ChatOpenAI through callback handling.
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

chat = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's pure delight
A taste that's sure to excite

Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe

Verse 2:
No sugar, no calories, just pure bliss
A drink that's hard to resist
It's the perfect way to quench my thirst
A drink that always comes first

Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe

Bridge:
From the mountains to the sea
Sparkling water, you're the key
To a healthy life, a happy soul
A drink that makes me feel whole

Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe

Outro:
Sparkling water, you're the one
A drink that's always so much fun
I'll never let you go, my friend
Sparkling
The examples here all address certain "how-to" guides for working with chat models.
- How to use few-shot examples
- How to stream responses
How to use few-shot examples
This notebook covers how to use few-shot examples with chat models.
There does not appear to be solid consensus on how best to do few-shot prompting. As a result, we are not solidifying any abstractions around this yet but rather using existing abstractions.
Alternating Human/AI messages
The first way of doing few-shot prompting relies on using alternating human/AI messages. See an example of this below.
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)
chat = ChatOpenAI(temperature=0)
template = "You are a helpful assistant that translates english to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = HumanMessagePromptTemplate.from_template("Hi")
example_ai = AIMessagePromptTemplate.from_template("Argh me mateys")
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, example_human, example_ai, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
# get a chat completion from the formatted messages
chain.run("I love programming.")
"I be lovin' programmin', me hearty!"
System Messages
OpenAI provides an optional name parameter that they also recommend using in conjunction with system messages to do few-shot prompting. Here is an example of how to do that below.
template = "You are a helpful assistant that translates english to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = SystemMessagePromptTemplate.from_template("Hi", additional_kwargs={"name": "example_user"})
example_ai = SystemMessagePromptTemplate.from_template("Argh me mateys", additional_kwargs={"name": "example_assistant"})
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, example_human, example_ai, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
# get a chat completion from the formatted messages
chain.run("I love programming.")
"I be lovin' programmin', me hearty."
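The effect of the name field is easiest to see in the raw payload: the examples travel as system messages tagged example_user / example_assistant, followed by the real user turn. A sketch of that structure in plain Python dicts (the `build_few_shot_messages` helper is hypothetical, for illustration only):

```python
# Illustrative: the role/name structure of a few-shot prompt that uses
# OpenAI's optional "name" field on system messages.
def build_few_shot_messages(examples, user_input):
    """examples: list of (human_text, ai_text) pairs."""
    messages = [{
        "role": "system",
        "content": "You are a helpful assistant that translates english to pirate.",
    }]
    for human_text, ai_text in examples:
        messages.append({"role": "system", "name": "example_user", "content": human_text})
        messages.append({"role": "system", "name": "example_assistant", "content": ai_text})
    messages.append({"role": "user", "content": user_input})
    return messages

for m in build_few_shot_messages([("Hi", "Argh me mateys")], "I love programming."):
    print(m)
```

Compared with the alternating human/AI approach above, the name tags make it unambiguous to the model that these turns are examples rather than real conversation history.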
How to stream responses
This notebook goes over how to use streaming with a chat model.
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    HumanMessage,
)
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

chat = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's pure delight
A taste that's sure to excite

Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe

Verse 2:
No sugar, no calories, just pure bliss
A drink that's hard to resist
It's the perfect way to quench my thirst
A drink that always comes first

Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe

Bridge:
From the mountains to the sea
Sparkling water, you're the key
To a healthy life, a happy soul
A drink that makes me feel whole

Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe

Outro:
Sparkling water, you're the one
A drink that's always so much fun
I'll never let you go, my friend
Sparkling
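The callback pattern itself is simple: the model invokes on_llm_new_token on each handler once per token as it arrives, which is how StreamingStdOutCallbackHandler prints the song incrementally above. A minimal self-contained sketch of that pattern, assuming only the on_llm_new_token interface (CollectingHandler and FakeStreamingModel are stand-ins, not real LangChain classes):

```python
# Sketch of the streaming-callback pattern: a handler receives each token
# via on_llm_new_token as it is generated. FakeStreamingModel stands in
# for a real chat model here.
class CollectingHandler:
    """Collects streamed tokens instead of printing them."""
    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token, **kwargs):
        self.tokens.append(token)

class FakeStreamingModel:
    def __init__(self, callbacks):
        self.callbacks = callbacks

    def __call__(self, text):
        for token in text.split():          # pretend each word is one token
            for cb in self.callbacks:
                cb.on_llm_new_token(token)
        return text

handler = CollectingHandler()
model = FakeStreamingModel(callbacks=[handler])
model("Sparkling water oh so fine")
print(handler.tokens)
# ['Sparkling', 'water', 'oh', 'so', 'fine']
```

A collecting handler like this is useful when you want to forward tokens to a UI or buffer them yourself rather than write to stdout.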
Integrations
The examples here all highlight how to integrate with different chat models.
- Anthropic
- Azure
- OpenAI
- PromptLayer ChatOpenAI
Anthropic
This notebook covers how to get started with Anthropic chat models.
from langchain.chat_models import ChatAnthropic
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)
chat = ChatAnthropic()
messages = [
    HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
AIMessage(content=" J'aime programmer. ", additional_kwargs={})
ChatAnthropic also supports async and streaming functionality:
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

await chat.agenerate([messages])
LLMResult(generations=[[ChatGeneration(text=" J'aime la programmation.", generation_info=None, message=AIMessage(content=" J'aime la programmation.", additional_kwargs={}))]], llm_output={})
chat = ChatAnthropic(
    streaming=True,
    verbose=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)
chat(messages)
 J'adore programmer.
AIMessage(content=" J'adore programmer.", additional_kwargs={})
Azure
This notebook goes over how to connect to an Azure-hosted OpenAI endpoint.
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

BASE_URL = "https://${TODO}.openai.azure.com"
API_KEY = "..."
DEPLOYMENT_NAME = "chat"
model = AzureChatOpenAI(
    openai_api_base=BASE_URL,
    openai_api_version="2023-03-15-preview",
    deployment_name=DEPLOYMENT_NAME,
    openai_api_key=API_KEY,
    openai_api_type="azure",
)
model([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
AIMessage(content="\n\nJ'aime programmer.", additional_kwargs={})
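Rather than hard-coding credentials as above, you may prefer to assemble the same keyword arguments from environment variables. A sketch, where the AZURE_OPENAI_* variable names are an assumed convention of this example, not something LangChain or Azure mandates:

```python
import os

# Assemble AzureChatOpenAI keyword arguments from the environment.
# The AZURE_OPENAI_* names below are an assumed convention for this sketch.
config = {
    "openai_api_base": os.environ.get("AZURE_OPENAI_BASE", "https://example.openai.azure.com"),
    "openai_api_key": os.environ.get("AZURE_OPENAI_KEY", ""),
    "deployment_name": os.environ.get("AZURE_OPENAI_DEPLOYMENT", "chat"),
    "openai_api_version": "2023-03-15-preview",
    "openai_api_type": "azure",
}
# model = AzureChatOpenAI(**config)   # construct the model as shown above
print(sorted(config))
```

This keeps the secret key out of source control while leaving the construction call identical.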
OpenAI
This notebook covers how to get started with OpenAI chat models.
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)
chat = ChatOpenAI(temperature=0)
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
AIMessage(content="J'aime programmer.", additional_kwargs={}, example=False)
You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt – this returns a PromptValue, which you can convert to a string or Message objects, depending on whether you want to use the formatted value as input to an LLM or a chat model.
For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:
template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
AIMessage(content="J'adore la programmation.", additional_kwargs={})
PromptLayer ChatOpenAI
This example showcases how to connect to PromptLayer to start recording your ChatOpenAI requests.
Install PromptLayer
The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip.
pip install promptlayer
Imports
import os
from langchain.chat_models import PromptLayerChatOpenAI
from langchain.schema import HumanMessage
Set the Environment API Key
You can create a PromptLayer API key at www.promptlayer.com by clicking the settings cog in the navbar.
Set it as an environment variable called PROMPTLAYER_API_KEY.
os.environ["PROMPTLAYER_API_KEY"] = "**********"
Use the PromptLayerOpenAI LLM like normal
You can optionally pass in pl_tags to track your requests with PromptLayer's tagging feature.
chat = PromptLayerChatOpenAI(pl_tags=["langchain"])
chat([HumanMessage(content="I am a cat and I want")])
AIMessage(content='to take a nap in a cozy spot. I search around for a suitable place and finally settle on a soft cushion on the window sill. I curl up into a ball and close my eyes, relishing the warmth of the sun on my fur. As I drift off to sleep, I can hear the birds chirping outside and feel the gentle breeze blowing through the window. This is the life of a contented cat.', additional_kwargs={})
The above request should now appear on your PromptLayer dashboard.
Using PromptLayer Track
If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id.
import promptlayer

chat = PromptLayerChatOpenAI(return_pl_id=True)
chat_results = chat.generate([[HumanMessage(content="I am a cat and I want")]])
for res in chat_results.generations:
    pl_request_id = res[0].generation_info["pl_request_id"]
    promptlayer.track.score(request_id=pl_request_id, score=100)
Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well. Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.