Today's learning--LangChain

LangChain


An agent uses an LLM to decide which actions to take and in what order, calling tools and observing their outputs until it can answer. The example below uses the SerpAPI search tool and the llm-math tool:

# set the appropriate environment variable
import os
os.environ["SERPAPI_API_KEY"] = "..."
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI

# First, let's load the language model we're going to use to control the agent.
llm = OpenAI(temperature=0)

# Next, let's load some tools to use. Note that the 'llm-math' tool uses an LLM, so we need to pass that in.
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# Finally, let's initialize the agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

# Now let's test it out!
agent.run("What was the high temperature in SF yesterday? What is that number raised to the .023 power?")

Memory: add state to chains and agents

So far, all the chains and agents we've gone through have been stateless. But often, you may want a chain or agent to have some concept of "memory" so that it can remember information about its previous interactions. The clearest and simplest example of this is designing a chatbot: you want it to remember previous messages so it can use that context to have a better conversation. This is a type of "short-term memory". On the more complex side, you could imagine a chain/agent remembering key pieces of information over time; this would be a form of "long-term memory".

LangChain provides several specially created chains just for this purpose. This notebook walks through using one such chain (the ConversationChain) with two different types of memory.

By default, the ConversationChain has a simple type of memory that remembers all previous inputs/outputs and adds them to the context that is passed. Let's take a look at using this chain (setting verbose=True so we can see the prompt).

from langchain import OpenAI, ConversationChain

llm = OpenAI(temperature=0)
conversation = ConversationChain(llm=llm, verbose=True)

output = conversation.predict(input="Hi there!")
print(output)
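
To see the memory at work, send a follow-up message; with verbose=True you can watch the chain fold the first exchange into the prompt for the second call (the model's exact reply will vary):

output = conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
print(output)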

Building a Language Model Application: Chat Models

Similarly, you can use chat models instead of LLMs. Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a "text in, text out" API (Application Programming Interface), they expose an interface where "chat messages" are the inputs and outputs. Chat model APIs are fairly new, so we are still figuring out the correct abstractions.

Get Message Completions from a Chat Model

You can get completions by passing one or more messages to the chat model. The response will be a message. The message types currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage; ChatMessage takes in an arbitrary role parameter. Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.

from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)
chat = ChatOpenAI(temperature=0)
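
ChatMessage is not used in the rest of these examples, but as a minimal sketch of the arbitrary role parameter mentioned above (the "reviewer" role string here is just an illustration):

from langchain.schema import ChatMessage

# ChatMessage carries an arbitrary role string alongside its content.
ChatMessage(role="reviewer", content="Please double-check this translation.")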

You can get completions by passing in a single message.

chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})

You can also pass in multiple messages for OpenAI's gpt-3.5-turbo and gpt-4 models.

messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming.")
]
chat(messages)
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})

You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter:

batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
result
# -> LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}})
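
You can recover things like token usage from this LLMResult:

result.llm_output['token_usage']
# -> {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}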

Chat Prompt Template

Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt; this returns a PromptValue, which you can convert to a string or Message objects, depending on whether you want to use the formatted value as input to an LLM or a chat model.

For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:

from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
chat = ChatOpenAI(temperature=0)

template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
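
With both message prompt templates in hand, you can combine them into a ChatPromptTemplate, format it, and convert the resulting PromptValue into messages for the chat model, as the getting-started guide linked below does:

chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

# format_prompt returns a PromptValue; to_messages() converts it into chat messages.
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})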

Chains with Chat Models

The LLMChain discussed in the above section can be used with chat models as well:

from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
chat = ChatOpenAI(temperature=0)

template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
# -> "J'aime programmer."
https://python.langchain.com/en/latest/getting_started/getting_started.html