- Install and import the necessary modules
- Configure API keys via environment variables
- Initialize and invoke the OpenAI LLM
- Swap to Google’s Generative AI (Gemini Pro) on the fly
Prerequisites
- Python 3.7+
- An OpenAI API key
- A Google Cloud API key with access to Gemini Pro
Store your API keys securely. Avoid committing them to version control.
1. Imports and Environment Setup
Begin by installing and importing LangChain, the os module, and (optionally) the Google GenAI client.
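A minimal setup sketch is shown below. The package names and import paths assume the current split-package layout (`langchain-openai`, `langchain-google-genai`); older tutorials import `OpenAI` from `langchain.llms` instead.

```python
# Install the packages once in your shell:
#   pip install langchain langchain-openai langchain-google-genai

import os

from langchain_openai import OpenAI                     # OpenAI LLM wrapper
from langchain_google_genai import GoogleGenerativeAI   # optional: Gemini Pro wrapper

# Both clients read their keys from environment variables, so export them
# before running (never hard-code keys or commit them to version control):
#   export OPENAI_API_KEY="sk-..."
#   export GOOGLE_API_KEY="..."
assert "OPENAI_API_KEY" in os.environ, "Set OPENAI_API_KEY before running"
```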
2. Initializing the OpenAI LLM
LangChain’s OpenAI class only requires the OPENAI_API_KEY environment variable. No additional parameters are needed.
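Under that assumption, initialization is a one-liner (import path as in the setup sketch above):

```python
from langchain_openai import OpenAI

# No constructor arguments needed; the client reads OPENAI_API_KEY
# from the environment automatically.
llm = OpenAI()
```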
3. Defining a Prompt and Generating Text
With the client initialized, craft your prompt and call the model. A sample completion from this step:
Playful Pals Toys
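A minimal sketch of such a call is below; the prompt wording and variable names are illustrative assumptions, and `invoke()` follows the current LangChain interface (older code calls the LLM object directly):

```python
from langchain_openai import OpenAI

llm = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Illustrative prompt; adjust the wording to your use case.
prompt = "Suggest a creative name for a children's toy store."

# invoke() sends the prompt and returns the completion text as a string.
response = llm.invoke(prompt)
print(response.strip())
```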
4. Switching to Google Generative AI (Gemini Pro)
LangChain makes it effortless to swap LLM backends. Simply import and initialize the GoogleGenerativeAI class:
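A sketch of the swap, reusing the same kind of prompt; the model name matches the comparison table below, and the client reads GOOGLE_API_KEY from the environment (the prompt wording is again an assumption):

```python
from langchain_google_genai import GoogleGenerativeAI

# Same interface, different backend: only the class and model name change.
llm = GoogleGenerativeAI(model="gemini-pro")  # reads GOOGLE_API_KEY

prompt = "Suggest three creative names for a children's toy store."
print(llm.invoke(prompt))
```

Sample output: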
- Joyful Creations
- Imagination Unbound
- Wonder & Play
Comparing LLM Clients
| Model | Initialization Code | Environment Variable |
|---|---|---|
| OpenAI | `llm = OpenAI()` | `OPENAI_API_KEY` |
| Google Gemini Pro | `llm = GoogleGenerativeAI(model="gemini-pro")` | `GOOGLE_API_KEY` |
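Because both wrappers expose the same interface, you can parameterize the backend and run an identical prompt through either one. A brief sketch (the helper function and prompt are illustrative, not from the original):

```python
from langchain_openai import OpenAI
from langchain_google_genai import GoogleGenerativeAI


def suggest_store_names(llm) -> str:
    """Run the same prompt against whichever LLM client is passed in."""
    return llm.invoke("Suggest three creative names for a children's toy store.")


# Swap backends by changing a single line:
print(suggest_store_names(OpenAI()))
print(suggest_store_names(GoogleGenerativeAI(model="gemini-pro")))
```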
Next Steps
- Explore the LangChain documentation for advanced features
- Integrate embedding models and vector databases
- Automate prompt engineering with chains and agents
Always monitor your API usage to avoid unexpected charges.