Use this cheat sheet to integrate a LangChain-powered AI agent with your Django app.
🔧 1. Install Dependencies

```shell
pip install django langchain langchain-openai python-dotenv
```
🔧 LangChain LLM Setup (OpenAI, Gemini, Claude, DeepSeek, Mistral)
Install the appropriate packages depending on the model provider you want to use:
| Provider | Installation Command |
|---|---|
| OpenAI | `pip install langchain langchain-openai` |
| Gemini | `pip install langchain langchain-google-genai` |
| Claude | `pip install langchain langchain-anthropic` |
| DeepSeek | `pip install langchain langchain-openai` (used via an OpenAI-compatible API) |
| Mistral / Mixtral | `pip install langchain langchain-community huggingface_hub`, or use the Together API |
🔌 LangChain Setup Per Provider
✅ OpenAI

```python
import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(api_key=os.getenv("OPENAI_API_KEY"))
```
✅ Gemini (Google)

```python
import os

from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key=os.getenv("GOOGLE_API_KEY"))
```
✅ Claude (Anthropic)

```python
import os

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-3-sonnet-20240229",
    anthropic_api_key=os.getenv("ANTHROPIC_API_KEY"),
)
```
✅ DeepSeek (via OpenAI-compatible endpoint)

```python
import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    api_key=os.getenv("DEEPSEEK_API_KEY"),
    base_url="https://api.deepseek.com/v1",
    model="deepseek-chat",
)
```
✅ Mistral/Mixtral (via Hugging Face or Together API)

```python
import os

from langchain_community.llms import HuggingFaceHub

llm = HuggingFaceHub(
    repo_id="mistralai/Mixtral-8x7B-Instruct-v0.1",
    huggingfacehub_api_token=os.getenv("HF_API_KEY"),
)
```
Or via Together:

```python
import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    api_key=os.getenv("TOGETHER_API_KEY"),
    base_url="https://api.together.xyz/v1",
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
)
```
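If you want to switch providers without editing code, the per-provider setups above can be folded into one factory. This is a minimal sketch, not part of the cheat sheet proper: `get_llm` and the `LLM_PROVIDER` environment variable are names I've made up, and each provider's package is imported lazily so only the one you actually use needs to be installed.

```python
import os


def get_llm(provider=None, **kwargs):
    """Return a chat model for the given provider ("openai", "gemini",
    "claude", or "deepseek"); falls back to the LLM_PROVIDER env var."""
    provider = (provider or os.getenv("LLM_PROVIDER", "openai")).lower()
    if provider == "openai":
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(api_key=os.getenv("OPENAI_API_KEY"), **kwargs)
    if provider == "gemini":
        from langchain_google_genai import ChatGoogleGenerativeAI
        return ChatGoogleGenerativeAI(
            model="gemini-pro",
            google_api_key=os.getenv("GOOGLE_API_KEY"),
            **kwargs,
        )
    if provider == "claude":
        from langchain_anthropic import ChatAnthropic
        return ChatAnthropic(
            model="claude-3-sonnet-20240229",
            anthropic_api_key=os.getenv("ANTHROPIC_API_KEY"),
            **kwargs,
        )
    if provider == "deepseek":
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(
            api_key=os.getenv("DEEPSEEK_API_KEY"),
            base_url="https://api.deepseek.com/v1",
            model="deepseek-chat",
            **kwargs,
        )
    raise ValueError(f"Unknown provider: {provider}")
```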
📁 2. Project Structure

```
myproject/
├── myapp/
│   ├── views.py
│   ├── urls.py
│   └── ...
├── myproject/
│   └── settings.py
├── .env
└── manage.py
```
🔐 3. .env Configuration

```
OPENAI_API_KEY=your_openai_key
GOOGLE_API_KEY=your_gemini_key
```

In `settings.py`:

```python
import os

from dotenv import load_dotenv

load_dotenv()
```
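For intuition, this is roughly what `load_dotenv()` does: read `KEY=VALUE` lines into `os.environ` so `os.getenv(...)` works afterwards. A toy sketch only; `load_env_file` is my own name, and python-dotenv additionally handles quoting, comments, and variable interpolation.

```python
import os


def load_env_file(path=".env"):
    """Toy .env loader: put KEY=VALUE lines into os.environ.
    Existing environment variables win (setdefault)."""
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # a missing .env file is not an error
```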
⚙️ 4. LangChain Setup (e.g., Chatbot)

```python
# myapp/ai/agent.py
import os

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI


def build_chain():
    llm = ChatOpenAI(api_key=os.getenv("OPENAI_API_KEY"), temperature=0.7)
    prompt = ChatPromptTemplate.from_template(
        "You are an Islamic insurance advisor. Q: {question} A:"
    )
    # LCEL pipeline: prompt -> model -> plain string output
    return prompt | llm | StrOutputParser()


def get_ai_response(question):
    chain = build_chain()
    return chain.invoke({"question": question})
```
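One caveat: `get_ai_response` rebuilds the chain (and its API client) on every request. A cheap fix is to cache the chain once per process, for example with `functools.lru_cache`. Sketch below; `build_chain` here is a stand-in for the real builder above, and `get_chain` is a hypothetical wrapper, not part of the cheat sheet.

```python
from functools import lru_cache


def build_chain():
    # stand-in for the real LangChain builder defined above
    return object()


@lru_cache(maxsize=1)
def get_chain():
    """Build the chain once and reuse it for every request."""
    return build_chain()
```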
🌐 5. Django View for AI Endpoint

```python
# myapp/views.py
from django.http import JsonResponse

from .ai.agent import get_ai_response


def ai_chat(request):
    question = request.GET.get("q", "")
    if not question:
        return JsonResponse({"error": "No question provided"}, status=400)
    response = get_ai_response(question)
    return JsonResponse({"response": response})
```
🔗 6. Add URL Route

```python
# myapp/urls.py
from django.urls import path

from .views import ai_chat

urlpatterns = [
    path("chat/", ai_chat, name="ai_chat"),
]
```

And include it in `myproject/urls.py`:

```python
from django.urls import include, path

urlpatterns = [
    path("api/", include("myapp.urls")),
]
```
⚡ 7. Run and Query

```shell
python manage.py runserver
```

Query example:

```
GET http://localhost:8000/api/chat/?q=What is ..
```
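Note that the question must be URL-encoded (spaces, question marks, etc.). A small sketch of building the request URL with the standard library; `chat_url` is a made-up helper and "What is takaful?" is just a sample question.

```python
from urllib.parse import urlencode

BASE = "http://localhost:8000/api/chat/"


def chat_url(question):
    """Build the GET URL for the chat endpoint, URL-encoding the question."""
    return f"{BASE}?{urlencode({'q': question})}"
```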
🔧 Optional: Use Gemini (Instead of OpenAI)

Swap the LLM inside `build_chain()`:

```python
import os

from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key=os.getenv("GOOGLE_API_KEY"))
```
✅ Tips

- Use LangChain Expression Language (LCEL) for composability.
- Wrap LangChain calls in async Django views for performance.
- Add semantic memory with FAISS, ChromaDB, or Weaviate.
- For production: add CORS, rate limiting, auth (e.g., JWT), and async views.
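To make the rate-limiting tip concrete, here is a minimal fixed-window limiter. This is an illustrative sketch only: `FixedWindowLimiter` is my own class, the in-process dict does not survive restarts or multiple workers, and in production you would reach for something like django-ratelimit or a shared store (e.g., Redis) instead.

```python
import time
from collections import defaultdict


class FixedWindowLimiter:
    """Allow at most `limit` requests per key in each fixed time window."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counts = defaultdict(int)  # (key, window index) -> request count

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        bucket = (key, int(now // self.window))
        self.counts[bucket] += 1
        return self.counts[bucket] <= self.limit
```

In the Django view, you would call `limiter.allow(client_ip)` before invoking the chain and return a 429 response when it is `False`.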