This cheatsheet is designed for developers who want to integrate CrewAI (a multi-agent AI orchestration framework) with a Django backend. It includes setup, implementation patterns, best practices, and advanced usage.
## Overview
Component | Description |
---|---|
CrewAI | Enables building workflows using multiple AI agents that collaborate on tasks. |
Django | A high-level Python web framework used to serve APIs, manage data, and handle user authentication. |
Use Case | Create intelligent workflows powered by LLMs within a Django application (e.g., content generation, decision engines, automation). |
## Prerequisites
- Python 3.10+
- Django 4.x
- OpenAI API key or any LLM provider credentials
- `crewai`, `langchain`, and `django` installed via pip:

```bash
pip install django crewai langchain openai
```
## Step-by-Step Integration
1. Setup Django Project
```bash
django-admin startproject crew_project
cd crew_project
python manage.py startapp ai_agents
```

Add `'ai_agents'` to `INSTALLED_APPS` in `settings.py`.
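The `INSTALLED_APPS` entry might look like this (the stock app list below is what a default `startproject` generates):

```python
# settings.py — register the new app
INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "ai_agents",  # the app created above
]
```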
2. Configure Environment Variables
Store your API keys securely:
```
# .env file
OPENAI_API_KEY=your_openai_api_key_here
```

In `settings.py`:

```python
import os
from dotenv import load_dotenv

load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
```

Install `python-dotenv`:

```bash
pip install python-dotenv
```
3. Define Models (Optional)
If you need to store agent responses or task history:
```python
# models.py
from django.db import models


class TaskHistory(models.Model):
    task_description = models.TextField()
    result = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)
```

Run migrations:

```bash
python manage.py makemigrations
python manage.py migrate
```
4. Implement CrewAI Logic
Create an `agents.py` inside `ai_agents/`:
```python
# agents.py
from crewai import Agent, Task, Crew, Process
# Note: on newer LangChain versions this import moved to the
# langchain-openai package: from langchain_openai import ChatOpenAI
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-4", temperature=0.7)

researcher = Agent(
    role='Research Analyst',
    goal='Research latest trends in AI',
    backstory='You are an expert analyst in AI trends.',
    verbose=True,
    llm=llm
)

writer = Agent(
    role='Content Writer',
    goal='Write engaging articles based on research',
    backstory='You are a skilled tech writer.',
    verbose=True,
    llm=llm
)


def run_crewai_task(topic: str):
    task_research = Task(
        description=f"Research the topic: {topic}",
        expected_output="A summary of current trends",
        agent=researcher
    )
    task_write = Task(
        description="Write an article based on the research",
        expected_output="An article of at least 500 words",
        agent=writer
    )
    crew = Crew(
        agents=[researcher, writer],
        tasks=[task_research, task_write],
        process=Process.sequential
    )
    result = crew.kickoff()
    return result
```
5. Create Django Views
```python
# views.py
from django.http import JsonResponse
from django.views import View

from .agents import run_crewai_task
# from .models import TaskHistory  # uncomment if saving results


class RunAgentView(View):
    def get(self, request):
        topic = request.GET.get('topic', 'AI Trends')
        result = run_crewai_task(topic)
        # Optional: save to DB
        # TaskHistory.objects.create(task_description=topic, result=str(result))
        return JsonResponse({"result": str(result)})
```
6. Set Up URLs
```python
# ai_agents/urls.py
from django.urls import path

from ai_agents.views import RunAgentView

urlpatterns = [
    path('run-agent/', RunAgentView.as_view(), name='run_agent'),
]
```

Include it in the main project's `urls.py`:

```python
from django.urls import include, path

urlpatterns = [
    ...
    path('api/', include('ai_agents.urls')),
]
```
7. Test Your Endpoint
Start the server:

```bash
python manage.py runserver
```

Visit:

```
http://localhost:8000/api/run-agent/?topic=Quantum+Computing
```
## Authentication & Permissions (Optional but Recommended)
Use Django REST Framework or built-in middleware to secure endpoints.
Example using DRF:
```bash
pip install djangorestframework
```

Add `'rest_framework'` to `INSTALLED_APPS`, then in `settings.py`:
```python
REST_FRAMEWORK = {
    'DEFAULT_PERMISSION_CLASSES': [
        'rest_framework.permissions.IsAuthenticated',
    ],
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework.authentication.SessionAuthentication',
    ],
}
```
Update the view:

```python
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated

from .agents import run_crewai_task


class ProtectedAgentView(APIView):
    permission_classes = [IsAuthenticated]

    def get(self, request):
        topic = request.GET.get('topic', 'AI Trends')
        result = run_crewai_task(topic)
        return Response({"result": str(result)})
```
## Testing CrewAI Workflows
- Use mock LLMs for unit testing.
- Store test results in fixtures or database.
- Validate output format and structure.
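As a sketch of the first point, a unit test can inject a mocked runner in place of the real `run_crewai_task`, so no LLM call is made. The `summarize_topic` wrapper below is illustrative, not part of CrewAI:

```python
# test_agents.py — mocking the LLM-backed task runner
from unittest import mock


def summarize_topic(topic, runner):
    """Thin wrapper a view might call; the runner is injected as a dependency."""
    result = runner(topic)
    return {"topic": topic, "result": str(result)}


def test_summarize_topic_uses_mock_runner():
    fake_runner = mock.Mock(return_value="mocked article")
    payload = summarize_topic("AI Trends", fake_runner)
    assert payload == {"topic": "AI Trends", "result": "mocked article"}
    fake_runner.assert_called_once_with("AI Trends")
```

In real Django tests you would patch `ai_agents.views.run_crewai_task` with `mock.patch` and hit the endpoint via the test client.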
## Advanced Patterns
1. Async Execution (Celery)
For long-running tasks:

```bash
pip install "celery[redis]"
```
```python
# tasks.py
from celery import shared_task

from .agents import run_crewai_task


@shared_task
def async_run_crewai_task(topic):
    return run_crewai_task(topic)
```
In the view:

```python
from django.http import JsonResponse
from django.views import View

from .tasks import async_run_crewai_task


class AsyncAgentView(View):
    def get(self, request):
        topic = request.GET.get('topic')
        task = async_run_crewai_task.delay(topic)
        return JsonResponse({"task_id": task.id})
```
Poll or use WebSockets for result updates.
2. Custom Tools with LangChain
Extend CrewAI with custom tools:

```python
from langchain.tools import tool


@tool
def search_internet(query: str) -> str:
    """Searches the internet for information."""
    return "Mock search result"


tools = [search_internet]
researcher = Agent(tools=tools, ...)
```
3. Memory Management
Use memory across tasks:

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
researcher = Agent(memory=memory, ...)
```
Save/load memory from DB or cache for persistent sessions.
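One way to sketch the save/load step: serialize the message list per session and keep it in a cache. A plain dict stands in for `django.core.cache.cache` here; in the app you would call `cache.set`/`cache.get` with the same keys.

```python
import json

_cache = {}  # stand-in for django.core.cache.cache in this sketch


def save_memory(session_id, messages):
    """Persist a session's conversation messages as JSON."""
    _cache[f"crew_memory:{session_id}"] = json.dumps(messages)


def load_memory(session_id):
    """Load a session's messages, or an empty list for a new session."""
    raw = _cache.get(f"crew_memory:{session_id}")
    return json.loads(raw) if raw else []


save_memory("user-42", [{"role": "user", "content": "Research AI trends"}])
print(load_memory("user-42"))
```

The loaded messages can then seed a fresh `ConversationBufferMemory` before each crew run.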
4. Logging and Monitoring
Log agent actions and outputs to files or monitoring systems:

```python
import logging

logger = logging.getLogger(__name__)


def run_crewai_task(topic):
    logger.info(f"Starting task for topic: {topic}")
    ...
    logger.info(f"Result: {result}")
```
## Folder Structure
```
crew_project/
│
├── ai_agents/
│   ├── __init__.py
│   ├── agents.py
│   ├── views.py
│   ├── models.py
│   └── tasks.py
│
├── crew_project/
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
│
└── manage.py
```
## Best Practices
Practice | Description |
---|---|
✅ Use `.env` | Keep secrets out of the codebase |
✅ Modularize agents | Separate agent logic per domain |
✅ Use DRF for APIs | Cleaner API management |
✅ Async tasks | For long-running processes |
✅ Cache results | Avoid redundant LLM calls |
✅ Monitor costs | Track token usage and spend |
✅ Rate limiting | Protect endpoints from abuse |
## Security Considerations
- Validate input before sending to agents.
- Sanitize LLM output before displaying.
- Use rate limiting and authentication.
- Log suspicious inputs/output.
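As a sketch of the first point, topics can be validated before they reach the agents. The length cap and character whitelist below are illustrative choices, not CrewAI requirements:

```python
import re

MAX_TOPIC_LEN = 200
_TOPIC_RE = re.compile(r"^[\w\s.,:'\-]+$")


def validate_topic(topic):
    """Return a cleaned topic, or raise ValueError for unusable input."""
    topic = (topic or "").strip()
    if not topic or len(topic) > MAX_TOPIC_LEN:
        raise ValueError(f"topic must be 1-{MAX_TOPIC_LEN} characters")
    if not _TOPIC_RE.match(topic):
        raise ValueError("topic contains unsupported characters")
    return topic
```

A view would call `validate_topic(request.GET.get('topic'))` in a try/except and return a 400 response on failure.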
## Additional Integrations
Feature | Library |
---|---|
Frontend UI | React / Vue.js |
Realtime Updates | WebSockets / Channels |
Task Queue | Celery / Redis |
Monitoring | Sentry / Prometheus |
Deployment | Docker / Gunicorn / Nginx |
## Useful Commands
```bash
# Start Django dev server
python manage.py runserver

# Apply migrations
python manage.py migrate

# Create superuser
python manage.py createsuperuser

# Run tests
python manage.py test

# Collect static files
python manage.py collectstatic
```
## Summary Checklist
- ✅ Django setup
- ✅ CrewAI agent/task definitions
- ✅ Secure API endpoint
- ✅ Asynchronous support (optional)
- ✅ Logging & monitoring
- ✅ Model integration (optional)
- ✅ Input validation & sanitization
- ✅ Cost tracking and caching