Implementing CrewAI with Gemini: A Deep Dive into LLM Integration Challenges

Chiraggarg
2 min read · Nov 1, 2024


Disclaimer: The base implementation code discussed in this article was sourced from Krish Naik’s educational content. This article focuses on my personal experience troubleshooting integration issues and the lessons learned while working with this code.

As AI development continues to evolve, tools like CrewAI are becoming increasingly popular for orchestrating multiple AI agents to work together. While following Krish Naik’s tutorial on CrewAI implementation, I encountered several integration challenges that I believe would be valuable to share with the community. In this article, I’ll walk you through my experience integrating CrewAI with different Large Language Models (LLMs) and the key lessons learned along the way.

Original Implementation

Here’s the initial code from the tutorial:

from crewai import Agent
from tools import tool
from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI
import os

# Load the Google API key from the .env file
load_dotenv()

llm = ChatGoogleGenerativeAI(
    model="gemini-1.5-pro",
    verbose=True,
    temperature=0.5,
    google_api_key=os.getenv("GOOGLE_API_KEY"),
    base_url="https://generativelanguage.googleapis.com"
)

news_researcher = Agent(
    role="Senior Researcher",
    goal='Uncover ground breaking technologies in {topic}',
    verbose=True,
    memory=True,
    backstory=(
        "Driven by curiosity, you're at the forefront of "
        "innovation, eager to explore and share knowledge that could change "
        "the world."
    ),
    tools=[tool],
    llm=llm,
    allow_delegation=True
)

While this implementation works well in the tutorial’s context, I faced several integration challenges when trying to run it in my environment.

The Challenge: Understanding LLM Provider Requirements

When implementing the tutorial code, I encountered this error:

litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call.

This error highlighted a crucial aspect of CrewAI that wasn’t immediately apparent from the tutorial: its dependency on the litellm library for LLM integration, which requires explicit provider specification.
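To understand what litellm expects, it helped me to call it directly, outside of CrewAI. Here is a minimal sketch (the prompt is just a placeholder); note the provider prefix in the model string, which is exactly what my original setup was missing. As far as I can tell, recent litellm versions look for a GEMINI_API_KEY environment variable for the gemini provider, so set it from your Google API key before running this:

import litellm

# The "gemini/" prefix tells litellm which provider to route the call to.
# For this provider, litellm reads GEMINI_API_KEY from the environment.
response = litellm.completion(
    model="gemini/gemini-pro",  # provider prefix + model name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)

print(response.choices[0].message.content)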

My Solution: Modified Implementation

After several iterations and some research, here’s the modified implementation that finally worked:

from crewai import Agent
from tools import tool
from dotenv import load_dotenv
import os
from langchain.chat_models import ChatOpenAI
import litellm

load_dotenv()

# Configure litellm
os.environ["OPENAI_API_KEY"] = os.getenv("GOOGLE_API_KEY")
litellm.api_key = os.getenv("GOOGLE_API_KEY")

# Initialize the model using litellm wrapped in ChatOpenAI
llm = ChatOpenAI(
    model_name="gemini/gemini-pro",  # Note the provider prefix
    temperature=0.5,
    openai_api_key=os.getenv("GOOGLE_API_KEY"),
    max_tokens=1000
)
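With the llm object configured this way, the agent definition itself stays the same as in the tutorial code; only the model it points to changes:

# Reuse the agent from the original implementation, now backed by the
# ChatOpenAI-wrapped Gemini model configured above.
news_researcher = Agent(
    role="Senior Researcher",
    goal='Uncover ground breaking technologies in {topic}',
    verbose=True,
    memory=True,
    backstory=(
        "Driven by curiosity, you're at the forefront of "
        "innovation, eager to explore and share knowledge that could change "
        "the world."
    ),
    tools=[tool],
    llm=llm,
    allow_delegation=True
)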

Key Learning Points from My Experience

  1. Provider Specification is Crucial
    - The LLM provider must be explicitly specified
    - Use the format provider/model-name (e.g., gemini/gemini-pro)
  2. LLM Wrapper Selection
    - ChatOpenAI wrapper provides better compatibility
    - It handles the translation between different API formats (see the alternative sketch after this list)
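Depending on which CrewAI version you have installed, there may be an even shorter path: recent releases ship their own LLM wrapper that takes the provider-prefixed model name directly and drives litellm under the hood. I haven’t verified this against every release, so treat the class and parameter names below as assumptions to check against the CrewAI docs for your version:

from crewai import LLM  # assumption: available in newer CrewAI releases
import os

# The provider-prefixed model name is passed straight through to litellm,
# so no OpenAI-compatible wrapper is needed.
gemini_llm = LLM(
    model="gemini/gemini-pro",
    temperature=0.5,
    api_key=os.getenv("GOOGLE_API_KEY"),
)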

Conclusion

Implementing CrewAI with different LLM providers can be tricky, but understanding the proper configuration and requirements makes it manageable. The key is to pay attention to provider specification, proper environment setup, and using the right wrappers for your chosen LLM.

Remember that the AI landscape is constantly evolving, so always check the latest documentation for CrewAI and your chosen LLM provider for any updates or changes in implementation requirements.

Tips:

  • Always start with a simple implementation and gradually add complexity
  • Keep your dependencies updated
  • Test your implementation with smaller tasks before scaling up
  • Monitor your API usage and costs
  • Maintain proper error handling in your production code (a sketch follows below)
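On that last point, here’s roughly what such error handling could look like: a minimal sketch that wraps a direct litellm call and catches the exception classes litellm raises (the model name and prompt are only placeholders):

import litellm
from litellm.exceptions import AuthenticationError, BadRequestError, RateLimitError

def ask_gemini(prompt):
    """Call Gemini through litellm, returning None on a handled failure."""
    try:
        response = litellm.completion(
            model="gemini/gemini-pro",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
    except BadRequestError as err:
        # e.g. missing provider prefix or a malformed request
        print(f"Bad request: {err}")
    except AuthenticationError as err:
        # e.g. wrong or missing API key
        print(f"Authentication failed: {err}")
    except RateLimitError as err:
        # e.g. quota exhausted; consider retrying with backoff
        print(f"Rate limited: {err}")
    return None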

Have you implemented CrewAI in your projects? What challenges did you face? Share your experiences in the comments below!
