Building an AI Study Planner with LangGraph and Google Gemini
In the rapidly evolving world of AI agents, LangGraph has emerged as a powerful framework (built on top of LangChain) for creating stateful, cyclical, and multi-actor applications. Unlike simple chains, LangGraph allows us to build loops and complex decision-making processes.
In this tutorial, we will build a Personalized Study Planner Agent. This agent will take a user’s learning goal, ask clarifying questions if the goal is too vague, and finally generate a detailed study plan using Google’s Gemini 3 Pro (you can replace this with any Gemini model).
Prerequisites
Before we start, make sure you have:
- Python 3.9+ installed.
- A Google Cloud Project with the Vertex AI API enabled (or a Google AI Studio API key).
- Installed the necessary libraries:
pip install langgraph langchain-google-genai langchain-core python-dotenv
Step 1: Setting up the Environment
First, create a .env file to store your API key. If you are using Google AI Studio:
GOOGLE_API_KEY=your_api_key_here
Now, let’s start our study_planner.py script.
Step 2: Defining the Graph State
In LangGraph, the “State” is a shared data structure that is passed between nodes. It acts as the memory of our agent.
import os
from typing import TypedDict, List
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Define our State
class StudyState(TypedDict):
    topic: str                       # The user's main topic
    feedback: str                    # User's feedback/answers to clarifying questions
    plan: str                        # The final generated study plan
    clarifying_questions: List[str]  # Questions the agent needs to ask
    is_sufficient: bool              # Flag to determine if we have enough info
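A note on how updates work: each node returns a partial dictionary, and LangGraph merges it into the state by overwriting the matching keys. If you ever need a key to accumulate values instead (a running message history, say), you can attach a reducer with `Annotated`. A minimal sketch, using a hypothetical `ChatState` that is not part of our planner:

import operator
from typing import Annotated, TypedDict

class ChatState(TypedDict):
    # operator.add tells LangGraph to concatenate each node's returned
    # list onto the existing value instead of replacing it
    messages: Annotated[list, operator.add]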
Step 3: Initializing the LLM
We’ll use ChatGoogleGenerativeAI to interact with the Gemini model.
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(
    model="gemini-3-pro",
    temperature=0.7,
    google_api_key=os.getenv("GOOGLE_API_KEY")
)
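Before wiring up the graph, it’s worth a one-off call to confirm the API key and model name actually work; `invoke` accepts a plain string:

# Quick sanity check outside the graph
print(llm.invoke("Reply with the single word: ready").content)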
Step 4: Creating the Nodes
Our graph will have two main nodes:
- analyzer_node: Checks if the user’s request is detailed enough. If not, it generates clarifying questions.
- planner_node: Generates the final study plan once we have enough info.
The Analyzer Node
from langchain_core.messages import HumanMessage

def analyzer_node(state: StudyState):
    print("--- DATA GATHERING & ANALYSIS ---")
    topic = state.get('topic', '')
    feedback = state.get('feedback', '')

    # Simple prompt engineering to act as an analyzer
    prompt = f"""
    You are an expert study advisor. The user wants to learn about: '{topic}'.
    User feedback so far: '{feedback}'.
    Is this enough information to create a detailed weekly study plan?
    It should include current level, time commitment, and specific goals.
    If YES, output 'SUFFICIENT'.
    If NO, generate 3 specific clarifying questions to ask the user.
    """

    response = llm.invoke([HumanMessage(content=prompt)])
    content = response.content.strip()

    if "SUFFICIENT" in content:
        return {"is_sufficient": True}
    else:
        # When info is missing, the model's reply is the questions themselves
        return {"is_sufficient": False, "clarifying_questions": [content]}
The Planner Node
def planner_node(state: StudyState):
    print("--- GENERATING STUDY PLAN ---")
    topic = state['topic']
    feedback = state.get('feedback', '')

    prompt = f"""
    Create a comprehensive 4-week study plan for: {topic}.
    Incorporate the following user preferences: {feedback}.
    Format it as a markdown table with Week, Topic, and Resources.
    """

    response = llm.invoke([HumanMessage(content=prompt)])
    return {"plan": response.content}
Step 5: Building the Graph
Now we define the workflow. We use a conditional edge to decide whether to go to the planner or stop and ask the user for more info.
from langgraph.graph import StateGraph, END

# Initialize the graph
workflow = StateGraph(StudyState)

# Add nodes
workflow.add_node("analyzer", analyzer_node)
workflow.add_node("planner", planner_node)

# Define entry point
workflow.set_entry_point("analyzer")

# Define conditional logic
def should_continue(state: StudyState):
    if state['is_sufficient']:
        return "planner"
    else:
        return END

# Add edges
workflow.add_conditional_edges(
    "analyzer",
    should_continue,
    {
        "planner": "planner",
        END: END
    }
)
workflow.add_edge("planner", END)

# Compile the graph
app = workflow.compile()
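To verify the wiring before running anything, you can ask LangGraph for a Mermaid rendering of the compiled graph:

# Optional: inspect the topology as a Mermaid diagram
print(app.get_graph().draw_mermaid())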
Step 6: Running the Agent
This is where everything comes together. We simulate an interaction: the first run stops at the analyzer with clarifying questions, we supply a canned answer, and then run the graph again with the enriched input.
if __name__ == "__main__":
    current_topic = "Quantum Physics"
    current_feedback = ""
    print(f"Goal: {current_topic}")

    # Initial run
    inputs = {"topic": current_topic, "feedback": current_feedback}
    for output in app.stream(inputs):
        for key, value in output.items():
            print(f"Finished running: {key}")
            # If we stopped at the analyzer with questions (simulated interaction)
            if key == "analyzer" and not value.get("is_sufficient"):
                questions = value.get("clarifying_questions", [""])[0]
                print(f"\nAI needs clarification:\n{questions}")
                # In a real app, you would get this from `input()`.
                # Here we simulate the user answering.
                current_feedback = "I am a beginner and have 5 hours a week."
                print(f"\n(User Responds: {current_feedback})\n")
                inputs["feedback"] = current_feedback
            elif key == "planner":
                print("\nFINAL PLAN:\n")
                print(value["plan"])

    # Second run with the feedback so the planner can execute
    # (a real app would loop on user input -- see the sketch below)
    if current_feedback:
        result = app.invoke(inputs)
        print("\nFINAL PLAN:\n")
        print(result.get("plan", "Still need more info."))
Conclusion
You’ve just built a stateful AI agent using LangGraph! This pattern—Analyze -> Clarify (Loop) -> Execute—is fundamental to building robust AI applications that don’t just guess what the user wants but actively refine the requirements.