    Deep Agents Tutorial: LangGraph for Smarter AI

By Mounish V | November 30, 2025


Imagine an AI that doesn’t just answer your questions but thinks ahead, breaks tasks down, creates its own TODOs, and even spawns sub-agents to get the work done. That’s the promise of Deep Agents. AI agents already take the capabilities of LLMs a notch higher, and today we’ll look at Deep Agents to see how they push that notch even further. Deep Agents is built on top of LangGraph, a library designed specifically for creating agents that can handle complex tasks. Let’s take a deeper look at Deep Agents, understand their core capabilities, and then use the library to build our own AI agent.

Table of Contents

• Deep Agents
• Core Components
• Building a Deep Agent
  • Pre-requisites
  • Requirements
  • Imports and API Setup
  • Defining the Tools, Sub-Agent and the Agent
  • Running Inference
  • Viewing the Output
• Potential Improvements in our Agent
• Conclusion
• Frequently Asked Questions

    Deep Agents 

LangGraph gives you a graph-based runtime for stateful workflows, but you still need to build your own planning, context-management, and task-decomposition logic from scratch. DeepAgents (built on top of LangGraph) bundles planning tools, virtual-file-system-based memory, and sub-agent orchestration out of the box.

    You can use DeepAgents via the standalone deepagents library. It includes planning capabilities, can spawn sub-agents, and uses a filesystem for context management. It can also be paired with LangSmith for deployment and monitoring. The agents built here use the “claude-sonnet-4-5-20250929” model by default, but this can be customized. Before we start creating the agents, let’s understand the core components.

    Core Components

• Detailed System Prompts – A deep agent runs with a system prompt containing detailed instructions and examples.
• Planning Tools – Deep agents ship with a built-in TODO-list management tool for planning, which helps them stay focused even while performing a complex task.
• Sub-Agents – Sub-agents are spawned for delegated tasks and execute in context isolation.
• File System – A virtual filesystem handles context and memory management; agents use files as a tool to offload context to memory when the context window fills up.
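To make the planning tool and the virtual file system concrete, here is a small self-contained toy sketch of the two ideas. This is purely illustrative, not the deepagents internals: a TODO list the agent updates as it works, and an in-memory dict standing in for files.

```python
# Toy illustration (NOT the deepagents implementation): a TODO list for
# planning plus a dict-backed "virtual file system" for offloading context.

class ToyAgentState:
    def __init__(self):
        self.todos = []   # planning: pending tasks
        self.files = {}   # virtual FS: path -> content

    def write_todos(self, tasks):
        self.todos = list(tasks)

    def complete_todo(self, task):
        self.todos.remove(task)

    def write_file(self, path, content):
        self.files[path] = content

    def read_file(self, path):
        return self.files[path]

    def ls(self):
        return sorted(self.files)

state = ToyAgentState()
state.write_todos(["search the web", "draft report"])
state.write_file("/research_findings.md", "LangGraph powers stateful agents.")
state.complete_todo("search the web")

print(state.todos)  # → ['draft report']
print(state.ls())   # → ['/research_findings.md']
```

The real library exposes these same ideas as tools (write_todos, write_file, read_file, ls) that the model calls during a run.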

    Building a Deep Agent 

Now let’s build a research agent using the ‘deepagents’ library. It will use Tavily for web search and include all the components of a deep agent.

    Note: We’ll be doing the tutorial in Google Colab.  

    Pre-requisites 

You’ll need an OpenAI API key for the agent we’ll be creating; you can also choose a different model provider such as Gemini or Claude. Get your OpenAI key from the platform: https://platform.openai.com/api-keys

    Also get a Tavily API key for websearch from here: https://app.tavily.com/home


    Open a new notebook in Google Colab and add the secret keys: 


Save the keys as OPENAI_API_KEY and TAVILY_API_KEY for the demo, and don’t forget to turn on notebook access for each secret.


    Requirements 

    !pip install deepagents tavily-python langchain-openai 

This installs the libraries needed to run the code.

    Imports and API Setup 

    import os 
    from deepagents import create_deep_agent 
    from tavily import TavilyClient 
    from langchain.chat_models import init_chat_model 
    from google.colab import userdata 
     
    
    # Set API keys 
    TAVILY_API_KEY=userdata.get("TAVILY_API_KEY") 
    os.environ["OPENAI_API_KEY"]=userdata.get("OPENAI_API_KEY") 

We store the Tavily API key in a variable and the OpenAI API key in an environment variable.

    Defining the Tools, Sub-Agent and the Agent 

    # Initialize Tavily client 
    tavily_client = TavilyClient(api_key=TAVILY_API_KEY) 
     
    # Define web search tool 
def internet_search(query: str, max_results: int = 5) -> dict:
       """Run a web search to find current information""" 
       results = tavily_client.search(query, max_results=max_results) 
       return results  
    
    # Define a specialized research sub-agent 
    research_subagent = { 
       "name": "data-analyzer", 
       "description": "Specialized agent for analyzing data and creating detailed reports", 
       "system_prompt": """You are an expert data analyst and report writer. 
       Analyze information thoroughly and create well-structured, detailed reports.""", 
       "tools": [internet_search], 
       "model": "openai:gpt-4o", 
    }  
    
    # Initialize GPT-4o-mini model 
    model = init_chat_model("openai:gpt-4o-mini") 
    # Create the deep agent 
    # The agent automatically has access to: write_todos, read_todos, ls, read_file, 
    # write_file, edit_file, glob, grep, and task (for subagents) 
    agent = create_deep_agent( 
       model=model, 
       tools=[internet_search],  # Passing the tool 
       system_prompt="""You are a thorough research assistant. For this task: 
       1. Use write_todos to create a task list breaking down the research 
       2. Use internet_search to gather current information 
       3. Use write_file to save your findings to /research_findings.md 
       4. You can delegate detailed analysis to the data-analyzer subagent using the task tool 
       5. Create a final comprehensive report and save it to /final_report.md 
       6. Use read_todos to check your progress 
    
       Be systematic and thorough in your research.""", 
       subagents=[research_subagent], 
    ) 

We have defined a web-search tool and passed it to our agent. We’re using OpenAI’s ‘gpt-4o-mini’ for this demo; you can change this to any supported model.

Also note that we didn’t create any files or define anything for the file system used to offload context, nor for the TODO list: these tools are pre-built into ‘create_deep_agent()’, and the agent automatically has access to them.

    Running Inference 

    # Research query 
    research_topic = "What are the latest developments in AI agents and LangGraph in 2025?"  
    
    print(f"Starting research on: {research_topic}\n") 
    print("=" * 70)  
    
    # Execute the agent 
    result = agent.invoke({ 
       "messages": [{"role": "user", "content": research_topic}] 
    }) 
    
    print("\n" + "=" * 70) 
    print("Research completed.\n") 

    Note: The agent execution might take a while.  

    Viewing the Output

    # Agent execution trace 
    print("AGENT EXECUTION TRACE:") 
    print("-" * 70) 
    for i, msg in enumerate(result["messages"]): 
       if hasattr(msg, 'type'): 
           print(f"\n[{i}] Type: {msg.type}") 
           if msg.type == "human": 
               print(f"Human: {msg.content}") 
           elif msg.type == "ai": 
               if hasattr(msg, 'tool_calls') and msg.tool_calls: 
                   print(f"AI tool calls: {[tc['name'] for tc in msg.tool_calls]}") 
               if msg.content: 
                   print(f"AI: {msg.content[:200]}...") 
           elif msg.type == "tool": 
               print(f"Tool '{msg.name}' result: {str(msg.content)[:200]}...") 
    # Final AI response 
    print("\n" + "=" * 70) 
    final_message = result["messages"][-1] 
    print("FINAL RESPONSE:") 
    print("-" * 70) 
    print(final_message.content) 
    # Files created 
    print("\n" + "=" * 70) 
    print("FILES CREATED:") 
    print("-" * 70) 
    if "files" in result and result["files"]: 
       for filepath in sorted(result["files"].keys()): 
           content = result["files"][filepath] 
           print(f"\n{'=' * 70}") 
           print(f"{filepath}") 
           print(f"{'=' * 70}") 
           print(content) 
    else: 
       print("No files found.") 
    
    print("\n" + "=" * 70) 
    print("Analysis complete.") 
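The trace loop above relies on message objects exposing .type, .content, and .tool_calls attributes. The following self-contained toy uses stand-in message classes (not real LangChain objects) to show the same filtering logic offline:

```python
# Stand-in message objects mimicking the .type / .content / .tool_calls
# attributes the trace loop relies on (illustrative, not LangChain classes).
from dataclasses import dataclass, field

@dataclass
class Msg:
    type: str
    content: str = ""
    tool_calls: list = field(default_factory=list)
    name: str = ""

messages = [
    Msg("human", "What is LangGraph?"),
    Msg("ai", tool_calls=[{"name": "internet_search"}]),
    Msg("tool", "LangGraph is a library...", name="internet_search"),
    Msg("ai", "LangGraph is a graph-based agent runtime."),
]

# Collect the names of all tool calls made by AI messages.
tool_names = [tc["name"] for m in messages if m.type == "ai" for tc in m.tool_calls]
# The last message holds the final answer, as in result["messages"][-1].
final = messages[-1].content

print(tool_names)  # → ['internet_search']
print(final)       # → LangGraph is a graph-based agent runtime.
```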

As we can see, the agent did a good job: it maintained a virtual file system, produced a response after multiple iterations, and behaved like a proper deep agent. But there is room for improvement in our system; let’s look at that in the next section.

    Potential Improvements in our Agent 

We built a simple deep agent, but you can challenge yourself and build something much better. Here are a few things you can do to improve this agent:

1. Use Long-term Memory – The deep agent can preserve user preferences and feedback in files (under /memories/). This will help the agent give better answers and build a knowledge base from its conversations.
2. Control the File System – By default, files are stored in a virtual state; you can switch this to a different backend or the local disk using ‘FilesystemBackend’ from deepagents.backends.
3. Refine the System Prompts – Test out multiple prompts to see which works best for you.
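The long-term-memory idea from point 1 boils down to persisting files across sessions instead of keeping them in a virtual state. Here is a plain-Python sketch of that idea using a local /memories/-style directory; this is NOT the deepagents FilesystemBackend API, just an illustration of the pattern.

```python
# Persist user preferences to disk so they survive across agent sessions.
# Plain-Python sketch of the long-term-memory idea (not the deepagents API).
import json
import tempfile
from pathlib import Path

memories = Path(tempfile.mkdtemp()) / "memories"
memories.mkdir()

def save_preference(key: str, value: str) -> None:
    prefs_file = memories / "preferences.json"
    prefs = json.loads(prefs_file.read_text()) if prefs_file.exists() else {}
    prefs[key] = value
    prefs_file.write_text(json.dumps(prefs))

def load_preferences() -> dict:
    prefs_file = memories / "preferences.json"
    return json.loads(prefs_file.read_text()) if prefs_file.exists() else {}

save_preference("report_style", "bullet points")
save_preference("citation_format", "inline links")
print(load_preferences())
# → {'report_style': 'bullet points', 'citation_format': 'inline links'}
```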

    Conclusion 

We have successfully built our deep agent and can now see how AI agents push LLM capabilities a notch higher, with LangGraph handling the task orchestration. With built-in planning, sub-agents, and a virtual file system, deep agents manage TODOs, context, and research workflows smoothly. Deep Agents are great, but remember: if a task is simple enough for a plain agent or a single LLM call, using a deep agent is not recommended.

    Frequently Asked Questions

    Q1. Can I use an alternative to Tavily for web search? 

    A. Yes. Instead of Tavily, you can integrate SerpAPI, Firecrawl, Bing Search, or any other web search API. Simply replace the search function and tool definition to match the new provider’s response format and authentication method. 
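Because the agent only sees the tool’s name, signature, and docstring, swapping providers just means swapping the function body. Here is a hedged sketch where a fake offline provider stands in for SerpAPI/Bing/etc. (the fake_provider_search function is hypothetical; a real provider needs its own SDK and auth):

```python
# The agent only depends on the tool's interface, so the provider behind it
# can be swapped freely. fake_provider_search is a hypothetical stand-in.

def fake_provider_search(query: str, max_results: int) -> list[dict]:
    # Stand-in for a real API call; returns provider-shaped results.
    return [{"title": f"Result {i} for {query}", "url": f"https://example.com/{i}"}
            for i in range(max_results)]

def internet_search(query: str, max_results: int = 5) -> dict:
    """Drop-in replacement keeping the same name and signature as the
    Tavily-based tool, so create_deep_agent(tools=[internet_search], ...)
    needs no changes."""
    hits = fake_provider_search(query, max_results)
    # Normalize to one shape regardless of which provider produced the hits.
    return {"query": query, "results": hits}

out = internet_search("LangGraph", max_results=2)
print(len(out["results"]))  # → 2
```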

    Q2. Can I change the default model used by the deep agent? 

    A. Absolutely. Deep Agents are model-agnostic, so you can switch to Claude, Gemini, or other OpenAI models by modifying the model parameter. This flexibility ensures you can optimize performance, cost, or latency depending on your use case. 

    Q3. Do I need to manually set up the filesystem? 

    A. No. Deep Agents automatically provide a virtual filesystem for managing memory, files, and long contexts. This eliminates the need for manual setup, although you can configure custom storage backends if required. 

    Q4. Can I add more specialized sub-agents? 

    A. Yes. You can create multiple sub-agents, each with its own tools, system prompts, and capabilities. This allows the main agent to delegate work more effectively and handle complex workflows through modular, distributed reasoning. 


    Mounish V

    Passionate about technology and innovation, a graduate of Vellore Institute of Technology. Currently working as a Data Science Trainee, focusing on Data Science. Deeply interested in Deep Learning and Generative AI, eager to explore cutting-edge techniques to solve complex problems and create impactful solutions.

