High severity: TavilySearchResults (langchain_community.tools.tavily_search)

Sudden spike in LLM token consumption and cost during agent runs that use Tavily search; context windows overflow; agents fail with token-limit errors or degrade in quality because tool output crowds out the rest of the prompt.

Root cause

LangChain's TavilySearchResults tool defaults to search_depth='advanced' (deeper search, more content per result) and max_results=5, producing verbose JSON responses, often thousands of tokens per call, that overwhelm LLM context windows in agent loops, especially when an agent makes multiple tool calls per run.
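A minimal, self-contained sketch of one mitigation for the behavior above: cap how much of each tool response reaches the model before it enters the agent loop. The character budget, field choices, and helper name here are illustrative assumptions, not part of LangChain's API; in practice you would also construct the tool with tighter arguments (e.g. a smaller max_results, or search_depth='basic' where acceptable).

```python
import json

# Rough heuristic budget (assumption): ~4 chars per token,
# so 2000 chars is roughly a 500-token cap per tool call.
MAX_TOOL_CHARS = 2000

def truncate_tool_output(results, max_chars=MAX_TOOL_CHARS):
    """Keep only the fields the agent needs and clip the serialized payload.

    `results` is a list of search-result dicts with 'title', 'url',
    and 'content' keys, the shape Tavily-style tools commonly return.
    """
    slim = [
        {
            "title": r.get("title", ""),
            "url": r.get("url", ""),
            # Clip each snippet so one verbose result cannot eat the budget.
            "content": r.get("content", "")[:300],
        }
        for r in results
    ]
    # Hard cap on the serialized payload as a final safety net.
    return json.dumps(slim)[:max_chars]

# Simulated verbose output, like search_depth='advanced' with max_results=5:
fake_results = [
    {"title": f"t{i}", "url": f"https://example.com/{i}", "content": "x" * 5000}
    for i in range(5)
]
out = truncate_tool_output(fake_results)
print(len(out) <= MAX_TOOL_CHARS)  # True
```

Wrapping the tool's raw output this way keeps per-call token usage bounded regardless of how verbose the upstream search results are, at the cost of losing the tail of long snippets.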

TavilySearchResults, langchain, token limit, context overflow, search_depth=advanced
