
Critical LangChain "LangGrinch" Flaw Exposes Secrets - Patch Now

A critical serialization vulnerability in langchain-core, tracked as CVE-2025-68664 and nicknamed LangGrinch, can allow attackers to exfiltrate API keys, manipulate LLM outputs, and in some cases trigger remote code execution. Security teams and developers should update affected packages immediately and audit any workflows that serialize or reload LLM outputs.

Dr. Elena Rodriguez · 3 min read

A newly disclosed flaw in the langchain-core Python library poses an urgent risk to applications that serialize or persist outputs from large language models. Tracked as CVE-2025-68664 and carrying a CVSS score of 9.3, the vulnerability, dubbed LangGrinch by its reporter, allows crafted input to be interpreted as trusted LangChain objects during deserialization, enabling secret exfiltration, prompt manipulation and, under certain conditions, remote code execution.

Researcher Yarden Porat reported the flaw on December 4, 2025, and public advisories and coverage increased through late December. The issue resides in LangChain's built-in serialization helpers, specifically the dumps() and dumpd() APIs, which format objects into the project's internal representation. LangChain uses a reserved marker key "lc" in serialized dictionaries to denote LangChain objects. Because dumps()/dumpd() did not properly escape or neutralize dictionaries containing that reserved key when those dictionaries could be influenced by untrusted sources, an attacker can combine prompt-injection techniques with this serialization gap so that a later deserialization step treats attacker-controlled data as a trusted object rather than as untrusted content.
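To make the reserved-key confusion concrete, here is a minimal, library-free sketch of the pattern described above. The class name, registry, and trivial deserializer are hypothetical stand-ins; only the reserved "lc" key mirrors LangChain's convention, and this is not LangChain's actual code.

```python
# Hypothetical sketch of a deserializer that treats the reserved "lc"
# marker as proof that a dict is a trusted, internally produced object.
import json

REGISTRY = {}  # maps a serialized type name to a "trusted" constructor

class SecretHolder:
    """Stand-in for a privileged object the deserializer can construct."""
    def __init__(self, secret):
        self.secret = secret

REGISTRY["SecretHolder"] = SecretHolder

def naive_loads(blob: str):
    """Flawed deserializer: any dict carrying the marker is trusted."""
    data = json.loads(blob)
    if isinstance(data, dict) and "lc" in data:   # marker == trusted (the bug)
        ctor = REGISTRY[data["type"]]
        return ctor(**data["kwargs"])
    return data  # everything else stays plain data

# An attacker who can smuggle a dict into the serialized stream (e.g. via
# prompt injection into stored LLM output) includes the reserved key:
malicious = '{"lc": 1, "type": "SecretHolder", "kwargs": {"secret": "leak"}}'
obj = naive_loads(malicious)
print(type(obj).__name__)  # attacker-controlled data became a trusted object
```

The fix direction is the inverse of the flawed check: data from untrusted sources must never be able to carry the reserved marker into the deserializer unescaped.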

Exploitation scenarios are straightforward for environments that store or reload LLM outputs, metadata or structured responses. An attacker who can coerce an LLM to emit specially structured output, or who can submit crafted user-controlled dictionaries into a serialization pipeline, can cause sensitive data such as API keys or cloud credentials to be captured or relayed. The flaw also enables manipulation of downstream LLM behavior by injecting objects that change how subsequent prompts are handled. In certain deployment configurations, the vulnerability can be chained into remote code execution.
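One defensive pattern against this class of injection is to escape the reserved key in any untrusted mapping before it enters a serialization pipeline. The helper below is a hypothetical illustration of that idea, not LangChain's actual patched implementation; the escaped key name is invented for the sketch.

```python
# Hypothetical sanitizer: recursively rename the reserved "lc" key in
# untrusted data so a later deserializer cannot mistake the payload for
# a trusted LangChain object. Illustrative only.
def escape_reserved(value):
    if isinstance(value, dict):
        return {
            ("__escaped_lc__" if k == "lc" else k): escape_reserved(v)
            for k, v in value.items()
        }
    if isinstance(value, list):
        return [escape_reserved(v) for v in value]
    return value

untrusted = {"answer": "ok", "meta": {"lc": 1, "type": "Anything"}}
safe = escape_reserved(untrusted)
# safe["meta"] no longer carries the reserved marker
```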

The langchain-core package is foundational for many LangChain agents and applications, which magnifies the potential impact across production deployments and hosted services. Different advisories and reports list varying affected and fixed version ranges; analysts have cited fixes in 1.1.8, 1.2.3, 1.2.5 and 0.3.80/0.3.81, among other circulated version numbers. Operators should treat those ranges as preliminary and confirm the exact patched releases in LangChain's official advisory and GitHub release notes before applying updates.
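As a first triage step, teams can check which langchain-core release a deployment is actually running. The sketch below uses the standard-library importlib.metadata API; the 0.3.81 threshold is one of the preliminary fixed versions cited in coverage and should be confirmed against the official advisory before being relied on.

```python
# Sketch: report the installed langchain-core version and compare it to a
# candidate fixed release. The threshold (0.3.81) is an assumption taken
# from preliminary reports, not a confirmed cutoff.
from importlib.metadata import version, PackageNotFoundError

def parse(v: str):
    """Turn '0.3.81' into (0, 3, 81); non-numeric parts are dropped."""
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def is_at_least(installed: str, fixed: str) -> bool:
    """True if the installed version is at or above the fixed version."""
    return parse(installed) >= parse(fixed)

try:
    v = version("langchain-core")
    print(v, "at or above candidate 0.3.x fix:", is_at_least(v, "0.3.81"))
except PackageNotFoundError:
    print("langchain-core is not installed in this environment")
```

For production use, the third-party packaging library handles pre-release and post-release tags more robustly than this naive tuple comparison.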

Immediate mitigation steps are clear: update langchain-core and related langchain packages to the patched versions published by the maintainers, prioritize a review of any code paths that serialize or reload LLM output, and strengthen account protections such as multi-factor authentication and credential rotation. Teams should also review memory and compression settings where serialized data is stored and enhance monitoring for anomalous LLM outputs or unexpected deserialization activity.
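The audit step above can be partly automated: scan persisted serialized blobs for the reserved "lc" marker before reloading them. A hit is not proof of compromise, but it flags records that deserve manual review. The storage layout here (one JSON document per record) is an assumption for illustration.

```python
# Sketch: flag persisted JSON records that carry the reserved "lc" key
# anywhere in their structure, so they can be reviewed before reload.
import json

def contains_reserved_marker(value) -> bool:
    """Recursively check dicts and lists for the reserved 'lc' key."""
    if isinstance(value, dict):
        return "lc" in value or any(contains_reserved_marker(v) for v in value.values())
    if isinstance(value, list):
        return any(contains_reserved_marker(v) for v in value)
    return False

def flag_suspicious(records):
    """Yield indices of serialized records carrying the reserved key."""
    for i, blob in enumerate(records):
        try:
            if contains_reserved_marker(json.loads(blob)):
                yield i
        except json.JSONDecodeError:
            continue  # malformed records need separate handling

store = ['{"text": "hello"}', '{"meta": {"lc": 1, "type": "X"}}']
print(list(flag_suspicious(store)))  # → [1]
```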

This incident follows a pattern of high-impact issues in the LangChain ecosystem, including past CVEs such as CVE-2024-36480, CVE-2023-46229 and CVE-2023-44467, underscoring the need for rigorous security controls in systems that integrate LLMs. Given the severity of the flaw and the wide use of langchain-core, developers and security teams should treat this vulnerability as a top priority and patch without delay.
