Commit 5832d07

Update model in ChatGroq initialization (#2000)
### Summary

This PR updates the Groq chat integration example to use a **currently supported Groq model**, improving the reliability of the documentation for users following the example.

---

### Change Details

In `src/oss/python/integrations/chat/groq.mdx`, the example model has been updated with reference to the [Groq documentation](https://console.groq.com/docs/reasoning#supported-models):

```diff
- model="deepseek-r1-distill-llama-70b",
+ model="qwen/qwen3-32b",
```
1 parent 1739073 · commit 5832d07

1 file changed (+1 −1): `src/oss/python/integrations/chat/groq.mdx`

src/oss/python/integrations/chat/groq.mdx

Lines changed: 1 addition & 1 deletion
```diff
@@ -76,7 +76,7 @@ If you choose to set a `reasoning_format`, you must ensure that the model you ar
 from langchain_groq import ChatGroq

 llm = ChatGroq(
-    model="deepseek-r1-distill-llama-70b",
+    model="qwen/qwen3-32b",
     temperature=0,
     max_tokens=None,
     reasoning_format="parsed",
```
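For context, here is a minimal, self-contained sketch of the example as it reads after this change. It assumes the `langchain-groq` package is installed and a `GROQ_API_KEY` environment variable is set; the `invoke` call and the prompt at the end are illustrative usage, not part of the documented snippet.

```python
# Sketch of the updated example from groq.mdx (assumes langchain-groq is
# installed and GROQ_API_KEY is set in the environment).
from langchain_groq import ChatGroq

llm = ChatGroq(
    model="qwen/qwen3-32b",     # currently supported Groq model (see linked Groq docs)
    temperature=0,
    max_tokens=None,
    reasoning_format="parsed",  # requires a model that supports reasoning output
)

# Illustrative usage (hypothetical prompt, not part of the diff):
response = llm.invoke("In one sentence, why is the sky blue?")
print(response.content)
```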

0 commit comments