Commits (67)
4cee25d
Add native Ollama LLM support
ayman3000 Nov 16, 2025
fe1ee91
Fix Ollama integration: add model_version, usage metadata, safe JSON …
ayman3000 Nov 16, 2025
3a786dd
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 17, 2025
28ca391
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 17, 2025
95e601c
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 18, 2025
a40e261
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 18, 2025
040e253
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 18, 2025
a79433c
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 18, 2025
2288461
Fix formatting and imports for CI
ayman3000 Nov 19, 2025
b909b56
Fix formatting and imports for CI
ayman3000 Nov 19, 2025
b20848c
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 19, 2025
4e8d9f4
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 19, 2025
f0a7138
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 19, 2025
d5fca86
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 19, 2025
74c8d72
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 19, 2025
f97d4bc
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 19, 2025
e2ab4b2
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 20, 2025
b9c11e5
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 20, 2025
0aa1f9f
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 21, 2025
f0b3f98
Fix hello_world_ollama_native/agent.py formatting
ayman3000 Nov 21, 2025
d646742
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 21, 2025
e4e33df
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 21, 2025
5b20acf
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 21, 2025
92a8b2a
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 22, 2025
87de44c
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 22, 2025
9318689
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 22, 2025
8bd865b
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 25, 2025
5d6688a
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 25, 2025
e486fe9
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 25, 2025
8e9657e
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 26, 2025
ac7d9d4
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 26, 2025
661340e
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 27, 2025
c98d6a3
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 1, 2025
bb37bd2
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 1, 2025
f86b0a4
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 1, 2025
e9f2e9e
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 2, 2025
440b1ce
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 2, 2025
d1718f0
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 3, 2025
84aaa6d
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 3, 2025
6f1d15c
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 3, 2025
38f54fb
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 3, 2025
6ceae6d
Fix formatting for CI
ayman3000 Dec 3, 2025
7ff6397
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 3, 2025
046ddc8
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 4, 2025
869570a
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 4, 2025
f2d933b
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 5, 2025
274bc63
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 5, 2025
97ee91e
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 5, 2025
7bd3a91
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 6, 2025
d78caa7
refactor(tests): clean and reorganize Ollama test suite for readabili…
ayman3000 Dec 9, 2025
938d484
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 9, 2025
864eca7
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 9, 2025
001c5af
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 10, 2025
920cd8d
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 10, 2025
905a0a7
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 12, 2025
f0516f2
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 17, 2025
2e6f43c
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 17, 2025
f8e876d
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 17, 2025
9a968a2
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 17, 2025
b2c1bf3
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 18, 2025
5693b12
fix imports and formatting
ayman3000 Dec 18, 2025
8a07c24
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 18, 2025
4078f38
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 19, 2025
93395f6
Fix Ollama host configuration, examples, and documentation mismatches
ayman3000 Dec 19, 2025
0353759
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 19, 2025
afa35ea
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 22, 2025
cd23598
Merge branch 'main' into feature/ollama-llm
ayman3000 Dec 24, 2025
117 changes: 117 additions & 0 deletions contributing/samples/hello_world_ollama_native/README.md
@@ -0,0 +1,117 @@
# Using Ollama Models with ADK (Native Integration)

## Model Choice

If your agent uses tools, choose an Ollama model that supports **function calling**.
Tool support can be verified with:

```bash
ollama show mistral-small3.1
```

```
  Model
    architecture        mistral3
    parameters          24.0B
    context length      131072
    embedding length    5120
    quantization        Q4_K_M

  Capabilities
    completion
    vision
    tools
```

Models must list `tools` under **Capabilities**.
Models without tool support will not execute ADK functions correctly.
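
To check capabilities programmatically rather than reading CLI output, Ollama's REST API exposes the same information. A minimal sketch, assuming a local server and an Ollama version recent enough to report a `capabilities` field from `/api/show`:

```python
import requests

# Ask the local Ollama server to describe a model. The /api/show endpoint
# returns model metadata; newer Ollama versions include a "capabilities" list.
resp = requests.post(
    "http://localhost:11434/api/show",
    json={"model": "mistral-small3.1"},
    timeout=30,
)
resp.raise_for_status()
info = resp.json()

# "tools" must appear in the capabilities for function calling to work.
if "tools" in info.get("capabilities", []):
  print("Model supports tool calling.")
else:
  print("Model does NOT support tool calling; pick another model.")
```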

To inspect or customize a model template:
```bash
ollama show --modelfile llama3.1 > model_file_to_modify
```
Then create a modified model:

```bash
ollama create llama3.1-modified -f model_file_to_modify
```


## Native Ollama Provider in ADK

ADK includes a native Ollama model class that communicates directly with the Ollama server at `http://localhost:11434/api/chat`.

No LiteLLM provider, API keys, or OpenAI proxy endpoints are needed.
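
Under the hood this corresponds to Ollama's standard chat API. As a rough sketch of the kind of payload involved (field names follow Ollama's documented `/api/chat` request format; the exact request ADK builds may differ):

```python
import requests

# A hand-rolled request against the same endpoint the native provider uses.
# "tools" carries OpenAI-style function schemas, which Ollama understands
# for models with tool support.
payload = {
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "Roll a 6-sided die."}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "roll_die",
            "description": "Roll a die and return the rolled result.",
            "parameters": {
                "type": "object",
                "properties": {"sides": {"type": "integer"}},
                "required": ["sides"],
            },
        },
    }],
    "stream": False,
}
resp = requests.post(
    "http://localhost:11434/api/chat", json=payload, timeout=60
)
message = resp.json()["message"]
# For a tool-capable model, the reply typically carries "tool_calls"
# instead of plain text content.
print(message.get("tool_calls") or message.get("content"))
```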

### Example agent
```python
import random

from google.adk.agents.llm_agent import Agent
from google.adk.models.ollama_llm import Ollama


def roll_die(sides: int) -> int:
  return random.randint(1, sides)


def check_prime(numbers: list) -> str:
  """Check if a given list of values contains prime numbers.

  The input may include non-integer values produced by the LLM.
  """
  primes = set()

  for number in numbers:
    try:
      number = int(number)
    except (ValueError, TypeError):
      continue

    if number <= 1:
      continue

    for i in range(2, int(number**0.5) + 1):
      if number % i == 0:
        break
    else:
      primes.add(number)

  return (
      "No prime numbers found."
      if not primes
      else f"{', '.join(str(n) for n in sorted(primes))} are prime numbers."
  )


root_agent = Agent(
    model=Ollama(model="llama3.1"),
    name="dice_agent",
    description="Agent that rolls dice and checks primes using native Ollama.",
    instruction="Always use the provided tools.",
    tools=[roll_die, check_prime],
)
```
## Connecting to a remote Ollama server

The default Ollama endpoint is `http://localhost:11434`.

Override using an environment variable:
```bash
export OLLAMA_API_BASE="http://192.168.1.20:11434"
```
Comment on lines +97 to +100 (Contributor, severity: medium):

This documentation suggests that the Ollama host can be configured via the `OLLAMA_API_BASE` environment variable. However, the current implementation of the `Ollama` class in `ollama_llm.py` does not read from this environment variable. To avoid confusion, the implementation should be updated to support this feature as documented.

Or pass explicitly in code:
```python
Ollama(model="llama3.1", host="http://192.168.1.20:11434")
```
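
Given the review note above, a simple bridge until the class reads the variable itself is to resolve it in user code and pass it through the constructor. A minimal sketch, assuming the `host` parameter shown above:

```python
import os

from google.adk.models.ollama_llm import Ollama

# Fall back to the documented default when OLLAMA_API_BASE is unset.
host = os.environ.get("OLLAMA_API_BASE", "http://localhost:11434")
model = Ollama(model="llama3.1", host=host)
```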


## Running the Example with ADK Web

Start the ADK Web UI:

```bash
adk web hello_world_ollama_native
```

The interface will be available in your browser, allowing interactive testing of tool calls.
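
The sample's `main.py` driver can also be run directly, without the web UI (assuming a local Ollama server with the model already pulled):

```bash
ollama pull llama3.1   # make sure the model is available locally
python contributing/samples/hello_world_ollama_native/main.py
```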




15 changes: 15 additions & 0 deletions contributing/samples/hello_world_ollama_native/__init__.py
@@ -0,0 +1,15 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from . import agent
94 changes: 94 additions & 0 deletions contributing/samples/hello_world_ollama_native/agent.py
@@ -0,0 +1,94 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import random
from typing import Any

from google.adk.agents.llm_agent import Agent
from google.adk.models.ollama_llm import Ollama


def roll_die(sides: int) -> int:
  """Roll a die and return the rolled result.

  Args:
    sides: The integer number of sides the die has.

  Returns:
    An integer of the result of rolling the die.
  """
  return random.randint(1, sides)


def check_prime(numbers: list[Any]) -> str:
  """Check which values in a list are prime numbers.

  Args:
    numbers: The list of values to check. Values may be non-integers
      and are safely ignored if they cannot be converted.

  Returns:
    A string indicating which numbers are prime.
  """
  primes = set()

  for number in numbers:
    try:
      number = int(number)
    except (ValueError, TypeError):
      continue  # Safely skip non-numeric values

    if number <= 1:
      continue

    for i in range(2, int(number ** 0.5) + 1):
      if number % i == 0:
        break
    else:
      primes.add(number)

  return (
      "No prime numbers found."
      if not primes
      else f"{', '.join(str(num) for num in sorted(primes))} are prime numbers."
  )


root_agent = Agent(
    model=Ollama(model="llama3.1"),
    name="dice_roll_agent",
    description=(
        "hello world agent that can roll a dice of any number of sides and"
        " check prime numbers."
    ),
    instruction="""
      You roll dice and answer questions about the outcome of the dice rolls.
      You can roll dice of different sizes.
      You can use multiple tools in parallel by calling functions in parallel (in one request and in one round).
      It is ok to discuss previous dice roles, and comment on the dice rolls.
Contributor review comment (severity: medium):

There's a typo in the instruction string. `roles` should be `rolls`.

      It is ok to discuss previous dice rolls, and comment on the dice rolls.

      When you are asked to roll a die, you must call the roll_die tool with the number of sides. Be sure to pass in an integer. Do not pass in a string.
      You should never roll a die on your own.
      When checking prime numbers, call the check_prime tool with a list of integers. Be sure to pass in a list of integers. You should never pass in a string.
      You should not check prime numbers before calling the tool.
      When you are asked to roll a die and check prime numbers, you should always make the following two function calls:
      1. You should first call the roll_die tool to get a roll. Wait for the function response before calling the check_prime tool.
      2. After you get the function response from roll_die tool, you should call the check_prime tool with the roll_die result.
      2.1 If user asks you to check primes based on previous rolls, make sure you include the previous rolls in the list.
      3. When you respond, you must include the roll_die result from step 1.
      You should always perform the previous 3 steps when asking for a roll and checking prime numbers.
      You should not rely on the previous history on prime results.
    """,
    tools=[
        roll_die,
        check_prime,
    ],
)
77 changes: 77 additions & 0 deletions contributing/samples/hello_world_ollama_native/main.py
@@ -0,0 +1,77 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import asyncio
import time
import warnings

import agent
from dotenv import load_dotenv
from google.adk import Runner
from google.adk.artifacts.in_memory_artifact_service import InMemoryArtifactService
from google.adk.cli.utils import logs
from google.adk.sessions.in_memory_session_service import InMemorySessionService
from google.adk.sessions.session import Session
from google.genai import types

load_dotenv(override=True)
warnings.filterwarnings('ignore', category=UserWarning)
logs.log_to_tmp_folder()


async def main():
  app_name = 'my_app'
  user_id_1 = 'user1'
  session_service = InMemorySessionService()
  artifact_service = InMemoryArtifactService()
  runner = Runner(
      app_name=app_name,
      agent=agent.root_agent,
      artifact_service=artifact_service,
      session_service=session_service,
  )
  session_11 = await session_service.create_session(
      app_name=app_name, user_id=user_id_1
  )

  async def run_prompt(session: Session, new_message: str):
    content = types.Content(
        role='user', parts=[types.Part.from_text(text=new_message)]
    )
    print('** User says:', content.model_dump(exclude_none=True))
    async for event in runner.run_async(
        user_id=user_id_1,
        session_id=session.id,
        new_message=content,
    ):
      if event.content.parts and event.content.parts[0].text:
        print(f'** {event.author}: {event.content.parts[0].text}')

  start_time = time.time()
  print('Start time:', start_time)
  print('------------------------------------')
  await run_prompt(session_11, 'Hi, introduce yourself.')
  await run_prompt(
      session_11, 'Roll a die with 100 sides and check if it is prime'
  )
  await run_prompt(session_11, 'Roll it again.')
  await run_prompt(session_11, 'What numbers did I get?')
  end_time = time.time()
  print('------------------------------------')
  print('End time:', end_time)
  print('Total time:', end_time - start_time)


if __name__ == '__main__':
  asyncio.run(main())