Merged
Commits
51 commits
18ee6f2
Add universal_simple_to_aiida
GeigerJ2 Mar 14, 2025
df41384
Add universal_qe_to_aiida
GeigerJ2 Mar 14, 2025
ae7a1de
Add CI setup to include AiiDA
GeigerJ2 Mar 14, 2025
ca37531
Changes to quantum_espresso_workflow.py
GeigerJ2 Mar 14, 2025
b7fe9e2
Update .github/workflows/aiida.yml
jan-janssen Mar 14, 2025
375e15e
Merge branch 'main' into aiida-pythonjob
jan-janssen Mar 14, 2025
324a408
Merge branch 'main' into aiida-pythonjob
jan-janssen Mar 14, 2025
a5b5098
Update quantum_espresso_workflow.py
jan-janssen Mar 14, 2025
e3a22b7
Fix `aiida_to_jobflow_simple`
GeigerJ2 Mar 15, 2025
568c7cc
Fix `aiida_to_pyiron_base_simple`
GeigerJ2 Mar 15, 2025
437a661
Add gitignore
GeigerJ2 Mar 15, 2025
02b60ff
Make AiiDA WG construction for QE WF a helper function.
GeigerJ2 Mar 17, 2025
a6e9045
Add aiida_to_jobflow_qe.ipynb. Still some KeyError occurring.
GeigerJ2 Mar 17, 2025
275e9f2
Merge remote-tracking branch 'upstream/main' into aiida-pythonjob
GeigerJ2 Mar 17, 2025
2ecc1f9
Introduce `workflow_json_filename` variable
GeigerJ2 Mar 17, 2025
cc52f6e
Add `jobflow_to_aiida_simple.ipynb`
GeigerJ2 Mar 17, 2025
c1a30dc
Set WG version to latest commit on main.
GeigerJ2 Mar 17, 2025
707694f
Add `verdi presto` profile setup to `aiida.yml`
GeigerJ2 Mar 17, 2025
d14048b
Fix aiida.yml
GeigerJ2 Mar 17, 2025
1d911ae
Add AiiDA profile creation to pyiron and jobflow CI YAML.
GeigerJ2 Mar 17, 2025
5941621
Update jobflow.yml
jan-janssen Mar 17, 2025
ae86016
Update pyiron.yml
jan-janssen Mar 17, 2025
533825e
Fix `nodes` in JSON file created by AiiDA QE WF. Next debug jobflow run.
GeigerJ2 Mar 17, 2025
7b6b3cf
Found bug in WorkGraph. Continue now with task order.
GeigerJ2 Mar 17, 2025
5a8a66c
Tasks are now in same order. Just need to add links.
GeigerJ2 Mar 18, 2025
a55574d
Manual AiiDA WG creation now seems fixed.
GeigerJ2 Mar 18, 2025
9a5585b
Current WG working version
GeigerJ2 Mar 18, 2025
0295262
Adding python files used for being able to enter debugger. Remove onc…
GeigerJ2 Mar 20, 2025
37a6279
Add intermediate JSON files of universal workflow reprs for debugging.
GeigerJ2 Mar 20, 2025
6e33244
Bump WorkGraph to latest and fix a couple of issues
superstar54 Mar 20, 2025
587235f
All AiiDA notebooks should be there.
GeigerJ2 Mar 21, 2025
6be97b8
Set aiida-workgraph version in `environment.yaml`
GeigerJ2 Mar 21, 2025
c144821
Fix load_profile
GeigerJ2 Mar 21, 2025
9c05728
Fix typos in CI YAML files
GeigerJ2 Mar 21, 2025
a1d6e4b
Clean up quantum_espresso_workflow.py
GeigerJ2 Mar 21, 2025
2c250a8
Add upterm to connect to Runner to debug AiiDA
GeigerJ2 Mar 21, 2025
df5177c
Merge branch 'main' into aiida-pythonjob
jan-janssen Mar 21, 2025
0d31b77
Merge branch 'main' into aiida-pythonjob
jan-janssen Mar 21, 2025
f4443b4
Update .github/workflows/aiida.yml
jan-janssen Mar 21, 2025
d8bf13d
Update .github/workflows/aiida.yml
jan-janssen Mar 21, 2025
34eb703
Fix load_profile and remove upterm
GeigerJ2 Mar 21, 2025
e82e7e1
Add pyiron_base_to_aiida_simple.ipynb
GeigerJ2 Mar 21, 2025
af7ea44
Start cleaning up repo.
GeigerJ2 Mar 21, 2025
e21cdf0
Only need to fix pyiron_base_to_aiida_qe.ipynb
GeigerJ2 Mar 21, 2025
76cd86a
Adapt `workflow_json_filename`s
GeigerJ2 Mar 21, 2025
701efb7
Latest file versions. Should we use nbstripout?
GeigerJ2 Mar 21, 2025
f0d3faf
Fix universal_qe_to_aiida.ipynb
GeigerJ2 Mar 21, 2025
abfba34
Comment out last cell of pyiron_base_to_aiida_qe.ipynb
GeigerJ2 Mar 21, 2025
56b98a1
Uncomment last cell so that CI doesn't seem successful even though t…
GeigerJ2 Mar 21, 2025
a5faaaa
Update pyiron_base.py
jan-janssen Mar 21, 2025
09844b3
Update pyiron_base.py
jan-janssen Mar 21, 2025
36 changes: 36 additions & 0 deletions .github/workflows/aiida.yml
@@ -0,0 +1,36 @@
name: aiida

on:
  push:
    branches: [ main ]
  pull_request:

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v4
    - uses: conda-incubator/setup-miniconda@v3
      with:
        auto-update-conda: true
        python-version: "3.12"
        environment-file: environment.yml
        auto-activate-base: false
    - name: Tests
      shell: bash -l {0}
      run: |
        pip install -e adis_tools
        pip install -e python_workflow_definition
        conda install -c conda-forge jupyter papermill
        export ESPRESSO_PSEUDO=$(pwd)/espresso/pseudo
        papermill universal_simple_to_aiida.ipynb universal_simple_to_aiida_out.ipynb -k "python3"
        papermill universal_qe_to_aiida.ipynb universal_qe_to_aiida_out.ipynb -k "python3"
        papermill aiida_to_pyiron_base_simple.ipynb aiida_to_pyiron_base_simple_out.ipynb -k "python3"
        papermill aiida_to_pyiron_base_qe.ipynb aiida_to_pyiron_base_qe_out.ipynb -k "python3"
        papermill aiida_to_jobflow_simple.ipynb aiida_to_jobflow_simple_out.ipynb -k "python3"
        papermill aiida_to_jobflow_qe.ipynb aiida_to_jobflow_qe_out.ipynb -k "python3"
9 changes: 7 additions & 2 deletions .github/workflows/jobflow.yml
@@ -25,7 +25,12 @@ jobs:
         pip install -e python_workflow_definition
         conda install -c conda-forge jupyter papermill
         export ESPRESSO_PSEUDO=$(pwd)/espresso/pseudo
+
         papermill universal_simple_to_jobflow.ipynb universal_simple_to_jobflow_out.ipynb -k "python3"
-        papermill jobflow_to_pyiron_base_simple.ipynb jobflow_to_pyiron_base_simple_out.ipynb -k "python3"
         papermill universal_qe_to_jobflow.ipynb universal_qe_to_jobflow_out.ipynb -k "python3"
-        papermill jobflow_to_pyiron_base_qe.ipynb jobflow_to_pyiron_base_qe_out.ipynb -k "python3"
+
+        papermill jobflow_to_pyiron_base_simple.ipynb jobflow_to_pyiron_base_simple_out.ipynb -k "python3"
+        papermill jobflow_to_pyiron_base_qe.ipynb jobflow_to_pyiron_base_qe_out.ipynb -k "python3"
+
+        papermill jobflow_to_aiida_simple.ipynb jobflow_to_aiida_simple_out.ipynb -k "python3"
+        papermill jobflow_to_aiida_qe.ipynb jobflow_to_aiida_qe_out.ipynb -k "python3"
9 changes: 7 additions & 2 deletions .github/workflows/pyiron.yml
@@ -25,7 +25,12 @@ jobs:
         pip install -e python_workflow_definition
         conda install -c conda-forge jupyter papermill
         export ESPRESSO_PSEUDO=$(pwd)/espresso/pseudo
+
         papermill universal_simple_to_pyiron_base.ipynb universal_simple_to_pyiron_base_out.ipynb -k "python3"
-        papermill pyiron_base_to_jobflow_simple.ipynb pyiron_base_to_jobflow_simple_out.ipynb -k "python3"
         papermill universal_qe_to_pyiron_base.ipynb universal_qe_to_pyiron_base_out.ipynb -k "python3"
-        papermill pyiron_base_to_jobflow_qe.ipynb pyiron_base_to_jobflow_qe_out.ipynb -k "python3"
+
+        papermill pyiron_base_to_jobflow_simple.ipynb pyiron_base_to_jobflow_simple_out.ipynb -k "python3"
+        papermill pyiron_base_to_jobflow_qe.ipynb pyiron_base_to_jobflow_qe_out.ipynb -k "python3"
+
+        papermill aiida_to_jobflow_simple.ipynb aiida_to_jobflow_simple_out.ipynb -k "python3"
+        papermill aiida_to_jobflow_qe.ipynb aiida_to_jobflow_qe_out.ipynb -k "python3"
133 changes: 133 additions & 0 deletions python_workflow_definition/src/python_workflow_definition/aiida.py
@@ -0,0 +1,133 @@
from importlib import import_module
import traceback
import json

from aiida_workgraph import WorkGraph, task


def pickle_node(value):
    """Pass a literal value through so it can be stored as a data node."""
    return value


def get_item(data: dict, key: str):
    """Get a single item from a task's output dictionary."""
    return data[key]

# A function mapping could be defined here, but in principle it is not
# needed, because each function can be imported from its module path:
# func_mapping = {
#     "simple_workflow.add_x_and_y": task.pythonjob()(add_x_and_y),
#     "simple_workflow.add_x_and_y_and_z": task.pythonjob()(add_x_and_y_and_z),
#     "quantum_espresso_workflow.get_bulk_structure": task.pythonjob()(
#         get_bulk_structure
#     ),
#     "quantum_espresso_workflow.calculate_qe": task.pythonjob()(calculate_qe),
#     "quantum_espresso_workflow.plot_energy_volume_curve": task.pythonjob()(
#         plot_energy_volume_curve
#     ),
#     "python_workflow_definition.pyiron_base.get_dict": task.pythonjob()(get_dict),
#     "python_workflow_definition.pyiron_base.get_list": task.pythonjob()(get_list),
#     "quantum_espresso_workflow.generate_structures": task.pythonjob()(
#         generate_structures
#     ),
# }


def write_workflow_json(wg):
    wgdata = wg.to_dict()
    data = {"nodes": {}, "edges": []}
    node_name_mapping = {}
    i = 0
    for name, node in wgdata["tasks"].items():
        node_name_mapping[name] = i
        callable_name = node["executor"]["callable_name"]
        data["nodes"][i] = callable_name
        if callable_name == "pickle_node":
            data["nodes"][i] = node["inputs"]["value"]["property"]["value"].value
        i += 1

    for link in wgdata["links"]:
        if wgdata["tasks"][link["from_node"]]["executor"]["callable_name"] == "pickle_node":
            link["from_socket"] = None
        link["from_node"] = node_name_mapping[link["from_node"]]
        link["to_node"] = node_name_mapping[link["to_node"]]
        data["edges"].append(link)

    return data


def load_workflow_json(filename):
    with open(filename) as f:
        data = json.load(f)

    wg = WorkGraph()
    task_name_mapping = {}
    name_counter = {}

    # add tasks
    for name, identifier in data["nodes"].items():
        if isinstance(identifier, str) and "." in identifier:
            p, m = identifier.rsplit(".", 1)
            mod = import_module(p)
            _func = getattr(mod, m)
            func = task.pythonjob()(_func)
            # register_pickle_by_value is used because the function is
            # defined in a local file
            try:
                wg.add_task(func, register_pickle_by_value=True, name=m)
            except ValueError:
                # task names must be unique; append a counter on collision
                if m in name_counter:
                    name_counter[m] += 1
                else:
                    name_counter[m] = 0
                name_ = f"{m}_{name_counter[m]}"
                wg.add_task(func, register_pickle_by_value=True, name=name_)
            # Remove the default result output; the actual outputs are added
            # later from the data in the links
            del wg.tasks[-1].outputs["result"]
        else:
            # data task
            wg.add_task(pickle_node, value=identifier, name=name)

        task_name_mapping[name] = wg.tasks[-1].name

    # add links
    for link in data["edges"]:
        if link["sourceHandle"] is None:
            link["sourceHandle"] = "result"
        try:
            from_task = wg.tasks[task_name_mapping[str(link["source"])]]
            # The outputs were not defined explicitly when the PythonJob was
            # created, so add the output socket here and assume it exists
            if link["sourceHandle"] not in from_task.outputs:
                from_socket = from_task.add_output(
                    "workgraph.any",
                    name=link["sourceHandle"],
                    metadata={"is_function_output": True},
                )
            else:
                from_socket = from_task.outputs[link["sourceHandle"]]
            to_task = wg.tasks[task_name_mapping[str(link["target"])]]
            # If the input socket does not exist, the data is passed via
            # kwargs, so add the input socket here
            if link["targetHandle"] not in to_task.inputs:
                to_socket = to_task.add_input(
                    "workgraph.any",
                    name=link["targetHandle"],
                    metadata={"is_function_input": True},
                )
            else:
                to_socket = to_task.inputs[link["targetHandle"]]
            wg.add_link(from_socket, to_socket)
        except Exception as e:
            traceback.print_exc()
            print("Failed to link", link, "with error:", e)
    return wg

def get_list(**kwargs):
    return list(kwargs.values())


def get_dict(**kwargs):
    return dict(kwargs)
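The universal JSON format read by `load_workflow_json` stores each node under an integer key — a dotted import path for a callable node, or a literal value for a data node — and connects them with edges keyed by `source`/`target` indices and `sourceHandle`/`targetHandle` socket names. A minimal sketch of that schema, independent of AiiDA (the handle names and the two-node layout here are illustrative assumptions, not taken from the repository's actual `workflow_qe.json`):

```python
import json

# Hypothetical two-node workflow: node 0 is a literal data node,
# node 1 is a callable resolved from its dotted module path.
workflow = {
    "nodes": {
        "0": 2,                               # data node: literal value
        "1": "simple_workflow.add_x_and_y",   # callable node: import path
    },
    "edges": [
        # Data nodes have no named output socket, so sourceHandle is None;
        # load_workflow_json rewrites None to the default "result" socket.
        {"source": 0, "sourceHandle": None, "target": 1, "targetHandle": "x"},
    ],
}

def validate(wf):
    """Check that every edge references an existing node index."""
    node_ids = {int(k) for k in wf["nodes"]}
    for edge in wf["edges"]:
        assert edge["source"] in node_ids and edge["target"] in node_ids
    return True

print(validate(workflow))  # → True
print(json.dumps(workflow)[:30])  # the whole structure is plain JSON
```

The same dict shape is what `write_workflow_json` produces on the export side, which is what makes the format a usable interchange point between the engines.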
56 changes: 51 additions & 5 deletions quantum_espresso_workflow.py
@@ -1,5 +1,6 @@
import os
import subprocess
import numpy as np

from ase.atoms import Atoms
from ase.build import bulk
@@ -57,7 +58,15 @@ def generate_structures(structure, strain_lst):
             structure_strain.cell * strain ** (1 / 3), scale_atoms=True
         )
         structure_lst.append(structure_strain)
-    return {str(i): s.todict() for i, s in enumerate(structure_lst)}
+    return {f"s_{i}": atoms_to_json_dict(atoms=s) for i, s in enumerate(structure_lst)}
+
+
+def scale_structure(structure, strain):
+    structure_strain = Atoms(**structure)
+    structure_strain.set_cell(
+        structure_strain.cell * strain ** (1 / 3), scale_atoms=True
+    )
+    return structure_strain
 
 
 def plot_energy_volume_curve(volume_lst, energy_lst):
@@ -67,9 +76,46 @@ def plot_energy_volume_curve(volume_lst, energy_lst):
     plt.savefig("evcurve.png")
 
 
-def get_bulk_structure(name, a, cubic):
-    return bulk(
-        name=name,
+def get_bulk_structure(element, a, cubic):
+    ase_atoms = bulk(
+        name=element,
         a=a,
         cubic=cubic,
-    ).todict()
+    )
+    return atoms_to_json_dict(atoms=ase_atoms)
+
+
+def atoms_to_json_dict(atoms):
+    """
+    Convert an ASE Atoms object to a fully JSON-serializable dictionary
+    that uses only Python base data types.
+
+    Parameters
+    ----------
+    atoms : ase.Atoms
+        The Atoms object to convert
+
+    Returns
+    -------
+    dict
+        A dictionary representation using only Python base types
+    """
+    # Get the dictionary representation from ASE
+    atoms_dict = atoms.todict()
+
+    # Create a new dictionary with JSON-serializable values
+    json_dict = {}
+
+    # ndarray.tolist() converts numpy arrays of any dtype (including
+    # booleans) to nested lists of native Python types
+    for key, value in atoms_dict.items():
+        if isinstance(value, np.ndarray):
+            json_dict[key] = value.tolist()
+        else:
+            json_dict[key] = value
+
+    return json_dict
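The core of `atoms_to_json_dict` — walking a dict and replacing `numpy` arrays with nested Python lists so the result survives `json.dumps` — can be exercised without ASE. A sketch with illustrative values (the keys and numbers merely mimic `ase.Atoms.todict()` output for a 4-atom aluminium cell; only `numpy` is assumed):

```python
import json
import numpy as np

def to_json_dict(d):
    """Replace numpy arrays with plain Python lists.
    ndarray.tolist() also converts numpy scalars (including booleans)
    to native Python types, so one branch covers every dtype."""
    return {k: v.tolist() if isinstance(v, np.ndarray) else v
            for k, v in d.items()}

# Keys modeled on ase.Atoms.todict() output (illustrative values).
atoms_like = {
    "numbers": np.array([13, 13, 13, 13]),
    "cell": np.eye(3) * 4.05,
    "pbc": np.array([True, True, True]),
}

serializable = to_json_dict(atoms_like)
json.dumps(serializable)  # would raise TypeError on raw numpy arrays
print(serializable["pbc"])  # → [True, True, True]
```

This is what lets the structures travel through the engine-neutral workflow JSON instead of being pickled.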
106 changes: 106 additions & 0 deletions universal_qe_to_aiida.ipynb
@@ -0,0 +1,106 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"\n",
"from python_workflow_definition.aiida import load_workflow_json"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"outputs [{'identifier': 'workgraph.any', 'name': 'result'}]\n",
"outputs [{'identifier': 'workgraph.any', 'name': 'result'}]\n",
"outputs [{'identifier': 'workgraph.any', 'name': 'result'}]\n",
"outputs [{'identifier': 'workgraph.any', 'name': 'result'}]\n",
"outputs None\n",
"outputs None\n",
"outputs None\n",
"outputs None\n",
"outputs [{'identifier': 'workgraph.any', 'name': 'result'}]\n",
"outputs None\n",
"outputs None\n",
"outputs None\n",
"outputs None\n",
"outputs None\n",
"outputs None\n",
"outputs None\n",
"outputs None\n",
"outputs None\n",
"outputs None\n",
"outputs None\n",
"outputs [{'identifier': 'workgraph.any', 'name': 'result'}]\n"
]
}
],
"source": [
"workgraph = load_workflow_json(filename='workflow_qe.json')"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "81c4d0a6e21042c7b82ccca0fdef201f",
"version_major": 2,
"version_minor": 1
},
"text/plain": [
"NodeGraphWidget(settings={'minimap': True}, style={'width': '90%', 'height': '600px'}, value={'name': 'WorkGra…"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"\n",
"# TODO: Create inputs rather than tasks out of data nodes\n",
"workgraph"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "ADIS",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
}