diff --git a/docs/source/_templates/function.rst b/docs/source/_templates/function.rst
new file mode 100644
index 00000000..54226b73
--- /dev/null
+++ b/docs/source/_templates/function.rst
@@ -0,0 +1,10 @@
+{{objname}}
+{{ underline }}====================
+
+.. currentmodule:: {{ module }}
+
+.. autofunction:: {{ objname }}
+
diff --git a/docs/source/_templates/layout.html b/docs/source/_templates/layout.html
new file mode 100644
index 00000000..b43b96a9
--- /dev/null
+++ b/docs/source/_templates/layout.html
@@ -0,0 +1,6 @@
+{% extends "pydata_sphinx_theme/layout.html" %}
+
+{% block extrahead %}
+{{ super() }}
+
+{% endblock %}
diff --git a/docs/source/about.rst b/docs/source/about.rst
new file mode 100644
index 00000000..7b342c8b
--- /dev/null
+++ b/docs/source/about.rst
@@ -0,0 +1,107 @@
+.. _about:
+
+=====
+About
+=====
+
+Hyperactive is an optimization and data collection toolbox for convenient and fast
+prototyping of computationally expensive models.
+
+.. toctree::
+ :maxdepth: 1
+
+ about/team
+ about/history
+ about/license
+
+
+About Hyperactive
+-----------------
+
+Hyperactive provides a unified interface for hyperparameter optimization using
+various gradient-free optimization algorithms. It supports optimization for
+scikit-learn, sktime, skpro, and PyTorch Lightning models, as well as custom
+objective functions.
+
+
+Mission
+^^^^^^^
+
+Hyperactive aims to make hyperparameter optimization accessible and practical for
+machine learning practitioners. By providing a unified API across many optimization
+algorithms and ML frameworks, it reduces the barrier to finding optimal model
+configurations.
+
+
+Key Features
+^^^^^^^^^^^^
+
+- **20+ Optimization Algorithms**: From simple hill climbing to advanced Bayesian
+ optimization, population methods, and Optuna integration.
+
+- **Experiment-Based Architecture**: Clean separation between what to optimize
+ (experiments) and how to optimize (algorithms).
+
+- **Framework Integrations**: First-class support for scikit-learn, sktime, skpro,
+ and PyTorch Lightning.
+
+- **Flexible Search Spaces**: Discrete, continuous, and mixed parameter spaces
+ using familiar NumPy/list syntax.
+
+- **Production Ready**: Battle-tested since 2019 with comprehensive testing and
+ active maintenance.
+
+
+Related Projects
+^^^^^^^^^^^^^^^^
+
+Hyperactive is part of a larger ecosystem:
+
+.. list-table::
+ :header-rows: 1
+ :widths: 30 70
+
+ * - Project
+ - Description
+   * - `Gradient-Free-Optimizers <https://github.com/SimonBlanke/Gradient-Free-Optimizers>`_
+     - Core optimization algorithms used by Hyperactive
+   * - `Search-Data-Collector <https://github.com/SimonBlanke/search-data-collector>`_
+     - Save search data during optimization to CSV files
+   * - `Search-Data-Explorer <https://github.com/SimonBlanke/search-data-explorer>`_
+     - Visualize search data with Plotly in a Streamlit dashboard
+
+
+Sponsorship
+^^^^^^^^^^^
+
+Hyperactive is sponsored by the
+`German Center for Open Source AI (GC.OS) <https://gc-os-ai.github.io/>`_.
+
+.. image:: https://img.shields.io/badge/GC.OS-Sponsored%20Project-orange.svg?style=for-the-badge&colorA=0eac92&colorB=2077b4
+ :target: https://gc-os-ai.github.io/
+ :alt: GC.OS Sponsored
+
+
+Citing Hyperactive
+^^^^^^^^^^^^^^^^^^
+
+If you use Hyperactive in your research, please cite it:
+
+.. code-block:: bibtex
+
+ @Misc{hyperactive2021,
+ author = {{Simon Blanke}},
+ title = {{Hyperactive}: An optimization and data collection toolbox
+ for convenient and fast prototyping of computationally
+ expensive models.},
+ howpublished = {\url{https://github.com/SimonBlanke}},
+ year = {since 2019}
+ }
+
+
+Community
+^^^^^^^^^
+
+- **GitHub**: `SimonBlanke/Hyperactive <https://github.com/SimonBlanke/Hyperactive>`_
+- **Discord**: `Join the community `_
+- **LinkedIn**: `German Center for Open Source AI `_
diff --git a/docs/source/about/history.rst b/docs/source/about/history.rst
new file mode 100644
index 00000000..ce3033ff
--- /dev/null
+++ b/docs/source/about/history.rst
@@ -0,0 +1,126 @@
+.. _history:
+
+=======
+History
+=======
+
+This page documents the history and evolution of Hyperactive.
+
+
+Project History
+---------------
+
+Hyperactive was created in 2018 by Simon Blanke to address the need for a flexible,
+unified interface for hyperparameter optimization in machine learning workflows.
+
+
+Timeline
+^^^^^^^^
+
+**2018 - Project Creation**
+ Hyperactive was first released as an open-source project, providing a collection
+ of gradient-free optimization algorithms accessible through a simple Python API.
+
+**2019 - Growing Adoption**
+ The project gained traction in the machine learning community, with users
+ appreciating its straightforward interface and variety of optimization algorithms.
+
+**2020-2021 - Ecosystem Expansion**
+ Related projects were developed to complement Hyperactive:
+
+ - **Gradient-Free-Optimizers**: The optimization backend was extracted into its
+ own package, allowing for more modular development.
+ - **Search-Data-Collector**: Tools for saving optimization results.
+ - **Search-Data-Explorer**: Visualization dashboard for exploring search data.
+
+**2022-2023 - Continued Development**
+ Active maintenance continued with bug fixes, new algorithms, and improved
+ documentation. The user base continued to grow.
+
+**2024 - Version 5.0 Redesign**
+ Major architecture redesign introducing:
+
+ - **Experiment-based architecture**: Clean separation between optimization
+ problems (experiments) and optimization algorithms (optimizers).
+ - **Enhanced integrations**: Improved support for scikit-learn, sktime, skpro,
+ and PyTorch Lightning.
+ - **Optuna backend**: Integration with Optuna's optimization algorithms.
+ - **Modern Python support**: Support for Python 3.10 through 3.14.
+
+**2024 - GC.OS Sponsorship**
+ Hyperactive became a sponsored project of the German Center for Open Source AI
+ (GC.OS), ensuring continued development and maintenance.
+
+
+Version History
+---------------
+
+Major Versions
+^^^^^^^^^^^^^^
+
+.. list-table::
+ :header-rows: 1
+ :widths: 15 85
+
+ * - Version
+ - Highlights
+ * - v5.0
+ - Experiment-based architecture, Optuna integration, modern Python support
+ * - v4.x
+ - Improved API stability, additional optimizers
+ * - v3.x
+ - Search data collection features, expanded algorithm library
+ * - v2.x
+ - Multi-processing support, warm starting
+ * - v1.x
+ - Initial public release with core optimization algorithms
+
+
+Breaking Changes
+^^^^^^^^^^^^^^^^
+
+Major version updates (e.g., v4 → v5) may include breaking API changes.
+If you're upgrading from an older version:
+
+1. Check the `GitHub releases <https://github.com/SimonBlanke/Hyperactive/releases>`_
+   for migration guides.
+2. Update your code to use the new API patterns.
+3. Alternatively, pin your version to continue using the old API.
+
+.. code-block:: bash
+
+ # Upgrade to latest
+ pip install hyperactive --upgrade
+
+    # Or stay on the previous major version
+    pip install "hyperactive<5"
+
+
+Legacy Documentation
+^^^^^^^^^^^^^^^^^^^^
+
+Documentation for Hyperactive v4 is still available at the legacy documentation site:
+
+`Legacy Documentation (v4) `_
+
+This may be useful if you:
+
+- Are maintaining projects that use Hyperactive v4
+- Need to reference the previous API design
+- Want to compare the old and new approaches
+
+
+Future Roadmap
+--------------
+
+Hyperactive continues to evolve. Planned improvements include:
+
+- Additional optimization algorithms
+- Enhanced visualization tools
+- Improved distributed computing support
+- More framework integrations
+- Performance optimizations
+
+For the latest roadmap, see the
+`GitHub Issues <https://github.com/SimonBlanke/Hyperactive/issues>`_ and
+`Discussions <https://github.com/SimonBlanke/Hyperactive/discussions>`_.
diff --git a/docs/source/about/license.rst b/docs/source/about/license.rst
new file mode 100644
index 00000000..498ed9ad
--- /dev/null
+++ b/docs/source/about/license.rst
@@ -0,0 +1,99 @@
+.. _license:
+
+=======
+License
+=======
+
+Hyperactive is open-source software released under the MIT License.
+
+
+MIT License
+-----------
+
+.. code-block:: text
+
+ MIT License
+
+ Copyright (c) 2018 Simon Blanke
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
+
+What This Means
+---------------
+
+The MIT License is a permissive open-source license. In short:
+
+**You Can:**
+
+- Use Hyperactive for commercial projects
+- Modify the source code
+- Distribute copies
+- Use it in private projects
+- Sublicense the code
+
+**Requirements:**
+
+- Include the copyright notice and license text in any copies or substantial
+ portions of the software
+
+**Limitations:**
+
+- The software is provided "as is" without warranty
+- The authors are not liable for any damages
+
+
+Third-Party Licenses
+--------------------
+
+Hyperactive depends on several third-party packages, each with their own licenses:
+
+.. list-table::
+ :header-rows: 1
+ :widths: 30 30 40
+
+ * - Package
+ - License
+ - Purpose
+ * - NumPy
+ - BSD-3-Clause
+ - Numerical operations
+ * - pandas
+ - BSD-3-Clause
+ - Data manipulation
+ * - scikit-learn
+ - BSD-3-Clause
+ - Machine learning integration
+ * - Gradient-Free-Optimizers
+ - MIT
+ - Core optimization algorithms
+ * - tqdm
+ - MIT/MPL-2.0
+ - Progress bars
+
+All dependencies use permissive open-source licenses compatible with commercial use.
+
+
+Questions
+---------
+
+For questions about licensing or usage rights, please contact:
+
+- **Email**: simon.blanke@yahoo.com
+- **GitHub Issues**: `Report an issue <https://github.com/SimonBlanke/Hyperactive/issues>`_
diff --git a/docs/source/about/team.rst b/docs/source/about/team.rst
new file mode 100644
index 00000000..352c4006
--- /dev/null
+++ b/docs/source/about/team.rst
@@ -0,0 +1,102 @@
+.. _team:
+
+====
+Team
+====
+
+Hyperactive is developed and maintained by a dedicated team of open-source contributors.
+
+
+Core Team
+---------
+
+Simon Blanke
+^^^^^^^^^^^^
+
+**Creator and Lead Maintainer**
+
+Simon Blanke is the creator and primary maintainer of Hyperactive. He started the
+project in 2018 to make hyperparameter optimization more accessible to machine
+learning practitioners.
+
+- **GitHub**: `@SimonBlanke <https://github.com/SimonBlanke>`_
+- **Email**: simon.blanke@yahoo.com
+
+Simon also maintains the related projects in the Hyperactive ecosystem:
+
+- `Gradient-Free-Optimizers <https://github.com/SimonBlanke/Gradient-Free-Optimizers>`_
+- `Search-Data-Collector <https://github.com/SimonBlanke/search-data-collector>`_
+- `Search-Data-Explorer <https://github.com/SimonBlanke/search-data-explorer>`_
+
+
+Contributors
+------------
+
+Hyperactive benefits from contributions by many developers. We appreciate everyone
+who has helped improve the project through code, documentation, bug reports, and
+feature suggestions.
+
+To see a full list of contributors, visit the
+`GitHub contributors page <https://github.com/SimonBlanke/Hyperactive/graphs/contributors>`_.
+
+
+Become a Contributor
+^^^^^^^^^^^^^^^^^^^^
+
+We welcome contributions from the community! There are many ways to get involved:
+
+- **Report bugs**: Open an issue on GitHub
+- **Suggest features**: Share your ideas in GitHub Discussions
+- **Contribute code**: Submit a pull request
+- **Improve documentation**: Help make the docs better
+- **Share examples**: Contribute use cases and tutorials
+
+See the :ref:`contributing` guide for more information on how to contribute.
+
+
+Sponsorship
+-----------
+
+German Center for Open Source AI (GC.OS)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Hyperactive is proudly sponsored by the
+`German Center for Open Source AI (GC.OS) <https://gc-os-ai.github.io/>`_.
+
+.. image:: https://img.shields.io/badge/GC.OS-Sponsored%20Project-orange.svg?style=for-the-badge&colorA=0eac92&colorB=2077b4
+ :target: https://gc-os-ai.github.io/
+ :alt: GC.OS Sponsored
+
+This sponsorship helps ensure the continued development and maintenance of
+Hyperactive as a high-quality open-source project.
+
+
+Community
+---------
+
+Join the Hyperactive community:
+
+- **Discord**: `Join our Discord server `_
+ for discussions, questions, and announcements.
+
+- **LinkedIn**: Follow the
+ `German Center for Open Source AI `_
+ for updates.
+
+- **GitHub Discussions**: Participate in
+  `discussions <https://github.com/SimonBlanke/Hyperactive/discussions>`_
+  about features and best practices.
+
+
+Acknowledgments
+---------------
+
+We thank all users who have:
+
+- Reported bugs and issues
+- Suggested new features
+- Contributed to the codebase
+- Helped improve the documentation
+- Shared Hyperactive with others
+
+Your support makes this project possible!
diff --git a/docs/source/api_reference.rst b/docs/source/api_reference.rst
new file mode 100644
index 00000000..7f5996f7
--- /dev/null
+++ b/docs/source/api_reference.rst
@@ -0,0 +1,21 @@
+.. _api_reference:
+
+=============
+API Reference
+=============
+
+Welcome to the API reference for ``hyperactive``.
+
+The API reference is a technical manual: it describes the classes and
+functions included in Hyperactive.
+
+.. toctree::
+ :maxdepth: 1
+
+ api_reference/base
+ api_reference/optimizers
+ api_reference/experiments_function
+ api_reference/experiments_integrations
+ api_reference/experiments_benchmarks
+ api_reference/sklearn_integration
+ api_reference/utilities
diff --git a/docs/source/api_reference/base.rst b/docs/source/api_reference/base.rst
new file mode 100644
index 00000000..a7dca433
--- /dev/null
+++ b/docs/source/api_reference/base.rst
@@ -0,0 +1,17 @@
+.. _base_ref:
+
+Base Classes
+============
+
+The :mod:`hyperactive.base` module contains the base classes for optimizers and experiments.
+
+All optimizers inherit from ``BaseOptimizer`` and all experiments inherit from ``BaseExperiment``.
+
+.. currentmodule:: hyperactive.base
+
+.. autosummary::
+ :toctree: auto_generated/
+ :template: class.rst
+
+ BaseOptimizer
+ BaseExperiment
diff --git a/docs/source/api_reference/experiments_benchmarks.rst b/docs/source/api_reference/experiments_benchmarks.rst
new file mode 100644
index 00000000..0c46dba5
--- /dev/null
+++ b/docs/source/api_reference/experiments_benchmarks.rst
@@ -0,0 +1,19 @@
+.. _experiments_benchmarks_ref:
+
+Benchmark Function Experiments
+===============================
+
+The :mod:`hyperactive.experiment.bench` module contains standard benchmark functions
+for testing and comparing optimization algorithms.
+
+These mathematical functions have known properties and are commonly used in optimization research.
+
+.. currentmodule:: hyperactive.experiment.bench
+
+.. autosummary::
+ :toctree: auto_generated/
+ :template: class.rst
+
+ Ackley
+ Parabola
+ Sphere
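+
+As a minimal sketch of how a benchmark experiment might be evaluated (the
+``d`` constructor argument, the parameter names, and the ``score`` call are
+assumptions to be checked against the class docstrings):
+
+.. code-block:: python
+
+    from hyperactive.experiment.bench import Sphere
+
+    # hypothetical constructor argument: number of dimensions
+    sphere = Sphere(d=2)
+
+    # evaluate the benchmark at a candidate point; the Sphere function
+    # attains its global minimum of 0 at the origin
+    value = sphere.score({"x0": 0.0, "x1": 0.0})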
diff --git a/docs/source/api_reference/experiments_function.rst b/docs/source/api_reference/experiments_function.rst
new file mode 100644
index 00000000..bef832ae
--- /dev/null
+++ b/docs/source/api_reference/experiments_function.rst
@@ -0,0 +1,20 @@
+.. _experiments_function_ref:
+
+Function Experiments
+====================
+
+The :mod:`hyperactive.experiment` module contains experiment classes for defining
+optimization objectives.
+
+Generic Function Experiment
+----------------------------
+
+The ``FunctionExperiment`` class allows you to wrap any callable as an optimization target.
+
+.. currentmodule:: hyperactive.experiment.func
+
+.. autosummary::
+ :toctree: auto_generated/
+ :template: class.rst
+
+ FunctionExperiment
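+
+As an illustration, a plain Python function can be wrapped as an experiment.
+This is a sketch: the exact signature of ``FunctionExperiment`` and its
+``score`` method should be verified against the class documentation.
+
+.. code-block:: python
+
+    from hyperactive.experiment.func import FunctionExperiment
+
+    def parabola(x, y):
+        """Toy objective with its maximum at the origin."""
+        return -(x**2 + y**2)
+
+    experiment = FunctionExperiment(parabola)
+
+    # evaluate the objective at one parameter setting
+    result = experiment.score({"x": 1.0, "y": 2.0})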
diff --git a/docs/source/api_reference/experiments_integrations.rst b/docs/source/api_reference/experiments_integrations.rst
new file mode 100644
index 00000000..913e249a
--- /dev/null
+++ b/docs/source/api_reference/experiments_integrations.rst
@@ -0,0 +1,57 @@
+.. _experiments_integrations_ref:
+
+Framework Integration Experiments
+==================================
+
+The :mod:`hyperactive.experiment.integrations` module contains experiment classes
+for integration with machine learning frameworks.
+
+These experiments provide seamless hyperparameter optimization for scikit-learn,
+sktime, skpro, and PyTorch Lightning models.
+
+Scikit-Learn
+------------
+
+Cross-validation experiments for scikit-learn estimators.
+
+.. currentmodule:: hyperactive.experiment.integrations
+
+.. autosummary::
+ :toctree: auto_generated/
+ :template: class.rst
+
+ SklearnCvExperiment
+
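+A hedged sketch of constructing a scikit-learn cross-validation experiment
+(the constructor arguments shown are assumptions based on the class name and
+should be checked against its docstring):
+
+.. code-block:: python
+
+    from sklearn.datasets import load_iris
+    from sklearn.svm import SVC
+
+    from hyperactive.experiment.integrations import SklearnCvExperiment
+
+    X, y = load_iris(return_X_y=True)
+
+    # wraps estimator and data into an experiment whose score is a CV result
+    experiment = SklearnCvExperiment(estimator=SVC(), X=X, y=y)
+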
+Sktime - Time Series
+--------------------
+
+Experiments for sktime time series models.
+
+.. autosummary::
+ :toctree: auto_generated/
+ :template: class.rst
+
+ SktimeClassificationExperiment
+ SktimeForecastingExperiment
+
+Skpro - Probabilistic Prediction
+---------------------------------
+
+Experiments for skpro probabilistic regression models.
+
+.. autosummary::
+ :toctree: auto_generated/
+ :template: class.rst
+
+ SkproProbaRegExperiment
+
+PyTorch Lightning
+-----------------
+
+Experiments for PyTorch Lightning models.
+
+.. autosummary::
+ :toctree: auto_generated/
+ :template: class.rst
+
+ TorchExperiment
diff --git a/docs/source/api_reference/optimizers.rst b/docs/source/api_reference/optimizers.rst
new file mode 100644
index 00000000..ab9f0224
--- /dev/null
+++ b/docs/source/api_reference/optimizers.rst
@@ -0,0 +1,128 @@
+.. _optimizers_ref:
+
+Optimizers
+==========
+
+The :mod:`hyperactive.opt` module contains optimization algorithms for hyperparameter tuning.
+
+All optimizers inherit from :class:`hyperactive.base.BaseOptimizer` and share the same interface:
+the ``solve()`` method to run optimization, and configuration via the ``experiment`` and ``search_space`` parameters.
+
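+To illustrate the shared interface described above, here is a sketch of one
+optimizer run. The constructor arguments (``experiment``, ``search_space``,
+``n_iter``) follow the description on this page but should be verified
+against the individual class docstrings.
+
+.. code-block:: python
+
+    import numpy as np
+
+    from hyperactive.experiment.bench import Sphere
+    from hyperactive.opt import HillClimbing
+
+    experiment = Sphere(d=2)  # "d" is a hypothetical argument
+
+    search_space = {
+        "x0": np.arange(-5, 5, 0.1),
+        "x1": np.arange(-5, 5, 0.1),
+    }
+
+    optimizer = HillClimbing(
+        experiment=experiment,
+        search_space=search_space,
+        n_iter=100,
+    )
+    best_params = optimizer.solve()
+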
+Gradient-Free Optimizers (GFO)
+-------------------------------
+
+These optimizers are based on the gradient-free-optimizers package and implement
+various metaheuristic and numerical optimization algorithms.
+
+Local Search
+~~~~~~~~~~~~
+
+Local search algorithms that explore the neighborhood of the current position.
+
+.. currentmodule:: hyperactive.opt
+
+.. autosummary::
+ :toctree: auto_generated/
+ :template: class.rst
+
+ HillClimbing
+ StochasticHillClimbing
+ RepulsingHillClimbing
+ RandomRestartHillClimbing
+
+Simulated Annealing
+~~~~~~~~~~~~~~~~~~~
+
+Probabilistic techniques for approximating the global optimum.
+
+.. autosummary::
+ :toctree: auto_generated/
+ :template: class.rst
+
+ SimulatedAnnealing
+
+Global Search
+~~~~~~~~~~~~~
+
+Random and systematic search strategies.
+
+.. autosummary::
+ :toctree: auto_generated/
+ :template: class.rst
+
+ RandomSearch
+ GridSearch
+
+Direct Methods
+~~~~~~~~~~~~~~
+
+Direct search methods that do not use gradients.
+
+.. autosummary::
+ :toctree: auto_generated/
+ :template: class.rst
+
+ DownhillSimplexOptimizer
+ PowellsMethod
+ PatternSearch
+ LipschitzOptimizer
+ DirectAlgorithm
+
+Population-Based
+~~~~~~~~~~~~~~~~
+
+Optimization algorithms that maintain and evolve populations of solutions.
+
+.. autosummary::
+ :toctree: auto_generated/
+ :template: class.rst
+
+ ParallelTempering
+ ParticleSwarmOptimizer
+ SpiralOptimization
+ GeneticAlgorithm
+ EvolutionStrategy
+ DifferentialEvolution
+
+Sequential Model-Based
+~~~~~~~~~~~~~~~~~~~~~~
+
+Algorithms that build surrogate models of the objective function.
+
+.. autosummary::
+ :toctree: auto_generated/
+ :template: class.rst
+
+ BayesianOptimizer
+ TreeStructuredParzenEstimators
+ ForestOptimizer
+
+Optuna-Based Optimizers
+------------------------
+
+These optimizers provide an interface to Optuna's optimization algorithms.
+
+.. autosummary::
+ :toctree: auto_generated/
+ :template: class.rst
+
+ TPEOptimizer
+ RandomOptimizer
+ CmaEsOptimizer
+ GPOptimizer
+ GridOptimizer
+ NSGAIIOptimizer
+ NSGAIIIOptimizer
+ QMCOptimizer
+
+Scikit-Learn Style
+-------------------
+
+Optimizers with a scikit-learn compatible interface.
+
+.. autosummary::
+ :toctree: auto_generated/
+ :template: class.rst
+
+ GridSearchSk
+ RandomSearchSk
diff --git a/docs/source/api_reference/sklearn_integration.rst b/docs/source/api_reference/sklearn_integration.rst
new file mode 100644
index 00000000..82e454de
--- /dev/null
+++ b/docs/source/api_reference/sklearn_integration.rst
@@ -0,0 +1,18 @@
+.. _sklearn_integration_ref:
+
+Scikit-Learn Integration
+=========================
+
+The :mod:`hyperactive.integrations.sklearn` module provides scikit-learn compatible
+meta-estimators for hyperparameter optimization.
+
+These classes follow the scikit-learn estimator API and can be used as drop-in replacements
+for scikit-learn's GridSearchCV and RandomizedSearchCV.
+
+.. currentmodule:: hyperactive.integrations.sklearn
+
+.. autosummary::
+ :toctree: auto_generated/
+ :template: class.rst
+
+ OptCV
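+
+A hedged sketch of using ``OptCV`` as a drop-in replacement for
+``GridSearchCV`` (argument names and order are assumptions; consult the
+class docstring for the actual signature):
+
+.. code-block:: python
+
+    from sklearn.datasets import load_iris
+    from sklearn.model_selection import train_test_split
+    from sklearn.svm import SVC
+
+    from hyperactive.integrations.sklearn import OptCV
+    from hyperactive.opt import RandomSearchSk
+
+    X, y = load_iris(return_X_y=True)
+    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
+
+    param_grid = {"C": [0.1, 1.0, 10.0], "gamma": ["scale", "auto"]}
+
+    # estimator plus optimizer, following the scikit-learn search-CV pattern
+    tuner = OptCV(SVC(), RandomSearchSk(param_grid))
+    tuner.fit(X_train, y_train)
+    y_pred = tuner.predict(X_test)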
diff --git a/docs/source/api_reference/utilities.rst b/docs/source/api_reference/utilities.rst
new file mode 100644
index 00000000..c531e044
--- /dev/null
+++ b/docs/source/api_reference/utilities.rst
@@ -0,0 +1,18 @@
+.. _utilities_ref:
+
+Utilities
+=========
+
+The :mod:`hyperactive.utils` module contains utility functions for working with
+Hyperactive estimators.
+
+Estimator Validation
+--------------------
+
+.. currentmodule:: hyperactive.utils
+
+.. autosummary::
+ :toctree: auto_generated/
+ :template: function.rst
+
+ check_estimator
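+
+Typical usage, assuming ``check_estimator`` follows the scikit-base
+convention its name suggests, i.e. running the packaged API conformance
+checks for a given class:
+
+.. code-block:: python
+
+    from hyperactive.opt import HillClimbing
+    from hyperactive.utils import check_estimator
+
+    # run the standard interface-conformance checks for an optimizer class
+    check_estimator(HillClimbing)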
diff --git a/docs/source/conf.py b/docs/source/conf.py
new file mode 100644
index 00000000..de3e263e
--- /dev/null
+++ b/docs/source/conf.py
@@ -0,0 +1,435 @@
+#!/usr/bin/env python3
+"""Configuration file for the Sphinx documentation builder."""
+
+import datetime
+import os
+import re
+import sys
+from pathlib import Path
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+
+ON_READTHEDOCS = os.environ.get("READTHEDOCS") == "True"
+if not ON_READTHEDOCS:
+ sys.path.insert(0, os.path.abspath("../../src"))
+
+import hyperactive # noqa: E402 # must be after sys.path modification
+
+# -- Extract metadata from pyproject.toml ------------------------------------
+# This allows documentation to stay in sync with pyproject.toml automatically
+
+
+def extract_pyproject_metadata():
+ """Extract metadata from pyproject.toml for use in documentation."""
+ pyproject_path = Path(__file__).parent.parent.parent / "pyproject.toml"
+
+ metadata = {
+ "python_versions": [],
+ "min_python": "3.10",
+ "dependencies": [],
+ "version": hyperactive.__version__,
+ }
+
+ if pyproject_path.exists():
+ content = pyproject_path.read_text()
+
+ # Extract Python versions from classifiers
+ # Pattern: "Programming Language :: Python :: 3.XX"
+ py_version_pattern = r'"Programming Language :: Python :: (3\.\d+)"'
+ versions = re.findall(py_version_pattern, content)
+ if versions:
+ metadata["python_versions"] = sorted(set(versions))
+
+ # Extract requires-python
+ requires_python_match = re.search(r'requires-python\s*=\s*"([^"]+)"', content)
+ if requires_python_match:
+ req = requires_python_match.group(1)
+ # Extract minimum version from ">=3.10" or similar
+ min_match = re.search(r">=\s*([\d.]+)", req)
+ if min_match:
+ metadata["min_python"] = min_match.group(1)
+
+ # Extract core dependencies
+ deps_match = re.search(r"dependencies\s*=\s*\[(.*?)\]", content, re.DOTALL)
+ if deps_match:
+ deps_text = deps_match.group(1)
+ # Extract package names (first word before any version specifier)
+ dep_names = re.findall(r'"([a-zA-Z][a-zA-Z0-9_-]*)', deps_text)
+ metadata["dependencies"] = dep_names
+
+ return metadata
+
+
+# Extract metadata once at configuration time
+PYPROJECT_METADATA = extract_pyproject_metadata()
+
+# Build Python version range string from metadata
+_py_versions = PYPROJECT_METADATA["python_versions"]
+if _py_versions:
+ _py_version_range = f"{_py_versions[0]} - {_py_versions[-1]}"
+else:
+ _py_version_range = "3.10+"
+
+# -- Project information -----------------------------------------------------
+current_year = datetime.datetime.now().year
+project = "hyperactive"
+project_copyright = f"2019 - {current_year} (MIT License)"
+author = "Simon Blanke"
+
+# The full version, including alpha/beta/rc tags
+CURRENT_VERSION = f"v{hyperactive.__version__}"
+
+# If on readthedocs, and we're building the latest version, update tag to generate
+# correct links in notebooks
+if ON_READTHEDOCS:
+ READTHEDOCS_VERSION = os.environ.get("READTHEDOCS_VERSION")
+ if READTHEDOCS_VERSION == "latest":
+ CURRENT_VERSION = "main"
+
+# -- General configuration ---------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+ "sphinx.ext.autodoc",
+ "sphinx.ext.autosummary",
+ "sphinx.ext.autosectionlabel",
+ "numpydoc",
+ "sphinx.ext.intersphinx",
+ "sphinx.ext.linkcode", # link to GitHub source code via linkcode_resolve()
+ "myst_parser",
+ "sphinx_copybutton",
+ "sphinx_design",
+ "sphinx_issues",
+ "sphinx.ext.doctest",
+]
+
+# Recommended by sphinx_design when using the MyST Parser
+myst_enable_extensions = ["colon_fence"]
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ["_templates"]
+
+# The suffix(es) of source filenames.
+# You can specify multiple suffix as a list of string:
+source_suffix = {
+ ".rst": "restructuredtext",
+ ".md": "markdown",
+}
+
+# The main toctree document.
+master_doc = "index"
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#
+# This is also used if you do content translation via gettext catalogs.
+# Usually you set "language" from the command line for these cases.
+language = "en"
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = [
+ "_build",
+ ".ipynb_checkpoints",
+ "Thumbs.db",
+ ".DS_Store",
+]
+
+add_module_names = False
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = "sphinx"
+
+# see http://stackoverflow.com/q/12206334/562769
+numpydoc_show_class_members = True
+# this is needed for some reason...
+# see https://github.com/numpy/numpydoc/issues/69
+numpydoc_class_members_toctree = False
+
+# https://numpydoc.readthedocs.io/en/latest/validation.html#built-in-validation-checks
+# Let's turn off the check for building but keep it in pre-commit hooks
+numpydoc_validation_checks = set()
+
+# generate autosummary even if no references
+autosummary_generate = True
+
+# Members and inherited-members default to showing methods and attributes from a
+# class or those inherited.
+# Member-order orders the documentation in the order of how the members are defined in
+# the source code.
+autodoc_default_options = {
+ "members": True,
+ "inherited-members": True,
+ "member-order": "bysource",
+}
+
+# If true, '()' will be appended to :func: etc. cross-reference text.
+add_function_parentheses = False
+
+# Suppress warnings
+suppress_warnings = [
+ "myst.mathjax",
+ "docutils",
+ "toc.not_included",
+ "autodoc.import_object",
+ "autosectionlabel",
+ "ref",
+]
+
+show_warning_types = True
+
+# Link to GitHub repo for github_issues extension
+issues_github_path = "SimonBlanke/Hyperactive"
+
+
+def linkcode_resolve(domain, info):
+    """Return URL to the source code corresponding to the documented object.
+
+ Parameters
+ ----------
+ domain : str
+ info : dict
+
+ Returns
+ -------
+ url : str
+ """
+
+ def find_source():
+ # try to find the file and line number, based on code from numpy:
+ # https://github.com/numpy/numpy/blob/main/doc/source/conf.py#L286
+ obj = sys.modules[info["module"]]
+ for part in info["fullname"].split("."):
+ obj = getattr(obj, part)
+ import inspect
+
+ fn = inspect.getsourcefile(obj)
+ fn = os.path.relpath(fn, start=os.path.dirname(hyperactive.__file__))
+ source, lineno = inspect.getsourcelines(obj)
+ return fn, lineno, lineno + len(source) - 1
+
+ if domain != "py" or not info["module"]:
+ return None
+ try:
+ filename = "src/hyperactive/%s#L%d-L%d" % find_source()
+ except Exception:
+ filename = info["module"].replace(".", "/") + ".py"
+    return (
+        f"https://github.com/SimonBlanke/Hyperactive/blob/{CURRENT_VERSION}/{filename}"
+    )
+
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+
+html_theme = "pydata_sphinx_theme"
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further. For a list of options available for each theme, see the
+# documentation.
+
+html_theme_options = {
+ "logo": {
+ "image_light": "_static/images/navbar_logo.svg",
+ "image_dark": "_static/images/navbar_logo_dark.svg",
+ },
+ "icon_links": [
+ {
+ "name": "GitHub",
+ "url": "https://github.com/SimonBlanke/Hyperactive",
+ "icon": "fab fa-github",
+ },
+ {
+ "name": "Star on GitHub",
+ "url": "https://github.com/SimonBlanke/Hyperactive/stargazers",
+ "icon": "fa-regular fa-star",
+ },
+ ],
+ "show_prev_next": False,
+ "use_edit_page_button": False,
+ "navbar_start": ["navbar-logo"],
+ "navbar_center": ["navbar-nav"],
+ "navbar_end": ["theme-switcher", "navbar-icon-links"],
+ "show_toc_level": 2,
+ "secondary_sidebar_items": ["page-toc", "sourcelink"],
+}
+
+html_title = "Hyperactive"
+html_context = {
+ "github_user": "SimonBlanke",
+ "github_repo": "Hyperactive",
+ "github_version": "master",
+    "doc_path": "docs/source/",
+ "python_version_range": _py_version_range,
+}
+
+html_sidebars = {
+ "**": ["sidebar-nav-bs.html"],
+ "index": [],
+ "get_started": [],
+ "installation": [],
+ "search": [],
+}
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ["_static"]
+html_css_files = ["css/custom.css"]
+
+html_show_sourcelink = False
+
+# -- Options for HTMLHelp output ---------------------------------------------
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = "hyperactivedoc"
+
+# -- Options for LaTeX output ------------------------------------------------
+
+latex_elements = {
+ # The paper size ('letterpaper' or 'a4paper').
+ # 'papersize': 'letterpaper',
+ # The font size ('10pt', '11pt' or '12pt').
+ # 'pointsize': '10pt',
+ # Additional stuff for the LaTeX preamble.
+ # 'preamble': '',
+ # Latex figure (float) alignment
+ # 'figure_align': 'htbp',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title,
+# author, documentclass [howto, manual, or own class]).
+latex_documents = [
+ (
+ master_doc,
+ "hyperactive.tex",
+ "Hyperactive Documentation",
+ "Simon Blanke",
+ "manual",
+ ),
+]
+
+# -- Options for manual page output ------------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [(master_doc, "hyperactive", "Hyperactive Documentation", [author], 1)]
+
+# -- Options for Texinfo output ----------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+# dir menu entry, description, category)
+texinfo_documents = [
+ (
+ master_doc,
+ "hyperactive",
+ "Hyperactive Documentation",
+ author,
+ "hyperactive",
+ "An optimization and data collection toolbox for convenient and fast prototyping.",
+ "Miscellaneous",
+ ),
+]
+
+
+def setup(app):
+ """Set up sphinx builder.
+
+ Parameters
+ ----------
+ app : Sphinx application object
+ """
+
+ def adds(pth):
+ print("Adding stylesheet: %s" % pth) # noqa: T201, T001
+ app.add_css_file(pth)
+
+ adds("fields.css") # for parameters, etc.
+
+
+# -- Extension configuration -------------------------------------------------
+
+# -- Options for intersphinx extension ---------------------------------------
+
+# Example configuration for intersphinx: refer to the Python standard library.
+intersphinx_mapping = {
+ "python": (f"https://docs.python.org/{sys.version_info.major}", None),
+ "numpy": ("https://numpy.org/doc/stable/", None),
+ "scipy": ("https://docs.scipy.org/doc/scipy/", None),
+ "matplotlib": ("https://matplotlib.org/stable/", None),
+ "pandas": ("https://pandas.pydata.org/pandas-docs/stable/", None),
+ "joblib": ("https://joblib.readthedocs.io/en/stable/", None),
+ "scikit-learn": ("https://scikit-learn.org/stable/", None),
+}
+
+# -- Options for todo extension ----------------------------------------------
+todo_include_todos = False
+
+copybutton_prompt_text = r">>> |\.\.\. |\$ "
+copybutton_prompt_is_regexp = True
+copybutton_line_continuation_character = "\\"
+
+# -- RST Epilog: Make metadata available as substitutions in RST files -------
+# These can be used as |variable_name| in RST files
+
+# Build additional Python versions formatting for RST
+if _py_versions:
+ _py_versions_inline = ", ".join(_py_versions)
+else:
+ _py_versions_inline = "3.10+"
+
+
+# -- Count algorithms and integrations dynamically ----------------------------
+def count_from_all_list(module_path: str) -> int:
+ """Count items in __all__ list of a Python module file."""
+ import ast
+
+ file_path = Path(__file__).parent.parent.parent / "src" / module_path
+ if not file_path.exists():
+ return 0
+
+ try:
+ tree = ast.parse(file_path.read_text())
+ for node in ast.walk(tree):
+ if isinstance(node, ast.Assign):
+ for target in node.targets:
+ if isinstance(target, ast.Name) and target.id == "__all__":
+ if isinstance(node.value, ast.List):
+ return len(node.value.elts)
+ except Exception:
+ pass
+ return 0
+
+
+# Count algorithms from opt/__init__.py
+_n_algorithms = count_from_all_list("hyperactive/opt/__init__.py")
+
+# Count integrations from experiment/integrations/__init__.py
+_n_integrations = count_from_all_list("hyperactive/experiment/integrations/__init__.py")
+
+# Backends are conceptual (GFO, Optuna, sklearn) - hardcoded
+_n_backends = 3
+
+
+rst_epilog = f"""
+.. |version| replace:: {PYPROJECT_METADATA["version"]}
+.. |min_python| replace:: {PYPROJECT_METADATA["min_python"]}
+.. |python_versions_list| replace:: {_py_versions_inline}
+.. |python_version_range| replace:: {_py_version_range}
+.. |current_year| replace:: {current_year}
+.. |n_algorithms| replace:: {_n_algorithms}
+.. |n_backends| replace:: {_n_backends}
+.. |n_integrations| replace:: {_n_integrations}
+"""
diff --git a/docs/source/examples.rst b/docs/source/examples.rst
new file mode 100644
index 00000000..c8b88c45
--- /dev/null
+++ b/docs/source/examples.rst
@@ -0,0 +1,83 @@
+.. _examples:
+
+========
+Examples
+========
+
+This section provides a collection of examples demonstrating Hyperactive's capabilities.
+All examples are available in the
+`examples directory `_
+on GitHub.
+
+.. toctree::
+ :maxdepth: 1
+
+ examples/general
+ examples/local_search
+ examples/global_search
+ examples/population_based
+ examples/sequential_model_based
+ examples/optuna_backend
+ examples/sklearn_backend
+ examples/integrations
+ examples/other
+ examples/interactive_tutorial
+
+
+Overview
+--------
+
+Hyperactive provides examples for all optimization algorithms and integration
+patterns. The examples are organized by algorithm category:
+
+
+Gradient-Free Optimizers (GFO)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+:ref:`examples_general`
+ Basic examples showing custom function optimization and sklearn model tuning.
+
+:ref:`examples_local_search`
+ Hill Climbing, Simulated Annealing, Downhill Simplex, and other local
+ search methods that explore by making incremental moves.
+
+:ref:`examples_global_search`
+ Random Search, Grid Search, Powell's Method, and other algorithms that
+ explore the search space more broadly.
+
+:ref:`examples_population_based`
+ Particle Swarm, Genetic Algorithm, Evolution Strategy, and other
+ nature-inspired population methods.
+
+:ref:`examples_sequential_model_based`
+ Bayesian Optimization, Tree-structured Parzen Estimators, and other
+ model-based methods that learn from previous evaluations.
+
+
+Backend Examples
+^^^^^^^^^^^^^^^^
+
+:ref:`examples_optuna_backend`
+ Examples using Optuna's samplers including TPE, CMA-ES, NSGA-II/III,
+ and Gaussian Process optimization.
+
+:ref:`examples_sklearn_backend`
+ Scikit-learn compatible interfaces as drop-in replacements for
+ GridSearchCV and RandomizedSearchCV.
+
+
+Integration Examples
+^^^^^^^^^^^^^^^^^^^^
+
+:ref:`examples_integrations`
+ Time series optimization with sktime and other framework integrations.
+
+
+Advanced Topics
+^^^^^^^^^^^^^^^
+
+:ref:`examples_other`
+ Advanced usage patterns including warm starting and optimizer comparison.
+
+:ref:`examples_interactive_tutorial`
+ Comprehensive Jupyter notebook tutorial covering all Hyperactive features.
diff --git a/docs/source/examples/general.rst b/docs/source/examples/general.rst
new file mode 100644
index 00000000..113171b8
--- /dev/null
+++ b/docs/source/examples/general.rst
@@ -0,0 +1,47 @@
+.. _examples_general:
+
+================
+General Examples
+================
+
+These examples demonstrate Hyperactive's core functionality with simple,
+illustrative use cases.
+
+
+Running Examples
+----------------
+
+All examples are available in the
+`examples directory `_
+on GitHub. You can run any example directly:
+
+.. code-block:: bash
+
+ # Clone the repository
+ git clone https://github.com/SimonBlanke/Hyperactive.git
+ cd Hyperactive/examples
+
+ # Run an example
+ python gfo/hill_climbing_example.py
+
+
+Custom Function Optimization
+----------------------------
+
+The simplest use case: optimizing a mathematical function.
+
+.. literalinclude:: ../_snippets/examples/basic_examples.py
+ :language: python
+ :start-after: # [start:custom_function]
+ :end-before: # [end:custom_function]
+
+
+Scikit-learn Model Tuning
+-------------------------
+
+Hyperparameter optimization for machine learning models.
+
+.. literalinclude:: ../_snippets/examples/basic_examples.py
+ :language: python
+ :start-after: # [start:sklearn_tuning]
+ :end-before: # [end:sklearn_tuning]
diff --git a/docs/source/examples/global_search.rst b/docs/source/examples/global_search.rst
new file mode 100644
index 00000000..d50ec5d5
--- /dev/null
+++ b/docs/source/examples/global_search.rst
@@ -0,0 +1,44 @@
+.. _examples_global_search:
+
+========================
+Global Search Algorithms
+========================
+
+Global search algorithms explore the search space more broadly, using
+randomization or systematic patterns to avoid getting trapped in local optima.
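The flavor of the simplest global strategy, random search, can be sketched in plain NumPy. The objective and search space here are made-up toys for illustration, not Hyperactive's API:

```python
import numpy as np

# Toy objective with several local optima; the global maximum is at x = 0
def objective(x):
    return -x**2 + 2 * np.cos(5 * x)

rng = np.random.default_rng(42)
search_space = np.linspace(-3, 3, 601)

# Random search: sample candidates independently across the whole space,
# so no single local optimum can trap the search
samples = rng.choice(search_space, size=100)
scores = objective(samples)
best_x = samples[np.argmax(scores)]

print(f"best x: {best_x:.3f}, score: {scores.max():.3f}")
```

Because every sample is independent, random search also serves as an honest baseline for judging more sophisticated optimizers.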
+
+
+Algorithm Examples
+------------------
+
+.. list-table::
+ :header-rows: 1
+ :widths: 30 70
+
+ * - Algorithm
+ - Example
+ * - Random Search
+ - `random_search_example.py `_
+ * - Grid Search
+ - `grid_search_example.py `_
+ * - Random Restart Hill Climbing
+ - `random_restart_hill_climbing_example.py `_
+ * - Stochastic Hill Climbing
+ - `stochastic_hill_climbing_example.py `_
+ * - Powell's Method
+ - `powells_method_example.py `_
+ * - Pattern Search
+ - `pattern_search_example.py `_
+
+
+When to Use Global Search
+-------------------------
+
+Global search algorithms are best suited for:
+
+- **Multimodal search spaces** with multiple local optima
+- **Initial exploration** before fine-tuning with local search
+- **Unknown search spaces** where the landscape is not well understood
+- **Baseline comparisons** (especially Random Search)
+
+See :ref:`user_guide_optimizers` for detailed algorithm descriptions.
diff --git a/docs/source/examples/integrations.rst b/docs/source/examples/integrations.rst
new file mode 100644
index 00000000..2fff131c
--- /dev/null
+++ b/docs/source/examples/integrations.rst
@@ -0,0 +1,49 @@
+.. _examples_integrations:
+
+============
+Integrations
+============
+
+Hyperactive integrates with popular machine learning frameworks beyond
+scikit-learn, including time series libraries.
+
+
+Sktime Integration
+------------------
+
+For time series forecasting and classification with sktime:
+
+.. list-table::
+ :header-rows: 1
+ :widths: 30 70
+
+ * - Use Case
+ - Example
+ * - Time Series Forecasting
+ - `sktime_forecasting_example.py `_
+ * - Time Series Classification
+ - `sktime_tsc_example.py `_
+
+.. note::
+
+ Sktime integration requires additional dependencies:
+
+ .. code-block:: bash
+
+ pip install hyperactive[sktime-integration]
+
+
+Installing Extras
+-----------------
+
+Install integration extras as needed:
+
+.. code-block:: bash
+
+ # Sktime/skpro for time series
+ pip install hyperactive[sktime-integration]
+
+ # All extras including PyTorch Lightning
+ pip install hyperactive[all_extras]
+
+See :ref:`user_guide_integrations` for complete integration documentation.
diff --git a/docs/source/examples/interactive_tutorial.rst b/docs/source/examples/interactive_tutorial.rst
new file mode 100644
index 00000000..b4e63e75
--- /dev/null
+++ b/docs/source/examples/interactive_tutorial.rst
@@ -0,0 +1,50 @@
+.. _examples_interactive_tutorial:
+
+====================
+Interactive Tutorial
+====================
+
+For hands-on learning, we provide a comprehensive Jupyter notebook tutorial
+that covers all aspects of Hyperactive.
+
+
+Tutorial Notebook
+-----------------
+
+`Hyperactive Tutorial Notebook `_
+
+This interactive notebook covers:
+
+- **Basic optimization concepts** — Understanding search spaces and objective functions
+- **All optimizer categories** — Hands-on examples with each algorithm type
+- **Real-world ML examples** — Practical hyperparameter optimization workflows
+- **Best practices and tips** — Common pitfalls and how to avoid them
+
+
+Running the Tutorial
+--------------------
+
+You can run the tutorial locally:
+
+.. code-block:: bash
+
+ # Clone the tutorial repository
+ git clone https://github.com/SimonBlanke/hyperactive-tutorial.git
+ cd hyperactive-tutorial
+
+ # Install dependencies
+ pip install -r requirements.txt
+
+ # Launch Jupyter
+ jupyter notebook notebooks/hyperactive_tutorial.ipynb
+
+Or view it directly on `nbviewer `_
+without any installation.
+
+
+Additional Resources
+--------------------
+
+- `Hyperactive GitHub Repository `_
+- `Gradient-Free-Optimizers `_ — The underlying optimization library
+- :ref:`user_guide` — Detailed documentation of all features
diff --git a/docs/source/examples/local_search.rst b/docs/source/examples/local_search.rst
new file mode 100644
index 00000000..d4c9ad7a
--- /dev/null
+++ b/docs/source/examples/local_search.rst
@@ -0,0 +1,41 @@
+.. _examples_local_search:
+
+=======================
+Local Search Algorithms
+=======================
+
+Local search algorithms explore the search space by making small, incremental
+moves from the current position. They converge quickly to a local optimum, but
+can get stuck there unless the algorithm includes an escape mechanism.
+
+
+Algorithm Examples
+------------------
+
+.. list-table::
+ :header-rows: 1
+ :widths: 30 70
+
+ * - Algorithm
+ - Example
+ * - Hill Climbing
+ - `hill_climbing_example.py `_
+ * - Repulsing Hill Climbing
+ - `repulsing_hill_climbing_example.py `_
+ * - Simulated Annealing
+ - `simulated_annealing_example.py `_
+ * - Downhill Simplex
+ - `downhill_simplex_example.py `_
+
+
+When to Use Local Search
+------------------------
+
+Local search algorithms are best suited for:
+
+- **Smooth search spaces** where nearby points have similar scores
+- **Fine-tuning** around a known good region
+- **Fast convergence** when a good starting point is available
+- **Limited computational budget** where few evaluations are possible
+
+See :ref:`user_guide_optimizers` for detailed algorithm descriptions.
diff --git a/docs/source/examples/optuna_backend.rst b/docs/source/examples/optuna_backend.rst
new file mode 100644
index 00000000..3a8067b7
--- /dev/null
+++ b/docs/source/examples/optuna_backend.rst
@@ -0,0 +1,56 @@
+.. _examples_optuna_backend:
+
+==============
+Optuna Backend
+==============
+
+Hyperactive provides wrappers for Optuna's optimization algorithms, allowing
+you to use Optuna's powerful samplers with Hyperactive's interface.
+
+.. note::
+
+ Optuna must be installed separately:
+
+ .. code-block:: bash
+
+ pip install optuna
+ # or
+ pip install hyperactive[all_extras]
+
+
+Sampler Examples
+----------------
+
+.. list-table::
+ :header-rows: 1
+ :widths: 30 70
+
+ * - Sampler
+ - Example
+ * - TPE (Tree-structured Parzen Estimator)
+ - `tpe_sampler_example.py `_
+ * - CMA-ES
+ - `cmaes_sampler_example.py `_
+ * - Gaussian Process
+ - `gp_sampler_example.py `_
+ * - NSGA-II
+ - `nsga_ii_sampler_example.py `_
+ * - NSGA-III
+ - `nsga_iii_sampler_example.py `_
+ * - QMC (Quasi-Monte Carlo)
+ - `qmc_sampler_example.py `_
+ * - Random
+ - `random_sampler_example.py `_
+ * - Grid
+ - `grid_sampler_example.py `_
+
+
+When to Use Optuna Backend
+--------------------------
+
+The Optuna backend is useful when you need:
+
+- **Multi-objective optimization** (NSGA-II, NSGA-III)
+- **Advanced sampling strategies** like CMA-ES or QMC
+- **Optuna's pruning capabilities** for early stopping
+- **Compatibility** with existing Optuna workflows
diff --git a/docs/source/examples/other.rst b/docs/source/examples/other.rst
new file mode 100644
index 00000000..ce96ca09
--- /dev/null
+++ b/docs/source/examples/other.rst
@@ -0,0 +1,46 @@
+.. _examples_other:
+
+=================
+Advanced Examples
+=================
+
+These examples demonstrate advanced Hyperactive features for more sophisticated
+optimization workflows.
+
+
+Warm Starting Optimization
+--------------------------
+
+Start optimization from known good points to accelerate convergence:
+
+.. literalinclude:: ../_snippets/examples/advanced_examples.py
+ :language: python
+ :start-after: # [start:warm_starting]
+ :end-before: # [end:warm_starting]
+
+
+Comparing Optimizers
+--------------------
+
+Compare different optimization strategies on the same problem:
+
+.. literalinclude:: ../_snippets/examples/advanced_examples.py
+ :language: python
+ :start-after: # [start:comparing_optimizers]
+ :end-before: # [end:comparing_optimizers]
+
+
+Tips for Advanced Usage
+-----------------------
+
+**Warm Starting**
+
+- Use results from previous runs to seed new optimizations
+- Helpful when iterating on model architecture or features
+- Combine with local search for fine-tuning around known good points
+
+**Optimizer Comparison**
+
+- Always use the same ``random_state`` for reproducible comparisons
+- Run multiple trials to account for optimizer randomness
+- Consider both final score and convergence speed
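The comparison tips above can be sketched end to end with two toy strategies in plain NumPy (not Hyperactive's optimizers): the same seeds are used for both, and several trials are averaged to account for randomness.

```python
import numpy as np

def objective(x):
    return -(x - 1.5) ** 2  # Toy objective, maximum at x = 1.5

def random_search(seed, n_iter=100):
    rng = np.random.default_rng(seed)
    xs = rng.uniform(-5, 5, n_iter)
    return objective(xs).max()

def hill_climbing(seed, n_iter=100):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5)
    for _ in range(n_iter):
        cand = x + rng.normal(0, 0.5)
        if objective(cand) > objective(x):
            x = cand
    return objective(x)

# Same seeds for both strategies; multiple trials average out randomness
seeds = range(5)
rs_scores = [random_search(s) for s in seeds]
hc_scores = [hill_climbing(s) for s in seeds]
print(f"random search mean: {np.mean(rs_scores):.3f}")
print(f"hill climbing mean: {np.mean(hc_scores):.3f}")
```

For a real comparison you would also record how quickly each strategy reaches its final score, not just the score itself.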
diff --git a/docs/source/examples/population_based.rst b/docs/source/examples/population_based.rst
new file mode 100644
index 00000000..f115457f
--- /dev/null
+++ b/docs/source/examples/population_based.rst
@@ -0,0 +1,45 @@
+.. _examples_population_based:
+
+===========================
+Population-Based Algorithms
+===========================
+
+Population-based algorithms maintain multiple candidate solutions simultaneously,
+using mechanisms inspired by natural evolution or swarm behavior to explore
+the search space efficiently.
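The select-and-mutate loop these methods share can be sketched with plain NumPy. This is a toy (mu + lambda) evolution strategy on a 1-D problem, unrelated to Hyperactive's internals:

```python
import numpy as np

def objective(x):
    # Toy multimodal function; the global maximum is at x = 0
    return -x**2 + np.cos(3 * x)

rng = np.random.default_rng(1)

# Maintain a population of candidate solutions simultaneously
population = rng.uniform(-5, 5, size=10)
for _ in range(50):
    # Mutation: each parent spawns a perturbed offspring
    offspring = population + rng.normal(0, 0.3, size=population.size)
    combined = np.concatenate([population, offspring])
    # Selection: the best half survives into the next generation
    population = combined[np.argsort(objective(combined))[-10:]]

best = population[np.argmax(objective(population))]
print(f"best x = {best:.2f}")
```

Because the population spreads over several basins at once, a single poor local optimum does not trap the whole search.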
+
+
+Algorithm Examples
+------------------
+
+.. list-table::
+ :header-rows: 1
+ :widths: 30 70
+
+ * - Algorithm
+ - Example
+ * - Particle Swarm
+ - `particle_swarm_example.py `_
+ * - Genetic Algorithm
+ - `genetic_algorithm_example.py `_
+ * - Evolution Strategy
+ - `evolution_strategy_example.py `_
+ * - Differential Evolution
+ - `differential_evolution_example.py `_
+ * - Parallel Tempering
+ - `parallel_tempering_example.py `_
+ * - Spiral Optimization
+ - `spiral_optimization_example.py `_
+
+
+When to Use Population-Based Methods
+------------------------------------
+
+Population-based algorithms are best suited for:
+
+- **Complex, multimodal landscapes** with many local optima
+- **Parallelizable evaluations** where multiple candidates can be evaluated simultaneously
+- **Robust optimization** where diversity helps avoid premature convergence
+- **Large search spaces** requiring extensive exploration
+
+See :ref:`user_guide_optimizers` for detailed algorithm descriptions.
diff --git a/docs/source/examples/sequential_model_based.rst b/docs/source/examples/sequential_model_based.rst
new file mode 100644
index 00000000..bde4fca2
--- /dev/null
+++ b/docs/source/examples/sequential_model_based.rst
@@ -0,0 +1,43 @@
+.. _examples_sequential_model_based:
+
+=================================
+Sequential Model-Based Algorithms
+=================================
+
+Sequential model-based optimization (SMBO) algorithms build a surrogate model
+of the objective function to guide the search. They are particularly effective
+when function evaluations are expensive.
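The surrogate-model loop can be illustrated with a deliberately simple sketch: a quadratic fit stands in for the Gaussian process or tree ensemble a real SMBO method would use, and the pure-exploitation step skips the acquisition function entirely.

```python
import numpy as np

def expensive_objective(x):
    return -(x - 2.0) ** 2 + 3.0  # Pretend each call is costly

rng = np.random.default_rng(7)
candidates = np.linspace(-5, 5, 201)

# Start with a few random evaluations of the true objective
X = list(rng.choice(candidates, size=5, replace=False))
y = [expensive_objective(x) for x in X]

for _ in range(10):
    # Surrogate: fit a cheap quadratic model to all evaluations so far
    coeffs = np.polyfit(X, y, deg=2)
    predicted = np.polyval(coeffs, candidates)
    # Evaluate the candidate the surrogate predicts to be best
    x_next = candidates[np.argmax(predicted)]
    X.append(x_next)
    y.append(expensive_objective(x_next))

best_x = X[int(np.argmax(y))]
print(f"best x = {best_x:.2f} after {len(X)} evaluations")
```

The point of the surrogate is that fitting and querying it is far cheaper than calling the real objective, so most of the "thinking" happens on the model rather than on expensive evaluations.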
+
+
+Algorithm Examples
+------------------
+
+.. list-table::
+ :header-rows: 1
+ :widths: 30 70
+
+ * - Algorithm
+ - Example
+ * - Bayesian Optimization
+ - `bayesian_optimization_example.py `_
+ * - Tree-structured Parzen Estimators
+ - `tree_structured_parzen_estimators_example.py `_
+ * - Forest Optimizer
+ - `forest_optimizer_example.py `_
+ * - Lipschitz Optimizer
+ - `lipschitz_optimizer_example.py `_
+ * - DIRECT Algorithm
+ - `direct_algorithm_example.py `_
+
+
+When to Use Model-Based Methods
+-------------------------------
+
+Sequential model-based algorithms are best suited for:
+
+- **Expensive objective functions** (e.g., training neural networks, simulations)
+- **Limited evaluation budgets** where each evaluation counts
+- **Smooth, continuous search spaces** where surrogate models work well
+- **Hyperparameter optimization** for machine learning models
+
+See :ref:`user_guide_optimizers` for detailed algorithm descriptions.
diff --git a/docs/source/examples/sklearn_backend.rst b/docs/source/examples/sklearn_backend.rst
new file mode 100644
index 00000000..b74f0623
--- /dev/null
+++ b/docs/source/examples/sklearn_backend.rst
@@ -0,0 +1,57 @@
+.. _examples_sklearn_backend:
+
+===============
+Sklearn Backend
+===============
+
+Hyperactive provides scikit-learn compatible interfaces that work as drop-in
+replacements for ``GridSearchCV`` and ``RandomizedSearchCV``.
+
+
+Example Files
+-------------
+
+.. list-table::
+ :header-rows: 1
+ :widths: 30 70
+
+ * - Use Case
+ - Example
+ * - Classification with OptCV
+ - `sklearn_classif_example.py `_
+ * - Grid Search
+ - `grid_search_example.py `_
+ * - Random Search
+ - `random_search_example.py `_
+
+
+Usage Overview
+--------------
+
+Hyperactive's sklearn-compatible classes follow the familiar fit/predict pattern:
+
+.. code-block:: python
+
+ from hyperactive.integrations.sklearn import HyperactiveSearchCV
+ from sklearn.ensemble import RandomForestClassifier
+
+ # Define search space
+ param_space = {
+ "n_estimators": [50, 100, 200],
+ "max_depth": [5, 10, 15, None],
+ }
+
+ # Create search object
+ search = HyperactiveSearchCV(
+ estimator=RandomForestClassifier(),
+ param_space=param_space,
+ n_iter=50,
+ )
+
+ # Fit like any sklearn estimator
+ search.fit(X_train, y_train)
+
+ # Access best parameters
+ print(search.best_params_)
+
+See :ref:`user_guide_integrations` for complete documentation.
diff --git a/docs/source/faq.rst b/docs/source/faq.rst
new file mode 100644
index 00000000..ed5360a9
--- /dev/null
+++ b/docs/source/faq.rst
@@ -0,0 +1,40 @@
+.. _faq:
+
+==========================
+Frequently Asked Questions
+==========================
+
+This section answers common questions about Hyperactive. For migration from v4,
+see the :ref:`user_guide_migration`.
+
+.. toctree::
+ :maxdepth: 1
+
+ faq/getting_started
+ faq/search_space
+ faq/common_issues
+ faq/advanced_usage
+ faq/integrations
+ faq/getting_help
+
+
+Overview
+--------
+
+:ref:`faq_getting_started`
+ Choosing optimizers, iteration counts, and understanding maximization.
+
+:ref:`faq_search_space`
+ Defining continuous, discrete, and mixed parameter spaces.
+
+:ref:`faq_common_issues`
+ Slow optimization, reproducibility, handling errors.
+
+:ref:`faq_advanced_usage`
+ Parallel execution, callbacks, parameter constraints.
+
+:ref:`faq_integrations`
+ Using Hyperactive with PyTorch, XGBoost, and other frameworks.
+
+:ref:`faq_getting_help`
+ Where to report bugs and get support.
diff --git a/docs/source/faq/advanced_usage.rst b/docs/source/faq/advanced_usage.rst
new file mode 100644
index 00000000..2199cc38
--- /dev/null
+++ b/docs/source/faq/advanced_usage.rst
@@ -0,0 +1,69 @@
+.. _faq_advanced_usage:
+
+==============
+Advanced Usage
+==============
+
+Can I run optimizations in parallel?
+------------------------------------
+
+Hyperactive v5 currently runs a single optimizer instance at a time.
+For parallel evaluation of candidates, consider the Optuna backend
+optimizers, which support parallel trials:
+
+.. code-block:: python
+
+ from hyperactive.opt.optuna import TPEOptimizer
+
+ optimizer = TPEOptimizer(
+ search_space=search_space,
+ n_iter=100,
+ experiment=objective,
+ # Optuna handles parallelization
+ )
+
+
+Can I save and resume optimization?
+-----------------------------------
+
+This feature is planned but not yet available in v5. As a workaround,
+you can log results during optimization and use them as initial points
+for a new run.
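A minimal version of this workaround is sketched below. The file name, record layout, and stand-in objective are illustrative choices, not a Hyperactive API:

```python
import json
from pathlib import Path

history_file = Path("search_history.json")
history = []

def objective(params):
    score = -(params["x"] - 3) ** 2  # Stand-in for a real model evaluation
    history.append({"params": params, "score": score})
    # Persist after every evaluation so an interrupted run loses nothing
    history_file.write_text(json.dumps(history))
    return score

# Stand-in for an optimization run calling the objective
for x in [0, 1, 2, 3, 4]:
    objective({"x": x})

# Later: reload the history and seed a new run from the best points
previous = json.loads(history_file.read_text())
best = sorted(previous, key=lambda r: r["score"], reverse=True)[:3]
warm_start_points = [r["params"] for r in best]
print(warm_start_points[0])  # {'x': 3}
```

The reloaded `warm_start_points` can then be passed as initial points to whatever warm-start mechanism your optimizer supports.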
+
+
+Are callbacks supported?
+------------------------
+
+User-defined callbacks during optimization are not currently supported in v5.
+The Optuna backend has internal early-stopping callbacks, but there's no
+general callback interface for tracking progress or modifying behavior during
+optimization.
+
+For progress monitoring, you can add logging inside your objective function:
+
+.. code-block:: python
+
+ iteration = 0
+
+ def objective(params):
+ global iteration
+ iteration += 1
+ score = evaluate_model(params)
+ print(f"Iteration {iteration}: score={score:.4f}")
+ return score
+
+
+How do I add constraints between parameters?
+--------------------------------------------
+
+Handle constraints in your objective function by returning a poor score
+for invalid combinations:
+
+.. code-block:: python
+
+    import numpy as np
+
+    def objective(params):
+        # Constraint: min_samples_split must be >= min_samples_leaf
+        if params["min_samples_split"] < params["min_samples_leaf"]:
+            return -np.inf  # Invalid configuration
+
+        return evaluate_model(params)
diff --git a/docs/source/faq/common_issues.rst b/docs/source/faq/common_issues.rst
new file mode 100644
index 00000000..e95f3466
--- /dev/null
+++ b/docs/source/faq/common_issues.rst
@@ -0,0 +1,72 @@
+.. _faq_common_issues:
+
+=============
+Common Issues
+=============
+
+Why is my optimization slow?
+----------------------------
+
+**Slow objective function**: The optimizer only controls search strategy.
+If each evaluation takes a long time, consider:
+
+- Reducing cross-validation folds
+- Using a subset of training data for tuning
+- Simplifying your model during search
+
+**Large search space**: More combinations require more iterations.
+Consider reducing parameter granularity or switching to a model-based
+optimizer such as Bayesian optimization, which needs fewer evaluations.
+
+**Too many iterations**: Start with fewer iterations and increase
+if needed.
+
+
+Why does my score vary between runs?
+------------------------------------
+
+Optimization algorithms are stochastic. To get reproducible results,
+set a random seed:
+
+.. code-block:: python
+
+ optimizer = HillClimbing(
+ search_space=search_space,
+ n_iter=100,
+ experiment=objective,
+ random_state=42, # Set seed for reproducibility
+ )
+
+
+My objective function returns NaN or raises exceptions
+------------------------------------------------------
+
+Handle invalid configurations in your objective function:
+
+.. code-block:: python
+
+    import numpy as np
+
+    def objective(params):
+        try:
+            score = evaluate_model(params)
+            if np.isnan(score):
+                return -np.inf  # Return worst possible score
+            return score
+        except Exception:
+            return -np.inf  # Return worst possible score on error
+
+
+How do I see what parameters were tried?
+----------------------------------------
+
+Access the search history after optimization:
+
+.. code-block:: python
+
+    best_params = optimizer.solve()
+
+    # Access results
+    print(f"Best parameters: {best_params}")
+    print(f"Best score: {optimizer.best_score_}")
+
+    # Full search history (if available)
+    # Check optimizer attributes for search_data or similar
diff --git a/docs/source/faq/getting_help.rst b/docs/source/faq/getting_help.rst
new file mode 100644
index 00000000..acb918cb
--- /dev/null
+++ b/docs/source/faq/getting_help.rst
@@ -0,0 +1,28 @@
+.. _faq_getting_help:
+
+============
+Getting Help
+============
+
+Where can I report bugs or request features?
+--------------------------------------------
+
+Open an issue on `GitHub `_.
+
+
+Where can I get help?
+---------------------
+
+- Check the :ref:`examples` for code samples
+- Read the :ref:`user_guide` for detailed explanations
+- Join the `Discord `_ community
+- Search or ask on `GitHub Discussions `_
+
+
+Where is the documentation for older versions?
+----------------------------------------------
+
+Documentation for Hyperactive v4 is available at:
+`Legacy Documentation (v4) `_
+
+If you're migrating from v4 to v5, see the :ref:`user_guide_migration`.
diff --git a/docs/source/faq/getting_started.rst b/docs/source/faq/getting_started.rst
new file mode 100644
index 00000000..fbb3f01d
--- /dev/null
+++ b/docs/source/faq/getting_started.rst
@@ -0,0 +1,53 @@
+.. _faq_getting_started:
+
+===============
+Getting Started
+===============
+
+Which optimizer should I use?
+-----------------------------
+
+For most problems, start with one of these recommendations:
+
+**Small search spaces (<100 combinations)**
+ Use :class:`~hyperactive.opt.gfo.GridSearch` to exhaustively evaluate all options.
+
+**General-purpose optimization**
+ :class:`~hyperactive.opt.gfo.BayesianOptimizer` works well for expensive
+ objective functions where you want to minimize evaluations.
+
+**Fast, simple problems**
+ :class:`~hyperactive.opt.gfo.HillClimbing` or
+ :class:`~hyperactive.opt.gfo.RandomSearch` are good starting points.
+
+**High-dimensional spaces**
+ Population-based methods like :class:`~hyperactive.opt.gfo.ParticleSwarmOptimizer`
+ or :class:`~hyperactive.opt.gfo.EvolutionStrategyOptimizer` handle many
+ parameters well.
+
+See :ref:`user_guide_optimizers` for detailed guidance on choosing optimizers.
+
+
+How many iterations do I need?
+------------------------------
+
+This depends on your search space size and objective function:
+
+- **Rule of thumb**: Start with ``n_iter = 10 * number_of_parameters``
+- **Expensive functions**: Use fewer iterations with Bayesian optimization
+- **Fast functions**: Use more iterations with simpler optimizers
+
+You can monitor progress and stop early if the score plateaus.
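The rule of thumb translates directly into code, since each key of the search-space dictionary is one tuned parameter:

```python
# Hypothetical search space with three tuned parameters
search_space = {
    "n_estimators": [50, 100, 200],
    "max_depth": [3, 5, 10, None],
    "learning_rate": [0.01, 0.1, 0.3],
}

# Rule of thumb: about 10 iterations per tuned parameter
n_iter = 10 * len(search_space)
print(n_iter)  # 30
```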
+
+
+Does Hyperactive minimize or maximize?
+--------------------------------------
+
+**Hyperactive maximizes** the objective function. If you want to minimize,
+return the negative of your metric:
+
+.. code-block:: python
+
+ def objective(params):
+ error = compute_error(params)
+ return -error # Negate to minimize
diff --git a/docs/source/faq/integrations.rst b/docs/source/faq/integrations.rst
new file mode 100644
index 00000000..c6d693b6
--- /dev/null
+++ b/docs/source/faq/integrations.rst
@@ -0,0 +1,64 @@
+.. _faq_integrations:
+
+============
+Integrations
+============
+
+Can I use Hyperactive with PyTorch (not Lightning)?
+---------------------------------------------------
+
+Yes, create a custom objective function:
+
+.. code-block:: python
+
+ import torch
+
+ def objective(params):
+ model = MyPyTorchModel(
+ hidden_size=params["hidden_size"],
+ dropout=params["dropout"],
+ )
+ # Train and evaluate your model
+ train_model(model, train_loader)
+ accuracy = evaluate_model(model, val_loader)
+ return accuracy
+
+
+How does Hyperactive compare to Optuna?
+---------------------------------------
+
+**Hyperactive with native GFO backend**:
+
+- Simple, unified API
+- Wide variety of optimization algorithms
+- Great for hyperparameter tuning
+
+**Hyperactive with Optuna backend**:
+
+- Access Optuna's algorithms through Hyperactive's interface
+- Combine the strengths of both libraries
+
+**Pure Optuna**:
+
+- More features (pruning, distributed, database storage)
+- Larger community and ecosystem
+- More configuration options
+
+Choose based on your needs: Hyperactive for simplicity, Optuna for
+advanced features.
+
+
+Can I use Hyperactive with other ML frameworks?
+-----------------------------------------------
+
+Yes, any framework works with custom objective functions:
+
+.. code-block:: python
+
+ # XGBoost example
+ import xgboost as xgb
+
+ def objective(params):
+ model = xgb.XGBClassifier(**params)
+ scores = cross_val_score(model, X, y, cv=3)
+ return scores.mean()
diff --git a/docs/source/faq/search_space.rst b/docs/source/faq/search_space.rst
new file mode 100644
index 00000000..eefccc36
--- /dev/null
+++ b/docs/source/faq/search_space.rst
@@ -0,0 +1,49 @@
+.. _faq_search_space:
+
+======================
+Search Space Questions
+======================
+
+How do I define a continuous search space?
+------------------------------------------
+
+Use NumPy to create arrays of values:
+
+.. code-block:: python
+
+ import numpy as np
+
+ search_space = {
+ "learning_rate": np.logspace(-4, -1, 50), # 0.0001 to 0.1
+ "momentum": np.linspace(0.5, 0.99, 50), # 0.5 to 0.99
+ }
+
+Hyperactive samples from these arrays, so finer granularity gives
+more precision at the cost of a larger search space.
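For example, the granularity trade-off is simply the length of the array you pass in:

```python
import numpy as np

coarse = np.logspace(-4, -1, 10)    # 10 candidate learning rates
fine = np.logspace(-4, -1, 1000)    # Same range, 100x the candidates

print(len(coarse), len(fine))  # 10 1000
```

Both arrays cover 0.0001 to 0.1; the finer one lets the optimizer land closer to the true optimum but enlarges the space it must search.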
+
+
+Can I mix discrete and continuous parameters?
+---------------------------------------------
+
+Yes, mix freely:
+
+.. code-block:: python
+
+    import numpy as np
+
+    search_space = {
+        "n_estimators": [10, 50, 100, 200],           # Discrete
+        "max_depth": list(range(3, 20)),              # Discrete range
+        "learning_rate": np.linspace(0.01, 0.3, 30),  # Continuous
+        "algorithm": ["SAMME", "SAMME.R"],            # Categorical
+    }
+
+
+How do I include None as a parameter value?
+-------------------------------------------
+
+Include ``None`` directly in your list:
+
+.. code-block:: python
+
+ search_space = {
+ "max_depth": [None, 3, 5, 10, 20],
+ }
diff --git a/docs/source/get_involved.rst b/docs/source/get_involved.rst
new file mode 100644
index 00000000..a7ada9ac
--- /dev/null
+++ b/docs/source/get_involved.rst
@@ -0,0 +1,123 @@
+.. _get_involved:
+
+============
+Get Involved
+============
+
+Hyperactive is an open-source project and we welcome contributions from the community!
+There are many ways to get involved, whether you're a developer, researcher, or user.
+
+.. toctree::
+ :maxdepth: 1
+
+ get_involved/contributing
+ get_involved/code_of_conduct
+
+
+Ways to Contribute
+------------------
+
+Report Bugs
+^^^^^^^^^^^
+
+Found a bug? Please report it on GitHub:
+
+1. Search `existing issues `_
+ to see if it's already reported.
+2. If not, `open a new issue `_
+ with:
+
+ - A clear description of the problem
+ - Steps to reproduce the issue
+ - Expected vs actual behavior
+ - Your Python version and Hyperactive version
+ - Minimal code example that reproduces the issue
+
+Suggest Features
+^^^^^^^^^^^^^^^^
+
+Have an idea for a new feature or improvement?
+
+1. Open a `GitHub Discussion `_
+ to discuss your idea with the community.
+2. If there's consensus, create an issue or pull request.
+
+Contribute Code
+^^^^^^^^^^^^^^^
+
+Ready to contribute code? See the :ref:`contributing` guide for:
+
+- Setting up your development environment
+- Coding standards and style guide
+- How to submit pull requests
+- Testing requirements
+
+Improve Documentation
+^^^^^^^^^^^^^^^^^^^^^
+
+Documentation improvements are always welcome:
+
+- Fix typos or unclear explanations
+- Add examples and tutorials
+- Improve API documentation
+- Translate documentation
+
+Share Your Work
+^^^^^^^^^^^^^^^
+
+Using Hyperactive in your project? Share your experience:
+
+- Write a blog post or tutorial
+- Present at a meetup or conference
+- Share on social media
+- Add your project to the examples
+
+
+Community
+---------
+
+Connect with the Hyperactive community:
+
+Discord
+^^^^^^^
+
+Join our Discord server for:
+
+- Real-time discussions
+- Questions and answers
+- Announcements
+
+GitHub Discussions
+^^^^^^^^^^^^^^^^^^
+
+Use `GitHub Discussions <https://github.com/SimonBlanke/Hyperactive/discussions>`_ for:
+
+- Feature proposals
+- Best practices
+- Show and tell
+
+LinkedIn
+^^^^^^^^
+
+Follow the German Center for Open Source AI on LinkedIn
+for news and updates.
+
+
+Support
+-------
+
+Getting Help
+^^^^^^^^^^^^
+
+If you need help:
+
+1. Check the :ref:`user_guide` and :ref:`api_reference`
+2. Search `existing issues <https://github.com/SimonBlanke/Hyperactive/issues>`_
+3. Ask on Discord
+4. Open a new issue with the "question" label
+
+Star the Project
+^^^^^^^^^^^^^^^^
+
+If you find Hyperactive useful, please `star it on GitHub <https://github.com/SimonBlanke/Hyperactive>`_!
+Stars help increase visibility and attract more contributors.
diff --git a/docs/source/get_involved/code_of_conduct.rst b/docs/source/get_involved/code_of_conduct.rst
new file mode 100644
index 00000000..ce13223e
--- /dev/null
+++ b/docs/source/get_involved/code_of_conduct.rst
@@ -0,0 +1,114 @@
+.. _code_of_conduct:
+
+===============
+Code of Conduct
+===============
+
+Hyperactive is committed to providing a welcoming and inclusive environment for everyone.
+This Code of Conduct outlines our expectations for participant behavior and the
+consequences for unacceptable behavior.
+
+
+Our Pledge
+----------
+
+We as members, contributors, and leaders pledge to make participation in our
+community a harassment-free experience for everyone, regardless of age, body
+size, visible or invisible disability, ethnicity, sex characteristics, gender
+identity and expression, level of experience, education, socio-economic status,
+nationality, personal appearance, race, religion, or sexual identity and orientation.
+
+We pledge to act and interact in ways that contribute to an open, welcoming,
+diverse, inclusive, and healthy community.
+
+
+Community Guidelines
+--------------------
+
+Expected Behavior
+^^^^^^^^^^^^^^^^^
+
+We expect all community members to:
+
+- **Be respectful**: Treat everyone with respect and consideration.
+- **Be constructive**: Provide helpful feedback and suggestions.
+- **Be inclusive**: Welcome newcomers and help them get started.
+- **Be collaborative**: Work together toward common goals.
+- **Be patient**: Remember that people have different skill levels and backgrounds.
+- **Be professional**: Keep discussions focused on the project.
+
+
+Unacceptable Behavior
+^^^^^^^^^^^^^^^^^^^^^
+
+The following behaviors are not acceptable:
+
+- Harassment, intimidation, or discrimination of any kind
+- Offensive comments related to personal characteristics
+- Sexual language or imagery in any community space
+- Personal attacks or insults
+- Trolling or deliberately inflammatory comments
+- Publishing others' private information without permission
+- Other conduct that could reasonably be considered inappropriate
+
+
+Enforcement
+-----------
+
+Reporting Issues
+^^^^^^^^^^^^^^^^
+
+If you experience or witness unacceptable behavior, please report it by:
+
+- **Email**: Contact Simon Blanke at simon.blanke@yahoo.com
+- **GitHub**: Contact the maintainers directly (note that regular GitHub issues are public)
+
+All complaints will be reviewed and investigated promptly and fairly.
+
+
+Consequences
+^^^^^^^^^^^^
+
+Community leaders will determine appropriate action for violations, which may include:
+
+1. **Correction**: A private, written warning with clarity about the nature of
+ the violation and an explanation of why the behavior was inappropriate.
+
+2. **Warning**: A warning with consequences for continued behavior. No interaction
+ with the people involved for a specified period of time.
+
+3. **Temporary Ban**: A temporary ban from any sort of interaction or public
+ communication with the community for a specified period of time.
+
+4. **Permanent Ban**: A permanent ban from any sort of public interaction within
+ the community.
+
+
+Scope
+-----
+
+This Code of Conduct applies within all community spaces, including:
+
+- GitHub repositories (issues, pull requests, discussions)
+- Discord server
+- Social media interactions
+- In-person events
+
+It also applies when an individual is officially representing the community
+in public spaces.
+
+
+Attribution
+-----------
+
+This Code of Conduct is adapted from the
+`Contributor Covenant <https://www.contributor-covenant.org>`_, version 2.0.
+
+
+Questions
+---------
+
+If you have questions about this Code of Conduct, please contact:
+
+- **Email**: simon.blanke@yahoo.com
+- **GitHub**: `Open a discussion <https://github.com/SimonBlanke/Hyperactive/discussions>`_
diff --git a/docs/source/get_involved/contributing.rst b/docs/source/get_involved/contributing.rst
new file mode 100644
index 00000000..f8282f1c
--- /dev/null
+++ b/docs/source/get_involved/contributing.rst
@@ -0,0 +1,250 @@
+.. _contributing:
+
+============
+Contributing
+============
+
+Thank you for your interest in contributing to Hyperactive! This guide will help
+you get started with development and submit your contributions.
+
+
+How to Contribute
+-----------------
+
+Contribution Workflow
+^^^^^^^^^^^^^^^^^^^^^
+
+1. **Fork the repository** on GitHub
+2. **Clone your fork** locally
+3. **Create a branch** for your changes
+4. **Make your changes** with tests
+5. **Run the test suite** to ensure everything works
+6. **Submit a pull request** for review
+
+
+Types of Contributions
+^^^^^^^^^^^^^^^^^^^^^^
+
+We welcome many types of contributions:
+
+- **Bug fixes**: Fix issues and improve stability
+- **New features**: Add new optimizers, experiments, or integrations
+- **Documentation**: Improve guides, examples, and API docs
+- **Tests**: Increase test coverage
+- **Performance**: Optimize code for speed or memory
+
+
+Development Setup
+-----------------
+
+Prerequisites
+^^^^^^^^^^^^^
+
+- Python 3.10 or higher
+- Git
+- pip
+
+Setting Up Your Environment
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+1. Fork and clone the repository:
+
+ .. code-block:: bash
+
+ git clone https://github.com/YOUR-USERNAME/Hyperactive.git
+ cd Hyperactive
+
+2. Create a virtual environment:
+
+ .. code-block:: bash
+
+ python -m venv venv
+ source venv/bin/activate # On Windows: venv\Scripts\activate
+
+3. Install in development mode with test dependencies:
+
+ .. code-block:: bash
+
+ pip install -e ".[test,docs]"
+
+4. Verify the installation:
+
+ .. code-block:: bash
+
+ python -c "import hyperactive; print(hyperactive.__version__)"
+
+
+Running Tests
+^^^^^^^^^^^^^
+
+Run the test suite to ensure everything works:
+
+.. code-block:: bash
+
+ # Run all tests
+ pytest
+
+ # Run with coverage report
+ pytest --cov=hyperactive
+
+ # Run specific test file
+ pytest tests/test_specific.py
+
+ # Run tests matching a pattern
+ pytest -k "test_hill_climbing"
+
+
+Code Style
+----------
+
+Formatting
+^^^^^^^^^^
+
+Hyperactive uses `Black <https://github.com/psf/black>`_ for code formatting
+and `Ruff <https://github.com/astral-sh/ruff>`_ for linting:
+
+.. code-block:: bash
+
+ # Format code
+ black src/hyperactive tests
+
+ # Check linting
+ ruff check src/hyperactive tests
+
+ # Auto-fix linting issues
+ ruff check --fix src/hyperactive tests
+
+
+Docstrings
+^^^^^^^^^^
+
+Use NumPy-style docstrings for all public functions and classes:
+
+.. code-block:: python
+
+ def my_function(param1, param2):
+ """Short description of the function.
+
+ Longer description if needed.
+
+ Parameters
+ ----------
+ param1 : type
+ Description of param1.
+ param2 : type
+ Description of param2.
+
+ Returns
+ -------
+ type
+ Description of return value.
+
+ Examples
+ --------
+ >>> my_function(1, 2)
+ 3
+ """
+ return param1 + param2
+
+
+Type Hints
+^^^^^^^^^^
+
+Add type hints to function signatures:
+
+.. code-block:: python
+
+ def optimize(
+ self,
+ search_space: dict,
+ n_iter: int,
+ experiment: Callable,
+ ) -> dict:
+ ...
+
+
+Submitting Changes
+------------------
+
+Creating a Pull Request
+^^^^^^^^^^^^^^^^^^^^^^^
+
+1. **Create a branch** for your changes:
+
+ .. code-block:: bash
+
+ git checkout -b feature/my-new-feature
+
+2. **Make your changes** and commit:
+
+ .. code-block:: bash
+
+ git add .
+ git commit -m "Add my new feature"
+
+3. **Push to your fork**:
+
+ .. code-block:: bash
+
+ git push origin feature/my-new-feature
+
+4. **Open a pull request** on GitHub from your branch to the main repository.
+
+
+Pull Request Guidelines
+^^^^^^^^^^^^^^^^^^^^^^^
+
+- **Clear title**: Describe what the PR does
+- **Description**: Explain the changes and motivation
+- **Tests**: Include tests for new functionality
+- **Documentation**: Update docs if needed
+- **Small scope**: Keep PRs focused on one thing
+
+
+Commit Messages
+^^^^^^^^^^^^^^^
+
+Write clear, descriptive commit messages:
+
+.. code-block:: text
+
+ Add Bayesian optimizer warm start support
+
+ - Add warm_start parameter to BayesianOptimizer
+ - Update documentation with usage examples
+ - Add tests for warm start functionality
+
+
+Review Process
+--------------
+
+What to Expect
+^^^^^^^^^^^^^^
+
+1. **Automated checks**: CI will run tests and linting
+2. **Code review**: Maintainers will review your code
+3. **Feedback**: You may be asked to make changes
+4. **Merge**: Once approved, your PR will be merged
+
+
+Response Time
+^^^^^^^^^^^^^
+
+Maintainers are volunteers, so response times may vary. We aim to:
+
+- Acknowledge PRs within a few days
+- Provide initial review within a week
+- Merge approved PRs promptly
+
+
+Getting Help
+------------
+
+If you need help:
+
+- Check existing `issues <https://github.com/SimonBlanke/Hyperactive/issues>`_
+ and `discussions <https://github.com/SimonBlanke/Hyperactive/discussions>`_
+- Ask on Discord
+- Tag @SimonBlanke in your PR for attention
+
+Thank you for contributing to Hyperactive!
diff --git a/docs/source/get_started.rst b/docs/source/get_started.rst
new file mode 100644
index 00000000..1bb1ac1f
--- /dev/null
+++ b/docs/source/get_started.rst
@@ -0,0 +1,111 @@
+.. _get_started:
+
+===========
+Get Started
+===========
+
+This guide will help you get up and running with Hyperactive in just a few minutes.
+By the end, you'll understand the core concepts and be able to run your first optimization.
+
+Quick Start
+-----------
+
+Hyperactive makes hyperparameter optimization simple. Here's a complete example
+that optimizes a custom function:
+
+.. literalinclude:: _snippets/getting_started/quick_start.py
+ :language: python
+ :start-after: # [start:full_example]
+ :end-before: # [end:full_example]
+
+That's it! Let's break down what happened:
+
+1. **Objective function**: A callable that takes a dictionary of parameters and returns a score.
+ Hyperactive **maximizes** this score by default.
+
+2. **Search space**: A dictionary mapping parameter names to their possible values.
+ Use NumPy arrays or lists to define discrete search spaces.
+
+3. **Optimizer**: Choose from 20+ optimization algorithms. Each optimizer explores the
+ search space differently to find optimal parameters.
+
+
+First Steps
+-----------
+
+Optimizing a Scikit-learn Model
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The most common use case is tuning machine learning models. Here's how to optimize
+a Random Forest classifier:
+
+.. literalinclude:: _snippets/getting_started/sklearn_random_forest.py
+ :language: python
+ :start-after: # [start:full_example]
+ :end-before: # [end:full_example]
+
+
+Using the Sklearn Integration
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For even simpler sklearn integration, use the ``OptCV`` wrapper, which behaves
+like scikit-learn's ``GridSearchCV``:
+
+.. literalinclude:: _snippets/getting_started/sklearn_optcv.py
+ :language: python
+ :start-after: # [start:full_example]
+ :end-before: # [end:full_example]
+
+
+Choosing an Optimizer
+^^^^^^^^^^^^^^^^^^^^^
+
+Hyperactive provides many optimization algorithms. Here are some common choices:
+
+.. list-table::
+ :header-rows: 1
+ :widths: 25 75
+
+ * - Optimizer
+ - Best For
+ * - ``HillClimbing``
+ - Quick local optimization, good starting point
+ * - ``RandomSearch``
+ - Exploring large search spaces, baseline comparison
+ * - ``BayesianOptimizer``
+ - Expensive evaluations, smart exploration
+ * - ``ParticleSwarmOptimizer``
+ - Multi-modal problems, avoiding local optima
+ * - ``GeneticAlgorithm``
+ - Complex landscapes, combinatorial problems
+
+Example with Bayesian Optimization:
+
+.. literalinclude:: _snippets/getting_started/bayesian_optimizer.py
+ :language: python
+ :start-after: # [start:full_example]
+ :end-before: # [end:full_example]
+
+.. literalinclude:: _snippets/getting_started/bayesian_optimizer.py
+ :language: python
+ :start-after: # [start:optimizer_usage]
+ :end-before: # [end:optimizer_usage]
+
+
+Next Steps
+----------
+
+Now that you've seen the basics, explore these topics:
+
+- :ref:`installation` - Detailed installation instructions
+- :ref:`user_guide` - In-depth tutorials and concepts
+- :ref:`api_reference` - Complete API documentation
+- :ref:`examples` - More code examples
+
+Key Concepts to Learn
+^^^^^^^^^^^^^^^^^^^^^
+
+1. **Experiments**: Abstractions that define *what* to optimize (see :ref:`user_guide_experiments`)
+2. **Optimizers**: Algorithms that define *how* to optimize (see :ref:`user_guide_optimizers`)
+3. **Search Spaces**: Define the parameter ranges to explore
+4. **Integrations**: Built-in support for sklearn, sktime, and PyTorch (see :ref:`user_guide_integrations`)
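To make the experiment/optimizer split concrete, here is a deliberately simplified sketch — these are illustrative classes, not Hyperactive's real API:

```python
import random

class Experiment:
    """Defines WHAT to optimize: evaluate a params dict into a score."""

    def __init__(self, search_space, score_fn):
        self.search_space = search_space
        self.score_fn = score_fn

    def evaluate(self, params):
        return self.score_fn(params)

class RandomSearchOptimizer:
    """Defines HOW to optimize: a strategy for proposing candidates."""

    def __init__(self, n_iter=50, seed=0):
        self.n_iter = n_iter
        self.rng = random.Random(seed)

    def solve(self, experiment):
        best, best_score = None, float("-inf")
        for _ in range(self.n_iter):
            params = {
                k: self.rng.choice(v)
                for k, v in experiment.search_space.items()
            }
            score = experiment.evaluate(params)
            if score > best_score:
                best, best_score = params, score
        return best, best_score

# Any experiment can be paired with any optimizer.
exp = Experiment({"x": list(range(20))}, lambda p: -abs(p["x"] - 7))
best, score = RandomSearchOptimizer(n_iter=100).solve(exp)
print(best, score)
```

Because the experiment knows nothing about the search strategy, swapping in a different optimizer requires no changes to the experiment itself.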
diff --git a/docs/source/index.rst b/docs/source/index.rst
new file mode 100644
index 00000000..23759cb3
--- /dev/null
+++ b/docs/source/index.rst
@@ -0,0 +1,498 @@
+.. _home:
+
+.. raw:: html
+
+   <div class="hero">
+     <h1>Hyperactive</h1>
+     <p>A unified interface for optimization algorithms and experiments</p>
+     <div class="stats">
+       <div><span>31</span> Algorithms</div>
+       <div><span>3</span> Backends</div>
+       <div><span>5</span> Integrations</div>
+       <div><span>1</span> Unified API</div>
+     </div>
+   </div>
+
+Hyperactive provides a collection of optimization algorithms, accessible through a unified
+experiment-based interface that separates optimization problems from algorithms. The library
+provides native implementations of algorithms from the Gradient-Free-Optimizers package
+alongside direct interfaces to Optuna and scikit-learn optimizers.
+
+.. raw:: html
+
+