diff --git a/docs/images/architecture_1.png b/docs/images/architecture_1.png
new file mode 100644
index 0000000000000000000000000000000000000000..ac3c9bea3adf4639b284488561b17964b7c45578
Binary files /dev/null and b/docs/images/architecture_1.png differ
diff --git a/docs/images/architecture_2.png b/docs/images/architecture_2.png
new file mode 100644
index 0000000000000000000000000000000000000000..8bbb8898b1b5a7f53400ecaf0b94a755bb7a8432
Binary files /dev/null and b/docs/images/architecture_2.png differ
diff --git a/docs/images/architecture_3.png b/docs/images/architecture_3.png
new file mode 100644
index 0000000000000000000000000000000000000000..a45533f8e44a42ba7a04fbc5b5e6a5d1ee79110c
Binary files /dev/null and b/docs/images/architecture_3.png differ
diff --git a/docs/images/architecture_4.png b/docs/images/architecture_4.png
new file mode 100644
index 0000000000000000000000000000000000000000..8f58cdd6ae2d37eb88f0033792accdd492be6565
Binary files /dev/null and b/docs/images/architecture_4.png differ
diff --git a/docs/images/sharebutton_2.png b/docs/images/sharebutton_2.png
index 07c52b73a2dfdc24433ef13be305e1ee38442bf0..9211081c043f6e575ed7f1caaccfe1da5c548ab3 100644
Binary files a/docs/images/sharebutton_2.png and b/docs/images/sharebutton_2.png differ
diff --git a/docs/providers/architecture.md b/docs/providers/architecture.md
new file mode 100644
index 0000000000000000000000000000000000000000..2e584b9094c12f9bccee76205b721bab96c5efbc
--- /dev/null
+++ b/docs/providers/architecture.md
@@ -0,0 +1,204 @@
+# JupyterHub Outpost Architecture
+
+Understanding the JupyterHub Outpost architecture will help you set up and manage Outpost instances effectively, while ensuring your resources remain secure.  
+The architecture is divided into two main components: **Local Cluster Components** (associated with the Central JupyterHub) and **Remote Cluster Components** (related to the JupyterHub Outpost installed on a separate cluster from the Central JupyterHub).
+
+## Local Cluster Components
+
+The following key components are part of the central JupyterHub.
+> It is recommended to run the central JupyterHub on a Kubernetes cluster. Other setups will work as well, but are not covered in this section.
+
+<h3>1. <strong><a href="https://github.com/kreuzert/jupyterhub-outpostspawner">OutpostSpawner</a></strong></h3>
+
+The OutpostSpawner handles a user's request to start a Jupyter server. Instead of starting the server locally, it communicates with a JupyterHub Outpost via its REST API. It can be used with multiple JupyterHub Outposts, allowing the central JupyterHub to support any number of remote systems. For more information, see the OutpostSpawner [documentation](https://jupyterhub-outpostspawner.readthedocs.io/en/latest/).
+
+<h3>2. <strong><a href="https://kubernetes.io/docs/concepts/services-networking/service/">Kubernetes Service</a></strong> (optional) </h3>
+
+Each Jupyter server of a user receives its own Kubernetes Service. JupyterHub communicates with the remote Jupyter server through this local Kubernetes Service by creating an SSH tunnel to the JupyterHub Outpost.
+
+> If the Jupyter server of a user is reachable from the outside world, e.g. through a proxy on the remote cluster, this service is not required.
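
As an illustration, the per-server Service described above can be sketched as a plain manifest dict. The name, labels, selector, and ports here are illustrative assumptions for this sketch, not the OutpostSpawner's actual values:

```python
def build_user_service_manifest(server_name, local_hub_port, namespace="jupyterhub"):
    """Illustrative sketch of the per-server Kubernetes Service.

    In the default setup the hub pod terminates the SSH tunnels, so the
    Service targets the hub pod rather than the remote notebook server.
    """
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": f"jupyter-{server_name}", "namespace": namespace},
        "spec": {
            # Traffic is routed to the hub pod, which forwards it through
            # the SSH tunnel to the JupyterHub Outpost.
            "selector": {"component": "hub"},
            "ports": [{"port": 8888, "targetPort": local_hub_port}],
        },
    }
```

The Configurable HTTP Proxy can then route user traffic to this Service as it would for any locally running server.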
+
+## Remote Cluster Components
+
+<h3>1. <strong>JupyterHub Outpost</strong></h3>
+The Outpost manages the users' Jupyter servers. It can be configured with any Spawner and offers additional features that keep administrators in charge of their own resources. For more information, check the [installation](installation.md) and [configuration](configuration.md) sections.  
+
+> It is recommended to install the JupyterHub Outpost on a Kubernetes cluster using this [Helm Chart](https://artifacthub.io/packages/helm/jupyter-jsc/jupyterhub-outpost). Other setups like Docker Swarm will work as well, but might require some extra steps.  
+
+
+## Outpost Setup Scenarios
+
+The diagrams below illustrate various setup configurations with the JupyterHub Outpost. You have the flexibility to add as many systems and Outposts to the architecture as needed.
+> Check out the [JupyterHub vanilla architecture](https://jupyterhub.readthedocs.io/en/latest/reference/technical-overview.html#the-major-subsystems-hub-proxy-single-user-notebook-server) for more information about the components shown.
+
+<details>
+  <summary>One Remote System
+  </summary>
+      <p>
+        A central JupyterHub initiates Jupyter servers on a remote Kubernetes cluster. 
+        The JupyterHub Outpost listens on port 
+        <span style="color: #007bff; font-weight: bold;">8080</span> 
+        for incoming requests and on port 
+        <span style="color: #007bff; font-weight: bold;">22</span> 
+        for SSH tunnels, enabling the Jupyter servers (notebooks) to be accessible to the central JupyterHub.
+      </p>
+  <br><br>
+  <div style="display: flex; align-items: flex-start; gap: 20px;">
+    <div style="display: flex; justify-content: center; align-items: center; min-width: 45%; max-width: 45%;">
+      <img src="../../images/architecture_1.png" alt="Architecture Example with one remote system" style="width: 90%;">
+    </div>
+    <div style="min-width: 45%; max-width: 45%;">
+      <h3>1. <strong>Send Request</strong></h3>
+      <p>
+      <details><summary>
+        The OutpostSpawner handles a user’s request to launch a notebook server. 
+      </summary>
+        Rather than starting the server itself, it gathers all the necessary details for initiating a single-user server. These typically include the 
+        <span style="background-color: #f0f0f0; font-weight: bold;">name</span>, 
+        <span style="background-color: #f0f0f0; font-weight: bold;">environment</span>, and 
+        <span style="background-color: #f0f0f0; font-weight: bold;">selected user options</span>. 
+        Additionally, optional data, such as <span style="background-color: #f0f0f0; font-weight: bold;">certificates</span> 
+        or <span style="background-color: #f0f0f0; font-weight: bold;">trust bundles</span> (used for internal SSL), 
+        is sent to the <span style="background-color: #f0f0f0; font-weight: bold;">JupyterHub Outpost</span> when required.
+      </details>
+      </p>
+      <h3>2. <strong>Spawner.start()</strong></h3>
+      <p>
+      <details><summary>
+        The JupyterHub Outpost utilizes the configured 
+        JupyterHub Spawner to launch the single-user server.
+      </summary>
+        This process, typically managed directly by <span style="background-color: #f0f0f0; font-weight: bold;">JupyterHub</span>, 
+        follows the same sequence of functions used during a standard startup, including 
+        <span style="background-color: #f0f0f0; font-weight: bold;">run_pre_spawn_hook</span>, 
+        <span style="background-color: #f0f0f0; font-weight: bold;">move_certs</span>, and 
+        <span style="background-color: #f0f0f0; font-weight: bold;">start</span>. 
+        Any events produced by <span style="background-color: #f0f0f0; font-weight: bold;">_generate_progress()</span> 
+        are relayed back to <span style="background-color: #f0f0f0; font-weight: bold;">JupyterHub</span>, ensuring users receive all 
+        critical updates without interruption.
+      </details>
+      </p>
+      <h3>3. <strong>Send service address</strong></h3>
+      <p>
+      <details><summary>
+        JupyterHub requires the 
+        service address 
+        (typically a combination of IP and 
+        port) to establish 
+        SSH port forwarding.
+      </summary>
+        This forwarding allows users to access 
+        the remote single-user notebook server, even if it is operating within a restricted or isolated environment.
+      </details>
+      </p>
+      <h3>4. <strong>Port forwarding</strong></h3>
+      <p>
+      <details><summary>
+        JupyterHub uses a random available local port (random_port) 
+        to forward traffic for the single-user server to the JupyterHub Outpost. 
+      </summary>
+        It employs 
+        <span style="background-color: #f0f0f0; font-weight: bold;">SSH multiplexing</span> to minimize the number of connections. 
+        In this setup, the JupyterHub Outpost must have access to the notebook server's 
+        <span style="background-color: #f0f0f0; font-weight: bold;">IP address (service_address)</span> 
+        and <span style="background-color: #f0f0f0; font-weight: bold;">port (single-user_port)</span>.
+        <br>
+        Simplified port forward command:
+      <pre style="background-color: #f9f9f9; padding: 10px; border-radius: 5px;">
+        <code>ssh -L 0.0.0.0:[random_port]:[service_address]:[single-user_port] jhuboutpost@[outpost-ip]</code>
+      </pre>
+        It is also possible to define a <span style="background-color: #f0f0f0; font-weight: bold;">customized port forwarding function</span> 
+        (e.g., to delegate port-forwarding to an external pod, see <em>external tunneling</em>). Alternatively, you can 
+        <span style="background-color: #f0f0f0; font-weight: bold;">tunnel directly to the system</span> where the notebook server is running 
+        without routing through a JupyterHub Outpost, as described in <em>delayed tunneling</em>.
+      </details>
+      </p>
+      <h3>5. <strong>Create service</strong></h3>
+      <p>
+      <details><summary>
+        At this step, the JupyterHub OutpostSpawner 
+        will create a Kubernetes Service, enabling the Configurable HTTP Proxy to communicate with the single-user server via this service.
+      </summary>
+        <br>
+        In the default configuration, the <span style="background-color: #f0f0f0; font-weight: bold;">Hub pod</span> is the target of the Kubernetes service, 
+        as it manages the SSH connections. Consequently, all traffic between the client and the single-user server is forwarded through the hub container. 
+        <br>
+        It is also possible to adjust the <span style="background-color: #f0f0f0; font-weight: bold;">Kubernetes service selector</span> 
+        or to define a <span style="background-color: #f0f0f0; font-weight: bold;">customized service creation function</span> 
+        (e.g., to delegate port-forwarding to an external pod).
+      </details>
+      </p>
+    </div>
+  </div>
+</details>
+
+
+
+<details>
+  <summary>Remote + Local System
+  </summary>
+    <p>
+      This architecture mirrors the one described in the previous section, with the key difference being the addition of a 
+      <span style="background-color: #f0f0f0; font-weight: bold;">local JupyterHub Outpost service</span> running on the same 
+      <span style="background-color: #f0f0f0; font-weight: bold;">Kubernetes cluster</span> as <span style="background-color: #f0f0f0; font-weight: bold;">JupyterHub</span>. 
+      It highlights that, in the case of a local Outpost service, there is no need to enable <span style="background-color: #f0f0f0; font-weight: bold;">SSH port-forwarding</span>, as the 
+      <span style="background-color: #f0f0f0; font-weight: bold;">notebook servers</span> will be directly accessible through 
+      <span style="background-color: #f0f0f0; font-weight: bold;">Kubernetes’ internal DNS</span> resolution.
+    </p>
+
+  <br><br>
+  <div style="display: flex; align-items: flex-start; gap: 20px;">
+    <div style="display: flex; justify-content: center; align-items: center;">
+      <img src="../../images/architecture_2.png" alt="Architecture Example with one remote and one local system" style="width: 70%;">
+    </div>
+  </div>
+</details>
+
+
+<details>
+  <summary>External Tunneling
+  </summary>
+  <p>
+    In this scenario, an additional <span style="background-color: #f0f0f0; font-weight: bold;">pod</span> was created to manage the 
+    <span style="background-color: #f0f0f0; font-weight: bold;">port forwarding</span>. This means the management of <span style="background-color: #f0f0f0; font-weight: bold;">SSH tunnels</span> 
+    to <span style="background-color: #f0f0f0; font-weight: bold;">single-user notebook servers</span> is delegated from the <span style="background-color: #f0f0f0; font-weight: bold;">JupyterHub pod</span> 
+    to the external <span style="background-color: #f0f0f0; font-weight: bold;">port forwarding pod</span>.
+  </p>
+  <p>
+    With this setup, <span style="background-color: #f0f0f0; font-weight: bold;">single-user servers</span> remain reachable even if 
+    <span style="background-color: #f0f0f0; font-weight: bold;">JupyterHub</span> itself is offline. Instead of tunneling through the 
+    <span style="background-color: #f0f0f0; font-weight: bold;">Hub pod</span>, traffic between the client and the single-user server 
+    travels through the <span style="background-color: #f0f0f0; font-weight: bold;">port forwarding pod</span>. The <span style="background-color: #f0f0f0; font-weight: bold;">Kubernetes service</span> 
+    for the single-user server is then configured to target the <span style="background-color: #f0f0f0; font-weight: bold;">port forwarding pod</span> 
+    rather than the <span style="background-color: #f0f0f0; font-weight: bold;">Hub pod</span>.
+  </p>
+  <div style="display: flex; align-items: flex-start; gap: 20px;">
+    <div style="display: flex; justify-content: center; align-items: center;">
+      <img src="../../images/architecture_3.png" alt="Architecture Example with external tunneling" style="width: 70%;">
+    </div>
+  </div>
+</details>
+
+<details>
+  <summary>Delayed Tunneling
+  </summary>
+  <p>
+    In this scenario, the location of the <span style="background-color: #f0f0f0; font-weight: bold;">single-user notebook server</span> is not yet known 
+    at the end of the <span style="background-color: #f0f0f0; font-weight: bold;">Spawner.start()</span> call, for example because a batch system 
+    schedules the server at a later time. The Outpost therefore returns an empty service address, and no 
+    <span style="background-color: #f0f0f0; font-weight: bold;">SSH tunnel</span> is created right away.
+  </p>
+  <p>
+    Once the single-user server is actually running, its start process sends a POST request to the 
+    <span style="background-color: #f0f0f0; font-weight: bold;">OutpostSpawner</span> (the <span style="background-color: #f0f0f0; font-weight: bold;">$JUPYTERHUB_SETUPTUNNEL_URL</span> endpoint). 
+    Only then is the <span style="background-color: #f0f0f0; font-weight: bold;">port forwarding</span> established, in this case directly to the system where 
+    the notebook server is running, without routing the tunnel through the <span style="background-color: #f0f0f0; font-weight: bold;">JupyterHub Outpost</span>.
+  </p>
+  <div style="display: flex; align-items: flex-start; gap: 20px;">
+    <div style="display: flex; justify-content: center; align-items: center;">
+      <img src="../../images/architecture_4.png" alt="Architecture Example with delayed tunneling" style="width: 70%;">
+    </div>
+  </div>
+</details>
\ No newline at end of file
diff --git a/docs/providers/configuration.md b/docs/providers/configuration.md
new file mode 100644
index 0000000000000000000000000000000000000000..fb990b9355e3ce3406952c2a8c00fe602be8020c
--- /dev/null
+++ b/docs/providers/configuration.md
@@ -0,0 +1,252 @@
+# Application Configuration
+
+The JupyterHub Outpost uses a configuration file `outpost_config.py` similar to `jupyterhub_config.py` of JupyterHub. The Spawner configuration for the Outpost is therefore similar to the [Spawner configuration in JupyterHub](https://jupyterhub.readthedocs.io/en/stable/reference/api/spawner.html). 
+The easiest way to configure the Outpost is via the `outpostConfig` key in the Helm chart's `values.yaml` file.
+
+## Persistent database
+
+To use a persistent database such as postgresql with JupyterHub Outpost, use `extraEnvVarsSecrets` in your `values.yaml` file. All possible values related to the database connection can be found in the [source code](https://github.com/kreuzert/jupyterhub-outpost/blob/main/project/app/database/__init__.py) itself.
+
+Ensure that you have a database such as [postgres](https://artifacthub.io/packages/helm/bitnami/postgresql) installed and that a JupyterHub Outpost user and database exist.
+
+Example SQL commands for postgresql:
+```sql
+CREATE USER jupyterhuboutpost WITH ENCRYPTED PASSWORD '...';
+CREATE DATABASE jupyterhuboutpost OWNER jupyterhuboutpost;
+```
+
+Create a secret in your Outpost namespace with the required values before installing JupyterHub Outpost:
+
+```yaml
+kind: Secret
+metadata:
+  name: my-db-secret
+...
+stringData:
+  SQL_TYPE: "postgresql"
+  SQL_USER: "jupyterhuboutpost"
+  SQL_PASSWORD: "..."
+  SQL_HOST: "postgres.database.svc"
+  SQL_PORT: "5432"
+  SQL_DATABASE: "jupyterhuboutpost"
+```
+
+And add the database secret to your Outpost values.yaml file:  
+
+```yaml
+...
+extraEnvVarsSecrets:
+  - my-db-secret
+```
+  
+## Simple KubeSpawner
+
+Jupyter4NFDI sends a **profile** to your Spawner. The following small example ignores the profile and always runs the Jupyter server with a predefined setup.
+
+values.yaml file:
+```yaml
+outpostConfig: |
+  from kubespawner import KubeSpawner
+  c.JupyterHubOutpost.spawner_class = KubeSpawner
+
+  c.KubeSpawner.start_timeout = 600
+
+  async def profile_list(spawner):
+      jupyterhub_name = spawner.jupyterhub_name
+      spawner.log.info(f"{spawner._log_name} - Received these user_options from {jupyterhub_name}-JupyterHub: {spawner.user_options}")
+      slug = spawner.user_options.get("profile", "default")
+      default_image = "jupyter/minimal-notebook:notebook-7.0.3"
+      return [
+        {
+            "display_name": slug,
+            "slug": slug,
+            "kubespawner_override": {
+                "image": default_image
+            }
+        }
+      ]
+  
+  c.KubeSpawner.profile_list = profile_list
+```
+
+Update or install JupyterHub Outpost with values.yaml file:
+```bash
+helm upgrade --install -f values.yaml --namespace outpost outpost jupyterhub-outpost/jupyterhub-outpost
+```
+
+In this example we use the [KubeSpawner](https://jupyterhub-kubespawner.readthedocs.io/en/latest/), but you can use any [JupyterHub Spawner](https://jupyterhub.readthedocs.io/en/latest/reference/spawners.html).
+
+
+## Customize Logging
+For the logging configuration, the Outpost offers these options (corresponding to the [logging options](https://jupyterhub.readthedocs.io/en/stable/reference/api/app.html#jupyterhub.app.JupyterHub.log_datefmt) of JupyterHub):
+```python
+c.JupyterHubOutpost.log_level = ...
+c.JupyterHubOutpost.log_datefmt = ...
+c.JupyterHubOutpost.log_format = ...
+```
+
+If more customization is required, one can do this directly in the `outpost_config.py` file itself (possible via the `outpostConfig` key of the helm chart).
+```python
+import logging
+
+# Suppress /ping log lines created by the k8s liveness probe
+uvicorn_access = logging.getLogger("uvicorn.access")
+class UvicornFilter(logging.Filter):
+    def filter(self, record):
+        try:
+            if "/ping" in record.args:
+                return False
+        except Exception:
+            pass
+        return True
+
+uvicorn_access.addFilter(UvicornFilter())
+
+# Suppress missing static files by Tornado Logger
+tornado_general = logging.getLogger("tornado.general")
+class TornadoGeneralLoggingFilter(logging.Filter):
+    def filter(self, record):
+        # Suppress this log line generated by Tornado
+        if str(record.msg).startswith("Could not open static file"):
+            return False
+        return True
+
+tornado_general.addFilter(TornadoGeneralLoggingFilter())
+
+import os
+logged_logger_name = os.environ.get("LOGGER_NAME", "MyOutpostInstance")
+c.JupyterHubOutpost.log_format = f"%(color)s[%(levelname)1.1s %(asctime)s.%(msecs).03d {logged_logger_name} %(name)s %(module)s:%(lineno)d]%(end_color)s %(message)s"
+```
+
+## Sanitize Spawner.start response
+JupyterHub Outpost relies on the `start` function of the configured SpawnerClass to determine where the single-user server will run. For instance, with KubeSpawner, the `KubeSpawner.start()` function might return a URL like `http://jupyter-<id>-<user_id>:<port>`, which Outpost will pass along to JupyterHub.
+
+The OutpostSpawner then uses this information to set up an SSH port-forwarding process with the command `-L 0.0.0.0:<local_jhub_port>:jupyter-<id>-<user_id>:<port>`. Afterward, JupyterHub will access the single-user server at `http://localhost:<local_jhub_port>`.
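
The forwarding command described above can be sketched in Python. The argument names mirror the placeholders in the text; this is a simplified sketch without the multiplexing options the OutpostSpawner adds in practice:

```python
def build_forward_cmd(local_jhub_port, service_address, port,
                      outpost_host, user="jhuboutpost"):
    """Build the simplified SSH port-forwarding command described above."""
    return [
        "ssh", "-N",  # -N: forward ports only, run no remote command
        "-L", f"0.0.0.0:{local_jhub_port}:{service_address}:{port}",
        f"{user}@{outpost_host}",
    ]
```

After this process is running, JupyterHub reaches the single-user server at `http://localhost:<local_jhub_port>`.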
+
+If the response from the `start` function isn't correct, the OutpostSpawner and Outpost won't work together properly. However, to support most Spawners, you can customize the response that is sent to the OutpostSpawner.
+
+
+```python
+# In the `outpostConfig` key of your helm values.yaml file or your outpost_config.py file:
+
+# This may be a coroutine
+def sanitize_start_response(spawner, original_response):
+  # ... determine the correct location for the new single-user server
+  return "<...>"
+
+c.JupyterHubOutpost.sanitize_start_response = sanitize_start_response
+```
+
+> If you don't know where your single-user server will be running at the end of the `start` function, return an empty string. In that case, the JupyterHub OutpostSpawner won't create an SSH port-forwarding process. Instead, the start process of the single-user server has to send a POST request to the `$JUPYTERHUB_SETUPTUNNEL_URL` URL. Have a look at the [API endpoints of the OutpostSpawner](https://jupyterhub-outpostspawner.readthedocs.io/en/latest/apiendpoints.html) for more information.
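
As a concrete, hypothetical example, a sanitize function could turn a bare in-cluster hostname into a fully qualified DNS name that the Outpost's SSH daemon can resolve. The `outpost` namespace and the `svc.cluster.local` domain are assumptions for this sketch:

```python
from urllib.parse import urlparse

def sanitize_start_response(spawner, original_response):
    """Hypothetical sketch: rewrite 'http://jupyter-abc:8888' into a fully
    qualified in-cluster address. The 'outpost' namespace and the
    'svc.cluster.local' cluster domain are assumptions for this example."""
    parsed = urlparse(original_response)
    port = parsed.port or 8888  # assume the default notebook port if missing
    return f"http://{parsed.hostname}.outpost.svc.cluster.local:{port}"

# c.JupyterHubOutpost.sanitize_start_response = sanitize_start_response
```

Adjust the namespace and cluster domain to wherever your single-user servers actually run.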
+
+
+## Disable configuration overwrite
+By default, Jupyter4NFDI can overwrite the JupyterHub Outpost configuration via the OutpostSpawner's `custom_misc` option. As an administrator of the JupyterHub Outpost service, you can prevent this.  
+
+```python
+# In the `outpostConfig` key of your helm values.yaml file or your outpost_config.py file:
+
+async def allow_override(jupyterhub_name, misc):
+    if jupyterhub_name == "trustedhub":
+        return True
+    if list(misc.keys()) != ["image"]:
+        return False
+    return misc.get("image", "None") in ["allowed_image1", "allowed_image2"]
+
+c.JupyterHubOutpost.allow_override = allow_override
+```
+
+The above example leads to the following behaviour:  
+ - JupyterHub with credential username "trustedhub" can overwrite anything. 
+ - If a JupyterHub (other than trustedhub) tries to overwrite anything except the `image` key, it will not be allowed.
+ - The given image must be `allowed_image1` or `allowed_image2`.
+
+> If `custom_misc` in the POST request is empty, the `allow_override` function will not be called. If `allow_override` returns False, the JupyterLab will not be started. An error message will be returned to the JupyterHub OutpostSpawner and shown to the user.
+
+## Recreate ssh tunnels at startup
+If your JupyterHub Outpost is used as an SSH node by the JupyterHub OutpostSpawner, all port-forwarding processes have to be recreated whenever the JupyterHub Outpost service is restarted. During a restart, existing SSH port-forwarding processes fail after a few seconds, and the users' single-user servers would become unreachable.  
+
+By default, tunnels are recreated when the JupyterHub Outpost restarts. You can disable this behaviour with the `ssh_recreate_at_start` key.  
+
+```python
+# In the `outpostConfig` key of your helm values.yaml file or your outpost_config.py file:
+
+async def restart_tunnels(wrapper, jupyterhub_credential):
+    if jupyterhub_credential == "local_jupyterhub":
+        return False
+    return True
+
+c.JupyterHubOutpost.ssh_recreate_at_start = restart_tunnels
+# c.JupyterHubOutpost.ssh_recreate_at_start = False
+```
+
+
+> JupyterHub Outpost will use the stored JupyterHub API token to recreate the port-forwarding process. If the API token is no longer valid, this will fail. The single-user server would then be unreachable and must be restarted by the user.
+
+
+## Flavors - manage resource access for multiple JupyterHubs
+By default, all connected JupyterHubs may use all available resources. It's possible to configure "flavors" for each connected JupyterHub, offering only a part of the available resources.
+  
+For this configuration, three attributes are crucial:
+  - `flavors`
+  - `flavors_undefined_max`
+  - `flavors_update_token`
+
+### Flavors
+Configure different flavors, which can be used in Spawner configuration. 
+```python
+async def flavors(jupyterhub_name):
+    if jupyterhub_name == "privileged":
+        return {
+            "typea": {
+                "max": -1,
+                "weight": 10,
+                "display_name": "2GB RAM, 1VCPU, 5 days",
+                "description": "JupyterLab will run for max 5 days with 2GB RAM and 1VCPU.",
+                "runtime": {"days": 5},
+            },
+        }
+    else:
+        return {
+            "typeb": {
+                "max": 10,
+                "weight": 9,
+                "display_name": "4GB RAM, 1 VCPU, 2 hours",
+                "description": "JupyterLab will run for max 2 hours with 4GB RAM and 1 VCPU.",
+                "runtime": {"hours": 2},
+            },
+        }
+c.JupyterHubOutpost.flavors = flavors
+```
+
+The connected JupyterHub "privileged" can start an unlimited number of Jupyter servers. Each server will be stopped by the JupyterHub Outpost after 5 days.  
+Any other connected JupyterHub can start up to 10 Jupyter servers (counting all users together per JupyterHub, not combined across all JupyterHubs).  
+The corresponding RAM / VCPU restrictions are configured later in the config file at `c.KubeSpawner.profile_list` or `c.KubeSpawner.[mem_guarantee|mem_limit|cpu_guarantee|cpu_limit]`.  
+The JupyterHub OutpostSpawner has to send the chosen flavor in `user_options.flavor` when starting a notebook server.
+
+### User specific flavors
+coming soon
+
+### Undefined Max
+If the JupyterHub OutpostSpawner does not send a flavor in `user_options`, `c.JupyterHubOutpost.flavors_undefined_max` will be used to limit the available resources. This value is also used if the given flavor is not part of the previously defined `flavors` dict. The default is `-1`, which allows an unlimited number of notebook servers for all unknown or unconfigured flavors.
+
+```python
+c.JupyterHubOutpost.flavors_undefined_max = 0
+```
+
+This example does not allow any Jupyter server with a flavor that is not defined in `c.JupyterHubOutpost.flavors`, enabling full control over the available resources.
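
The limit semantics of `max` and `flavors_undefined_max` can be summarized in a small sketch. This is a simplification under stated assumptions; the real Outpost tracks running servers per connected JupyterHub:

```python
def may_start(flavors, requested_flavor, running_count, undefined_max=-1):
    """Simplified sketch of the admission check described above."""
    conf = flavors.get(requested_flavor)
    # Unknown or unconfigured flavors fall back to flavors_undefined_max.
    limit = conf["max"] if conf else undefined_max
    # -1 means unlimited; otherwise compare against servers already running.
    return limit == -1 or running_count < limit
```

With `undefined_max=0`, as in the example above, any request for an undefined flavor is rejected.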
+
+### Update Tokens
+The JupyterHub OutpostSpawner offers an API endpoint that receives the current flavors of all connected JupyterHub Outposts. Using this mechanism, the Outpost informs the connected JupyterHubs about the current flavor situation at each start/stop of a notebook server. The corresponding URL is provided by the OutpostSpawner.
+
+```python
+import os
+async def flavors_update_token(jupyterhub_name):
+    token = os.environ.get(f"FLAVOR_{jupyterhub_name.upper()}_AUTH_TOKEN", "")
+    if not token:
+        raise Exception(f"Flavor auth token for {jupyterhub_name} not configured.")
+    return token
+c.JupyterHubOutpost.flavors_update_token = flavors_update_token
+```
+  
+In case of an exception, the update is not sent to JupyterHub. This does not interfere with the start of the notebook server.  
+Each connected JupyterHub must provide a service token with scope `custom:outpostflavors:set` to the Outpost administrator.
diff --git a/docs/providers/index.md b/docs/providers/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..928a8318522618ae684d77dbd4c56b6156924f5c
--- /dev/null
+++ b/docs/providers/index.md
@@ -0,0 +1,26 @@
+# JupyterHub Outpost for Jupyter4NFDI
+
+Welcome to the JupyterHub Outpost installation guide for external resource providers! This guide will help you set up a JupyterHub Outpost on your resources, allowing users within the Jupyter4NFDI community to benefit from access to diverse computational environments while you maintain control over your resources.
+
+## Why Join Jupyter4NFDI?
+
+Jupyter4NFDI brings together resources from multiple providers to create a rich and collaborative environment, promoting knowledge sharing and enabling cutting-edge research. Here’s why you should consider contributing:
+
+- **Security and Control**: You control who may access your resources. Permissions and access rights remain fully in your hands.
+- **Visibility**: By connecting your resources to Jupyter4NFDI, you increase their visibility and attract a wider range of users within the scientific community.
+- **Collaborative Community**: Join a network of peers contributing to NFDI's vision of federated, accessible research infrastructure.
+- **Complementary Service**: JupyterHub Outpost doesn't replace or compromise any existing JupyterHub instances you may be running. Instead, it provides an additional access point specifically configured for NFDI needs.
+
+JupyterHub Outpost is a powerful, flexible solution that works in tandem with your existing systems. The following sections will guide you through its architecture, installation, and configuration.
+
+## Features
+
+- Use a central JupyterHub to offer Jupyter servers on multiple systems of potentially different types.
+- User-specific flavors allow administrators to configure resource limits for each user.
+- Each (remote) system may use a different JupyterHub Spawner.
+- Forward spawn events gathered by the remote Spawner to the user.
+- Users may override the configuration of the remote Spawner at runtime (e.g. to select a different Docker image), if allowed by the JupyterHub Outpost administrators.
+- Integrated SSH port-forwarding solution to reach otherwise isolated remote Jupyter servers.
+- Supports the JupyterHub internal_ssl feature.
+- One JupyterHub Outpost can be connected to multiple JupyterHubs without the Hubs interfering with each other.
+- Configuration of JupyterHub Outpost similar to the JupyterHub configuration.
diff --git a/docs/providers/installation.md b/docs/providers/installation.md
new file mode 100644
index 0000000000000000000000000000000000000000..30d852043703eb775095511df070fe6247ab47f3
--- /dev/null
+++ b/docs/providers/installation.md
@@ -0,0 +1,118 @@
+# Installation 
+
+
+## Kubernetes
+This section covers an example installation of the [JupyterHub Outpost service](https://artifacthub.io/packages/helm/jupyter-jsc/jupyterhub-outpost) via helm. 
+
+### Requirements
+
+A running Kubernetes cluster.
+
+### Preparations
+
+We assume that the Outpost service will run in the `outpost` namespace. To authenticate the JupyterHub instance, we have to create a Kubernetes secret with username+password. 
+
+```bash
+OUTPOST_PASSWORD=$(uuidgen)
+
+kubectl -n outpost create secret generic --from-literal=usernames=jupyterhub --from-literal=passwords=${OUTPOST_PASSWORD} outpost-users
+```
+
+> If you want to connect multiple JupyterHubs to one JupyterHub Outpost, create a secret with semicolon-separated usernames and passwords (quote the values so the shell does not interpret the semicolons):  
+> `kubectl -n outpost create secret generic --from-literal=usernames="one;two;three" --from-literal=passwords="pw1;pw2;pw3" outpost-users`
+
+An encryption key is necessary to ensure that the data in the database is encrypted.
+
+```bash
+SECRET_KEY=$(python3 -c 'from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())')
+
+kubectl -n outpost create secret generic outpost-cryptkey --from-literal=secret_key=${SECRET_KEY}
+```
+
+### Configuration
+
+You have to ask the administrators of all JupyterHubs you want to connect for their SSH public keys. In this scenario, we're using NodePort as the service type. JupyterHub must be able to reach the JupyterHub Outpost service on ports `30080` (access to the Outpost API) and `30022` (access to the SSH daemon for port-forwarding). You can configure the ports to your liking, or choose a different service type.
+
+> In this scenario, the communication between JupyterHub and JupyterHub Outpost will not be encrypted. Do not use this in production. You'll find an example with encryption below.
+
+Helm values:
+```bash
+cat <<EOF > outpost_values.yaml
+# Name of database encryption key secret
+cryptSecret: outpost-cryptkey
+# Name of JupyterHub username+password secret
+outpostUsers: outpost-users
+# ssh-publickey of JupyterHub(s) to connect
+sshPublicKeys:
+  - restrict,port-forwarding,command="/bin/echo No commands allowed" $(cat jupyterhub-sshkey.pub)
+# Kubernetes service for the Outpost API
+service:
+  type: NodePort
+  ports:
+    nodePort: 30080
+# Kubernetes service for port-forwarding
+servicessh:
+  type: NodePort
+  ports:
+    nodePort: 30022
+EOF
+```
+
+> Check out the available options for SSH public keys [here](https://manpages.debian.org/experimental/openssh-server/authorized_keys.5.en.html#AUTHORIZED_KEYS_FILE_FORMAT). At least port-forwarding must be allowed.
+
+### Installation
+
+```bash
+# Add JupyterHub Outpost chart repository
+helm repo add jupyter-jsc https://kaas.pages.jsc.fz-juelich.de/helm-charts/
+helm repo update
+# Install the JupyterHub Outpost chart in the `outpost` namespace
+helm upgrade --install --create-namespace --version 1.0.6 --namespace outpost -f outpost_values.yaml outpost jupyter-jsc/jupyterhub-outpost
+```
+
+Ensure that everything is up and running. Double-check that ports 30080 and 30022 are reachable from JupyterHub.  
+Contact the [Jupyter4NFDI administrators](../support.md) to let them know how to reach your JupyterHub Outpost.
+
+
+## Encryption via ingress
+
+When running JupyterHub Outpost in production, you should encrypt the communication between JupyterHub and the Outpost. An easy way is to use an ingress controller with a certificate.
+For this example, we've installed [cert-manager with a Let's Encrypt issuer](https://artifacthub.io/packages/helm/cert-manager/cert-manager) and [ingress-nginx](https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx). If you already have a certificate, you will only need ingress-nginx.
+
+This example is an addition to the examples above.
+
+### Configuration
+
+```bash
+FLOATING_IP_SSH=<EXTERNAL_IP_FOR_SSH_ACCESS>
+cat <<EOF > outpost_remote_values_ingress.yaml
+# Name of database encryption key secret
+cryptSecret: outpost-cryptkey
+# Name of JupyterHub username+password secret
+outpostUsers: outpost-users
+# ssh-publickey of JupyterHub(s) to connect
+sshPublicKeys:
+  - restrict,port-forwarding,command="/bin/echo No commands allowed" $(cat jupyterhub-sshkey.pub)
+# Kubernetes service for port-forwarding
+servicessh:
+  type: LoadBalancer
+  loadBalancerIP: ${FLOATING_IP_SSH}
+# Use ingress with TLS instead of a Kubernetes service for the Outpost API
+ingress:
+  enabled: true
+  # Annotations for using LetsEncrypt as a certificate issuer
+  annotations:
+    acme.cert-manager.io/http01-edit-in-place: "true"
+    cert-manager.io/cluster-issuer: letsencrypt-cluster-issuer
+  hosts:
+  - myremoteoutpost.com
+  tls:
+  - hosts:
+    - myremoteoutpost.com
+    # If using LetsEncrypt, the secret will be created automatically. Otherwise, please ensure the secret exists.
+    secretName: outpost-tls-certmanager
+EOF
+```
+
+JupyterHub will now be able to reach the JupyterHub Outpost API at `https://myremoteoutpost.com/services` and the ssh daemon for port-forwarding at `${FLOATING_IP_SSH}` on port 22.
+Contact the [Jupyter4NFDI administrators](../support.md) to inform them about the new addresses.
\ No newline at end of file
diff --git a/docs/providers/installation_including_local_unused.md b/docs/providers/installation_including_local_unused.md
new file mode 100644
index 0000000000000000000000000000000000000000..646a6f40aca1069ee69a3d6a0da66d043a1e4f6d
--- /dev/null
+++ b/docs/providers/installation_including_local_unused.md
@@ -0,0 +1,187 @@
+# Installation 
+
+This section covers example configurations and instructions to install the [JupyterHub Outpost service](https://artifacthub.io/packages/helm/jupyter-jsc/jupyterhub-outpost) via Helm.
+
+## Local installation
+
+<details><summary>
+This chapter shows a simple installation of the JupyterHub Outpost service on the same Kubernetes cluster as JupyterHub.  
+</summary>
+If you don't want to connect external JupyterHubs (meaning JupyterHubs running on a different Kubernetes cluster than your Outpost service) to your JupyterHub Outpost, you won't need ssh port-forwarding between JupyterHub and the Outpost service. The Kubernetes internal DNS can resolve the single-user notebook servers.
+
+
+<h3>Requirements</h3>
+
+A running Kubernetes cluster with at least one JupyterHub installation (we recommend [Zero2JupyterHub](https://z2jh.jupyter.org/en/stable/)).
+
+<h3>Preparations</h3>
+
+We assume that the Outpost service will run in the `outpost` namespace. To authenticate the JupyterHub instance, we have to create a Kubernetes secret in that namespace containing a username and password.
+
+```bash
+OUTPOST_PASSWORD=$(uuidgen)
+
+kubectl -n outpost create secret generic --from-literal=usernames=jupyterhub --from-literal=passwords=${OUTPOST_PASSWORD} outpost-users
+```
+
+
+> If you want to connect multiple JupyterHubs to one JupyterHub Outpost, create a secret with semicolon-separated usernames and passwords, quoted so the shell does not interpret the semicolons: `kubectl -n outpost create secret generic --from-literal=usernames="one;two;three" --from-literal=passwords="pw1;pw2;pw3" outpost-users`
+
+An encryption key is also required, so data in the database can be encrypted.
+
+```bash
+SECRET_KEY=$(python3 -c 'from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())')
+
+kubectl -n outpost create secret generic outpost-cryptkey --from-literal=secret_key=${SECRET_KEY}
+```
+
+<h3>Configuration</h3>
+Helm values:
+
+```bash
+cat <<EOF > outpost_values.yaml
+# Name of database encryption key secret
+cryptSecret: outpost-cryptkey
+# Name of JupyterHub username+password secret
+outpostUsers: outpost-users
+EOF
+```
+
+<h3>Installation</h3>
+
+```bash
+# Add JupyterHub Outpost chart repository
+helm repo add jupyter-jsc https://kaas.pages.jsc.fz-juelich.de/helm-charts/
+helm repo update
+# Install the JupyterHub Outpost chart in the `outpost` namespace
+helm upgrade --install --create-namespace --version <version> --namespace outpost -f outpost_values.yaml outpost jupyter-jsc/jupyterhub-outpost
+```
+
+Afterwards, the administrator of each connected JupyterHub has to [update the JupyterHub OutpostSpawner configuration](https://jupyterhub-outpostspawner.readthedocs.io/en/latest/usage/installation.html) with the correct IP address + credentials for this JupyterHub Outpost service.  
+
+</details>
+
+## Remote installation
+
+<details><summary>
+This chapter shows a simple installation of the JupyterHub Outpost service on a different Kubernetes cluster than the JupyterHub.  
+</summary>
+
+<h3>Requirements</h3>
+
+Two Kubernetes clusters up and running.  
+One with at least one JupyterHub installation (we recommend [Zero2JupyterHub](https://z2jh.jupyter.org/en/stable/)); the other is used to install the JupyterHub Outpost service.
+
+<h3>Preparations</h3>
+
+We assume that the Outpost service will run in the `outpost` namespace. To authenticate the JupyterHub instance, we have to create a Kubernetes secret in that namespace with username+password. 
+
+```bash
+OUTPOST_PASSWORD=$(uuidgen)
+
+kubectl -n outpost create secret generic --from-literal=usernames=jupyterhub --from-literal=passwords=${OUTPOST_PASSWORD} outpost-users
+```
+
+> If you want to connect multiple JupyterHubs to one JupyterHub Outpost, create a secret with semicolon-separated usernames and passwords. Quote the values so the shell does not interpret the semicolons:  
+> `kubectl -n outpost create secret generic --from-literal=usernames="one;two;three" --from-literal=passwords="pw1;pw2;pw3" outpost-users`
+
+An encryption key is also required, so data in the database can be encrypted.
+
+```bash
+SECRET_KEY=$(python3 -c 'from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())')
+
+kubectl -n outpost create secret generic outpost-cryptkey --from-literal=secret_key=${SECRET_KEY}
+```
+
+<h3>Configuration</h3>
+
+Ask the administrator of each JupyterHub you want to connect for their SSH public key. In this scenario, we use NodePort as the service type. JupyterHub must be able to reach the JupyterHub Outpost service on port `30080` (Outpost API) and port `30022` (SSH daemon for port-forwarding).
+
+```{admonition} Warning
+In this scenario, the communication between JupyterHub and JupyterHub Outpost will not be encrypted. Do not use this in production. You'll find an example with encryption below.
+```
+
+Helm values:
+```bash
+cat <<EOF > outpost_values.yaml
+# Name of database encryption key secret
+cryptSecret: outpost-cryptkey
+# Name of JupyterHub username+password secret
+outpostUsers: outpost-users
+# ssh-publickey of JupyterHub(s) to connect
+sshPublicKeys:
+  - restrict,port-forwarding,command="/bin/echo No commands allowed" $(cat jupyterhub-sshkey.pub)
+# Kubernetes service for the Outpost API
+service:
+  type: NodePort
+  ports:
+    nodePort: 30080
+# Kubernetes service for port-forwarding
+servicessh:
+  type: NodePort
+  ports:
+    nodePort: 30022
+EOF
+```
+
+```{admonition} Note 
+You can use the same [options](https://manpages.debian.org/experimental/openssh-server/authorized_keys.5.en.html#AUTHORIZED_KEYS_FILE_FORMAT) for each public key as in ~/.ssh/authorized_keys. At least port-forwarding must be allowed.
+```
+
+<h3>Installation</h3>
+
+```bash
+# Add JupyterHub Outpost chart repository
+helm repo add jupyter-jsc https://kaas.pages.jsc.fz-juelich.de/helm-charts/
+helm repo update
+# Install the JupyterHub Outpost chart in the `outpost` namespace
+helm upgrade --install --create-namespace --version <version> --namespace outpost -f outpost_values.yaml outpost jupyter-jsc/jupyterhub-outpost
+```
+
+Ensure that everything is running. Double check that the ports 30080 and 30022 are reachable from JupyterHub.  
+Afterwards, you have to [update the JupyterHub OutpostSpawner configuration](https://jupyterhub-outpostspawner.readthedocs.io/en/latest/usage/installation.html) with the correct IP address + credentials for this JupyterHub Outpost service.  
+
+</details>
+
+## Encryption via ingress
+
+When running JupyterHub Outpost in production, you should encrypt the communication between JupyterHub and the Outpost. An easy way is to use an ingress controller with a certificate.
+For this example, we've installed [cert-manager, hairpin-proxy, and a Let's Encrypt issuer](https://gitlab.jsc.fz-juelich.de/kaas/fleet-deployments/-/tree/cert-manager) and [ingress-nginx](https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx). If you already have a certificate, you will only need ingress-nginx.
+
+This example is an addition to the examples above.
+
+<h3>Configuration</h3>
+
+```bash
+FLOATING_IP_SSH=<EXTERNAL_IP_FOR_SSH_ACCESS>
+cat <<EOF > outpost_remote_values_ingress.yaml
+# Name of database encryption key secret
+cryptSecret: outpost-cryptkey
+# Name of JupyterHub username+password secret
+outpostUsers: outpost-users
+# ssh-publickey of JupyterHub(s) to connect
+sshPublicKeys:
+  - restrict,port-forwarding,command="/bin/echo No commands allowed" $(cat jupyterhub-sshkey.pub)
+# Kubernetes service for port-forwarding
+servicessh:
+  type: LoadBalancer
+  loadBalancerIP: ${FLOATING_IP_SSH}
+# Use ingress with TLS instead of a Kubernetes service for the Outpost API
+ingress:
+  enabled: true
+  # Annotations for using LetsEncrypt as a certificate issuer
+  annotations:
+    acme.cert-manager.io/http01-edit-in-place: "false"
+    cert-manager.io/cluster-issuer: letsencrypt-cluster-issuer
+  hosts:
+  - myremoteoutpost.com
+  tls:
+  - hosts:
+    - myremoteoutpost.com
+    # If using LetsEncrypt, the secret will be created automatically. Otherwise, please ensure the secret exists.
+    secretName: outpost-tls-certmanager
+EOF
+```
+
+JupyterHub will now be able to reach the JupyterHub Outpost API at `https://myremoteoutpost.com/services` and the ssh daemon for port-forwarding at `${FLOATING_IP_SSH}` on port 22.
+You have to send each connected JupyterHub its credentials (defined in `outpost-users`), the `servicessh` loadBalancerIP address and the URL of your outpost service.
diff --git a/mkdocs.yml b/mkdocs.yml
index 75a84b67180114a15139a44a47b4f72b0d58c217..aa4e31153f59d99fa148fc385adedfb676e29917 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -62,4 +62,9 @@ nav:
     - Custom Docker Images: users/jupyterlab/customdockerimage/index.md
     - Repo2Docker ( Binder ): users/jupyterlab/repo2docker/index.md
     - Useful Tips & Tricks: users/misc.md
+  - For Resource Providers:
+    - Overview: providers/index.md
+    - Architecture: providers/architecture.md
+    - Installation: providers/installation.md
+    - Configuration: providers/configuration.md
   - Support: support.md