Commit fc619e2c authored by Tim Kreuzer

update documentation

parent 1ee9ce1c
```
openapi: 3.0.3
info:
  title: JupyterHub OutpostSpawner
  description: The REST API for JupyterHub OutpostSpawner
  license:
    name: BSD-3-Clause
servers:
...
```
# OutpostSpawner

The OutpostSpawner in combination with the [JupyterHub Outpost service](https://github.com/kreuzert/jupyterhub/) enables JupyterHub to spawn single-user notebook servers on multiple remote resources.

## Overview
The JupyterHub community has created many useful JupyterHub Spawners.
Other Spawners like [SSHSpawner](https://github.com/NERSC/sshspawner) can spawn single-user servers on remote systems, but are not able to use system-specific features like [KubeSpawner](https://github.com/jupyterhub/kubespawner) or [BatchSpawner](https://github.com/jupyterhub/batchspawner).

The JupyterHub Outpost service in combination with the OutpostSpawner enables a single JupyterHub to offer multiple remote systems of different types.

- Use one JupyterHub to offer single-user servers on multiple systems.
- Each system may use a different JupyterHub Spawner.
The JupyterHub Outpost must fulfill the requirements of the configured Spawner class.

```{eval-rst}
.. toctree::
   :maxdepth: 2
   :caption: General

   architecture
```

```{eval-rst}
.. toctree::
   :maxdepth: 2
   :caption: Usage

   usage/installation
```

```{eval-rst}
.. toctree::
   :maxdepth: 2
   :caption: Spawners

   spawners/outpostspawner
   spawners/eventoutpostspawner
   apiendpoints
```
# Configuration

JupyterHub Outpost configuration is nearly the same as the JupyterHub Spawner configuration.
The easiest way is to write the configuration into the key `outpostConfig` in the `values.yaml` file.
In this example all single-user servers will be started with the same image. The JupyterHub OutpostSpawner can override each `c.<Spawner>.<key>` value using the `custom_misc` feature. For more information, look into the OutpostSpawner configuration.
```
cat <<EOF >> values.yaml
# ... don't forget to keep the values from the installation section
outpostConfig: |
  from kubespawner import KubeSpawner
  c.JupyterHubOutpost.spawner_class = KubeSpawner
  c.KubeSpawner.image = "jupyter/minimal-notebook:notebook-7.0.3"
EOF
helm upgrade --install -f values.yaml outpost jupyterhub-outpost/jupyterhub-outpost
```
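On the JupyterHub side, the `custom_misc` feature mentioned above can supply per-launch overrides. The payload layout below is an assumption for illustration only; check the OutpostSpawner configuration reference for the authoritative format.

```python
# Hypothetical sketch (payload layout assumed, not taken from the
# OutpostSpawner source): send per-launch overrides to the Outpost.
async def custom_misc(spawner):
    # Each key would be applied as c.<Spawner>.<key> on the Outpost side
    return {"image": "jupyter/minimal-notebook:notebook-7.0.3"}

c.OutpostSpawner.custom_misc = custom_misc
```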
## Customize Logging

For the logging configuration, the Outpost offers these options (used the same way as in JupyterHub):
```
c.JupyterHubOutpost.log_level = ...
c.JupyterHubOutpost.log_datafmt = ...
c.JupyterHubOutpost.log_format = ...
```
If more customization is required, this can be done directly in the configuration itself.
```
import logging
import os

class TornadoGeneralLoggingFilter(logging.Filter):
    def filter(self, record):
        # Suppress this log line generated by tornado
        if str(record.msg).startswith("Could not open static file"):
            return False
        return True

logging.getLogger("tornado.general").addFilter(TornadoGeneralLoggingFilter())

logged_logger_name = os.environ.get("LOGGER_NAME", "MyOutpostInstance")
c.JupyterHubOutpost.log_format = f"%(color)s[%(levelname)1.1s %(asctime)s.%(msecs).03d {logged_logger_name} %(name)s %(module)s:%(lineno)d]%(end_color)s %(message)s"
```
## Persistent database

The Outpost uses environment variables to [connect](https://gitlab.jsc.fz-juelich.de/jupyterjsc/k8s/images/jupyterhub-outpost/-/blob/main/project/app/database/__init__.py) to a database. By default the database is a file, which is deleted when the Outpost is restarted. To enable a persistent database, use PostgreSQL. If you want to use a database that is not supported yet, feel free to open an issue and we will add it in the next update.
```
SQL_TYPE = "postgresql"
SQL_DATABASE = "<database_name>"
SQL_HOST = "postgresql.database.svc"
SQL_PASSWORD = "<password>"
SQL_PORT = "5432"
SQL_USER = "<username>"
kubectl create secret generic --from-literal=SQL_TYPE=${SQL_TYPE} --from-literal=SQL_DATABASE=${SQL_DATABASE} --from-literal=SQL_HOST=${SQL_HOST} --from-literal=SQL_PASSWORD=${SQL_PASSWORD} --from-literal=SQL_PORT=${SQL_PORT} --from-literal=SQL_USER=${SQL_USER} outpost-database
cat <<EOF >> values.yaml
# ... don't forget to keep the other values
extraEnvVarsSecrets:
  - outpost-database
EOF
helm upgrade --install -f values.yaml outpost jupyterhub-outpost/jupyterhub-outpost
```
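The Outpost assembles its database connection from these variables. The helper below is a hypothetical illustration of how the `SQL_*` values combine into a SQLAlchemy-style connection URL, not the Outpost's actual code:

```python
def build_db_url(env):
    # Hypothetical helper: combine the SQL_* variables into a
    # SQLAlchemy-style URL (dialect://user:password@host:port/database).
    sql_type = env.get("SQL_TYPE", "sqlite")
    if sql_type == "sqlite":
        # Default case: a plain file database, lost on restart
        return "sqlite:///outpost.db"
    return (
        f"{sql_type}://{env['SQL_USER']}:{env['SQL_PASSWORD']}"
        f"@{env['SQL_HOST']}:{env['SQL_PORT']}/{env['SQL_DATABASE']}"
    )

url = build_db_url({
    "SQL_TYPE": "postgresql",
    "SQL_USER": "outpost",
    "SQL_PASSWORD": "secret",
    "SQL_HOST": "postgresql.database.svc",
    "SQL_PORT": "5432",
    "SQL_DATABASE": "outpost",
})
print(url)
```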
## Sanitize Spawner.start response

The JupyterHub Outpost uses the return value of the start function of the configured Spawner class to tell JupyterHub where the single-user server will be running. For example, the response of `KubeSpawner.start()` will be something like `http://jupyter-<id>-<user_id>:<port>`. The JupyterHub OutpostSpawner takes this information and creates an ssh port-forwarding process with the option `-L 0.0.0.0:<local_jhub_port>:jupyter-<id>-<user_id>:<port>`. Afterwards, JupyterHub will look for the newly created single-user server at `http://localhost:<local_jhub_port>`. If the response of the start function of the configured Spawner class on the JupyterHub Outpost is not correct, OutpostSpawner and Outpost cannot work together properly. To ensure that nearly all Spawners can be used anyway, you can override the response sent to the OutpostSpawner.
```
# This may be a coroutine
def sanitize_start_response(spawner, original_response):
    # ... determine the correct location for the new single-user server
    return "<...>"

c.JupyterHubOutpost.sanitize_start_response = sanitize_start_response
```
If you don't know where your single-user server will be running in the end, return an empty string. The JupyterHub OutpostSpawner then won't create an ssh port-forwarding process. Instead, the start process of the single-user server has to send a POST request to the `$JUPYTERHUB_SETUPTUNNEL_URL` url. Have a look at the API endpoints of the OutpostSpawner for more information.
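For that tunnel-setup case, the single-user server's start process could build the POST request roughly like this. The JSON field names (`service`, `port`) are placeholders; the real payload is defined by the OutpostSpawner API endpoints.

```python
import json
import os
import urllib.request

def build_tunnel_request(service_host, service_port):
    # Placeholder payload -- consult the OutpostSpawner API endpoint
    # documentation for the actual field names.
    url = os.environ["JUPYTERHUB_SETUPTUNNEL_URL"]
    token = os.environ.get("JUPYTERHUB_API_TOKEN", "")
    body = json.dumps({"service": service_host, "port": service_port}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"token {token}",
            "Content-Type": "application/json",
        },
    )

# Example URL for illustration only; JupyterHub injects the real one
# into the single-user server's environment.
os.environ.setdefault("JUPYTERHUB_SETUPTUNNEL_URL", "http://hub:8081/hub/api/...")
req = build_tunnel_request("jupyter-demo-1", 8888)
# The request is only built here; the start process would send it with
# urllib.request.urlopen(req).
```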
# Installation

This section describes a simple Outpost installation for an existing JupyterHub OutpostSpawner.
To install an Outpost for JupyterHub, one public key is required for each connected JupyterHub. After the installation, each JupyterHub must know the defined username / password combination to configure the OutpostSpawner correctly.
## Requirements
- One k8s cluster
- [Helm](https://helm.sh/) CLI
- [kubectl](https://kubernetes.io/de/docs/reference/kubectl/) CLI
- One public key from each connected JupyterHub
## Installation

For the Outpost instance two secrets are required:

- An encryption key for the database. When starting a single-user server, the Outpost encrypts the given data and stores it in a database.
- Usernames / passwords for the authentication of the connected JupyterHubs. Multiple values must be separated by a semicolon.
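As an illustration of the semicolon convention (assuming entries pair up by position, which is how the description above reads; this is not the Outpost's actual parsing code):

```python
# Example values only; in a real deployment these come from the
# outpost-users secret.
usernames = "hub-first;hub-second"
passwords = "pw-first;pw-second"

# Pair each connected JupyterHub's username with its password
credentials = dict(zip(usernames.split(";"), passwords.split(";")))
print(credentials)
```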
```
# Create secret for encryption key
pip install cryptography
SECRET_KEY=$(python3 -c 'from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())')
kubectl create secret generic outpost-cryptkey --from-literal=secret_key=${SECRET_KEY}
# Create secret for usernames / passwords
JUPYTERHUB_PASSWORD=$(uuidgen)
kubectl create secret generic --from-literal=usernames=jupyterhub --from-literal=passwords=${JUPYTERHUB_PASSWORD} outpost-users
```
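The encryption key matters across restarts: data written with one Fernet key can only be read back with the same key, so the secret must be kept stable. A quick demonstration using the same `cryptography` package installed above:

```python
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()
f = Fernet(key)
token = f.encrypt(b"spawner auth state")

# Same key: decryption succeeds
assert f.decrypt(token) == b"spawner auth state"

# Different key: decryption fails, the stored data is unreadable
other = Fernet(Fernet.generate_key())
try:
    other.decrypt(token)
except InvalidToken:
    print("decryption fails with a different key")
```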
Installation with Helm:
```
cat <<EOF >> values.yaml
cryptSecret: outpost-cryptkey
outpostUsers: outpost-users
sshPublicKeys:
- <enter the SSH public key from JupyterHub here>
EOF
helm repo add jupyterhub-outpost https://kreuzert.github.io/jupyterhub-outpost/charts/
helm repo update
helm upgrade --install -f values.yaml outpost jupyterhub-outpost/jupyterhub-outpost
```
## Make Outpost reachable for JupyterHub

JupyterHub will connect to the Outpost on two ports:

- the API endpoint to start/poll/stop the single-user server
- SSH to enable port forwarding

For the first, it's recommended to use an ingress class with encryption. The second can be of type LoadBalancer or NodePort.
If JupyterHub and Outpost are running in the same k8s cluster, ClusterIP for both services should be fine.
```
cat <<EOF >> secure_values.yaml
cryptSecret: outpost-cryptkey
outpostUsers: outpost-users
sshPublicKeys:
  - <enter the SSH public key from JupyterHub here>
servicessh:
  type: LoadBalancer
  loadBalancerIP: <add your floating ip for the ssh connection in here>
ingress:
  enabled: true
  annotations: # used for Let's Encrypt certificate
    acme.cert-manager.io/http01-edit-in-place: "false"
    cert-manager.io/cluster-issuer: letsencrypt-cluster-issuer
  hosts:
    - <your hostname to reach the API Endpoint>
  tls:
    - hosts:
        - <your hostname to reach the API Endpoint>
      secretName: outpost-tls
EOF
helm repo add jupyterhub-outpost https://kreuzert.github.io/jupyterhub-outpost/charts/
helm repo update
helm upgrade --install -f secure_values.yaml outpost jupyterhub-outpost/jupyterhub-outpost
```
# Usage

This section covers an example configuration to use [zero2jupyterhub](https://z2jh.jupyter.org) with the OutpostSpawner.
In this scenario, we will connect the JupyterHub OutpostSpawner with two running JupyterHub Outpost services.
You can find a tutorial on how to install a JupyterHub Outpost service [here](https://jupyterhub-outpost.readthedocs.io/en/latest/usage/installation.html).

```{admonition} Warning
In this example the communication between JupyterHub and the `second` system (a remote Kubernetes cluster hosting the JupyterHub Outpost service) is not encrypted. Do not use this setup in production.
You can use ingress-nginx on the JupyterHub Outpost cluster to enable encryption.
```

## Pre-Requirements

One Kubernetes cluster up and running.
In this example we will use [ingress-nginx](https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx).
## Requirements

To allow JupyterHub to create an ssh port-forwarding process to the Outpost, an ssh keypair is required.

```
ssh-keygen -f jupyterhub-sshkey -t ed25519 -N ''
kubectl -n jupyter create secret generic --type=kubernetes.io/ssh-auth --from-file=ssh-privatekey=jupyterhub-sshkey --from-file=ssh-publickey=jupyterhub-sshkey.pub jupyterhub-outpost-sshkey
```
To authenticate the JupyterHub instance at the JupyterHub Outposts, we will receive a username / password combination from each JupyterHub Outpost administrator.

```
FIRST_OUTPOST_PASSWORD=...  # you should get this from the Outpost administrator
SECOND_OUTPOST_PASSWORD=... # you should get this from the Outpost administrator

## Store both usernames / passwords for JupyterHub
kubectl --namespace jupyter create secret generic --from-literal=AUTH_OUTPOST_FIRST=$(echo -n "jupyterhub:${FIRST_OUTPOST_PASSWORD}" | base64 -w 0) --from-literal=AUTH_OUTPOST_SECOND=$(echo -n "jupyterhub:${SECOND_OUTPOST_PASSWORD}" | base64 -w 0) jupyterhub-outpost-auth
```
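The `base64` call above builds a standard HTTP Basic auth credential; in Python terms (example values only):

```python
import base64

# Basic auth is base64("username:password")
username, password = "jupyterhub", "my-outpost-password"
auth = base64.b64encode(f"{username}:{password}".encode()).decode()
header = {"Authorization": f"Basic {auth}"}
print(header["Authorization"])
```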
## Configuration

With these secrets created, we can now start JupyterHub. In this scenario we're using ingress-nginx and disabling a few things that are not required in this example. Your JupyterHub configuration might look a bit different.

```{admonition} Warning
We're connecting this JupyterHub with two JupyterHub Outposts. One is running on the same cluster as JupyterHub, the second one is running remotely on a different cluster.
Therefore, we're using an internal cluster address for the first Outpost. Furthermore, there's no need to enable ssh port-forwarding for the first cluster, as the JupyterLabs will be directly reachable for JupyterHub.

All JupyterLabs will use the external DNS alias name of the JupyterHub to reach the hub API url (see `c.OutpostSpawner.public_api_url`). You might have to install a hairpin-proxy (e.g. [this one](https://github.com/compumike/hairpin-proxy)) to allow the pods within your cluster to communicate with the public DNS alias name.
```

```
cat <<EOF >> z2jh_values.yaml
hub:
  ...
      - name: jupyterhub-outpost-sshkey
        mountPath: /mnt/ssh_keys
  extraEnv:
    - name: AUTH_OUTPOST_FIRST
      valueFrom:
        secretKeyRef:
          name: jupyterhub-outpost-auth
          key: AUTH_OUTPOST_FIRST
    - name: AUTH_OUTPOST_SECOND
      valueFrom:
        secretKeyRef:
          name: jupyterhub-outpost-auth
          key: AUTH_OUTPOST_SECOND
  extraConfig:
    customConfig: |-
      import outpostspawner
      ...
      c.OutpostSpawner.options_form = """
      <label for=\"system\">Choose a system:</label>
      <select name=\"system\">
        <option value="first">First</option>
        <option value="second">Second</option>
      </select>
      """

      async def request_url(spawner):
          system = spawner.user_options.get("system", ["None"])[0]
          if system == "first":
              ret = "http://outpost.outpost.svc:8080/services"
          elif system == "second":
              ret = "http://${SECOND_OUTPOST_ADDRESS}/services"
          else:
              ret = "System not supported"
          return ret
      c.OutpostSpawner.request_url = request_url

      async def request_headers(spawner):
          system = spawner.user_options.get("system", ["None"])[0]
          auth = os.environ.get(f"AUTH_OUTPOST_{system.upper()}")
          return {
              "Authorization": f"Basic {auth}",
              ...
          }
      c.OutpostSpawner.request_headers = request_headers

      async def ssh_node(spawner):
          system = spawner.user_options.get("system", ["None"])[0]
          if system == "first":
              ret = "outpost.outpost.svc"
          elif system == "second":
              ret = "${REMOTE_OUTPOST_IP_ADDRESS_SSH}"
          else:
              ret = "System not supported"
          return ret
      c.OutpostSpawner.ssh_node = ssh_node

      def ssh_enabled(spawner):
          system = spawner.user_options.get("system", ["None"])[0]
          if system == "first":
              return False
          elif system == "second":
              return True
          else:
              raise Exception("Not supported")
      c.OutpostSpawner.ssh_enabled = ssh_enabled

      c.OutpostSpawner.ssh_key = "/mnt/ssh_keys/ssh-privatekey"
      c.OutpostSpawner.http_timeout = 1200
      c.OutpostSpawner.public_api_url = "https://myjupyterhub.com/hub/api"
      helm_release_name = os.environ.get("HELM_RELEASE_NAME")
      c.OutpostSpawner.pod_name_template = f"{helm_release_name}-{{servername}}-{{userid}}"
ingress:
  annotations:
    acme.cert-manager.io/http01-edit-in-place: "false"
    ...
scheduling:
  ...
EOF
```
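A note on the `[0]` indexing used in `request_url`, `ssh_node` and friends: values submitted through an HTML options form arrive in `spawner.user_options` as lists, one entry per submitted form value, so the first element is taken. A minimal sketch:

```python
# user_options as produced from an HTML form submission
user_options = {"system": ["second"]}

# Take the first submitted value; use a list as default so [0] is safe
system = user_options.get("system", ["None"])[0]
print(system)
```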
## Installation

Install JupyterHub:

```
helm repo add jupyterhub https://hub.jupyter.org/helm-chart/
helm repo update
helm upgrade --cleanup-on-fail --install --namespace jupyter -f z2jh_values.yaml jupyterhub jupyterhub/jupyterhub
```
After a few minutes everything should be up and running. If you have any problems following this example, or want to leave feedback, feel free to open an issue on GitHub.
If you have not already done so, you should now install the connected JupyterHub Outpost services. Have a look at their documentation [here](https://jupyterhub-outpost.readthedocs.io/en/latest/usage/installation.html).