diff --git a/index.md b/index.md
index 978cbc9306a723755142e4ab8d51b2454e5b7450..ab1fca4863407ca3318958b7eebfbeb181b3c249 100644
--- a/index.md
+++ b/index.md
@@ -47,7 +47,7 @@ date: December 4th, 2024
 
 # Why, part 2
 
-- Projects like OpenGPT-X, TrustLLM and Laion need a place to run
+- Projects like OpenGPT-X and TrustLLM need a place to run
 - The usual: we want to be ready when the time comes
     - The time is now!
 - TL;DR: BECAUSE WE CAN! 🚀
@@ -80,6 +80,7 @@ date: December 4th, 2024
 
 - Web UI only
 - API usage wasn't recorded until we moved to a new host, devs still migrating
+- Some healthy usage by tools, e.g. the B2DROP assistant: around 400 requests/day (counted as a single IP from B2DROP's server); see the sketch below
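+
+A minimal sketch of how such a tool might call Blablador through its OpenAI-compatible API; the base URL and model alias here are illustrative assumptions, not the B2DROP assistant's actual code:
+
+```python
+from openai import OpenAI
+
+# Assumed endpoint and model alias; check the Blablador docs for the real values
+client = OpenAI(
+    api_key="YOUR_BLABLADOR_TOKEN",
+    base_url="https://api.helmholtz-blablador.fz-juelich.de/v1",
+)
+
+response = client.chat.completions.create(
+    model="alias-fast",  # assumed alias; list available models with client.models.list()
+    messages=[{"role": "user", "content": "Summarize this shared document."}],
+)
+print(response.choices[0].message.content)
+```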
 
 ---
 
@@ -139,23 +140,37 @@ date: December 4th, 2024
 
 # Open Source
 
-- Outside of Academia, the only one 100% open I am aware of is [OLMo](https://blog.allenai.org/hello-olmo-a-truly-open-llm-43f7e7359222) (I might be outdated)
-    - Has training code, weights and data, all open
 - German academia: [OpenGPT-X](https://opengpt-x.de/en/)
+    - Trained in Jülich and Dresden
     - For German businesses and academia
     - Yet unclear if training data will be open
 - EU: [TrustLLM](https://trustllm.eu/)
-    - Less emphasis on English
+    - Trained in Jülich
+    - "Trustworthy" LLM
+    - Says it will be fully open
+- Laion from FZJ (and others) is also open source
+    - Provides datasets and audio, image, video, and text encoder models
+
+---
+
+## Open source
+
+- Outside of Academia, there's [OLMo](https://blog.allenai.org/hello-olmo-a-truly-open-llm-43f7e7359222) from AllenAI
+    - Has training code, weights and data, all open
+- [Intellect-1](https://www.primeintellect.ai/blog/intellect-1-release) was trained collaboratively
+    - Up to 112 H100 GPUs simultaneously
+    - They claim overall compute utilization of 83% across continents and 96% when training only in the USA
     - Fully open
 
 ---
 
 # Non-transformer architectures
 
-- Currently, Jamba 1.5 is the best one
+- Last I checked, Jamba 1.5 was the best one
 - Performs well on benchmarks
 - What about real examples?
 - Some mathematical discussions about it being Turing-complete (probably not)
+- Other examples are Hymba from NVIDIA and Liquid from MIT
 
 ---
 
@@ -193,9 +208,9 @@ date: December 4th, 2024
 
 ---
 
-# User demand is growing, we get more hardware
+# User demand is growing, we need more hardware
 
-- Currently around 1000+ unique users/day on the website
+- Currently 300+ unique users/day on the website
 - API usage is higher, growing and heavier
 
 ---
@@ -303,6 +318,18 @@ date: December 4th, 2024
 
 ---
 
+## Vision for the (near) future
+
+- Blablador as an umbrella for inference
+- Use cases:
+    - LLMs for science
+    - NASA's Prithvi 3 (currently being trained here)
+    - ESA's upcoming model
+    - Health: Radiology with Aachen Uniklinik
+    - ...
+    - With privacy!
+
+---
 
 ## Todo
 
diff --git a/public/index.html b/public/index.html
index 4d059ac398991d92a9a797f28575e2d7c634a4ea..f8fb30aa8d870a475c58ec6e3a101e7e8d4a43e8 100644
--- a/public/index.html
+++ b/public/index.html
 target 🎯💨</li>
 <section id="why-part-2" class="slide level1">
 <h1>Why, part 2</h1>
 <ul>
-<li class="fragment">Projects like OpenGPT-X, TrustLLM and Laion need a
-place to run</li>
+<li class="fragment">Projects like OpenGPT-X, TrustLLM need a place to
+run</li>
 <li class="fragment">The usual: we want to be ready when the time comes
 <ul>
 <li class="fragment">The time is now!</li>
@@ -330,6 +330,8 @@ it, contact me!</em></strong></li>
 <li class="fragment">Web UI only</li>
 <li class="fragment">API usage wasn’t recorded until we moved to a new
 host, devs still migrating</li>
+<li class="fragment">Some healthy usage by tools e.g. B2DROP assistant:
+Around 400 requests/day (count as a single ip from B2DROP’s server)</li>
 </ul>
 </section>
 <section class="slide level1">
@@ -403,22 +405,43 @@ download the weights</li>
 <section id="open-source-1" class="slide level1">
 <h1>Open Source</h1>
 <ul>
-<li class="fragment">Outside of Academia, the only one 100% open I am
-aware of is <a
-href="https://blog.allenai.org/hello-olmo-a-truly-open-llm-43f7e7359222">OLMo</a>
-(I might be outdated)
-<ul>
-<li class="fragment">Has training code, weights and data, all open</li>
-</ul></li>
 <li class="fragment">German academia: <a
 href="https://opengpt-x.de/en/">OpenGPT-X</a>
 <ul>
+<li class="fragment">Trained in JĂĽlich and Dresden</li>
 <li class="fragment">For German businesses and academia</li>
 <li class="fragment">Yet unclear if training data will be open</li>
 </ul></li>
 <li class="fragment">EU: <a href="https://trustllm.eu/">TrustLLM</a>
 <ul>
-<li class="fragment">Less emphasis on English</li>
+<li class="fragment">Trained in JĂĽlich</li>
+<li class="fragment">“Trustworthy” LLM</li>
+<li class="fragment">Says it will be fully open</li>
+</ul></li>
+<li class="fragment">Laion from FZJ (and others) is also open source
+<ul>
+<li class="fragment">Provides datasets, audio, image, video, text
+encoder models</li>
+</ul></li>
+</ul>
+</section>
+<section class="slide level1">
+
+<h2 id="open-source-2">Open source</h2>
+<ul>
+<li class="fragment">Outside of Academia, there’s <a
+href="https://blog.allenai.org/hello-olmo-a-truly-open-llm-43f7e7359222">OLMo</a>
+from AllenAI
+<ul>
+<li class="fragment">Has training code, weights and data, all open</li>
+</ul></li>
+<li class="fragment"><a
+href="https://www.primeintellect.ai/blog/intellect-1-release">Intellect-1</a>
+was trained collaboratively
+<ul>
+<li class="fragment">Up to 112 H100 GPUs simultaneously</li>
+<li class="fragment">They claim overall compute utilization of 83%
+across continents and 96% when training only in the USA</li>
 <li class="fragment">Fully open</li>
 </ul></li>
 </ul>
@@ -426,11 +449,13 @@ href="https://opengpt-x.de/en/">OpenGPT-X</a>
 <section id="non-transformer-architectures" class="slide level1">
 <h1>Non-transformer architectures</h1>
 <ul>
-<li class="fragment">Currently, Jamba 1.5 is the best one</li>
+<li class="fragment">Last I checked, Jamba 1.5 was the best one</li>
 <li class="fragment">Performs well on benchmarks</li>
 <li class="fragment">What about real examples?</li>
 <li class="fragment">Some mathematical discussions about it being
 Turing-complete (probably not)</li>
+<li class="fragment">Other examples are Hymba from NVIDIA, Liquid from
+MIT</li>
 </ul>
 </section>
 <section id="eu-ai-act" class="slide level1">
@@ -480,11 +505,11 @@ make it harder for EU models to compete with US ones</li>
 <figcaption aria-hidden="true">Haicluster</figcaption>
 </figure>
 </section>
-<section id="user-demand-is-growing-we-get-more-hardware"
+<section id="user-demand-is-growing-we-need-more-hardware"
 class="slide level1">
-<h1>User demand is growing, we get more hardware</h1>
+<h1>User demand is growing, we need more hardware</h1>
 <ul>
-<li class="fragment">Currently around 1000+ unique users/day on the
+<li class="fragment">Currently around 300+ unique users/day on the
 website</li>
 <li class="fragment">API usage is higher, growing and heavier</li>
 </ul>
@@ -648,6 +673,23 @@ href="https://github.com/haesleinhuepf/bia-bob/blob/main/README.md">https://gith
 </section>
 <section class="slide level1">
 
+<h2 id="vision-for-the-near-future">Vision for the (near) future</h2>
+<ul>
+<li class="fragment">Blablador as an umbrella for inference</li>
+<li class="fragment">Use cases:
+<ul>
+<li class="fragment">LLMs for science</li>
+<li class="fragment">Nasa’s Prithvi 3 (currently being trained
+here)</li>
+<li class="fragment">ESA’s upcomping model</li>
+<li class="fragment">Health: Radiology with Aachen Uniklinik</li>
+<li class="fragment">…</li>
+<li class="fragment">With privacy!</li>
+</ul></li>
+</ul>
+</section>
+<section class="slide level1">
+
 <h2 id="todo">Todo</h2>
 <ul>
 <li class="fragment">Multi-modal models (text+image, text+audio,