diff --git a/index.md b/index.md
index 8954558c455eb517c1e03b16ec5d3f80ae2874e9..e8950a8dd3e0de57856a215730e2644365203644 100644
--- a/index.md
+++ b/index.md
@@ -11,16 +11,89 @@ date: December 4th, 2024
 
 ---
 
-# The LLM ecosystem
+# Blablador
+
+- /ˈblæblæˌdɔɹ/
+- Bla-bla-bla 🗣️ + Labrador 🐕‍🦺
+- A stage for deploying and testing large language models
+- Models change constantly (rankings keep improving; some are good, some awful)
+- A mix of small, fast models and large, slower ones - the lineup changes constantly to keep up with the state of the art
+- It comprises a web server, an API server, a model runner, and training code.
+
+---
+
+# Why?
+
+- AI is becoming basic infrastructure
+- Which historically is Open Source
+- We (as in we scientists) train a lot, deploy little: _Here is your code/weights, tschüssi!_
+- Little experience with dealing with LLMs
+- From the tools point of view, this is a FAST moving target 🎯💨
+- Acquire local experience in issues like
+    - data loading,
+    - quantization, 
+    - distribution,
+    - fine-tuning LLMs for specific tasks,
+    - inference speed,
+    - deployment
+
+---
+
+# Why, part 2
+
+- Projects like OpenGPT-X, TrustLLM and LAION need a place to run
+- The usual: we want to be ready when the time comes
+    - The time is now!
+- TL;DR: BECAUSE WE CAN! 🚀
+
+---
+
+## Some facts
+
+- No data collection at all. I don't keep ***ANY*** data whatsoever!
+    - You can use it AND keep your data private
+    - No records? Privacy (and GDPR is happy)
+
+---
+
+## Deployment as a service
+
+- Scientists from FZJ can deploy their models on their _own_ hardware and point to blablador
+- This solves a bunch of headaches for researchers:
+    - Authentication
+    - Web server
+    - Firewall
+    - Availability
+    - Etc.
+- ***If you have a model and want to deploy it, contact me!***
+
+---
+
+## OpenAI-compatible API
+
+- Users can use the unmodified openai-python client from OpenAI itself
+- Any service that can talk to OpenAI's API can talk to Blablador's API (Continue.dev in VS Code, etc.)
+- The API is not yet rate-limited, logged, monitored, documented, or well-tested.
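Since Blablador speaks the OpenAI wire protocol, the request shape can be sketched with nothing but the Python standard library. The base URL, token, and model name below are placeholders (assumptions for illustration), not Blablador's real values:

```python
import json
import urllib.request

BASE_URL = "https://blablador.example.org/v1"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                       # placeholder token

# The standard OpenAI chat-completions payload; any OpenAI-compatible
# client (openai-python, Continue.dev, ...) sends this same JSON shape.
payload = {
    "model": "any-available-model",            # placeholder model name
    "messages": [{"role": "user", "content": "Hello, Blablador!"}],
}

request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)
# urllib.request.urlopen(request)  # enable with a real endpoint and token
```

Swapping the base URL for the real endpoint is all an OpenAI-compatible client needs; openai-python exposes the same override via its `base_url` argument.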
+
+---
+
+# The LLM open ecosystem
 
 - If it isn't on huggingface, it doesn't exist
 - The "open" ecosystem is dominated by a few big players: Meta, Mistral.AI, Google
-- Microsoft has tiny, bad ones
-- Apple is going their own way
+- Microsoft has only tiny, weak models (but I wouldn't bet against them)
+- Apple is going their own way + using ChatGPT
 - Twitter/X has Grok-2 for paying customers, Grok-1 is enormous and "old" (from march)
+- Google is catching up FAST on closed models
+- Anthropic is receiving billions from Amazon, but Claude is completely closed
+
+---
+
+# The LLM ecosystem
+
 - Evaluation is HARD
     - Benchmarks prove little
-    - 99% of new models are fine-tuned versions of existing ones...
+    - The vast majority of new models are fine-tuned versions of existing ones...
     - On the benchmarks themselves!
         - Is this cheating?
 
@@ -38,7 +111,7 @@ date: December 4th, 2024
 
 ---
 
-![](images/llm-arena.jpg)
+![](images/llm-leaderboard-2024-11.png)
 
 ---
 
@@ -87,16 +160,6 @@ date: December 4th, 2024
 
 ---
 
-# Blablador
-
-- /ˈblæblæˌdɔɹ/
-- Bla-bla-bla 🗣️ + Labrador 🐕‍🦺
-- A stage for deploying and testing large language models
-- Models change constantly (constantly improving rank, some good, some awful)
-- A mix of small, fast models and large, slower ones - changes constantly
-- It is a web server and an api server, and training code.
-
---- 
 
 > "I think the complexity of Python package management holds down AI application development more than is widely appreciated. AI faces multiple bottlenecks — we need more GPUs, better algorithms, cleaner data in large quantities. But when I look at the day-to-day work of application builders, there’s one additional bottleneck that I think is underappreciated: The time spent wrestling with version management is an inefficiency I hope we can reduce. "
 
@@ -111,54 +174,6 @@ Andrew Ng, 28.02.2024
 
 --- 
 
-# Why?
-
-- AI is becoming basic infrastructure
-- Which historically is Open Source
-- We train a lot, deploy little: _Here is your code/weights, k.thnx.bye!_
-- Little experience with dealing with LLMs
-- From the tools point of view, this is a FAST moving target 🎯💨
-- Acquire local experience in issues like
-    - data loading,
-    - quantization, 
-    - distribution,
-    - fine-tune LLMs for specific tasks,
-    - inference speed,
-    - deployment
-- Projects like OpenGPT-X, TrustLLM and Laion need a place to run
-- The usual: we want to be ready when the time comes
-- TL;DR: BECAUSE WE CAN! 🤘
-
----
-
-## Some facts
-
-- No data collection at all. I don't keep ***ANY*** data whatsoever!
-    - You can use it AND keep your data private
-    - No records? Privacy (and GDPR is happy)
-
----
-
-## Deployment as a service
-
-- Scientists from (currently just FZJ) can deploy their models on their _own_ hardware and point to blablador
-- This solves a bunch of headaches for researchers:
-    - Authentication
-    - Web server
-    - Firewall
-    - Availability
-    - Etc
-- ***If you have a model and want to deploy it, contact me!***
-
----
-
-## OpenAI-compatible API
-
-- Users import openai-python from OpenAI itself
-- All services which can use OpenAI's API can use Blablador's API (VSCode's Continue.dev, etc)
-- The API is not yet rate-limited, logged, monitored, documented or well-tested.
-
----
 
 # Juelich Supercomputing Centre
 
diff --git a/public/images/llm-arena.jpg b/public/images/llm-arena.jpg
deleted file mode 100644
index 7e970110f53ce5b578b3e8cbb475846281673bb9..0000000000000000000000000000000000000000
Binary files a/public/images/llm-arena.jpg and /dev/null differ
diff --git a/public/images/llm-leaderboard-2024-11.png b/public/images/llm-leaderboard-2024-11.png
new file mode 100644
index 0000000000000000000000000000000000000000..79df047e240ec97057ea560277689d31f2945349
Binary files /dev/null and b/public/images/llm-leaderboard-2024-11.png differ
diff --git a/public/index.html b/public/index.html
index d056b69457dc409d7afb96ee518b672172e54d9b..230033cfa06a9c9482b2c2a83939ed12863a500c 100644
--- a/public/index.html
+++ b/public/index.html
@@ -240,21 +240,119 @@ alt="https://go.fzj.de/2024-12-jsc-colloquium" />
 aria-hidden="true">https://go.fzj.de/2024-12-jsc-colloquium</figcaption>
 </figure>
 </section>
-<section id="the-llm-ecosystem" class="slide level1">
-<h1>The LLM ecosystem</h1>
+<section id="blablador" class="slide level1">
+<h1>Blablador</h1>
+<ul>
+<li class="fragment">/ˈblæblæˌdɔɹ/</li>
+<li class="fragment">Bla-bla-bla 🗣️ + Labrador 🐕‍🦺</li>
+<li class="fragment">A stage for deploying and testing large language
+models</li>
+<li class="fragment">Models change constantly (rankings keep improving;
+some are good, some awful)</li>
+<li class="fragment">A mix of small, fast models and large, slower ones
+- the lineup changes constantly to keep up with the state of the art</li>
+<li class="fragment">It comprises a web server, an API server, a model
+runner, and training code.</li>
+</ul>
+</section>
+<section id="why" class="slide level1">
+<h1>Why?</h1>
+<ul>
+<li class="fragment">AI is becoming basic infrastructure</li>
+<li class="fragment">Which historically is Open Source</li>
+<li class="fragment">We (as in we scientists) train a lot, deploy
+little: <em>Here is your code/weights, tschüssi!</em></li>
+<li class="fragment">Little experience with dealing with LLMs</li>
+<li class="fragment">From the tools point of view, this is a FAST moving
+target 🎯💨</li>
+<li class="fragment">Acquire local experience in issues like
+<ul>
+<li class="fragment">data loading,</li>
+<li class="fragment">quantization,</li>
+<li class="fragment">distribution,</li>
+<li class="fragment">fine-tuning LLMs for specific tasks,</li>
+<li class="fragment">inference speed,</li>
+<li class="fragment">deployment</li>
+</ul></li>
+</ul>
+</section>
+<section id="why-part-2" class="slide level1">
+<h1>Why, part 2</h1>
+<ul>
+<li class="fragment">Projects like OpenGPT-X, TrustLLM and LAION need a
+place to run</li>
+<li class="fragment">The usual: we want to be ready when the time comes
+<ul>
+<li class="fragment">The time is now!</li>
+</ul></li>
+<li class="fragment">TL;DR: BECAUSE WE CAN! 🚀</li>
+</ul>
+</section>
+<section class="slide level1">
+
+<h2 id="some-facts">Some facts</h2>
+<ul>
+<li class="fragment">No data collection at all. I don’t keep
+<strong><em>ANY</em></strong> data whatsoever!
+<ul>
+<li class="fragment">You can use it AND keep your data private</li>
+<li class="fragment">No records? Privacy (and GDPR is happy)</li>
+</ul></li>
+</ul>
+</section>
+<section class="slide level1">
+
+<h2 id="deployment-as-a-service">Deployment as a service</h2>
+<ul>
+<li class="fragment">Scientists from FZJ can deploy their models on
+their <em>own</em> hardware and point to blablador</li>
+<li class="fragment">This solves a bunch of headaches for researchers:
+<ul>
+<li class="fragment">Authentication</li>
+<li class="fragment">Web server</li>
+<li class="fragment">Firewall</li>
+<li class="fragment">Availability</li>
+<li class="fragment">Etc.</li>
+</ul></li>
+<li class="fragment"><strong><em>If you have a model and want to deploy
+it, contact me!</em></strong></li>
+</ul>
+</section>
+<section class="slide level1">
+
+<h2 id="openai-compatible-api">OpenAI-compatible API</h2>
+<ul>
+<li class="fragment">Users can use the unmodified openai-python client
+from OpenAI itself</li>
+<li class="fragment">Any service that can talk to OpenAI’s API can talk
+to Blablador’s API (Continue.dev in VS Code, etc.)</li>
+<li class="fragment">The API is not yet rate-limited, logged, monitored,
+documented, or well-tested.</li>
+</ul>
+</section>
+<section id="the-llm-open-ecosystem" class="slide level1">
+<h1>The LLM open ecosystem</h1>
 <ul>
 <li class="fragment">If it isn’t on huggingface, it doesn’t exist</li>
 <li class="fragment">The “open” ecosystem is dominated by a few big
 players: Meta, Mistral.AI, Google</li>
-<li class="fragment">Microsoft has tiny, bad ones</li>
-<li class="fragment">Apple is going their own way</li>
+<li class="fragment">Microsoft has only tiny, weak models (but I
+wouldn’t bet against them)</li>
+<li class="fragment">Apple is going their own way + using ChatGPT</li>
 <li class="fragment">Twitter/X has Grok-2 for paying customers, Grok-1
 is enormous and “old” (from march)</li>
+<li class="fragment">Google is catching up FAST on closed models</li>
+<li class="fragment">Anthropic is receiving billions from Amazon, but
+Claude is completely closed</li>
+</ul>
+</section>
+<section id="the-llm-ecosystem" class="slide level1">
+<h1>The LLM ecosystem</h1>
+<ul>
 <li class="fragment">Evaluation is HARD
 <ul>
 <li class="fragment">Benchmarks prove little</li>
-<li class="fragment">99% of new models are fine-tuned versions of
-existing ones…</li>
+<li class="fragment">The vast majority of new models are fine-tuned
+versions of existing ones…</li>
 <li class="fragment">On the benchmarks themselves!
 <ul>
 <li class="fragment">Is this cheating?</li>
@@ -277,7 +375,7 @@ href="https://lmarena.ai">https://lmarena.ai</a></li>
 </section>
 <section class="slide level1">
 
-<p><img data-src="images/llm-arena.jpg" /></p>
+<p><img data-src="images/llm-leaderboard-2024-11.png" /></p>
 </section>
 <section id="open-source" class="slide level1">
 <h1>Open Source?</h1>
@@ -332,24 +430,9 @@ turing-complete (probably not)</li>
 <li class="fragment">Bureaucratic P.I.T.A. 💩</li>
 </ul>
 </section>
-<section id="blablador" class="slide level1">
-<h1>Blablador</h1>
-<p><img data-src="images/blablador-screenshot.png" /></p>
-</section>
 <section id="blablador-1" class="slide level1">
 <h1>Blablador</h1>
-<ul>
-<li class="fragment">/ˈblæblæˌdɔɹ/</li>
-<li class="fragment">Bla-bla-bla 🗣️ + Labrador 🐕‍🦺</li>
-<li class="fragment">A stage for deploying and testing large language
-models</li>
-<li class="fragment">Models change constantly (constantly improving
-rank, some good, some awful)</li>
-<li class="fragment">A mix of small, fast models and large, slower ones
-- changes constantly</li>
-<li class="fragment">It is a web server and an api server, and training
-code.</li>
-</ul>
+<p><img data-src="images/blablador-screenshot.png" /></p>
 </section>
 <section class="slide level1">
 
@@ -376,73 +459,6 @@ in computer science or software engineering.”</p>
 </blockquote>
 <p>Andrew Ng, 28.02.2024</p>
 </section>
-<section id="why" class="slide level1">
-<h1>Why?</h1>
-<ul>
-<li class="fragment">AI is becoming basic infrastructure</li>
-<li class="fragment">Which historically is Open Source</li>
-<li class="fragment">We train a lot, deploy little: <em>Here is your
-code/weights, k.thnx.bye!</em></li>
-<li class="fragment">Little experience with dealing with LLMs</li>
-<li class="fragment">From the tools point of view, this is a FAST moving
-target 🎯💨</li>
-<li class="fragment">Acquire local experience in issues like
-<ul>
-<li class="fragment">data loading,</li>
-<li class="fragment">quantization,</li>
-<li class="fragment">distribution,</li>
-<li class="fragment">fine-tune LLMs for specific tasks,</li>
-<li class="fragment">inference speed,</li>
-<li class="fragment">deployment</li>
-</ul></li>
-<li class="fragment">Projects like OpenGPT-X, TrustLLM and Laion need a
-place to run</li>
-<li class="fragment">The usual: we want to be ready when the time
-comes</li>
-<li class="fragment">TL;DR: BECAUSE WE CAN! 🤘</li>
-</ul>
-</section>
-<section class="slide level1">
-
-<h2 id="some-facts">Some facts</h2>
-<ul>
-<li class="fragment">No data collection at all. I don’t keep
-<strong><em>ANY</em></strong> data whatsoever!
-<ul>
-<li class="fragment">You can use it AND keep your data private</li>
-<li class="fragment">No records? Privacy (and GDPR is happy)</li>
-</ul></li>
-</ul>
-</section>
-<section class="slide level1">
-
-<h2 id="deployment-as-a-service">Deployment as a service</h2>
-<ul>
-<li class="fragment">Scientists from (currently just FZJ) can deploy
-their models on their <em>own</em> hardware and point to blablador</li>
-<li class="fragment">This solves a bunch of headaches for researchers:
-<ul>
-<li class="fragment">Authentication</li>
-<li class="fragment">Web server</li>
-<li class="fragment">Firewall</li>
-<li class="fragment">Availability</li>
-<li class="fragment">Etc</li>
-</ul></li>
-<li class="fragment"><strong><em>If you have a model and want to deploy
-it, contact me!</em></strong></li>
-</ul>
-</section>
-<section class="slide level1">
-
-<h2 id="openai-compatible-api">OpenAI-compatible API</h2>
-<ul>
-<li class="fragment">Users import openai-python from OpenAI itself</li>
-<li class="fragment">All services which can use OpenAI’s API can use
-Blablador’s API (VSCode’s Continue.dev, etc)</li>
-<li class="fragment">The API is not yet rate-limited, logged, monitored,
-documented or well-tested.</li>
-</ul>
-</section>
 <section id="juelich-supercomputing-centre" class="slide level1">
 <h1>Juelich Supercomputing Centre</h1>
 <figure>