diff --git a/README.md b/README.md
index 67925c9bd9cdaad14f40aa7a7a55888a344dd86c..537c18b993661d46a72859095f93527176cb0663 100644
--- a/README.md
+++ b/README.md
@@ -1,10 +1,10 @@
 ---
 author: Alexandre Strube
-title: Helmholtz BLABLADOR - An experimental Large Language Model server
+title: The current state of LLMs
 # subtitle: A primer in supercomputers`
-date: June 12, 2024
+date: January 29, 2024
 ---
 
 ## Description
 
-Talk for the [HAI Conference 2024 ](http://haicon24.de)
\ No newline at end of file
+Talk for the [Helmholtz AI Consultants Retreat](https://events.hifis.net/event/1945/)
\ No newline at end of file
diff --git a/index.md b/index.md
index 3c5da1b9d4f98464957ae96e8b79be4e92422caa..30b6ed3c0462899ceaf7d2417e75bff207638e46 100644
--- a/index.md
+++ b/index.md
@@ -1,8 +1,8 @@
 ---
 author: Alexandre Strube
-title: BLABLADOR ![](images/blablador.png){ width=550px }
-subtitle: JSC's Inference Infrastructure
-date: December 4th, 2024
+title: LLMs in 2025 ![](images/blablador.png){ width=550px }
+subtitle: Open source is the new black
+date: January 29th, 2025
 ---
 
 ## Website
@@ -15,20 +15,131 @@ date: December 4th, 2024
 
 ## OUTLINE
 
-- Past
 - Present
 - Future
 
 ---
 
-# Blablador
+## EVERYTHING CHANGED
+
+- Open source is the new black
+- OpenAI is no longer the only game in town
+- Training costs are WAY down
+
+---
+
+## The rise of China πŸ‡¨πŸ‡³
+
+- China is now a major player in AI
+- And they do a lot of open source!
+- 2020: USA 11 LLMs, China 2 LLMs
+- 2021: 31 LLMs in both countries
+(I need more recent data)
+
+---
+
+## Diagram of Thought
+
+![An attempt to overcome the limitations of Chain-of-Thought (Andrew Chi-Chih Yao, Tsinghua University)](images/diagram-of-thought.png){width=550px}
+
+---
+
+![](images/chinese-llm-benchmark.jpg)
+
+---
+
+## Huawei
+
+- Selling AI Chips
+- Ascend 910C: claimed to be on par with Nvidia's H100
+- (In practice, their chips are closer to the A100)
+- Now made by SMIC (previous models were made in Taiwan by TSMC)
+
+---
+
+## China Telecom
+
+- Already has a 1-trillion-parameter model trained on Huawei chips
+- TeleChat2-115b
+- Meanwhile, Deutsche Telekom gives me 16 Mbps in JΓΌlich, like I had in 2005 in Brazil πŸ˜₯
+- Using **copper**, like the ancient Aztecs or whatever πŸ—Ώπ“‚€π“‹Ήπ“ˆπ“ƒ π“†ƒπ“…“π“†£
+
+---
+
+## Baidu
+
+- Doing a lot of AI research
+- Ernie 4 from Oct 2023 (version 3 is open)
+- Ernie-Health
+- Miaoda (no-code dev)
+- I-Rag and Wenxin Yige (text to image)
+- [https://research.baidu.com/Blog/index-view?id=187](https://research.baidu.com/Blog/index-view?id=187)
+- 100,000+ GPU cluster**s**
+
+---
+
+## Baichuan AI
+
+- Has had a 1-trillion-parameter model for a year already
+
+---
+
+## Yi (01.ai)
+
+- Yi 1.5 was pre-trained on 3.5T tokens, then generates its own data and re-trains on it
+- Was the best model at the beginning of 2024 on benchmarks
+- IMHO was not so good in practice πŸ˜΅β€πŸ’«
+- Yi Lightning is #12 on LMArena as of 26.01.2025
+- Yi VL, multimodal in June (biggest vision model available, 34b)
+- Yi Coder was the best code model until 09.2024
+- Went quiet after that (probably busy making money)
+- [https://01.ai](https://01.ai)
+
+---
+
+## StepFun
+
+- Step-2 is #8 on LMArena (higher than Claude, Llama, Grok, Qwen, etc.)
+- Step-1.5V is a strong multimodal model
+- Step-1V generates images
+- No open-weights LLM available
+
+---
+
+## Alibaba: Qwen
+
+- On BLABLADOR as alias-code
+- Better than Llama
+- Open weights on HuggingFace, and free inference too
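+
+---
+
+Those open weights are straightforward to try. Qwen's instruction-tuned checkpoints expect the ChatML prompt format; the sketch below shows what that format looks like (a hypothetical helper for illustration, in practice the HuggingFace tokenizer's `apply_chat_template` builds the prompt for you):
+
+```python
+# Render a chat as a ChatML prompt, the format Qwen's instruct models expect.
+# Hypothetical helper: real code should use tokenizer.apply_chat_template.
+def format_chatml(messages):
+    prompt = ""
+    for m in messages:
+        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
+    prompt += "<|im_start|>assistant\n"  # cue the model to start answering
+    return prompt
+
+format_chatml([{"role": "user", "content": "Hi"}])
+```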
+
+---
+
+## InternLM
 
-- /ˈblΓ¦blæˌdΙ”ΙΉ/
-- Bla-bla-bla πŸ—£οΈ + Labrador πŸ•β€πŸ¦Ί
-- A stage for deploying and testing large language models
-- Models change constantly (constantly improving rank, some good, some awful)
-- A mix of small, fast models and large, slower ones - changes constantly, keeps up with the state of the art
-- It is a web server, an api server, model runner, and training code.
+- InternLM-3 is an 8B model with 4x less training time than Llama 3
+- Released on 15.01.2025
+- Strube's opinion: previous versions were bad; need to check again
+
+---
+
+## Tencent
+
+- Hunyuan LLM; Hunyuan-DiT can create images too
+- The Five: Home Robot
+
+
+---
+
+## Zhipu
+
+- ChatGLM
+
+---
+
+## ByteDance: Doubao
+
+- Multi-modal
+- 200x cheaper than OpenAI's GPT-4o
+- Video generation diffusion models: Doubao-PixelDance and Doubao-Seaweed
 
 ---
 
@@ -248,26 +359,6 @@ date: December 4th, 2024
 
 ---
 
-# Humbler beginnings...
-
-- Small cluster inherited from other projects
-- Started small, with three nodes
-- Runs Slurm, has a parallel FS/Storage, uses EasyBuild
-
----
-
-![Haicluster](images/IMG_7685.jpg)
-
----
-
-![Haicluster](images/IMG_7688.jpg)
-
----
-
-![Haicluster](images/IMG_7687.jpg)
-
----
-
 # User demand is growing, we need more hardware
 
 - Currently around 300+ unique users/day on the website
@@ -287,15 +378,7 @@ date: December 4th, 2024
 
 ---
 
-## It's being used in the wild!
-
-![https://indico.desy.de/event/38849/contributions/162118/](images/peter-steinbach-blablador-talk-lips-2024.png)
-
----
-
-![Dmitriy Kostunin's talk at LIPS24](images/kostunin-lips2024.png)
-
----
+## Crazy stuff people are doing with blablador:
 
 ![Controlling a radiotelescope with AI agents](images/kostunin-adass2024.png)
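+
+---
+
+People are building agents on top of blablador's OpenAI-compatible API. A minimal sketch of such a call using only the Python standard library; the base URL and model alias are assumptions for illustration, so check the blablador documentation for the real endpoint and get an API key first:
+
+```python
+# Sketch of calling an OpenAI-compatible chat endpoint such as blablador's.
+# The base URL and model alias are placeholders, not the documented values.
+import json
+import urllib.request
+
+def build_chat_request(model, user_msg, temperature=0.0):
+    """JSON body for a /v1/chat/completions call."""
+    return {
+        "model": model,
+        "messages": [{"role": "user", "content": user_msg}],
+        "temperature": temperature,
+    }
+
+def ask(base_url, api_key, model, user_msg):
+    req = urllib.request.Request(
+        f"{base_url}/v1/chat/completions",
+        data=json.dumps(build_chat_request(model, user_msg)).encode(),
+        headers={"Authorization": f"Bearer {api_key}",
+                 "Content-Type": "application/json"},
+    )
+    with urllib.request.urlopen(req) as resp:  # network call, needs a key
+        return json.load(resp)["choices"][0]["message"]["content"]
+```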
 
diff --git a/public/images/chinese-llm-benchmark.jpg b/public/images/chinese-llm-benchmark.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..69dfa1055784d4e32bd75e13b8655909ae424056
Binary files /dev/null and b/public/images/chinese-llm-benchmark.jpg differ
diff --git a/public/images/diagram-of-thought.png b/public/images/diagram-of-thought.png
new file mode 100644
index 0000000000000000000000000000000000000000..e374abfe025daaf099363cb1bdbabb5ce72f3ba4
Binary files /dev/null and b/public/images/diagram-of-thought.png differ
diff --git a/public/index.html b/public/index.html
index 75b101d196fdc538168f1908681bb51dd6dce923..71b6fd25695c0ad281a4f2c9a4821d298dd1cc11 100644
Binary files a/public/index.html and b/public/index.html differ