
Why I’m Using Bun.js with Blazor: Speed, AI, and the Best of Both Worlds 🚀


Blazor Meets Bun.js: Why I Went Hybrid for AI

A developer's take on combining the Microsoft stack with the speed of Bun for AI-powered workflows.

As someone deep in the .NET and Blazor ecosystem, I’ve always appreciated how solid and structured the Microsoft stack is. But when I started diving into AI, embeddings, and microservices, I needed something faster, lighter, and more flexible than the typical .NET approach.

That’s where Bun.js came in.

This post is a quick recap of:

  • Why I’m using Bun alongside Blazor
  • What problems I solved
  • How I’m using Node.js libraries like Xenova’s Transformers.js for embeddings
  • Why mixing the two ecosystems makes a lot of sense

🧠 The Problem: AI and LLM Workloads Are Heavy

When building features like:

  • Document understanding (RAG)
  • AI-powered chat
  • Voice interaction
  • Vector search + embeddings

…I quickly ran into limits with pure .NET. While .NET is amazing for structure and UI (Blazor Server FTW), it’s not always ideal for:

  • Fast background jobs
  • AI SDK compatibility
  • Running LLM tools or modern Node.js/JavaScript libraries

And honestly, I didn’t want to spend days porting working Node tools into C#. So I went hybrid.

⚡ Why Bun.js?

I chose Bun because:

  • It’s blazing fast – faster than Node.js for almost everything I tested
  • It supports Node.js libraries out of the box
  • It’s perfect for running lightweight microservices with zero bloat
  • I could use cutting-edge tools like Xenova’s Transformers.js with all-MiniLM-L6-v2 (in JS) for embeddings

The cool part? I now run Bun as a microservice backend, and Blazor/.NET calls it for everything AI-related.

🔄 My Architecture: Microservices That Fly

Here’s how it flows:

  1. User uploads a doc in Blazor
  2. Blazor calls a Bun.js microservice
  3. Bun uses Xenova’s embedding model to generate vectors
  4. The vector is stored in Qdrant
  5. Blazor later queries Qdrant and passes results to Gemini via API

Boom — intelligent responses in real time 🔥

🧰 Tools I’m Using

| Purpose | Tool |
| --- | --- |
| Frontend UI | Blazor Server + Tailwind |
| Embeddings | Xenova’s all-MiniLM-L6-v2 via Bun |
| Vector Store | Qdrant + @qdrant/js-client-rest |
| LLM (chat/inference) | Gemini Flash Lite API |
| Microservice Runtime | Bun (instead of Node.js, for speed) |
| Backend Logic | .NET 9 and Dapper |

🧪 Why Mix .NET with Bun?

Simple:

  • Blazor is amazing for UI, security, and structure
  • Bun gives me raw speed + access to JS/AI tooling
  • Mixing both lets me move fast without losing the reliability of the .NET ecosystem

.NET doesn’t need to be your everything. Let it be your anchor, while Bun handles the async AI stuff.

🔜 What’s Next

  • Add LangChain.js agents to the Bun side for multi-hop logic
  • Add a Blazor UI for uploading, querying, and AI interaction
  • Package the Bun microservice setup as a reusable module for future projects

✨ Final Take

If you’re building with Blazor and want to integrate AI (RAG, embeddings, chatbots, audio), try using Bun.js for your AI microservices. You get speed, access to Node.js libraries like Xenova, and full control.

Blazor + Bun = productivity, speed, and power. No compromises.

About the Author


Panha Ma

On a mission to get high up 🚀
