time to read 6 min | 1012 words

I talked with my daughter recently about an old babysitter, and then I pulled out my phone and searched for a picture using “Hadera, beach”. I could then show my daughter a picture of her and the babysitter at the beach from about a decade ago.

I have been working in the realm of databases and search for literally decades now. The image I showed my daughter was taken while I was taking some time off from thinking about what ended up being Corax, RavenDB’s indexing and querying engine 🙂.

It feels natural as a user to be able to search the content of images, but as a developer who is intimately familiar with how this works? That is just a big mountain of black magic. Except… I do know how to make it work. It isn’t black magic, it's just the natural consequence of a bunch of different things coming together.

TLDR: you can see the sample application here: https://github.com/ayende/samples.imgs-embeddings

And here is what the application itself looks like:

Let’s talk for a bit about how that actually works, shall we? To be able to search the content of an image, we first need to understand it. That requires a model capable of visual reasoning.

If you are a fan of the old classics, you may recall this XKCD comic from about a decade ago. Luckily, we don’t need a full research team and five years to do that. We can do it with off-the-shelf models.

A small reminder - semantic search is based on the notion of embeddings, a vector that the model returns from a piece of data, which can then be compared to other vectors from the same model to find how close together two pieces of data are in the eyes of the model.
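The comparison itself is usually cosine similarity between the two vectors; if you want to see it spelled out, a plain implementation looks like this:

// Cosine similarity: values near 1.0 mean the two pieces of data are very close
// in the model's eyes, values near 0 mean they are unrelated. Both vectors must
// come from the same model, or the comparison is meaningless.
static double CosineSimilarity(float[] a, float[] b)
{
    if (a.Length != b.Length)
        throw new ArgumentException("Embeddings must have the same dimensions");

    double dot = 0, normA = 0, normB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
}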

For image search, that means we need to be able to deal with a pretty challenging task. We need a model that can accept both images and text as input, and generate embeddings for both in the same vector space.

There are dedicated models for doing just that, called CLIP models (further reading). Unfortunately, they seem to be far less popular than normal embedding models, probably because they are harder to train and more expensive to run. You can run one locally or via the cloud using Cohere, for example.

Here is an example of the code you need to generate an embedding from an image. And here you have the code for generating an embedding from text using the same model. The beauty here is that because they share the same vector space, you can compare them directly using RavenDB’s vector search.

Here is the code to use a CLIP model to perform textual search on images using RavenDB:


// For visual search, we use the same vector search but with more candidates
// to find visually similar categories based on image embeddings
var embedding = await _clipEmbeddingCohere.GetTextEmbeddingAsync(query);


var categories = await session.Query<CategoriesIdx.Result, CategoriesIdx>()
      .VectorSearch(x => x.WithField(c => c.Embedding),
                  x => x.ByEmbedding(embedding),
                  numberOfCandidates: 3)
      .OfType<Category>()
      .ToListAsync();

Another option, and one that I consider a far better one, is to not generate embeddings directly from the image. Instead, you can ask the model to describe the image as text, and then run semantic search on the image description.

Here is a simple example of asking Ollama to generate a description for an image using the llava:13b visual model. Once we have that description, we can ask RavenDB to generate an embedding for it (using the Embedding Generation integration) and allow semantic searches from users’ queries using normal text embedding methods.
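If you haven’t used that API before, here is a minimal sketch of what such a call can look like - this is not the post’s sample code, just an illustration assuming a local Ollama instance with llava:13b pulled (check Ollama’s /api/generate documentation for the exact fields):

using System.Net.Http;
using System.Net.Http.Json;
using System.Text.Json;

// Ask a local Ollama instance running llava:13b to describe an image.
// The description can then be stored on the document and embedded by RavenDB.
async Task<string?> DescribeImageAsync(string imagePath)
{
    var base64Image = Convert.ToBase64String(await File.ReadAllBytesAsync(imagePath));

    using var client = new HttpClient();
    var response = await client.PostAsJsonAsync("http://localhost:11434/api/generate", new
    {
        model = "llava:13b",
        prompt = "Describe this image in two or three sentences.",
        images = new[] { base64Image },
        stream = false
    });
    response.EnsureSuccessStatusCode();

    using var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
    return json.RootElement.GetProperty("response").GetString();
}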

Here is the code to do so:


var categories = await session.Query<Category>()
   .VectorSearch(
      field => {
         field.WithText(c => c.ImageDescription)
            .UsingTask("categories-image-description");
      },
      v => v.ByText(query),
      numberOfCandidates: 3)
   .ToListAsync();

We send the user’s query to RavenDB, and the AI Task categories-image-description handles how everything works under the covers.

In both cases, by the way, you are going to get a pretty magical result, as you can see in the top image of this post. You have the ability to search over the content of images and can quite easily implement features that, a very short time ago, would have been simply impossible.

You can look at the full sample application here, and as usual, I would love your feedback.

time to read 6 min | 1003 words

This blog recently got a nice new feature, a recommended reading section (you can find the one for this blog post at the bottom of the text). From a visual perspective, it isn’t much. Here is what it looks like for the RavenDB 7.1 release announcement:

At least, that is what it shows right now. The beauty of the feature is that this isn’t just a one-off bit of work; there is a much bigger set of capabilities behind it. Let me try to explain it in detail, so you can see why I’m excited about this feature.

What you are actually seeing here is me using several different new features in RavenDB to achieve something that is really quite nice. We have an embedding generation task that automatically processes the blog posts whenever I post or update them.

Here is what the configuration of that looks like:

We are generating embeddings for the PostsBody field and stripping out all the HTML, so we are left with just the content. We do that in chunks of 2K tokens each (because I have some very long blog posts).

The reason we want to generate those embeddings is that we can then run vector searches for semantic similarity. This is handled using a vector search index, defined like this:


public class Posts_ByVector : AbstractIndexCreationTask<Post>
{
    public Posts_ByVector()
    {
        SearchEngineType = SearchEngineType.Corax;
        Map = posts =>
            from post in posts
            where post.PublishAt != null
            select new
            {
                Vector = LoadVector("Body", "posts-by-vector"),
                PublishAt = post.PublishAt,
            };
    }
}

This index uses the vectors generated by the previously defined embedding generation task. With this setup complete, we are now left with writing the query:


var related = RavenSession.Query<Posts_ByVector.Query, Posts_ByVector>()
    .Where(p => p.PublishAt < DateTimeOffset.Now.AsMinutes())
    .VectorSearch(x => x.WithField(p => p.Vector), x => x.ForDocument(post.Id))
    .Take(3)
    .Skip(1) // skip the current post, always the best match :-)
    .Select(p => new PostReference { Id = p.Id, Title = p.Title })
    .ToList();

What you see here is a query that will fetch all the posts that were already published (so it won’t pick up future posts), and use vector search to match the current blog post embeddings to the embeddings of all the other posts.

In other words, we are doing a “find me all posts that are similar to this one”, but we use the embedding model’s notion of what is similar. As you can see above, even this very simple implementation gives us a really good result with almost no work.

  • The embedding generation task is in charge of generating the embeddings - we get automatic embedding updates whenever a post is created or updated.
  • The vector index will pick up any new vectors created for those posts and index them.
  • The query doesn’t even need to load or generate any embeddings, everything happens directly inside the database.
  • A new post that is relevant to old content will show up automatically in their recommendations.

Beyond just the feature itself, I want to bring your attention to the fact that we are now done. In most other systems, you’d now need to deal with chunking and rate limits yourself, figure out how to handle updates and new posts (I asked an AI model how to deal with that, and it started to write a Kafka architecture to process it - I noped out fast), handle caching to avoid repeated expensive model calls, etc.

In my eyes, beyond the actual feature itself, the beauty is in all the code that isn’t there. All of those capabilities are already in the box in RavenDB - this new feature is just that we applied them now to my blog. Hopefully, it is an interesting feature, and you should be able to see some good additional recommendations right below this text for further reading.

time to read 8 min | 1414 words

Let’s say I want to count the number of reachable nodes for each node in the graph. I can do that using the following code:


void DFS(Node start, HashSet<Node> visited) 
{
    if (start == null || visited.Contains(start)) return;
    visited.Add(start);
    foreach (var neighbor in start.Neighbors) 
    {
        DFS(neighbor, visited);
    }
}


void MarkReachableCount(Graph g)
{
   foreach(var node in g.Nodes)
   {
       HashSet<Node> visited = [];
       DFS(node, visited);
       node.ReachableGraph = visited.Count;
   }
}

A major performance cost for this sort of operation is the allocation cost. We allocate a separate hash set for each node in the graph, and then allocate whatever backing store is needed for it. If you have a big graph with many connections, that is expensive.

A simple fix for that would be to use:


void MarkReachableCount(Graph g)
{
   HashSet<Node> visited = [];
   foreach(var node in g.Nodes)
   {
       visited.Clear();
       DFS(node, visited);
       node.ReachableGraph = visited.Count;
   }
}

This means that we have almost no allocations for the entire operation, yay!

This function also performs significantly worse than the previous one, even though it barely allocates. The reason for that? The call to Clear() is expensive. Take a look at the implementation - this method needs to zero out two internal arrays, and because the set is reused, those arrays end up sized for the node with the most reachable nodes. Let’s say we have a node that can access 10,000 nodes. That means that for each node, we’ll have to clear an array of about 14,000 items, as well as another array that is as big as the number of nodes we just visited.

No surprise that the allocating version was actually cheaper. We use the visited set for a short while, then discard it and get a new one. That means no expensive Clear() calls.

The question is, can we do better? Before I answer that, let’s try to go a bit deeper in this analysis. Some of the main costs in HashSet<Node> are the calls to GetHashCode() and Equals(). For that matter, let’s look at the cost of the Neighbors array on the Node.

Take a look at the following options:


public record Node1(List<Node> Neighbors);
public record Node2(List<int> NeighborIndexes);

Let’s assume each node has about 10 - 20 neighbors. What is the cost in memory for each option? Node1 uses references (pointers), and will take 256 bytes just for the Neighbors backing array (a 32-capacity array x 8 bytes). The Node2 version uses half of that memory (the same 32-capacity array at 4 bytes per int is 128 bytes).

This is an example of data-oriented design, and saving 50% of our memory costs is quite nice. HashSet<int> is also going to benefit quite nicely from JIT optimizations (no need to call GetHashCode(), etc. - everything is inlined).
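To make that concrete, here is a sketch of the same traversal once nodes are referenced by index - the adjacency layout is an assumption, matching the Node2 shape above:

// A sketch, assuming node i's neighbors are stored in neighborIndexes[i].
// Hashing ints is cheap, but we still pay for Clear() on every outer iteration.
void MarkReachableCount(List<int>[] neighborIndexes, int[] reachableCounts)
{
    HashSet<int> visited = [];
    for (int node = 0; node < neighborIndexes.Length; node++)
    {
        visited.Clear();
        DFS(node, neighborIndexes, visited);
        reachableCounts[node] = visited.Count;
    }
}

void DFS(int node, List<int>[] neighborIndexes, HashSet<int> visited)
{
    if (visited.Add(node) == false)
        return; // already seen in this traversal
    foreach (var neighbor in neighborIndexes[node])
        DFS(neighbor, neighborIndexes, visited);
}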

We still have the problem of allocations vs. Clear(), though. Can we win?

Now that we have re-framed the problem using int indexes, there is a very obvious optimization opportunity: use a bitmap (such as BitArray). We know upfront how many items we have, right? So we can allocate a single array and set the corresponding bit to mark that a node (by its index) was visited.

That dramatically reduces the costs of tracking whether we visited a node or not, but it does not address the costs of clearing the bitmap.

Here is how you can handle this scenario cheaply:


public class Bitmap
{
    private readonly ulong[] _data;
    private readonly ushort[] _versions;
    private ushort _version;

    public Bitmap(int size)
    {
        _data = new ulong[(size + 63) / 64];
        _versions = new ushort[_data.Length];
    }

    public void Clear()
    {
        // Clearing just bumps the version; each 64-bit word is reset lazily in Add()
        if (_version++ < ushort.MaxValue)
            return;

        // The version counter wrapped around - pay the full cost once every 64K clears
        Array.Clear(_data);
        Array.Clear(_versions);
    }

    public bool Add(int index)
    {
        int arrayIndex = index >> 6;
        if (_versions[arrayIndex] != _version)
        {
            // This word was last written in a previous "epoch" - reset it now
            _versions[arrayIndex] = _version;
            _data[arrayIndex] = 0;
        }

        int bitIndex = index & 63;
        ulong mask = 1UL << bitIndex;
        ulong old = _data[arrayIndex];
        _data[arrayIndex] |= mask;
        return (old & mask) == 0; // true if this index was not in the set yet
    }
}

The idea is pretty simple: in addition to the bitmap, we keep a second array that records the version of each 64-bit word. To clear the bitmap, we just increment the version. When adding to the bitmap, we reset the underlying word if its recorded version doesn’t match the current one. Once every 64K calls to Clear(), we’ll need to pay the cost of actually resetting the backing stores, but that ends up being very cheap overall (and it is worth handling the wraparound to keep the versions array small).

The code is tight, requires no allocations, and performs very quickly.
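And to close the loop, here is a sketch of the reachability count from the start of the post, rewritten on top of this Bitmap (again assuming the int-index adjacency layout):

// A sketch: counting reachable nodes per node, reusing a single versioned Bitmap.
int[] MarkReachableCount(List<int>[] neighborIndexes)
{
    var counts = new int[neighborIndexes.Length];
    var visited = new Bitmap(neighborIndexes.Length);

    for (int node = 0; node < neighborIndexes.Length; node++)
    {
        visited.Clear(); // just bumps the version - no array zeroing
        counts[node] = DFS(node, neighborIndexes, visited);
    }
    return counts;
}

int DFS(int node, List<int>[] neighborIndexes, Bitmap visited)
{
    if (visited.Add(node) == false)
        return 0; // already counted in this traversal
    int count = 1;
    foreach (var neighbor in neighborIndexes[node])
        count += DFS(neighbor, neighborIndexes, visited);
    return count;
}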

time to read 1 min | 187 words

Orleans is a distributed computing framework for .NET. It allows you to build distributed systems with ease, taking upon itself all the state management, persistence, distribution, and concurrency.

The core aspect in Orleans is the notion of a “grain” - a lightweight unit of computation & state. You can read more about it in Microsoft’s documentation, but I assume that if you are reading this post, you are already at least somewhat familiar with it.

We now support using RavenDB as the backing store for grain persistence, reminders, and clustering. You can read the official announcement about the release here, and the docs covering how to use RavenDB & Microsoft Orleans.

You can use RavenDB to persist and retrieve Orleans grain states, store Orleans timers and reminders, as well as manage Orleans cluster membership.

RavenDB is well suited for this task because of its asynchronous nature, schema-less design, and the ability to automatically adjust itself to different loads on the fly.

If you are using Orleans, or even just considering it, give it a spin with RavenDB. We would love your feedback.

time to read 3 min | 441 words

RavenDB is moving at quite a pace, and there is actually more stuff happening than I can find the time to talk about. I usually talk about the big-ticket items, but today I wanted to discuss some of what we like to call Quality of Life features.

The sort of things that help smooth the entire process of using RavenDB - the difference between something that works and something polished. That is something I truly care about, so with a great sense of pride, let me walk you through some of the nicest things that you probably wouldn’t even notice that we are doing for you.


RavenDB Node.js Client - v7.0 released (with Vector Search)

We updated the RavenDB Node.js client to version 7.0, with the biggest item being explicit support for vector search queries from Node.js. You can now write queries like these:


const docs = await session.query<Product>({ collection: "Products" })
    .vectorSearch(x => x.withText("Name"),
        factory => factory.byText("italian food"))
    .all();

This is the famous example of using RavenDB’s vector search to find pizza and pasta in your product catalog, utilizing vector search and automatic data embeddings.
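For reference, the equivalent query from the .NET client looks roughly like this - a sketch that follows the same vector search API shape shown earlier in this post; the exact overloads may differ:

var docs = await session.Query<Product>()
    .VectorSearch(x => x.WithText(p => p.Name),
                  factory => factory.ByText("italian food"))
    .ToListAsync();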


Converting automatic indexes to static indexes

RavenDB has auto indexes. Send a query, and if there is no existing index to run the query, the query optimizer will generate one for you. That works quite amazingly well, but sometimes you want to use this automatic index as the basis for a static (user-defined) index. Now you can do that directly from the RavenDB Studio, like so:

You can read the full details of the feature at the following link.


RavenDB Cloud - Incidents History & Operational Suggestions

We now expose the operational suggestions to you on the dashboard. The idea is that you can easily and proactively check the status of your instances and whether you need to take any action.

You can also see what happened to your system in the past, including things that RavenDB’s system automatically recovered from without you needing to lift a finger.

For example, take a look at this highly distressed system:


As usual, I would appreciate any feedback you have on the new features.

time to read 2 min | 342 words

I wrote the following code:


if (_items is [var single])
{
    // no point invoking thread pool
    single.Run();
}

And I was very proud of myself for writing such pretty and succinct C# code.

Then I got a runtime error:

I asked Grok about this because I did not expect this, and got the following reply:

No, if (_items is [var single]) in C# does not match a null value. This pattern checks if _items is a single-element array and binds the element to single. If _items is null, the pattern match fails, and the condition evaluates to false.

However, the output clearly disagreed with both Grok’s and my expectations. I decided to put that into SharpLab, which can quickly help identify what is going on behind the scenes for such syntax.

You can see three versions of this check in the associated link.


if(strs is [var s]) // no null check


if(strs is [string s]) //  if (s != null)


if(strs is [{} s]) //  if (s != null)

Turns out that there is a distinction between a var pattern (allows null) and a non-var pattern. The third option is the non-null pattern, which does the same thing (but doesn’t require redundant type specification). Usually var vs. type is a readability distinction, but here we have a real difference in behavior.
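A quick way to see the difference (the array here stands in for my _items field):

string[] strs = { null };

Console.WriteLine(strs is [var s1]);    // True  - the var pattern happily binds the null element
Console.WriteLine(strs is [string s2]); // False - a type pattern requires a non-null element
Console.WriteLine(strs is [{ } s3]);    // False - the property pattern also requires non-null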

Note that when I asked the LLM about it, I got the wrong answer. Luckily, I could get a verified answer by just checking the compiler output, and only then head out to the C# spec to see if this is a compiler bug or just a misunderstanding.

time to read 3 min | 420 words

I was just reviewing a video we're about to publish, and I noticed something in the subtitles. It said, "Six qubits are used for..."

I got all excited thinking RavenDB was jumping into quantum computing. But nope, it turned out to be a transcription error. What was actually said was, "Six kilobytes are used for..."

To be fair, I listened to the recording a few times, and honestly, "qubits" isn't an unreasonable interpretation if you're just going by the spoken words. Even with context, that transcription isn't completely out there. I wouldn't be surprised if a human transcriber came up with the same result.

Fixing this issue (and going over an hour of text transcription to catch other possible errors) is going to be pretty expensive. Honestly, it would be easier to just skip the subtitles altogether in that case.

Here's the thing, though. I think a big part of this is that we now expect transcription to be done by a machine, and we don't expect it to be perfect. Before, when it was all done manually, it cost so much that it was reasonable to expect near-perfection.

What AI has done is make it cheap enough to get most of the value, while also lowering the expectation that it has to be flawless.

So, the choices we're looking at are:

  • AI transcription - mostly accurate, cheap, and easy to do.
  • Human transcription - highly accurate, expensive, and slow.
  • No transcription - users who want subtitles would need to use their own automatic transcription (which would probably be lower quality than what we use).

Before, we really only had two options: human transcription or nothing at all. What I think the spread of AI has done is not just made it possible to do it automatically and cheaply, but also made it acceptable that this "Good Enough" solution is actually, well, good enough.

Viewers know it's a machine transcription, and they're more forgiving if there are some mistakes. That makes it way more practical to actually use it. And the end result? We can offer more content.

Sure, it's not as good as manual transcription, but it's definitely better than having no transcription at all (which is really the only other option).

What I find most interesting is that it's precisely because this is so common now that it has become practical to actually use it more.

Yes, we actually review the subtitles and fix any obvious mistakes for the video. The key here is that we can spend very little time actually doing that, since errors are more tolerated.

time to read 2 min | 218 words

RavenDB is a pretty big system, with well over 1 million lines of code. Recently, I had to deal with an interesting problem. I had a CancellationToken at hand, which I expected to remain valid for the duration of the full operation.

However, something sneaky was going on there. Something was cancelling my CancellationToken, and not in an expected manner. At last count, I had roughly 2 bazillion CancellationTokens in the RavenDB codebase. Per request, per database, global to the server process, time-based, operation-based, etc., etc.

Figuring out why the CancellationToken was canceled turned out to be a chore. Instead of reading through the code, I cheated.


token.Register(() =>
{
    Console.WriteLine("Cancelled!" + Environment.StackTrace);
});

I ran the code, tracked back exactly who was calling cancel, and realized that I had mixed the request-based token with the database-level token. It was a single-line fix in the end, but until I knew where the problem was, it was very challenging to figure out.

This approach, making the code tell you what is wrong, is an awesome way to cut down debugging time by a lot.
