Building a faster CouchDB View Server in Rust

A project I’ve been working on over the last few months is implementing the new CouchDB JavaScript view server in Rust and then benchmarking how it compares to the official Cloudant View Server.

This is my third attempt at doing this. The first time I did it in C++ as part of the team at Cloudant, and the majority of that work was done by Paul. The second attempt was to write it as an Erlang NIF as a way to improve my C++. Finally, this version, the one I’m most happy with, is in Rust. This post is broken into two sections: Part 1 is a deep dive into how View Servers work in CouchDB, and Part 2 covers the Rust implementation and the benchmark results.

Part 1: What is the CouchDB View Server?

In Apache CouchDB 3.x and before, the JavaScript engine was used for building map/reduce indexes, show and list functions, and validating document updates. I’m going to focus on map index building for now as it’s the only one currently supported in the upcoming CouchDB 4.x release.

Traditionally, the JavaScript view server worked like this: each Erlang node in a CouchDB cluster would maintain a set number of external JavaScript processes, and the nodes would communicate with the JS processes using stdio. When a map/reduce index needed to be built, the CouchDB indexer would acquire a lock on a specific JS process. It would send the map functions from the design doc and then send each document to be mapped with those functions. The JavaScript process would then reply with the key/values emitted for that document. Because the communication was via stdio, this is a very synchronous process: CouchDB sends a doc, waits for a response, and then sends the next document. Each JS process could only work on one map index at a time.
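For a feel of what that stdio exchange looks like, here is a rough transcript of a mapping session using CouchDB’s line-based JSON query server protocol (the map function here is just an example):

```
CouchDB → JS:  ["reset"]
JS → CouchDB:  true
CouchDB → JS:  ["add_fun", "function (doc) { emit(doc.name, doc.value); }"]
JS → CouchDB:  true
CouchDB → JS:  ["map_doc", {"_id": "doc-1", "value": 1, "name": "field-1"}]
JS → CouchDB:  [[["field-1", 1]]]
```

Each command and its reply is a single line of JSON, which is why the protocol is easy to implement in any language but strictly one-request-at-a-time.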

One nice side effect, though, was that because it communicated via stdio, it was possible to write view servers in different languages, like Python, Ruby, and Node.js. Recently Jan built one using Deno.

View Server in CouchDB 4.x

Moving to Apache CouchDB 4.x, we wanted to improve on that design. We aimed to create a clear API that would allow us to build view servers in multiple languages without depending on stdio, instead allowing for different ways of connecting with the view server, e.g. HTTP, Erlang NIFs, gRPC, etc.

The main API for all of this is defined in couch_eval.erl, which has three main functions:

  • acquire_map_context, which is the start of the process when mapping documents.
  • map_docs, which is called with the documents you want the query engine to index.
  • release_map_context, which is called when you have finished indexing documents.

couch_eval defines the interface that an implementation needs to provide; in Erlang terms, that means implementing the behaviour.
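To make the shape of that interface concrete, here is a rough Rust rendering of it. The real interface is an Erlang behaviour, so the trait, its signatures, and the KeyValue type below are illustrative, not actual CouchDB code:

```rust
// Illustrative only: a Rust sketch of the couch_eval interface.
// The names mirror the three Erlang functions.
pub type KeyValue = (String, String);

pub trait MapEval {
    // Whatever per-session state the implementation needs.
    type Ctx;

    // Start of the mapping process: create a context loaded with the
    // view's map functions.
    fn acquire_map_context(&self, map_funs: Vec<String>) -> Self::Ctx;

    // Run each document through the map functions; returns, per document,
    // the list of emitted key/values.
    fn map_docs(&self, ctx: &mut Self::Ctx, docs: &[String]) -> Vec<Vec<KeyValue>>;

    // Called once indexing is finished; tears the context down.
    fn release_map_context(&self, ctx: Self::Ctx);
}
```

The point of the design is that anything able to implement these three operations, over any transport, can serve as a view server.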

The current default Apache CouchDB version is couch_js.erl, which is a wrapper over the old stdio implementation. This is a placeholder for now until we settle on a better solution. Instead, I want to look at Ateles, a CouchDB view server written in C++ that uses Mozilla’s SpiderMonkey JS engine. Ateles uses protocol buffers over HTTP/1.1 to communicate with CouchDB. Below is a diagram of how it works.


Step 1, acquire_map_context, is the initial setup phase that creates a JS context to use for mapping documents. CouchDB will open an HTTP/1.1 session with the Ateles server and send all the map functions for that view to be transpiled. We have to transpile them because the way we define map functions is no longer supported in modern JS engines.

Next, CouchDB sends over the map.js file, which is the main code that manages running documents through the design documents. Finally, CouchDB sends over the transpiled map functions to be loaded into the new JS context. Each new connection and mapping process gets its own JS context. This is the equivalent of opening a new tab in your browser, and it makes sure that different users’ code cannot interfere with each other. It is important to note that each Ateles server can handle multiple JS contexts and connections at a time, unlike in CouchDB 3.x.

In Step 2, map_docs, CouchDB sends the documents to be mapped and waits for the emitted key/values to be returned. It collates those results and then stores them in CouchDB. For example, say we have the following three documents:

{_id: "doc-1", value: 1, name: "field-1"},
{_id: "doc-2", value: 2, name: "field-2"},
{_id: "doc-3", value: 3, name: "field-3"},

And a design document that looks like this:

{_id: "_design/example_ddoc",
    views: {
      idx1: {
        map: `function (doc) {
                 emit(doc.name, doc.value);
              }`
      }
    }
}

Then the emitted Key/Values after the documents have been mapped would look like this:

    ("field-1", 1),
    ("field-2", 2),
    ("field-3", 3)

These values would be stored as the index and returned when this index is queried.

Finally, in Step 3, once CouchDB has indexed all the documents in the current FoundationDB transaction, it calls release_map_context, which closes the HTTP connection. Ateles will at this point destroy the JS context.

Part 2: View Server implemented in Rust

Ateles was written in C++, and I thought it would be a really interesting challenge to rewrite it in Rust and see how the code and performance would compare. Also, I had never written any Rust async/await code, so I needed a challenging project to use as a way to learn how to do that. So I wrote Fortuna-rs, an implementation of the CouchDB View server in Rust using Google’s V8.

The main starting point for the code is the HTTP layer. I used Hyper for this. Hyper is quite a low-level framework to work with, but it was great for this use case. Hyper has a concept of a Service and a MakeService. Every time a new connection is established, the MakeService is called to create a Service struct. That Service is then used to service (pun intended) every request for that connection. In the case of Fortuna, when the MakeService is called, a new JS context, an Isolate in V8 terms, is created for the connection. The design I decided on is to run each Isolate in its own dedicated thread rather than in the threads that Tokio is using. This has worked well so far, but some fun future work would be to try to make an async wrapper over V8 and use it in the main async threads.
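A minimal sketch of that thread-per-context design (the names here are mine, not Fortuna’s actual types): because a V8 Isolate is not thread-safe, each connection’s context lives on its own dedicated thread and the async request handlers talk to it over a channel. A stand-in JsContext struct replaces the real Isolate:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for a V8 Isolate: owns per-connection JS state.
// V8 isolates are not thread-safe, so all calls must happen on one thread.
struct JsContext {
    name: String,
}

impl JsContext {
    fn map_doc(&self, doc: &str) -> String {
        format!("[{}] mapped {}", self.name, doc)
    }
}

enum Msg {
    MapDoc { doc: String, reply: mpsc::Sender<String> },
    Shutdown,
}

// Spawn a dedicated thread that owns the context and return a handle for
// sending work to it, mirroring the one-thread-per-Isolate design.
fn spawn_context_thread(name: &str) -> (mpsc::Sender<Msg>, thread::JoinHandle<()>) {
    let (tx, rx) = mpsc::channel::<Msg>();
    let ctx = JsContext { name: name.to_string() };
    let handle = thread::spawn(move || {
        for msg in rx {
            match msg {
                Msg::MapDoc { doc, reply } => {
                    let _ = reply.send(ctx.map_doc(&doc));
                }
                Msg::Shutdown => break,
            }
        }
        // Dropping `ctx` here is where a real Isolate would be destroyed.
    });
    (tx, handle)
}

fn main() {
    let (tx, handle) = spawn_context_thread("conn-1");
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Msg::MapDoc { doc: "doc-1".into(), reply: reply_tx }).unwrap();
    println!("{}", reply_rx.recv().unwrap());
    tx.send(Msg::Shutdown).unwrap();
    handle.join().unwrap();
}
```

In the real server the sender half would live in the Hyper Service for the connection, so every request on that connection is funneled to the same context thread.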

When we first started working on Ateles, we tried to use Google V8 but found building it slow and tricky. For Rust, though, the Deno team has a fantastic V8 library that makes this easy to do. The library’s tests are also a great starting point for learning how to use V8; the majority of my V8 code is based on those tests. The code can be found here.


Once I had Fortuna-rs working, I was excited to see how it would perform when building indexes in CouchDB. Initially, I was quite disappointed because there was no noticeable speed improvement at all. After a bit of head-scratching, I realized that the majority of indexing work is done in CouchDB itself, and it wasn’t spending a lot of time mapping the docs with the JS View Server. So, to get a better sense of performance, I built a small benchmark client that would do a better job of applying a larger HTTP load to Ateles and Fortuna, so that I could get a better idea of the performance differences. The benchmark client imitates the mapping process, and you can configure the number of total requests, the number of concurrent requests, and the number of documents to be mapped.
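Those three knobs can be sketched like this (a simplified stand-in, with illustrative names; where the real client issues an HTTP request, this version just bumps a counter):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// Simplified sketch of the benchmark driver: `total` requests are shared
// across `concurrency` worker threads, and each request "maps"
// `docs_per_request` documents. In the real client, the loop body would
// issue an HTTP request to Ateles or Fortuna.
fn run_load(total: usize, concurrency: usize, docs_per_request: usize) -> usize {
    let remaining = Arc::new(AtomicUsize::new(total));
    let mapped = Arc::new(AtomicUsize::new(0));
    let mut workers = Vec::new();
    for _ in 0..concurrency {
        let remaining = Arc::clone(&remaining);
        let mapped = Arc::clone(&mapped);
        workers.push(thread::spawn(move || loop {
            // Atomically claim one request from the shared budget.
            if remaining
                .fetch_update(Ordering::SeqCst, Ordering::SeqCst, |n| n.checked_sub(1))
                .is_err()
            {
                break; // budget spent
            }
            // Stand-in for one HTTP round trip mapping a batch of docs.
            mapped.fetch_add(docs_per_request, Ordering::SeqCst);
        }));
    }
    for w in workers {
        w.join().unwrap();
    }
    mapped.load(Ordering::SeqCst)
}

fn main() {
    // e.g. 1000 total requests, 16 in flight, 50 docs per request
    println!("docs mapped: {}", run_load(1000, 16, 50));
}
```

Keeping the concurrency knob separate from the total-request budget is what lets the client saturate the server far beyond what CouchDB's own indexer does.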

Before I go further, I would like to mention that all the benchmarking was done on my local 6-core iMac. The results are interesting and give a rough indication of the relative performance of the two servers, but they are definitely not 100% accurate.

Below is a graph comparing Ateles and Fortuna mapping large docs and smaller docs. On average, Fortuna was about 60% faster, which was super awesome.


I want to mention again that this was all done locally on my machine, so those numbers are not 100% accurate. Why is Fortuna faster? Rust makes it easy to write fast, performant code; its strict compiler forces you to write code that is memory efficient. I also think Tokio’s scheduler and Hyper do a really good job of handling asynchronous HTTP connections. However, the main reason is most likely that V8 is super fast.

Future Work

The main next steps would be to:

  • Add better test coverage
  • Look at adding simulation testing
  • Timeout if a request takes too long
  • Better error handling for failed requests
  • Investigate creating an async wrapper for V8

This was fun to build, and it was incredibly satisfying seeing the speed improvement. I learned a lot about how JS engines work and about embedding them in a project. If you want to give this a try, take a look at the README on how to get this working with CouchDB.