New npm Registry Architecture
This blog post describes some recent changes to the way that The npm Registry works, and is relevant to you if you’re replicating from the registry CouchDB today.
Fair warning: this post is heavy on technical details, and probably boring, unless you’re really into scaling web services.
tl;dr
- npm, Inc. is now sponsoring the public npm registry. The `isaacs.iriscouch.com` CouchDB is a downstream mirror.
- If you change nothing, everything still works. Your replications might be a few seconds or minutes behind the official database of record.
- To shorten this delay, and to benefit from greater data consistency, you can replicate from https://fullfatdb.npmjs.com/registry instead. The AU and EU mirrors are already pulling from FullfatDB. If you’ve been pulling from Iris Couch, you should probably create a new database that replicates from FullfatDB, since it has a lot less garbage.
- To replicate the data without the attachments, point your replicator at https://skimdb.npmjs.com/registry. If you do this, then tarballs will be fetched from the public URLs. (See the replication sketch just below.)
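If you run CouchDB yourself, kicking off such a replication is a single POST to your local `_replicate` endpoint. Here’s a minimal sketch in Node, assuming a stock CouchDB listening on `localhost:5984` (swap the source URL for FullfatDB if you want the attachments too):

```js
// Minimal sketch: start a continuous pull replication from SkimDB into
// a local CouchDB. Assumes a stock CouchDB on localhost:5984.
var http = require('http');

var body = JSON.stringify({
  source: 'https://skimdb.npmjs.com/registry',
  target: 'registry',
  create_target: true, // make the local db if it doesn't exist yet
  continuous: true     // keep following the changes feed
});

var req = http.request({
  hostname: 'localhost',
  port: 5984,
  path: '/_replicate',
  method: 'POST',
  headers: { 'content-type': 'application/json' }
}, function (res) {
  res.pipe(process.stdout);
});

req.end(body);
```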
Goals
One of the goals of npm, Inc. is to support the public npm registry and the users who depend on it. Open source is what makes npm interesting, and we’re excited to be a part of it.
CouchDB provides a lot of what npm needs out of the box. The replication story is solid; that’s really hard to get right, and CouchDB gets it right. The ability to configure the database in JavaScript is also a big win. The core paradigms of CouchDB are very good for read-heavy workloads like The npm Registry.
However, there are a few aspects of the original registry implementation that are swiftly taking us outside of CouchDB’s comfort zone. Storing every package tarball as an attachment on a CouchDB document is painful. It works fine for a few hundred docs, but at well over a quarter million package versions, this makes view generation and compaction much too onerous, and the file becomes too large to ever successfully compact.
The plan was to figure out a way to get the attachments out of the main CouchDB database, without ruining the experience for those of you currently replicating the data and depending on the tarballs being included.
The best “vendor lock-in” is to provide really good service. If you want to replicate the public npm registry data and take it elsewhere, be our guest. Open source code is by its very nature open, and attempting to restrict access to it only reduces the value it provides. So, continuing to provide low-friction access to the public npm data is also a requirement.
This blog post will tell you what we’ve done so far, and how we’re meeting these goals.
Transitions
Making a change to a production service is always fraught with peril. In order to mitigate the risk, we did things in steps.
In all of these diagrams “Fastly” refers to the Fastly CDN that is fronting the registry. That’s what the npm client actually talks to, unless you’ve configured it to go somewhere else.
As of December of 2013, the registry was structured like this:
The `registry.npmjs.org` domain name is a CNAME to Fastly’s servers. The configuration on Fastly does the following things:
- Any `GET` or `HEAD` request for a tarball goes to Manta. If it’s not found in Manta, then it falls back to the `isaacs.iriscouch.com` CouchDB, operated by Nodejitsu. Tarballs have a high TTL, since they change rarely and benefit greatly from caching.
- Writes (i.e., `PUT`, `POST`, `DELETE`, etc.) and requests for JSON metadata go to the `isaacs.iriscouch.com` CouchDB.
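The real routing rules live in Fastly’s VCL configuration, but the decision logic boils down to something like this sketch (the names and shape here are invented for clarity; this is not the actual config):

```js
// Illustrative sketch of the Fastly routing decision. This is not the
// real VCL config; names and structure are made up for clarity.
function routeRequest(req) {
  var isRead = req.method === 'GET' || req.method === 'HEAD';
  var isTarball = /\.tgz$/.test(req.url);

  if (isRead && isTarball) {
    // Tarballs rarely change, so cache them with a high TTL and serve
    // from Manta, falling back to the CouchDB attachment on a 404.
    return { backend: 'manta', fallback: 'isaacs.iriscouch.com', ttl: 'high' };
  }

  // Writes and JSON metadata requests go straight to the CouchDB.
  return { backend: 'isaacs.iriscouch.com', ttl: 'low' };
}
```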
In order to get a view of the data that is strictly metadata (and thus much faster to generate views from, easier to back up, and so on), I wrote npm-skim-registry, an extension of the mcouch module. Like mcouch, it uploads attachments to Manta, but then it goes a step further and `PUT`s the document into another database (or back into the same one) without the attachments.
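The heart of the skim step is a simple document transformation. Here’s a simplified sketch (the real npm-skim-registry also tracks sequence numbers, retries uploads, and manages document revisions):

```js
// Simplified sketch of the "skim" transformation: once a document's
// tarballs are confirmed to be in Manta, the document is PUT back
// without its _attachments. The real npm-skim-registry does much more
// bookkeeping than this.
function skimDoc(doc) {
  var skimmed = {};
  Object.keys(doc).forEach(function (k) {
    if (k !== '_attachments') skimmed[k] = doc[k];
  });
  return skimmed;
}

// skimDoc({ name: 'foo', _attachments: { 'foo-0.0.1.tgz': {} } })
// => { name: 'foo' }
```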
The first transitional structure looked like this:
This generated the “skim” database. This data also replicates continuously to a few other read-only servers, which can be promoted to master relatively quickly if need be.
You can see the documents in all their attachment-free goodness at https://skimdb.npmjs.com/registry. If you want to replicate documents without attachments, direct your replicators there.
If we didn’t care about backwards compatibility, we could just point the registry at SkimDB and stop. However, that would be a pretty rude thing to do. So, we added another piece: npm-fullfat-registry. This is a module that takes an attachment-free skim registry and puts the attachments back on. You can also provide it with a whitelist if you want to keep certain attachments locally and fetch the others from elsewhere.
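Conceptually, the fullfat step is the inverse of the skim: for each version in a skimmed document, decide whether to re-attach the tarball locally or leave it to be fetched from the public URL. A hypothetical sketch of that decision (this is not the npm-fullfat-registry API):

```js
// Hypothetical sketch of the fullfat whitelist decision, not the
// actual npm-fullfat-registry API. For each version of a package,
// either re-attach the tarball or point at the public tarball URL.
function planAttachments(doc, whitelist) {
  return Object.keys(doc.versions || {}).map(function (v) {
    var file = doc.name + '-' + v + '.tgz';
    return {
      file: file,
      attach: whitelist.indexOf(doc.name) !== -1,
      url: 'https://registry.npmjs.org/' + doc.name + '/-/' + file
    };
  });
}
```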
The second transitional step looked like this:
This was the state of the world for the last several weeks, while we’ve been load testing the SkimDB server and working out the kinks in the data flows.
In that phase, PUTs, DELETEs, and JSON GETs went to `isaacs.iriscouch.com`, and binary GETs went to Manta. The skim daemon pulled the tarballs out, uploaded them to Manta, and then put the attachment-free documents into SkimDB.
Notice that there’s a second skim daemon in this setup, which reads from SkimDB and then writes back to SkimDB. Since all the packages coming into SkimDB are already skimmed, that second daemon isn’t doing anything yet. However, when we publish directly into SkimDB, it becomes important.
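That idle-until-needed behavior falls out naturally from the skim logic: a document that carries no attachments is simply skipped. Something like this check (a sketch of the idea, not the module’s actual code):

```js
// Sketch of why the SkimDB-to-SkimDB daemon is a no-op for documents
// that are already skimmed: there's nothing left to pluck off.
// (A sketch of the idea, not npm-skim-registry's actual code.)
function needsSkim(doc) {
  return Object.keys(doc._attachments || {}).length > 0;
}
```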
The goal, after all, was for publishes to go directly into this system, so that npm, Inc. is the official host of The npm Registry. That’s why we created this company: to support our OSS community in a long-term sustainable way. Providing the registry service is just the first step of many.
Getting Ready
We took some of the CouchDB logs from one of the `isaacs.iriscouch.com` servers, and had siege run a representative stress test on the new SkimDB host. After a bit of resizing and configuring, we found a setup that would stand up to the typical load that the registry sees, plus a generous “round up for safety” multiplier.
Because fake data is never enough to be sure, we also directed a portion of the production read traffic to the SkimDB databases. Unfortunately, at the start of that process, we accidentally introduced a bug in the Fastly config that sent all traffic to Manta (including writes and non-binary GETs), taking down the registry and website for 54 minutes. (I apologize again about that. If you are good at helping to avoid such things, please chat with us.)
Once the error was corrected, we learned some other lessons by gradually dialing up the traffic. The most important lesson was: Never ever use CouchDB in production with an old version of SpiderMonkey. The default CouchDB in SmartOS pkgsrc installs with SpiderMonkey 1.8.0, instead of the current version, 1.8.5.
Given how much CouchDB relies on JavaScript, the difference in speed made it unfit for even modest production load. A single instance could take on about 4% of the production traffic, but any more than that and it would start timing out and falling over.
This was tricky for me personally to track down, due to my own bias: years of developing, debugging, and using Node.js (and CouchDB with modern SpiderMonkey versions) have taught me that JavaScript speed is almost never the problem. In trying to root-cause the latency, we investigated disk IO, Erlang scheduler issues, operating system differences, etc. But it turns out that if you’re using a VM from before the last 5 years of JavaScript VM optimization, JavaScript speed is usually the problem!
Once we had a SpiderMonkey 1.8.5 build, everything went much faster.
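To see why the engine matters so much, consider that every design-document view is a JavaScript function that CouchDB runs over every document whenever the view index is rebuilt. A representative map function might look like this (a sketch in the registry’s style, not its actual design doc):

```js
// Representative CouchDB map function, stored in a design document.
// (A sketch of the kind of view the registry uses, not its actual
// design doc.) SpiderMonkey executes this for every document on each
// view rebuild, so engine speed directly bounds view generation time.
function (doc) {
  if (doc.versions) {
    Object.keys(doc.versions).forEach(function (v) {
      emit([doc.name, v], doc.versions[v].dist.tarball);
    });
  }
}
```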
Anatomy of a Database Server
Each database server runs a vanilla CouchDB 1.5.0 listening on a private port. The CouchDB is fronted by the Pound TLS terminator, which held up the best in our load tests.
It’s worth noting that, when we’re talking about the registry itself, SkimDB is not just one zone. That would be extremely brittle. There is a single master server that all writes go to, but GET and HEAD requests are also served by several read-only replicas, like this:
Note that this is not peer or bi-directional replication, per se. Writes go to a single master, and that master then pushes the info out to all the followers. Our Fastly configuration automatically balances the load, as well as shielding us from the brunt of the Wild, Wild Web. Since PUTs and DELETEs account for well under 1% of the overall traffic, this works well for us.
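In CouchDB terms, the master just holds one continuous push replication per follower, for example via documents in its `_replicator` database. A sketch with a hypothetical hostname:

```js
// Sketch: one continuous push replication per read-only follower,
// created as a document in the master's _replicator database.
// The replica hostname here is hypothetical.
var pushToReplica1 = {
  _id: 'push-to-replica-1',
  source: 'registry',
  target: 'https://replica-1.internal:5984/registry',
  continuous: true
};
// PUT this doc to http://localhost:5984/_replicator on the master.
```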
Each of these is a medium-sized, high-CPU zone running in the Joyent Public Cloud.
The FullfatDB block on the diagrams also has a few read-only hot spares lying around. However, since FullfatDB is strictly a single-purpose read-only replication source, it has considerably different needs. It is configured to not even allow you to request views or log in, removing any temptation to use it as an actual “registry”. So, it’s configured as a large-disk zone, but nothing special in terms of memory or CPU.
Final
As of today, The npm Registry looks like this:
Write operations go directly into SkimDB. When you publish, the skim daemon plucks off the tarball attachment and, once it confirms the upload to Manta, deletes the attachment from the SkimDB database.
As new documents flow into the SkimDB database, the fullfat daemon re-attaches the binary tarballs from Manta and feeds them into the FullfatDB.
The final step to making this work in a backwards-compatible fashion is feeding the fullfat records back into the `isaacs.iriscouch.com` database. Incidentally, this also allows our partner Nodejitsu’s existing npm service to continue functioning without interruption, which was another important goal of the transition.
Not Pictured
There are some pieces that are not covered in this post, for the sake of clarity.
- The npmjs.org website, which is a Node.js application that talks to the registry. It also talks to Redis, resets passwords, verifies email addresses, and handles various other things. The code for that is in the npm-www project.
- The search is powered by Elasticsearch, and uses the npm2es module to feed new package data into the ES search index.
- Logs from Fastly feed into an npm-lylog daemon, which in turn pipes them over to Loggly and a file that gets backed up in Manta.
- Various other little bits and bobs that we’ve been using to check data integrity, re-flush documents through the pipeline, etc.
- The opaque “Iris Couch” block loses a lot of detail, of course. They have load balancers, multiple boxes, etc.
Any of this sound like it’d be fun to work on? We’re hiring.