At npm Inc, we take security extremely seriously… so you know what’s coming next, don’t you? Here’s the TLDR:
- We hired ^Lift to audit the security of npm
- Before they could start, we had a very serious security vulnerability responsibly disclosed by Will Farrington and Charlie Somerville
- We fixed it on February 17th
- We have no evidence that it was exploited
- But we can’t be sure it wasn’t, because our systems weren’t good enough to audit intrusions
- Our systems are now good enough to audit intrusions
- We also fixed a bunch of other less-serious problems found by ^Lift
- We can still do better, and we’re working on that
If you are using the st module in production, upgrade to the latest version.
We knew this was coming
Back in November, when Isaac and I first seriously started talking about how the still-hypothetical npm, Inc. would work, one of the first things I said was “we need to do a security audit”. npm is four years old, and when it was first written the Node community was small, and benevolence could be assumed. That’s no longer the case, but npm is a big codebase, and even though it’s open-source, not every line gets reviewed. We knew the moment we announced the existence of a company, the security spotlight would be on us. That’s how it is, and that’s how it should be. The community should be able to trust us.
The problem is that a professional security audit requires money, and you don’t have money until you’re a company. It’s a classic race condition, and it bit us this way:
- On December 31st, we started talking to ^Lift about a security audit
- On February 11th, we announced our funding
- Also on February 11th, with our funding in the bank, we signed a contract with ^Lift
- Due to existing commitments, ^Lift were unavailable to start until February 14th
- On February 12th, we received a pair of vulnerability reports, one minor, one major, from two engineers at GitHub: Will Farrington on the operations team, and Charlie Somerville on the systems team, working in their spare time.
- D’oh! But also thanks!
The Big Bug
The bug found by Charlie Somerville is a classic “static file leakage” bug: the code that runs the npm website served static files through a module called st. It was possible, through a carefully encoded URL, to get st to serve any file it could see, not just the ones in the static content directory, and you could also list the contents of directories, so it was very easy to go looking for sensitive files.
The files that could potentially have been accessed included a ton of sensitive information: SSL keys, database passwords with read/write access to our production databases, basically everything you never want a third party to see. Somebody with access to the database could replace npm modules with malicious payloads. I don’t want to blur the truth here: this could have been a disaster. It is very similar in scope to the rubygems.org security issue in early 2013, and we are similarly lucky that the effect was not much, much worse. (NB: this paragraph earlier mis-stated the severity of the rubygems.org issue; see our correction.)
Thankfully, there’s no evidence that anyone knew about this hole other than ourselves, the engineers who reported the bugs, and a few members of the GitHub security team. But, in the interests of transparency, we should be clear that we can’t prove that: the logs we kept at the time were not retained long enough for us to be sure nobody had ever accessed sensitive data (though we know nobody did in the month prior to the disclosure). We were just lucky that the first people to find it were friendly and responsible, and we are immensely grateful to Will Farrington and Charlie Somerville for their efforts.
What we did
In addition to patching the hole immediately (it took a few tries, but we were done by February 17th), we changed all the passwords that could have been exposed, and retired the machine that was hosting the vulnerable version of the site, switching to new hardware. We also immediately replaced the SSL certificate on the website with a new one. However, that old SSL certificate was also used by the registry itself, and this is where we made a really dumb mistake: the new certificate worked fine for the website, but when put in front of the registry it broke older versions of npm in a way that caused a whole bunch of pain for people. So we rolled that back, finally replacing the certs on both the website and the registry with a fully backwards-compatible certificate earlier today. Which brings us to a relevant question:
Why did this take so long to disclose?
Simply, it was an administrative problem. The only SSL certificate that would work backwards-compatibly is one from GlobalSign, and because GlobalSign take verification quite seriously, and npm, Inc. was only a week old, normal fast methods of verification didn’t work. There was a lot of back-and-forth of emails with lawyers and signatures and swearings of fact, while Will Farrington and Charlie Somerville sat patiently asking why we hadn’t blogged about this yet, and I tore my hair out every day.
What should I do?
If you are a package maintainer, and you are worried about the integrity of your module, simply publish a new version (or, if you have time, audit the code of the current one). Our infrastructure is, to the best of our knowledge and that of our auditors, secure, so any version published since our fix on the 17th is safe. (You can see the exact publication time of all package versions using npm view package-name-here time)
If you are a Node user in production, you should make sure you’re using the latest available versions, and keep an ear out for any disclosures about vulnerable packages in the coming weeks. (Again: if you’re using st, you should absolutely upgrade to the latest version right now.)
The other bugs
There was one other bug located by the external researchers: Will found our ElasticSearch instance (used by the website) was vulnerable to being shut down and otherwise messed with by third parties. Again, there is no evidence that this happened. If it had, it could have disabled the search feature on the website and/or caused website unavailability, but not much more.
The remaining high-risk bugs were located by the hard-working team at ^Lift as part of the regular, non-emergency audit:
- A user could publish a package they own, but make it appear to have been published by a different user
- Our password-reset flow was vulnerable to targeted phishing attacks
- A user could “star” a package as another user
- Our servers were not sending the HTTP “Strict-Transport-Security” header, which tells browsers to always use HTTPS
- A user could inject scripts into the npm website via the README and license fields
These have all been fixed in the past few weeks, and again in every case, we saw no evidence that anyone other than security researchers was aware of these flaws or had exploited them.
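Two of the fixes above are simple enough to illustrate. The following is a hedged sketch, not npm’s actual code, and the function names are invented for the example: sending the Strict-Transport-Security header so browsers refuse to downgrade to plain HTTP, and escaping user-supplied README text before it reaches the page.

```javascript
// Illustrative sketch only -- not npm's actual fixes; names are made up.

// HSTS: add a Strict-Transport-Security header to HTTPS responses so
// browsers use HTTPS for this host (and subdomains) for the next year.
function addHsts(headers) {
  return Object.assign({}, headers, {
    'Strict-Transport-Security': 'max-age=31536000; includeSubDomains',
  });
}

// XSS: escape user-supplied README/license text before interpolating it
// into the website's HTML, so an embedded <script> tag renders inert.
function escapeHtml(text) {
  return String(text).replace(/[&<>"']/g, (c) => (
    { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' }[c]
  ));
}

console.log(addHsts({})['Strict-Transport-Security']); // "max-age=31536000; includeSubDomains"
console.log(escapeHtml('<script>alert(1)</script>'));  // "&lt;script&gt;alert(1)&lt;/script&gt;"
```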
The ^Lift team also found a number of other, lower-risk bugs, which we have been patching on a non-emergency schedule as part of regular work on the website.
What we’re still working on
In addition to the active vulnerabilities in our codebase and APIs, ^Lift have given us a long list of recommendations for enhancing our security at the software, network, and process levels. We’ll be announcing those improvements as we make them. We have already discussed our plans to allow package maintainers to cryptographically sign their packages, and the npm client to automatically verify those signatures. In the nearer term, we will be creating a security page to allow other researchers to responsibly disclose vulnerabilities.
We’re sorry. We will do better.
It goes without saying, but we’ll say it anyway: we’re sorry these holes existed. Until February of this year, the only person whose job it was to make sure npm was secure was Isaac, and it wasn’t even his full-time job. We are still a small team (we’re hiring!) working hard to keep up with the enormous growth in the popularity of Node, but we are also working hard on enhancing the security of npm. We hope you stick with us during these inevitable growing pains.