Sometimes even we pen-testers find new vulnerabilities. I find it fascinating how hard many tech companies make it to establish meaningful dialog and work through an appropriate disclosure process. This blog post may be more venting or ranting than anything else, but seriously – this is the 21st century.
Not understanding vulnerability disclosure is inexcusable for tech vendors these days, right?

It goes something like this… I’m staring at someone’s network, struggling to make headway. A creative spark gets me thinking about something I’ve never tried before; then I realize that I almost have all the tools I need to do it. If I just write a little glue script here, modify a payload there, it just might work. And what to my wondering eyes should appear but success: I pull off something that feels just a little unique. Just a little magical.

At some point, as I’m thinking about it, I realize that this is a new vulnerability. It’ll probably work at other places that use the same solution, which means they are at risk too. At this point, I have to decide what to do with it. The best thing, of course, is to call the vendor. It may turn out that they have a workaround or subtle feature that can mitigate the risk. Further, they certainly have an incentive to fix a vulnerability, and they need to know as soon as possible.

So when I call up a company at 1-800-COM-PANY to find out who I can talk to about this vulnerability, it’s quite stunning how hard it is to get them to engage. It usually goes something like this:

Stone:    Hi, I'm a security researcher, and I think I found a
          vulnerability in your product.  I couldn't find a security
          contact on your web site.  Is there someone there I can 
          talk to about it?

Phone Op: Wow, um... maybe.  First, though, what's your account
          number?

Stone:    Sorry, I don't have an account number -- I'm an independent
          researcher.  I found this while working with one of our
          mutual clients.

Phone Op: I'm sorry, but we can't provide technical support unless
          you are a customer.

Stone:    I understand, but I'm not asking for technical support.  
          I'm trying to determine whether this is a significant 
          security issue or not.

Phone Op: There's nothing I can do, sir, our policies are designed to
          ensure that our support resources are available for the 
          paying customers.

Stone:    Well, I'm sure there is *someone* at your company who will
          prefer to talk through my findings before I publish them.
          It's in everyone's best interest to fully evaluate it.

Phone Op: I really don't know anything about that...

Et cetera. Maybe they have a supervisor, and maybe I’ll get to talk to someone. It takes days or weeks to finally establish dialog with a company that is clearly not prepared for the possibility that its products have security vulnerabilities. Somehow, I was under the impression that the disclosure wars were settled in the ’90s, but it seems the well-understood process of responsible disclosure only applies to the biggest and oldest tech companies (e.g., I had a wonderful experience discussing a vulnerability with Microsoft… but then, they’re very practiced at it).

The above scenario has happened to me four or five times in the last year. We’ve experimented with asking the client to take point on the process, but there are downsides to that as well (I’ll not waste too much time on the details now, but clients have very different motivations, and the process drags out just as long, if not longer).

But what’s most unsettling is that I feel an ethical obligation to publish. I’ve found myself in the position of knowing about vulnerabilities that I can’t discuss with others yet because responsible disclosure hasn’t worked itself out. I’ll get approached by customers asking what I think about a certain solution, and I am suddenly in an ethical bind. I have to publish, and ensure that the vendor has a fix available, because otherwise I can’t properly help people secure their stuff. And if the issue impacts compliance, then we’re really stuck until there’s a solution.

So here are some (hopefully constructive) thoughts:

  • First, publish a security contact. Your web site’s “contact us” page should list a dedicated security contact. I would not recommend overloading your “support@company.com” address, either. Make it clear that you want to provide a channel for communication with the security community (one lightweight way to do this is sketched just after this list).
  • Second, instruct front-line support personnel so that they can forward an inbound call from a security researcher to the right people. I promise that someone in your company wants to talk to me before I talk to the CVE people.
  • Third, recognize that responsible disclosure means public disclosure at some point. Even big, famous, economically successful companies like Microsoft and RSA get hit with vulnerabilities. Your credibility is not on the line for having a bug; rather, a poor engagement with the infosec community may cause more harm in the end.
  • Fourth, really listen to the researcher. A lot of times, a vulnerability doesn’t look like a buffer overflow or SQLi. It might not fit in the normal bins. But when we pen-testers get something on an assessment that we couldn’t get any other way, we know it’s a security risk. Too many companies want to say, “That’s a defense-in-depth problem,” or “We don’t see the risk in that.” But then, your company isn’t the expert in exploitation either – let the researcher articulate the risk.
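
On that first point: a simple way to publish a security contact is the security.txt convention (standardized as RFC 9116), a small plain-text file served from your site at /.well-known/security.txt. The domain and addresses below are hypothetical placeholders, so treat this as a minimal sketch rather than a template:

    # Served at https://company.example/.well-known/security.txt
    Contact: mailto:security@company.example
    Expires: 2026-12-31T23:59:59Z
    # Optional fields that make a researcher's life easier:
    Encryption: https://company.example/pgp-key.txt
    Policy: https://company.example/security-policy

Even a single monitored mailbox published that way would spare a researcher the 1-800 gauntlet above.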

At the end of the day, they don’t call it work because it’s easy. But even so, many tech companies make it too hard to get responsible disclosure done right.