Sunday, August 13, 2017

But that's not my job!

This week I've been thinking about how security people and non-security people interact. Conversations I have often end up with someone suggesting that everyone needs some sort of security responsibility. My suspicion is this will never work.

First some background to think about. In any organization there are certain responsibilities everyone has. Without using security as our specific example just yet, let's consider how a typical building functions. You have people who are tasked with keeping the electricity working, the plumbing, the heating and cooling. Some people keep the building clean, some take care of the elevators. Some work in the building to accomplish some other task. If the company that inhabits the building is a bank you can imagine the huge number of tasks that take place inside.

Now here's where I want our analogy to start. If I work in a building and I see a leaking faucet, I probably would report it. If I didn't, it's likely someone else would. It's quite possible that I'm one of the electricians and, while accessing some hard-to-reach place, I notice a leaking pipe. It's not my job to fix it; I could tell the plumbers, but they're not very nice to me, so who cares? The last time I told them about a leaking pipe they blamed me for breaking it, so I don't really have an incentive here. If I do nothing, it really won't affect me. If I tell someone, at best it doesn't affect me, but in reality I'll probably get some level of blame or scrutiny.

This almost certainly makes sense to most of us. I wonder if there are organizations where reporting things like this comes with an incentive. A leaking water pipe could cause millions in damage before it's found, yet nowhere I've ever worked had a real incentive to report things like this. If it's not your job, you don't have to care, so nobody ever really does.

Now let's think about phishing in a modern enterprise. You see everything from blaming the user who clicked the link, to laughing at them for being stupid, to maybe even firing someone for losing the company a ton of money. If a user clicks a phishing link and suspects a problem, they have very little incentive to be proactive. It's not their job. I bet the number of clicked phishing links we find out about is much, much lower than the total number clicked.

I also hear security folks talking about educating users on how all this works. Users should know how to spot phishing links! While this won't work for a variety of reasons, at the end of the day it's not their job, so why do we think they should know how to do this? Even more important, why do we think they should care?

The thing I keep wondering is: should this be the job of everyone, or just the job of the security people? I think the quick reaction is "everyone", but my suspicion is it's not. Electricity is a great example. How many stories have you heard of office workers being electrocuted at the office? The number is really low because we've made electricity extremely safe. If we put this in the context of modern security, we have a system where the office is covered in bare wires. Imagine wires hanging from the ceiling, some draped on the floor. The bathroom has sparking wires next to the sink. We lost three interns last week. Those stupid interns! They should have known which wires weren't safe to accidentally touch. It's up to everyone in the office to know which wires are safe and which are dangerous!

This is of course madness, but it's modern day security. Instead of fixing the wires, we just imagine we can train everyone up on how to spot the dangerous ones.

Friday, July 28, 2017

For a security conference where everyone claims not to trust the wifi, there sure was a lot of wifi

I attended Black Hat USA 2017. Elastic had a booth on the floor where I spent a fair bit of time, as well as meetings scattered about the conference center. It was a great time as always, but this year I had a secret with me. I put together a Raspberry Pi that was passively collecting wifi statistics. Just certain metadata; no actual wifi data packets were captured or harmed in the making of this. I then logged everything into Elasticsearch so I could build pretty visualizations in Kibana. I only captured 2.4 GHz data with one radio, so I had it hopping between channels. Obviously I missed plenty of data, but this was really just about looking for interesting patterns.

I put everything I used to make this project go on GitHub. It's really rough though; you've been warned.
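For the curious, the collection side can be sketched roughly like this. This is a hypothetical sketch, not the actual code from the repo: the channel list, dwell time, and document field names are all my assumptions, and a real capture would retune the monitor-mode interface instead of the placeholder comment below.

```python
# Hypothetical sketch of the collector: hop the single 2.4 GHz radio across
# channels and turn each observed frame's metadata (never its payload) into
# a document ready for bulk indexing into Elasticsearch. Field names and the
# dwell time are assumptions, not the repo's actual schema.
import time

CHANNELS_24GHZ = list(range(1, 12))  # US 2.4 GHz channels 1-11

def channel_hopper(channels, dwell=0.5):
    """Round-robin over channels, pausing `dwell` seconds on each."""
    while True:
        for ch in channels:
            # A real capture would retune here, e.g. by shelling out to
            # `iw dev wlan0mon set channel <ch>` (interface name assumed).
            yield ch
            time.sleep(dwell)

def frame_doc(src_mac, ssid=None, channel=None, freq_mhz=None, ts=None):
    """Build one metadata-only document for a single captured frame."""
    mac = src_mac.lower()
    return {
        "@timestamp": ts if ts is not None else time.time(),
        "src": mac,
        "oui": mac.replace(":", "")[:6],  # manufacturer prefix of the MAC
        "ssid": ssid,          # None: not all packets carry SSID data
        "channel": channel,    # None: not all packets carry a channel
        "freq_mhz": freq_mhz,  # but every packet arrives on some frequency
    }
```

The actual sniffing loop (something like scapy or tcpdump feeding frames into `frame_doc`) and the Elasticsearch bulk upload are left out; the point is just that only header fields ever get stored.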

I have a ton of data to mine, I'll no doubt spend a great deal of time in the future doing that, but here's the basic TL;DR picture.

pretty picture

I captured 12.6 million wifi packets. The blue bars show when I captured what, the table shows the SSIDs I saw (not all packets have SSID data), and the colored graph shows which wifi channels were seen (not all packets have channel data either). I also have packet frequencies logged, so all of that can be put together later. The two humps in the wifi data were when I was around the conference. I admit I was surprised by the volume of wifi I saw basically everywhere, even in the middle of the night from my hotel room.

Below is a graph showing the various frequencies I saw, every packet has to come in on some wireless frequency even if it doesn't have a wifi channel.

The devices seen data was also really interesting.

This chart represents every packet seen, so it's clearly going to be a long tail. It's no surprise that an access point sends out a lot of packets, but I didn't expect Apple to be #1 here; I expected the top few to be access point manufacturers. It would seem Apple gear is more popular and noisy than I expected.

A more interesting graph is unique devices seen by manufacturer (as a side note, I saw 77,904 devices in total over my 3 days).

This table is far more useful, as it's totally expected that a single access point will be very noisy. I admit I didn't expect Cisco to make the top 3. But this means Apple made up basically 10% of wifi devices, and the numbers drop off pretty quickly from there.
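As a rough illustration of how a per-manufacturer table like this falls out of the raw packets: dedupe the source MACs, then bucket each device by its OUI prefix. The vendor entries below are illustrative examples only, not a complete table.

```python
# A minimal sketch of the "unique devices by manufacturer" tally, assuming a
# stream of captured source MACs. A real run would load the full IEEE OUI
# registry instead of this tiny example mapping.
from collections import Counter

OUI_VENDORS = {
    "ac:bc:32": "Apple",
    "00:18:0a": "Cisco Meraki",
    "f0:9f:c2": "Ubiquiti",
}

def unique_devices_by_vendor(macs):
    """Count distinct devices (MACs) per manufacturer OUI prefix."""
    unique = {m.lower() for m in macs}  # dedupe: a chatty AP counts once
    return Counter(OUI_VENDORS.get(m[:8], "Unknown") for m in unique)
```

One caveat worth noting with any count like this: devices that randomize their MACs while probing will show up as more "devices" than actually exist.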

There's a lot more interesting data in this set, I just have to spend some time finding it all. I'll also make a point to single out the data specific to business hours. Stay tuned for a far more detailed writeup.

Saturday, July 22, 2017

Security and privacy are the same thing

Earlier today I ran across this post on Reddit
Security but not Privacy (Am I doing this right?)

The poster basically said "I care about security but not privacy".

It got me thinking about security and privacy. There's not really a difference between the two. They are two faces of the same coin, but why that is isn't always obvious in today's information universe. If a site like Facebook or Google knows everything about you, it doesn't mean you don't care about privacy; it means you're putting your trust in those sites. The same sort of trust that makes passwords private.

The first thing we need to grasp is what I'm going to call a trust boundary. I trust you understand trust already (har har har). But a trust boundary is less obvious sometimes. A security (or privacy) incident happens when there is a breach of the trust boundary. Let's just dive into some examples to better understand this.

A web site is defaced
In this example the expectation is the website owner is the only person or group that can update the website content. The attacker crossed a trust boundary that allowed them to make unwanted changes to the website.

Your credit card is used fraudulently
It's expected that only you will be using your credit card. If someone gets your number somehow and starts making purchases with your card, however they got it, a trust boundary was crossed. You could easily put this example in the "privacy" bucket if you wanted to keep them separate; it's likely your card was stolen due to lax security at one of the businesses you visited.

Your wallet is stolen
This one is tricky. The trust boundary is probably your pocket or purse. Maybe you dropped it or forgot it on a counter. Whatever happened the trust boundary is broken when you lose control of your wallet. An event like this can trickle down though. It could result in identity theft, your credit card could be used. Maybe it's just about the cash. The scary thing is you don't really know because you lost a lot of information. Some things we'd call privacy problems, some we'd call security problems.

I used a confusing last example on purpose to help prove my point. The issue is all about who you trust with what. You can trust Facebook and give them tons of information; many of us do. You can trust Google for the same basic reasons. That doesn't mean you don't care about privacy, it just means you have put them inside a certain trust boundary. There are limits to that trust though.

What if Facebook decided to use your personal information to access your bank records? That would be a pretty substantial trust boundary abuse. What if your phone company decided to use the information they have to log into your Facebook account?

A good password isn't all that different from your credit card number. It's a bit of private information that you share with one or more other organizations. You are expecting them not to cross a trust boundary with the information you gave them.

The real challenge is to understand which trust boundaries you're comfortable with. What do you share with whom? Nobody is an island; we must exist in an ecosystem of trust. We all have different boundaries around what we will share, and that's quite all right. If you understand your trust boundaries, making good security/privacy decisions becomes a lot easier.

They say information is the new oil. If that's true then trust must be the currency.

Thursday, July 20, 2017

Summer is coming

I'm getting ready to attend Black Hat. Unfortunately I will miss BSides and Defcon this year due to some personal commitments. As I'm packing up my gear, I started thinking about what these conferences have really changed. We've been doing this every summer for longer than many of us can remember now. We make our way to the desert, we attend talks by what we consider the brightest minds in our industry. We meet lots of people. Everyone has a great time. But what are the actionable changes that come from these things?

The answer is nothing. They've changed nothing.

But I'm going to put an asterisk next to that.

I do think things are getting better, for some definition of better. Technology is marching forward, security is getting dragged along with a lot of it. Some things, like IoT, have some learning to do, but the real change won't come from the security universe.

Firstly we should understand that the world today has changed drastically. The skillset that mattered ten years ago doesn't have a lot of value anymore. Things like buffer overflows are far less important than they used to be. Coding in C isn't quite what it once was. There are many protections built into frameworks and languages. The cloud has taken over a great deal of infrastructure. The list can go on.

The point of such a list is to ask the question, how much of the important change that's made a real difference came from our security leaders? I'd argue not very much. The real change comes from people we've never heard of. There are people in the trenches making small changes every single day. Those small changes eventually pile up until we notice they're something big and real.

Rather than trying to fix the big problems, our time is better spent ignoring the thought leaders and just doing something small. Conferences are important, but not for listening to the leaders. Go find the vendors and attendees who are doing new and interesting things. They are the ones who will make a difference; they are literally the future. Even the smallest bug bounty, feature, or pull request can make a difference. The end goal isn't to be a noisy gasbag; it should be all about being useful.

Saturday, July 8, 2017

Who's got your hack back?

The topic of hacking back keeps coming up these days. There's an attempt to pass a bill in the US that would legalize hacking back. There are many opinions on this topic, I'm generally not one to take a hard stand against what someone else thinks. In this case though, if you think hacking back is a good idea, you're wrong. Painfully wrong.

Everything I've seen up to this point tells me the people who think hacking back is a good idea are either mistaken about the issue or they're misleading others on purpose. Hacking back isn't self defense, it's not about being attacked, it's not about protection. It's a terrible idea that has no place in a modern society. Hacking back is some sort of stone age retribution tribal law. It has no place in our world.

Rather than breaking the various arguments apart, let's think about two examples that exist in the real world.

Firstly, why don't we give the people doing mall security guns? There is one really good reason I can think of here. The insurance company that holds the policy on the mall would never allow the security staff to carry guns. If you let security carry guns, they will use them someday. They'll probably use them in an inappropriate manner, the mall will be sued, and it will almost certainly lose. That doesn't mean the mall has to pay a massive settlement; it means the insurance company has to pay a massive settlement. They don't want to do that. Even if some crazy law claims it's not illegal to hack back, no sane insurance company will allow it. I'm not talking about cyber insurance here, just general policies.

The second example revolves around shoplifting. If someone is caught stealing from a store, does someone go to their house and take some of their stuff in retribution? They don't of course. Why not? Because we're not cave people anymore. That's why. Retribution style justice has no place in a modern civilization. This is how a feud starts, nobody has ever won a feud, at best it's a draw when they all kill each other.

So this has me really thinking. Why would anyone want to hack back? There aren't many reasons that don't revolve around revenge. The way most attacks work you can't reliably know who is doing what with any sort of confidence. Hacking back isn't going to make anything better. It would make things a lot worse. Nobody wants to be stuck in the middle of a senseless feud. Well, nobody sane.

Sunday, June 25, 2017

When in doubt, blame open source

If you've not read my previous post on thought leadership, go do that now, this one builds on it. The thing that really kicked off my thinking on these matters was this article:

Security liability is coming for software: Is your engineering team ready?

The whole article is pretty silly, but the bit about liability and open source is the real treat. There's some sort of special consideration when you use open source, apparently; we'll get back to that. Right now there is basically no liability of any sort when you use software, and I doubt there will be anytime soon. Liability laws are tricky, but the lawyers I've spoken with have been clear that software isn't currently covered in most instances. The whole article is basically nonsense in that respect. The people they interview set the stage for liability and responsibility, then seem to discuss how open source should be treated as special in this context.

Nothing is special, open source is no better or worse than closed source software. If you build something why would open source need more responsibility than closed source? It doesn't of course, it's just an easy target to pick on. The real story is we don't know how to deal with this problem. Open source is an easy boogeyman. It's getting picked on because we don't know where else to point the finger.

The real problem is we don't know how to secure our software in an acceptable manner. Trying to talk about liability and responsibility is fine, nobody is going to worry about security until they have to. Using open source as a discussion point in this conversation clouds it though. We now get to shift the conversation from how do we improve security, to blaming something else for our problems. Open source is one of the tools we use to build our software. It might be the most powerful tool we've ever had. Tools are never the problem in a broken system even though they get blamed on a regular basis.

The conversation we must have revolves around incentives. There is no incentive to build secure software. Blaming open source or talking about responsibility are just attempts to skirt the real issue. We have to fix our incentives. Liability could be an incentive, regulation can be an incentive. User demand can be an incentive as well. Today the security quality of software doesn't seem to matter.

I'd like to end this by saying we should make an effort to have more honest discussions about security incentives, but I don't think that will happen. As I mentioned in my previous blog post, our problem is a lack of leadership. Even if we fix security incentives, I don't see things getting much better under current leadership.

Saturday, June 17, 2017

Thought leaders aren't leaders

For the last few weeks I've seen news stories and much lamenting on Twitter about the security skills shortage. Some say there is no shortage, some say it's horrible beyond belief. Basically there's someone arguing every possible side of this. I'm not going to debate whether there is or isn't a worker shortage; that's not really the point. A lot of the complaining was done by people who would call themselves leaders in the security universe. I then read the below article and changed my thinking a bit.

Our problem isn't a staff shortage. Our problem is we don't have any actual leaders. I mean people who aren't just "in charge". Real leaders aren't just in charge; they help their people grow in a way that accomplishes their vision. Virtually everyone in the security space has spent their entire career working alone to learn new things. We are not an industry known for working together, and the thing I'd never really thought about before is that if we never work together, we never really care about anyone or anything (except ourselves). The security people who are in charge of other security people aren't motivating anyone, which by definition means they're not accomplishing any sort of vision. This holds true for most organizations, since barely keeping the train on the track is pretty much the best-case scenario.

If I were to guess, the HR people look at most security groups and see the same dumpster fire we see when we look at IoT.

In the industry today, virtually everyone who is seen as some sort of security leader is what a marketing person would call a "thought leader". Thought leaders aren't leaders. Some have talent. Some had talent. And some just own a really nice suit. It doesn't matter though. What we end up with is a situation where the only thing anyone worries about is how many Twitter followers they have, instead of making a real difference. You make a real difference when you coach and motivate someone else to do great things.

Being a leader with loyal employees would be a monumental step for most organizations. We have no idea who to hire and how to teach them because the leaders don't know how to do those things. Those are skills real leaders have and real leaders develop in their people. I suspect the HR department knows what's wrong with the security groups. They also know we won't listen to them.

There is a security talent shortage, but it's a shortage of leadership talent.