Saturday, July 22, 2017

Security and privacy are the same thing

Earlier today I ran across this post on Reddit
Security but not Privacy (Am I doing this right?)

The poster basically said "I care about security but not privacy".

It got me thinking about security and privacy. There's not really a difference between the two; they are two faces of the same coin, but why that is isn't always obvious in today's information universe. If a site like Facebook or Google knows everything about you, it doesn't mean you don't care about privacy. It means you're putting your trust in those sites, the same sort of trust that makes passwords private.

The first thing we need to grasp is what I'm going to call a trust boundary. I trust you understand trust already (har har har). But a trust boundary is less obvious sometimes. A security (or privacy) incident happens when there is a breach of the trust boundary. Let's just dive into some examples to better understand this.

A web site is defaced
In this example the expectation is that the website owner is the only person or group that can update the website content. The attacker crossed a trust boundary, which allowed them to make unwanted changes to the website.

Your credit card is used fraudulently
It's expected that only you will be using your credit card. If someone gets your number somehow and starts to make purchases with your card, a trust boundary was crossed however they got the number. You could easily put this example in the "privacy" bucket if you wanted to keep the two separate; it's likely your card was stolen due to lax security at one of the businesses you visited.

Your wallet is stolen
This one is tricky. The trust boundary is probably your pocket or purse. Maybe you dropped it, or forgot it on a counter. Whatever happened, the trust boundary is broken when you lose control of your wallet. An event like this can trickle down though. It could result in identity theft, or your credit card could be used fraudulently. Maybe it's just about the cash. The scary thing is you don't really know, because you lost a lot of information. Some of these we'd call privacy problems, some we'd call security problems.

I used a confusing last example on purpose to help prove my point. The issue is all about who you trust with what. You can trust Facebook and give them tons of information; many of us do. You can trust Google for the same basic reasons. That doesn't mean you don't care about privacy, it just means you have put them inside a certain trust boundary. There are limits to that trust though.

What if Facebook decided to use your personal information to access your bank records? That would be a pretty substantial trust boundary abuse. What if your phone company decided to use the information they have to log into your Facebook account?

A good password isn't all that different from your credit card number. It's a bit of private information that you share with one or more other organizations. You are expecting them not to cross a trust boundary with the information you gave them.

The real challenge is to understand what trust boundaries you're comfortable with. What do you share, and with whom? Nobody is an island; we must exist in an ecosystem of trust. We all have different boundaries around what we will share, and that's quite all right. If you understand your trust boundaries, making good security/privacy decisions becomes a lot easier.

They say information is the new oil. If that's true then trust must be the currency.

Thursday, July 20, 2017

Summer is coming

I'm getting ready to attend Black Hat. I will unfortunately miss BSides and Defcon this year due to some personal commitments. As I was packing up my gear, I started thinking about what these conferences have really changed. We've been doing this every summer for longer than many of us can remember now. We make our way to the desert, we attend talks by what we consider the brightest minds in our industry. We meet lots of people. Everyone has a great time. But what are the actionable outcomes that come from these events?

The answer is nothing. They've changed nothing.

But I'm going to put an asterisk next to that.

I do think things are getting better, for some definition of better. Technology is marching forward, security is getting dragged along with a lot of it. Some things, like IoT, have some learning to do, but the real change won't come from the security universe.

Firstly, we should understand that the world today has changed drastically. The skillset that mattered ten years ago doesn't have a lot of value anymore. Things like buffer overflows are far less important than they used to be. Coding in C isn't quite what it once was. There are many protections built into frameworks and languages. The cloud has taken over a great deal of infrastructure. The list can go on.

The point of such a list is to ask the question, how much of the important change that's made a real difference came from our security leaders? I'd argue not very much. The real change comes from people we've never heard of. There are people in the trenches making small changes every single day. Those small changes eventually pile up until we notice they're something big and real.

Rather than trying to fix the big problems, our time is better spent ignoring the thought leaders and just doing something small. Conferences are important, but not for listening to the leaders. Go find the vendors and attendees who are doing new and interesting things. They are the ones who will make a difference; they are literally the future. Even the smallest bug bounty, feature, or pull request can make a difference. The end goal isn't to be a noisy gasbag; it should be all about being useful.



Saturday, July 8, 2017

Who's got your hack back?

The topic of hacking back keeps coming up these days. There's an attempt to pass a bill in the US that would legalize hacking back. There are many opinions on this topic, I'm generally not one to take a hard stand against what someone else thinks. In this case though, if you think hacking back is a good idea, you're wrong. Painfully wrong.

Everything I've seen up to this point tells me the people who think hacking back is a good idea are either mistaken about the issue or they're misleading others on purpose. Hacking back isn't self defense, it's not about being attacked, it's not about protection. It's a terrible idea that has no place in a modern society. Hacking back is some sort of stone age retribution tribal law. It has no place in our world.

Rather than break the various arguments apart, let's think about two examples that exist in the real world.

Firstly, why don't we give the people doing mall security guns? There is one really good reason I can think of here: the insurance company that holds the policy on the mall would never allow the security staff to carry guns. If you let security carry guns, they will use them someday. They'll probably use them in an inappropriate manner, the mall will be sued, and they will almost certainly lose. That doesn't mean the mall has to pay a massive settlement; it means the insurance company has to pay a massive settlement. They don't want to do that. Even if some crazy law claims it's not illegal to hack back, no sane insurance company will allow it. I'm not talking about cyber insurance, I'm just talking about general policies here.

The second example revolves around shoplifting. If someone is caught stealing from a store, does someone go to their house and take some of their stuff in retribution? They don't, of course. Why not? Because we're not cave people anymore. That's why. Retribution-style justice has no place in a modern civilization. This is how a feud starts, and nobody has ever won a feud; at best it's a draw when everyone involved kills each other.

So this has me really thinking. Why would anyone want to hack back? There aren't many reasons that don't revolve around revenge. The way most attacks work you can't reliably know who is doing what with any sort of confidence. Hacking back isn't going to make anything better. It would make things a lot worse. Nobody wants to be stuck in the middle of a senseless feud. Well, nobody sane.

Sunday, June 25, 2017

When in doubt, blame open source

If you've not read my previous post on thought leadership, go do that now, this one builds on it. The thing that really kicked off my thinking on these matters was this article:

Security liability is coming for software: Is your engineering team ready?

The whole article is pretty silly, but the bit about liability and open source is the real treat. There's apparently some sort of special consideration when you use open source; we'll get back to that. Right now there is basically no liability of any sort when you use software, and I doubt there will be anytime soon. Liability laws are tricky, but the lawyers I've spoken with have been clear that software isn't currently covered in most instances. The whole article is basically nonsense in that respect. The people they interview set the stage for liability and responsibility, then seem to discuss how open source should be treated as special in this context.

Nothing is special; open source is no better or worse than closed source software. If you build something, why would using open source carry more responsibility than using closed source? It doesn't, of course; it's just an easy target to pick on. The real story is we don't know how to deal with this problem. Open source is an easy boogeyman. It's getting picked on because we don't know where else to point the finger.

The real problem is we don't know how to secure our software in an acceptable manner. Trying to talk about liability and responsibility is fine, nobody is going to worry about security until they have to. Using open source as a discussion point in this conversation clouds it though. We now get to shift the conversation from how do we improve security, to blaming something else for our problems. Open source is one of the tools we use to build our software. It might be the most powerful tool we've ever had. Tools are never the problem in a broken system even though they get blamed on a regular basis.

The conversation we must have revolves around incentives. There is no incentive to build secure software. Blaming open source or talking about responsibility are just attempts to skirt the real issue. We have to fix our incentives. Liability could be an incentive, regulation can be an incentive. User demand can be an incentive as well. Today the security quality of software doesn't seem to matter.

I'd like to end this saying we should make an effort to have more honest discussions about security incentives, but I don't think that will happen. As I mention in my previous blog post, our problem is a lack of leadership. Even if we fix security incentives, I don't see things getting much better under current leadership.

Saturday, June 17, 2017

Thought leaders aren't leaders

For the last few weeks I've seen news stories and much lamenting on Twitter about the security skills shortage. Some say there is no shortage, some say it's horrible beyond belief. Basically there's someone arguing every possible side of this. I'm not going to debate whether there is or isn't a worker shortage; that's not really the point. A lot of the complaining was done by people who would call themselves leaders in the security universe. I then read the below article and changed my thinking a bit.


Our problem isn't a staff shortage. Our problem is we don't have any actual leaders. I mean people who aren't just "in charge". Real leaders aren't just in charge; they help their people grow in a way that accomplishes their vision. Virtually everyone in the security space has spent their entire career working alone to learn new things. We are not an industry known for working together, and the thing I'd never really thought about before is that if we never work together, we never really care about anyone or anything (except ourselves). The security people who are in charge of other security people aren't motivating anyone, which by definition means they're not accomplishing any sort of vision. This holds true for most organizations, since barely keeping the train on the track is pretty much the best case scenario.

If I had to guess, the existing HR people look at most security groups and see the same dumpster fire we see when we look at IoT.

In the industry today virtually everyone who is seen as some sort of security leader is what a marketing person would call a "thought leader". Thought leaders aren't leaders. Some do have talent. Some had talent. And some just own a really nice suit. It doesn't matter though. What we end up with is a situation where the only thing anyone worries about is how many Twitter followers they have instead of making a real difference. You make a real difference when you coach and motivate someone else to do great things.

Being a leader with loyal employees would be a monumental step for most organizations. We have no idea who to hire and how to teach them because the leaders don't know how to do those things. Those are skills real leaders have and real leaders develop in their people. I suspect the HR department knows what's wrong with the security groups. They also know we won't listen to them.

There is a security talent shortage, but it's a shortage of leadership talent.

Sunday, June 11, 2017

Humanity isn't proactive

I ran across this article about IoT security the other day

The US Needs to Get Serious About Securing the Internet of Hackable Things

I find articles like this frustrating for the simple fact that everyone keeps talking about security, but nobody is going to do anything. If you look at the history of humanity, we've never been proactive when dealing with problems. We wait until things can't get worse and the only actual option is to fix the problem. If you look at any problem there are at least two options. Option #1 is always "fix it". Option #2 is to ignore it. There could be more options, but generally we pick #2 because it's the least amount of work in the short term. Humanity rarely cares about the long term implications of anything.

I know this isn't popular, but I'm going to say it: we aren't going to fix IoT security for a very long time.

I really wish this wasn't true, but it just is. If a senator wants to pretend they're doing something but they're really just ignoring the problem, they hold a hearing and talk about how horrible something is. If they actually want to fix it they propose legislation. I'm not blaming anyone in charge mind you. They're really just doing what they think the people want. If we want the government to fix IoT we have to tell them to do it. Most people don't really care because they don't have a reason to care.

Here's the second point that I suspect many security people won't want to hear. The reason nobody cares about IoT security isn't because they're stupid. This is the narrative we've been telling ourselves for years. They don't care because the cost of doing nothing is substantially less than fixing IoT security. We love telling scary campfire stories about how the botnet was coming from inside the house and how a pacemaker will kill grandpa, but the reality is there hasn't been enough real damage done yet from insecure IoT. I'm not saying there won't ever be, there just hasn't been enough expensive widespread damage done yet to make anyone really care.

In a world filled with insecurity, adding security to your product isn't a feature anyone really cares about. I've been doing research on topics such as pollution, mine safety, auto safety, airline safety, and a number of other problems from our past. There are no good examples of humans deciding to be proactive and solve a problem before it became absolutely horrible. People need a reason to care, and there isn't a reason for IoT security.

Yet.

Someday something might happen that makes people start to care. As we add compute power to literally everything my security brain says there is some sort of horrible doom coming without security. But I've also been saying this for years and it's never really happened. There is a very real possibility that IoT security will just never happen if things never get bad enough.

Sunday, June 4, 2017

Free Market Security

I've been thinking about the concept of free market forces this weekend. The basic idea here is that the price of a good is decided by the supply and demand of the market. If the market demands something that's in short supply, the price will go up. This is basically why the Nintendo Switch is still selling on eBay for more than it would cost in the store: there is demand but there isn't supply. But back to security. Let's think about something I'm going to call "free market security". What if supply and demand were driving security? Or we can flip the question around: what if the market will never drive security?

Of course security isn't really a thing like we think of goods and services in this context. At best we could call it a feature of another product. You can't buy security to add it to your products, it's just sort of something that happens as part of a larger system.

I'm leaning in the direction of secure products. Let's pick on mobile phones because that environment is really interesting. Is the market driving security into phones? I'd say the answer today is a giant "no". Most people buy phones that will never see a security update. They don't even ask about updates or security in most instances. You could argue they don't know this is even a problem.

Apple is the leader here by a wide margin. They have invested substantially into security, but why did they do this? If we want to think about market forces and security, what's the driver? If Apple phones were less secure would the market stop buying them? I suspect the sales wouldn't change at all. I know very few people who buy an iPhone for the security. I know zero people outside of some security professionals who would ever think about this question. Why Apple decided to take these actions is a topic for another day I suspect.

Switching gears, the Android ecosystem is pretty rough in this regard. The vast majority of phones sold today are Android phones. Android phones are competitively priced, all have similar hardware, and almost all of them are completely insecure. People still buy them though. Security is clearly not a feature that's driving anything in this market. I bought a Nexus phone because of security, this one single feature. I am clearly not the norm here though.

The whole point we should be thinking about is the idea of a free market for security. It doesn't exist, and it probably won't exist. I see it like pollution. There isn't a very large market for products that either don't pollute, or are made without polluting in some way. I know there are some people who worry about sustainability, but the vast majority of consumers don't really care. In fact nobody really cared about pollution until a river actually lit on fire. There are still some who don't, even after a river lit on fire.

I think there are many of us in security who keep waiting for demand for more security to appear. We keep watching and waiting; any day now everyone will see why this matters! It's not going to happen though. We do need security more and more each day. The way everything is heading, things aren't looking great. I'd like to think we won't have to wait for the security equivalent of a river catching on fire, but I'm pretty sure that's what it will take.