Thursday, May 27, 2010

It's the Perimeter, Stupid

Some years ago I allowed myself to be lulled into a false sense of security by people I worked with.  When I would raise a security point, they would counter it with the perimeter argument: sure, we're putting sensitive data on the wire unencrypted, but it's our private wire, inside the perimeter.

I bought it.  After all, if the network folks have sufficiently secured the corporate network, then we don't have to worry very much about packet sniffing.  They hold the perimeter, so as long as our traffic stays inside the firewall, there's no need for additional protection.  This mentality is pervasive among corporate developers.  Why?  Well, I think partly because we want to believe it.  It makes our lives easier.

But if we're honest (and not totally ignorant), we're forced to acknowledge that even if there were no such thing as the inside threat, this argument is embarrassingly, obviously, wildly wrong.

Last post I mentioned that someone once asked me, with respect to the horrendous security situation we're in, "how did it get this way?"

This is exactly how.  Naive, pollyannaish assumptions about the environment our applications are running in and the kind of people who will be abusing them.  These assumptions are at the heart of nearly every grave security problem, from the wide-open network protocols of the internet, to unencrypted account data, to the XSS vulns in our apps, to insecure default OS configurations.

I'm at least as guilty as anyone else, having bought into the perimeter fallacy some time ago.  But I've seen the light again in recent years.  We can't keep this up or we're in serious trouble.


The other day, I was working on an application that needed to make use of a service over HTTP.  The payload would contain sensitive data, even social security numbers at times.  When working in the development environment, the developer of the service forwarded me a URL with the http scheme.

I pointed this out to another developer, who rightly interrupted me to insist, emphatically, that we use HTTPS.  Right on.  But even this developer was not all that worried about it, citing the perimeter fallacy.
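And for the record, doing the right thing here is cheap.  Here's a minimal Python sketch (the endpoint and payload are made up) that refuses to send the sensitive payload over anything but TLS:

    import urllib.request

    # Hypothetical endpoint -- substitute the real service URL.
    url = "https://internal-service.example.com/lookup"
    payload = b'{"ssn": "REDACTED"}'

    # Refuse to put sensitive data on the wire in the clear, perimeter or not.
    assert url.startswith("https://"), "sensitive payload requires TLS"

    # Modern Python verifies the server certificate against the system
    # trust store by default, so a plain urlopen gets us encryption plus
    # server authentication.
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(resp.status, resp.read()[:200])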


Now don't get me wrong, I perfectly understand the dilemma.  Developers don't want to think about security.  It's generally interesting only to developers who happen to be interested in security anyway.  Developers are far more interested in spending their brain power creating useful new things and solving the problems at hand.

This is why I think that we need to start taking security out of the hands of developers, so that developers won't have to care about security in order to create relatively secure applications.  How to do that?  That's for another time.  I've already gone on too long.

In the meantime, we must start insisting on more secure development practices in our organizations.  We're negligent if we don't.

The Internet Must Die

The Internet Must Die.  It must cease to be what it is and be reborn. 

A few months ago I had a conversation with a man who had left an IT career to enter religious life.  At one point in the conversation he asked me about the state of security in the industry.

"Absolutely horrible," I told him.  "We are living in the equivalent of the wild west." 

After hearing some details he was astonished.  "How did it get this way?" he asked.

How it got this way is simple.  Getting into trouble is always a very easy thing to do.  Getting out of it is the hard part. 

There are two primary reasons that our systems are so insecure.

First, nearly everything is on the net (even things that don't need to be), and nearly every protocol on which the net operates is intrinsically insecure.

ARP, BGP, DNS, DHCP, TCP, IP...  The list goes on.  These are insecure protocols, designed for a pollyannaish Garden of Eden where even the snake has no interest in exploiting vulnerabilities.
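DNS makes the point concrete.  A minimal Python sketch of a hand-built query (sent here to Google's public resolver) shows that the only things binding a reply to a request are a 16-bit transaction ID and a UDP port.  There is no authentication at all:

    import os
    import socket
    import struct

    # Hand-build a DNS query for example.com (A record).
    txid = os.urandom(2)                                # 16-bit transaction ID
    flags = struct.pack(">HHHHH", 0x0100, 1, 0, 0, 0)   # RD set, one question
    qname = b"".join(bytes([len(p)]) + p
                     for p in b"example.com".split(b".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)         # QTYPE=A, QCLASS=IN

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3)
    sock.sendto(txid + flags + question, ("8.8.8.8", 53))
    reply, addr = sock.recvfrom(512)

    # The client accepts whichever well-formed reply arrives first with a
    # matching ID -- anyone who can race the real server gets believed.
    print("reply from", addr, "txid matches:", reply[:2] == txid)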

A foundation so weak cannot support a robust, secure network.  But it is precisely this foundation, this set of protocols, that makes the internet what it is.

It therefore must cease to be what it is.  It must be replaced by, or morphed into, a network built on a new set of secure protocols. It must die and be reborn.  Hence the title of this post.

But I said there are two primary reasons why things are so bad, and the second one is actually the bigger problem.  Most of the software running on the net (web apps, or other apps running on internet-connected devices) was also developed -- and continues to be developed -- with the same naive outlook as that of the protocol designers.  The same pollyannaish Eden.

Every developer I know (myself included) has been guilty of neglecting to harden our apps, if not routinely, then more than once.  And even when we developers actually pay attention to security, the average developer, in my experience, does not have adequate knowledge and skill to avoid, detect and prevent vulnerabilities anyway.

Yeah, yeah, I know, most of us don't want to have to spend time thinking about security.  We just want to get the app working. But we can't afford to take that attitude anymore.  [In my next post, I'll take a look at one fallacy that is partly to blame for our lax disposition toward security.]

But if we continue to be lax, if we refuse to start thinking about how to secure our apps and to actually do so for every app we build, it won't be long before software developers need malpractice insurance, because we're going to start getting sued.

It won't be long before companies start being held liable when their apps get hacked and begin spreading trojans and botnets.  I'm only surprised it hasn't happened already.


Saturday, May 15, 2010

HIDS

I've been working with some host-based IDS software on my (Linux) laptop lately.  It had been bothering me for quite a while that I didn't have visibility into my system.  If I were ever to suspect I'd been owned, how would I know what files had changed on my drive?  Especially on the Windows boxes I help support (mom and dad's), AV's failure to detect anything is not proof that you're clean; new derivatives won't be seen by AV.  And even when you know with certainty that you've been infected, how can you account for everything that's been altered?  How can you confidently recover?  You can't.

Without HIDS, you're totally blind.

Actually, even with HIDS you're not omniscient; you're just not totally blind.  But some visibility is better than none.  And so I've toyed around with two HIDS on my laptop: Tripwire and OSSEC.
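The file-integrity half of what these tools do is simple enough to sketch.  Here's a toy Python version (an illustration of the idea, not a substitute for a real HIDS): hash everything under a directory into a baseline, then diff against it later.

    import hashlib
    import json
    import os
    import sys

    def snapshot(root):
        """Map each file path under root to its SHA-256 digest."""
        sums = {}
        for dirpath, _dirs, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        sums[path] = hashlib.sha256(f.read()).hexdigest()
                except OSError:
                    pass  # unreadable/vanished file; a real HIDS would flag it
        return sums

    # usage: python integrity.py baseline /etc   (writes baseline.json)
    #        python integrity.py check /etc      (diffs against baseline.json)
    if __name__ == "__main__":
        mode, root = sys.argv[1], sys.argv[2]
        if mode == "baseline":
            with open("baseline.json", "w") as db:
                json.dump(snapshot(root), db)
        else:
            with open("baseline.json") as db:
                old = json.load(db)
            new = snapshot(root)
            for path in sorted(set(old) | set(new)):
                if old.get(path) != new.get(path):
                    print("CHANGED:", path)

The catch is that the baseline itself then becomes the thing you have to protect and keep current, which is exactly where the trouble starts.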

OSSEC is nice because it runs on multiple platforms -- you can use it on Linux and Windows (and Mac too, I think).  It also offers more than simple file checksumming, including log analysis, rootkit detection, and active response (it can update firewall rules when it sees an attack).  One big problem I've had so far is that I can't figure out how to get it to give me alerts in any way other than email (and the documentation is fairly sparse).  Email alerts would actually be great, but so far I can't get them to work.  So I have to manually inspect the alert log.  Not good.
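In the meantime, a few lines of Python can at least follow the alert log for me (the path below is OSSEC's default on Linux, if memory serves; reading it usually requires root or membership in the ossec group):

    import subprocess
    import time

    ALERTS = "/var/ossec/logs/alerts/alerts.log"  # default location; adjust for your install

    with open(ALERTS) as f:
        f.seek(0, 2)  # jump to end of file; only surface alerts from now on
        while True:
            line = f.readline()
            if not line:
                time.sleep(5)
                continue
            print(line, end="")
            # Each alert record starts with "** Alert"; pop a desktop
            # notification for those (requires notify-send / libnotify).
            if line.startswith("** Alert"):
                subprocess.call(["notify-send", "OSSEC alert", line.strip()])

Crude (it ignores log rotation), but better than remembering to cat the file.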

This is still a work in progress, so I might eventually figure it out.

Tripwire was the first HIDS I tried, and because of the problems with OSSEC, I'm keeping it going.  The problem with Tripwire is that it's a bit cumbersome.  I have it set up to run periodically as a cron job, which works smoothly.  The problem is that I have so many software updates (usually at least two security updates per week) that keeping the Tripwire database current is turning into a lot of work.

Another problem I'm having with Tripwire is that it keeps flagging files I think it shouldn't -- despite the rules I've configured (probably incorrectly) -- producing false positives.  Log files, for example.  I really don't care that they've changed; we want them to change constantly.

All of these changes and false positives increase the risk that I'll miss an illegitimate file diff and accept it into the Tripwire checksum db as legitimate file state.  Then I find myself guarding malice.

I'll keep toying with this, and I might give some other HIDS a try.  So far it's better than nothing, but I'm not as happy with the experience as I'd hoped to be.

Friday, May 14, 2010

Oracle Packages and Stored Procs

Today I needed to look at an Oracle package, but the only client tool I had access to was SQuirreL SQL.

You can see the source code for a package (or stored proc or function) by querying any of these views: all_source, user_source, dba_source.  Order by the line column, or the rows can come back out of order:
select text from all_source where type='PACKAGE BODY' and name='MYPACKAGE' order by line

or, to see the entire package (spec and body):

select text from all_source where name='MYPACKAGE' order by type, line


More Windows Stuff

Not sure where this came from, but found it in my notes...
The following command line outputs the list of running processes (with the complete command-line arguments used for each process) to a text file:
    [This only works for XP Pro, not XP Home]
    Click Start, Run and type CMD

    Type the command given below exactly:

    WMIC /OUTPUT:C:\ProcessList.txt PROCESS get Caption,Commandline,Processid

    or

    WMIC /OUTPUT:C:\ProcessList.txt path win32_process get Caption,Processid,Commandline

    Now, open the file C:\ProcessList.txt. You can see the details of all the processes in that file.
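    If WMIC isn't available (XP Home, say), a rough equivalent is possible in Python with the third-party psutil library.  This sketch writes the same kind of listing:

    import psutil  # third-party: pip install psutil

    # Roughly equivalent to the WMIC listing above: name, PID, command line.
    with open(r"C:\ProcessList.txt", "w") as out:
        for proc in psutil.process_iter():
            try:
                out.write("%s\t%d\t%s\n" % (proc.name(), proc.pid,
                                            " ".join(proc.cmdline())))
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                pass  # some system processes hide their command lines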

More command line stuff

I ran across this again today.  Posting it here for the usual reasons...

When a command prompt (cmd.exe) console is opened, it checks the following STRING values in the registry to see if any commands should be executed:
[HKEY_CURRENT_USER\SOFTWARE\Microsoft\Command Processor]
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Command Processor]
Autorun = "prompt [%computername%]$S$P$G && COLOR 0A && CD C:\"
Note that the commands above change the prompt text and color, and set the default path/directory to the root of the C: drive whenever a command prompt is opened.

Note that to specify several commands, you separate them with &&, or point Autorun at a batch file, Autoexec.bat-style.

Note that cmd has a switch to disable Autorun execution: cmd /d
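Since Autorun runs every time cmd.exe starts, it's also a convenient persistence spot for malware, so it's worth auditing.  A small Python sketch (using the standard winreg module) that prints whatever is set in either hive:

    import winreg

    # The Autorun value, if present, runs every time cmd.exe starts.
    for hive, hive_name in ((winreg.HKEY_CURRENT_USER, "HKCU"),
                            (winreg.HKEY_LOCAL_MACHINE, "HKLM")):
        try:
            with winreg.OpenKey(hive, r"Software\Microsoft\Command Processor") as key:
                value, _type = winreg.QueryValueEx(key, "Autorun")
                print("%s Autorun = %r" % (hive_name, value))
        except OSError:
            print("%s: no Autorun value set" % hive_name)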