Saturday, March 26, 2011

Passive Network Monitoring of Strong Authentication

There’s been a fair amount of consternation and FUD concerning the effectiveness of “strong authentication” in defending against APT. For example, in their M-Trends 2011 report, Mandiant demonstrated how smart cards are being subverted. If that isn’t bad enough, RSA has recently revealed that they’ve been the victim of attacks, which they attribute to APT, that resulted in attackers getting access to information that may weaken the effectiveness of SecurID.

Unfortunately, like most people blogging about these issues, I can’t provide any authoritative information on the topic beyond saying that, based on my personal experience, targeting and subverting strong authentication mechanisms is common practice for some targeted, persistent attackers. It’s hard to predict the impact of any of these weaknesses. Additionally, people who have found out the hard way usually aren’t particularly open about sharing their hard knocks.

Nevertheless, I’d like to make the case for passive network monitoring as a method for helping to audit authentication, especially strong authentication mechanisms. While auditing is more properly conducted using logs provided by the devices that actually perform authentication (and authorization, access control, etc. if you want to be pedantic), there are real operational and organizational issues that may well make passive network monitoring one of the most effective means of gathering the information necessary to audit strong authentication.

The vast majority of password-based authentication mechanisms bundle the username with the password and provide both to the server, either in the clear or encrypted. It is possible to provide the username in the clear and the password encrypted, which would improve monitoring capabilities at the possible expense of privacy. In general, this bundling of credentials is done because confidentiality is provided by mechanisms that operate at a different layer of the stack: e.g. a username and password sent through an SSL tunnel.
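As a concrete illustration of this bundling, consider HTTP Basic authentication: the username and password are joined and base64-encoded into a single token, readable by anyone on the path unless the connection is wrapped in SSL (the credentials below are made up, of course):

$ printf 'jdoe:s3cret' | base64
amRvZTpzM2NyZXQ=
$ echo 'amRvZTpzM2NyZXQ=' | base64 -d
jdoe:s3cret

Since base64 is an encoding rather than encryption, a passive sensor could trivially recover both the username and the password, but only when the credentials aren’t tunneled through SSL.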

On the other hand, many authentication mechanisms provide the username/user identifier in the clear. For these protocols, passive network monitoring provides the ability to collect information necessary to provide some amount of auditing of user activity. In this post I present two quick and dirty examples of how this information could be collected. For simplicity’s and brevity’s sake, I’ll focus solely on collecting usernames. I’ve chosen two protocols that are very frequently used in conjunction with strong authentication mechanisms: RADIUS and SSL/TLS client certificate authentication.

RADIUS


RADIUS isn’t exactly the most secure authentication protocol in the world. Since it has some serious weaknesses, it’s normally not used over hostile networks (like the internet). However, it is frequently used internally within organizations. In fact, it is very frequently used in conjunction with strong credentials such as RSA SecurID. One nice thing about RADIUS is that the username is passed in the clear in authentication requests. As such, it’s pretty simple to build a monitoring tool that exposes this data to auditing.

In my example of monitoring RADIUS, I’ll use this packet capture taken from the testing data sets for libtrace.

In my experience, tcpdump is very useful for monitoring and parsing older and simpler protocols, especially ones that usually don’t span multiple packets, like DNS or RADIUS. The following shows how tcpdump parses one RADIUS authentication request:



/usr/sbin/tcpdump -nn -r radius.pcap -s 0 -v "dst port 1812" -c 1
reading from file radius.pcap, link-type EN10MB (Ethernet)
18:42:58.228064 IP (tos 0x0, ttl 64, id 47223, offset 0, flags [DF], proto: UDP (17), length: 179) 10.1.12.20.1034 > 192.107.171.165.1812: RADIUS, length: 151
  Access Request (1), id: 0x2e, Authenticator: 36ea5ffd15130961caafc039b5909d34
    Username Attribute (1), length: 6, Value: test
    NAS IP Address Attribute (4), length: 6, Value: 10.1.12.20
    NAS Port Attribute (5), length: 6, Value: 0
    Called Station Attribute (30), length: 31, Value: 00-02-6F-21-EC-52:CRCnet-test
    Calling Station Attribute (31), length: 19, Value: 00-02-6F-21-EC-5F
    Framed MTU Attribute (12), length: 6, Value: 1400
    NAS Port Type Attribute (61), length: 6, Value: Wireless - IEEE 802.11
    Connect Info Attribute (77), length: 22, Value: CONNECT 0Mbps 802.11
    EAP Message Attribute (79), length: 11, Value: .
    Message Authentication Attribute (80), length: 18, Value: ...eE.*.B.._..).


Note that we intentionally haven’t turned the verbosity up all the way. While there’s a lot of other good info in there, let’s say we only want to extract the UDP quad and the username and then send them to our SIMS so we can audit them. Assuming a syslog configuration that sends logs somewhere to be audited appropriately, the following demonstrates how to do so:



tcpdump -nn -r radius.pcap -s 0 -v "dst port 1812" | awk '{ if ( $1 ~ "^[0-9][0-9]:" ) { if (SRC) print SRC" "DST" "USER; SRC=$18; DST=$20; USER="" }; if ( $0 ~ " Username Attribute" ) { USER=$NF } } END { if (SRC) print SRC" "DST" "USER }' | logger -t radius_request


This example generates syslogs that appear as follows:



Mar 26 14:45:15 monitor radius_request: 10.1.12.20.1034 192.107.171.165.1812: test
Mar 26 14:45:15 monitor radius_request: 10.1.12.20.1034 192.107.171.165.1812: test
Mar 26 14:45:15 monitor radius_request: 10.1.12.20.1034 192.107.171.165.1812: test


I’ve done no significant validation to ensure that it’s complete, but this very well could be used on a large corporate network as is. Obviously, you’d need to replace the -r pcapfile with the appropriate -i interface.
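
For readability, here’s the same pipeline written out as a commented script. It’s just a sketch of the one-liner above and carries the same assumptions: the field positions ($18 and $20) depend on the exact tcpdump output format shown earlier, and eth0 is a stand-in for your monitoring interface:

#!/bin/sh
# log the UDP quad and username of each RADIUS authentication request
# -l line-buffers tcpdump's output so logs flow in near real time
tcpdump -l -nn -i eth0 -s 0 -v "dst port 1812" | awk '
# a line beginning with a timestamp marks the start of a new packet
$1 ~ "^[0-9][0-9]:" {
    if (SRC) print SRC" "DST" "USER    # flush the previous request
    SRC = $18                          # source IP.port of the request
    DST = $20                          # destination IP.port (the RADIUS server)
    USER = ""
}
# remember the username attribute, if the packet carries one
$0 ~ " Username Attribute" { USER = $NF }
' | logger -t radius_request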

SSL/TLS Client Certificate


Another opportunity for simple passive monitoring is SSL/TLS when a client certificate is used. It is very common for this mechanism to be used to authenticate users to web sites with either soft or hard (i.e. smart card) certificates. This mechanism relies on PKI, which involves the use of a public and a private key. While the private key should never be transferred over the network, and in many cases never leaves the smart card, the public keys are openly shared. In SSL/TLS client certificate based authentication, the public key, along with other information such as the user’s identity, is passed in the clear during authentication as part of the client certificate.

Since I needed data for this example, I generated my own. I took the following steps, based on the wireshark SSL wiki:



openssl req -new -x509 -out server.pem -nodes -keyout privkey.pem -subj /CN=localhost/O=pwned/C=US
openssl req -new -x509 -nodes -out client.pem -keyout client.key -subj /CN=Foobar/O=pwned/C=US

openssl s_server -ssl3 -cipher AES256-SHA -accept 4443 -www -CAfile client.pem -verify 1 -key privkey.pem

#start another shell
tcpdump -i lo -s 0 -w ssl_client.pcap "tcp port 4443"

#start another shell
(echo GET / HTTP/1.0; echo ; sleep 1) | openssl s_client -connect localhost:4443 -ssl3 -cert client.pem -key client.key

#kill tcpdump and server

#fix pcap by converting back to 443 and fixing checksums (offload problem)
tcprewrite --fixcsum --portmap=4443:443 --infile=ssl_client.pcap --outfile=ssl_client_443.pcap


You can download the resulting pcap here.

The client certificate appears as follows:



$ openssl x509 -in client.pem -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            b0:cc:6b:94:b4:83:0f:78
        Signature Algorithm: sha1WithRSAEncryption
        Issuer: CN=Foobar, O=pwned, C=US
        Validity
            Not Before: Mar 26 13:13:12 2011 GMT
            Not After : Apr 25 13:13:12 2011 GMT
        Subject: CN=Foobar, O=pwned, C=US
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            RSA Public Key: (1024 bit)
                Modulus (1024 bit):
                    00:e5:d6:78:cd:95:4e:89:0c:88:bd:78:98:26:86:
                    0b:f1:be:df:85:98:a2:93:c1:66:65:44:d2:aa:08:
                    69:2d:4c:a9:9d:50:08:79:1d:58:6e:6d:b4:2b:24:
                    ca:37:90:d6:91:9f:6d:73:5f:51:5a:10:af:f0:ce:
                    85:85:d6:e4:42:7b:ca:b0:af:0c:52:8b:60:1c:5b:
                    3f:54:10:cc:c4:35:18:a8:a6:a7:c8:ae:df:b7:ab:
                    a9:d9:20:cf:f7:5c:43:01:2e:12:cf:96:45:87:e7:
                    7e:87:f7:5e:8f:25:23:1b:ee:bd:0a:79:48:07:99:
                    ba:cc:68:16:53:43:56:e9:a1
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Key Identifier:
                BD:C2:84:BF:76:17:B7:15:BC:2F:8C:7E:A6:E6:18:B1:47:60:A3:B6
            X509v3 Authority Key Identifier:
                keyid:BD:C2:84:BF:76:17:B7:15:BC:2F:8C:7E:A6:E6:18:B1:47:60:A3:B6
                DirName:/CN=Foobar/O=pwned/C=US
                serial:B0:CC:6B:94:B4:83:0F:78

            X509v3 Basic Constraints:
                CA:TRUE
    Signature Algorithm: sha1WithRSAEncryption
        4c:28:ea:47:20:38:d5:17:dd:cf:aa:f8:13:3e:d0:5f:cf:05:
        7d:c7:a1:c3:f4:3e:d7:db:56:f7:d4:d6:d6:c6:f4:5c:47:5b:
        99:f6:9c:23:2d:dc:75:ab:51:8b:96:df:26:3b:9e:59:8f:2c:
        08:d1:84:bf:4f:98:65:b4:0f:b7:32:9d:2f:eb:d9:a5:a6:69:
        b6:75:ce:03:f4:ad:3b:f2:e6:3a:a1:ff:44:ea:8a:98:40:34:
        cc:dd:e0:d8:35:0e:8b:97:20:30:e4:7b:07:52:98:63:11:32:
        5e:6e:cb:c7:f1:10:67:1c:cd:e2:03:3a:99:98:8b:2f:f8:94:
        03:6f


For auditing, we are interested in extracting the CN, which in this case is “Foobar”. As the client certificate is transferred over the network, the CN appears as follows:



000002e0 00 3f 0d 00 00 37 02 01 02 00 32 00 30 30 2e 31 |.?...7....2.00.1|
000002f0 0f 30 0d 06 03 55 04 03 13 06 46 6f 6f 62 61 72 |.0...U....Foobar|
00000300 31 0e 30 0c 06 03 55 04 0a 13 05 70 77 6e 65 64 |1.0...U....pwned|
00000310 31 0b 30 09 06 03 55 04 06 13 02 55 53 0e 00 00 |1.0...U....US...|


Immediately preceding the string “Foobar” is the following sequence (in hex):



06 03 55 04 03 13 06


The "06" is the ASN.1 tag for an OBJECT IDENTIFIER, and the "03" gives the length of the OID that follows, so this prefix is invariant in client certificates. The "55 04 03" is the OID itself and is indicative of the following data being a CN. This is an x509/ASN.1 thing where this sequence maps to the OID 2.5.4.3. The "13" can vary among a few common values (it specifies the string type; 0x13 is PrintableString) and the final "06" indicates the length of the data (6 ASCII characters).
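
If you’d like to sanity check this, you can feed that byte sequence to openssl’s ASN.1 parser, which decodes it as the commonName OID:

$ printf '\x06\x03\x55\x04\x03' | openssl asn1parse -inform DER
    0:d=0  hl=2 l=   3 prim: OBJECT            :commonName

Using this knowledge of SSL certificates we can create a tool to extract and log all CNs as follows: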



$ mkdir /dev/shm/ssl_client_streams
$ cd /dev/shm/ssl_client_streams/
$ vortex -r ssl_client_443.pcap -S 0 -C 10240 -g "svr port 443" | xargs -t -I+ pcregrep -o -H "\x06\x03\x55\x04\x03..[A-Za-z0-9]{1,100}" + | sed -r "s/\x06\x03\x55\x04\x03../ /" | sed 's/c/ /' | logger -t client_cert



This generates logs as follows:



Mar 26 15:26:05 sr2s4 client_cert: 127.0.0.1:41143 127.0.0.1:443: localhost1
Mar 26 15:26:05 sr2s4 client_cert: 127.0.0.1:41143 127.0.0.1:443: localhost1
Mar 26 15:26:05 sr2s4 client_cert: 127.0.0.1:41143 127.0.0.1:443: Foobar1


If you are new to vortex, check out my vortex howto series. Basically, we’re snarfing the first 10k of each SSL stream transferred from the client to the server as a file, then analyzing it. Note that since we’re pulling all CNs out of all the certificates in the certificate chain provided by the client, we get not only “Foobar” but also “localhost”, which is the CA in this case. Also note the trailing garbage we were too lazy to remove.
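
To make that one-liner easier to follow, here it is again with each stage commented. This is my reading of it, not a new tool; the final sed relies on vortex’s file naming, where (as the logs above show) a 'c' separates the client and server endpoints:

# collect none of the server side (-S 0) and the first 10240 bytes
# of the client side (-C 10240) of each stream to port 443
vortex -r ssl_client_443.pcap -S 0 -C 10240 -g "svr port 443" |
    # in each stream file, find the CN OID plus type and length bytes,
    # followed by up to 100 alphanumeric characters; print file name and match
    xargs -t -I+ pcregrep -o -H "\x06\x03\x55\x04\x03..[A-Za-z0-9]{1,100}" + |
    # strip the OID, type, and length bytes from each match
    sed -r "s/\x06\x03\x55\x04\x03../ /" |
    # turn the direction marker in the vortex file name into a space
    sed 's/c/ /' |
    logger -t client_cert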

While this works, it’s a little too dirty even for me. The biggest problem is that the streams snarfed by vortex are never purged. Second, we’re doing a lot of work in an inefficient manner on each SSL stream, even those that don’t include client certs.

Let’s refactor this slightly. First, we’re going to immediately weed out all streams we don’t want to look at. In this example I’m looking for client certs in general, but you could easily change the signature to match the CA for the certificates you are interested in monitoring, e.g. “Pwned Org CA”:



$ vortex -e -r ssl_client_443.pcap -S 0 -C 10240 -g "svr port 443" | xargs pcregrep -L "\x06\x03\x55\x04\x03" | xargs rm


That will leave all the streams we want to inspect in the current dir. If we run something like the following in an infinite loop or a very frequent cron job, we’ll do the logging and purging we need:



find . -cmin +1 -type f | while read file
do
    pcregrep -o -H "\x06\x03\x55\x04\x03..[A-Za-z0-9]{1,100}" "$file" | sed -r "s/\x06\x03\x55\x04\x03../ /" | sed 's/c/ /' | logger -t client_cert
    rm "$file"
done


This implementation is also probably suitable for use on a large network or pretty close to it.
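
For completeness, here’s one way to stitch the weed-out step and the log-and-purge loop together into a single collection script. It’s a sketch under a few assumptions: eth0 is a stand-in for your monitoring interface, vortex takes -i for live capture (check the howto if yours differs), and vortex writes its stream files into the working directory:

#!/bin/sh
# collect SSL client certificate CNs from live traffic and ship them via syslog
mkdir -p /dev/shm/ssl_client_streams
cd /dev/shm/ssl_client_streams || exit 1

# capture client-side stream data; immediately delete streams
# that can't contain a certificate CN
vortex -e -i eth0 -S 0 -C 10240 -g "svr port 443" | while read stream
do
    pcregrep -q "\x06\x03\x55\x04\x03" "$stream" || rm -f "$stream"
done &

# once a minute, log the CNs found in the remaining streams, then purge them
while true
do
    find . -cmin +1 -type f | while read file
    do
        pcregrep -o -H "\x06\x03\x55\x04\x03..[A-Za-z0-9]{1,100}" "$file" |
            sed -r "s/\x06\x03\x55\x04\x03../ /" | sed 's/c/ /' |
            logger -t client_cert
        rm "$file"
    done
    sleep 60
done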

For these examples, it’s assumed that the logs are streamed to a log storage, aggregation, or correlation tool for real-time auditing or historical forensics. I would not be surprised if there were flaws in the examples as presented, so use them at your own risk, or perform the validation and tweaking necessary for your environment. These examples are intended merely to show feasibility. While I’ve discussed two specific protocols/mechanisms, there are others that lend themselves to passive network monitoring, as well as many that don’t.

In this post I’ve shown how passive network monitoring can be used to help audit the use or misuse of strong authentication mechanisms. I’ve given quick and dirty examples that are suitable, or close to suitable, for use on enterprise networks. Notwithstanding the weaknesses in my examples, I hope they provide ideas for what can be done to “trust, but verify” strong authentication mechanisms through data collection done on passive network sensors.

2 comments:

  1. Interesting post, Charles. Coincidentally, I recently blogged about how to extract certificates from SSL streams in pcap files. The post was about how to investigate MITM attacks on SSL by looking at a pcap file:
    http://www.netresec.com/?page=Blog&month=2011-03&post=Network-Forensic-Analysis-of-SSL-MITM-Attacks

    SSL MITM might not be very commonly used by APT, but the concept of monitoring SSL communication for certificates is still very relevant. You might wanna try using NetworkMiner or NetworkMinerCLI to extract certificates from pcap files?

  2. Erik,

    I think we’re attacking different problems, or addressing similar problems in fundamentally different ways. One thing that makes incident response in the face of persistent attackers different from IR for opportunistic threats is that historical analysis is key to understanding current activity and to developing forward-looking detections/mitigations. Judicious collection of metadata is important to supporting this type of analysis, which spans both a wide temporal scope and large amounts of data. Network sensors can provide a good audit log source.

    The examples I provided above are suitable for collecting thousands or millions of events per day on large networks (say, up to 1 Gbps), and the results are compact enough to be stored for months or years. While I only collected usernames, collecting other metadata seems like a simple exercise for the reader.

    Let me propose an alternative to your use case of detecting or investigating an SSL MITM. Using a manual tool to investigate every suspected attack isn’t scalable, both in terms of the amount of traffic and in terms of temporal scope (you likely don’t have full packet captures lying around indefinitely). You could, however, probably in a couple of minutes or hours, develop a simple utility, similar to the one above or possibly slightly more refined, for collecting key metadata about SSL certificates writ large. That would let you do the same analysis you did (compare against known good values), and do it quickly over large amounts of traffic and over large time frames.

    While this sort of analysis may not be necessary or useful for some forms of incident response, this approach is necessary for those seeking to stay ahead of persistent attackers through security intelligence.

    The point I was trying to make in this post was that while the sky might be falling due to FUD (warranted or not) surrounding the subversion of “strong” authentication mechanisms, network sensors can provide an effective means of gaining instant visibility. Furthermore, developing the applications necessary to collect some data for some visibility can be done in minutes, not days or weeks.
