Fun with honeypots
I've been getting more interested in honeypots recently. This past spring, I set up a honeypot to learn more about what folks do once they successfully brute-force their way into an SSH server. The concept was simple: set up a Linux VM with common username/password pairs (e.g. mysql/mysql, user/user, admin/admin) and wait to see what happens.
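Seeding the weak accounts is just a few shell commands. A minimal sketch; the user list below is only an example, and chpasswd sets each account's password equal to its username:

    # create weak accounts for attackers to brute-force
    for u in mysql user admin test guest oracle; do
        useradd -m "$u"
        echo "$u:$u" | chpasswd
    done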
I created an isolated bridge network on my Linux server, then set up a CentOS VM under KVM. I used iptables to rate-limit outbound connections from the VM to just 2 per minute, to prevent anyone who logged into the honeypot from using my VM to do much damage to anyone else on the internet. I also used the iptables NFLOG target to save a copy of all packets to and from the VM so that I could analyze the traffic later.
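The rules look roughly like this, assuming the VM sits behind a bridge called virbr1 and its traffic passes through the host's FORWARD chain (the interface name and NFLOG group number are placeholders):

    # allow at most 2 new outbound connections per minute, drop the rest
    iptables -A FORWARD -i virbr1 -m state --state NEW -m limit --limit 2/minute -j ACCEPT
    iptables -A FORWARD -i virbr1 -m state --state NEW -j DROP

    # NFLOG is non-terminating, so these rules just mirror every packet
    # to and from the VM into userspace
    iptables -I FORWARD -i virbr1 -j NFLOG --nflog-group 5
    iptables -I FORWARD -o virbr1 -j NFLOG --nflog-group 5

    # capture the mirrored packets on the host for later analysis
    tcpdump -i nflog:5 -w honeypot.pcap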
I needed some way to monitor what happened within the VM without tipping off whoever logged in that they were being watched, so I turned on system call auditing as well as TTY auditing. Normally these log messages would be dumped out to /var/log via syslog, which would alert an intruder to the fact that everything they were doing was being logged and might cause them to cover their tracks. To keep the logging invisible, I modified the syslog configuration to suppress the audit logs from the log files in the VM and redirect them to a serial port in the VM that was connected to a log file on the KVM host. This allowed me to monitor all system calls made by software they installed, as well as anything they typed on the terminal in their SSH session. I wrote some Python scripts to filter through the data and pull out just the details I was interested in.
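The pieces involved look roughly like the following. Exact paths and syntax vary by CentOS and rsyslog version, and this sketch assumes the audisp syslog plugin is enabled so that audit events reach syslog in the first place:

    # /etc/pam.d/sshd (inside the VM): record every keystroke on the TTY
    session    required     pam_tty_audit.so enable=*

    # audit rules (inside the VM): log every execve so installed tools show up
    -a always,exit -F arch=b64 -S execve -k honeypot
    -a always,exit -F arch=b32 -S execve -k honeypot

    # rsyslog (inside the VM): divert audit messages to the serial port,
    # which KVM connects to a file on the host, keeping them out of the
    # local log files
    if $programname == 'audispd' then /dev/ttyS0
    & stop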
In addition to all the monitoring within the VM, I also set up sslsplit and a fake certificate authority to capture any HTTPS traffic that left the VM. All TCP 443 traffic exiting the VM was redirected to sslsplit, which performed an SSL man-in-the-middle to decrypt the traffic. The fake certificate authority was added to the trusted CAs within the VM, so nothing inside the VM would raise any security warnings.
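A minimal version of that setup, with placeholder paths and ports, looks something like this on the host:

    # generate a throwaway CA; its cert gets installed as trusted inside the VM
    openssl genrsa -out ca.key 2048
    openssl req -new -x509 -days 365 -key ca.key -out ca.crt -subj "/CN=Honeypot CA"

    # redirect the VM's outbound HTTPS to sslsplit's listening port
    iptables -t nat -A PREROUTING -i virbr1 -p tcp --dport 443 -j REDIRECT --to-ports 8443

    # decrypt, log, and re-encrypt; -l logs connections, -S logs decrypted content
    sslsplit -d -l connect.log -S /var/log/sslsplit -k ca.key -c ca.crt ssl 0.0.0.0 8443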
With most of the setup complete, I enabled the NAT rules to forward SSH traffic over to the VM. Within four hours I had my first login, and over the course of the next week every user account I had set up was brute-forced.
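The forwarding itself is a single DNAT rule on the host, with 192.168.100.10 standing in for the VM's address on the isolated bridge:

    # send inbound SSH from the public interface to the honeypot VM
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 -j DNAT --to-destination 192.168.100.10:22
    iptables -A FORWARD -d 192.168.100.10 -p tcp --dport 22 -j ACCEPT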
It was interesting to see what the folks who logged into my VM were up to, but it wasn't too surprising. Most of them were interested in using the SOCKS proxy built into SSH to browse the internet; they ignored the SSL warnings in their browsers and continued on to the HTTPS sites anyway. One installed an IRC bot. One installed their SSH brute-force tool and attempted to scan for more victims. Another attempted to run local privilege escalation exploits that did not apply to the version of CentOS my VM was running.
When I get some more time, I'd like to work more on the network plumbing for the honeypot VM. Currently I run this only from my home IP address and am limited to a single VM. I'd like to be able to cycle IP addresses more frequently, so my plan is to purchase a few cheap Linux VPS systems and add a few secondary IP addresses to each. I wouldn't run any of the honeypot software on the VPSes; instead, I'd install OpenVPN and forward all traffic for the secondary IPs back to a central honeypot router/firewall running on my home network. From the honeypot router, I'd use NAT to forward the traffic to and from the individual VMs, making the OpenVPN connection and honeypot router transparent to anyone interacting with the honeypot.
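I haven't built this yet, but the forwarding could plausibly work like this, with every address a placeholder (203.0.113.50 for a secondary VPS IP, 10.8.0.1 for the VPS end of the tunnel, 192.168.100.10 for a honeypot VM) and glossing over the OpenVPN configuration itself:

    # on the VPS: don't terminate the secondary IP locally, just route it
    # down the tunnel to the honeypot router at home
    sysctl -w net.ipv4.ip_forward=1
    ip route add 203.0.113.50/32 dev tun0

    # on the honeypot router: map the public IP onto one VM, then make sure
    # that VM's replies go back out through the tunnel, not the home uplink
    iptables -t nat -A PREROUTING -d 203.0.113.50 -j DNAT --to-destination 192.168.100.10
    ip rule add from 192.168.100.10 lookup 100
    ip route add default via 10.8.0.1 dev tun0 table 100

Because nothing SNATs along the way, the attacker's real source address survives end-to-end, which is what makes the tunnel transparent.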
Once that is all set up, I'd like to test Tor exit nodes to see which operators are inspecting traffic. My plan is to set up several IP addresses and VMs, then log in to each of them as root, over telnet or FTP, from specific Tor exit nodes and see what happens.
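Tor makes pinning a circuit to a particular exit straightforward, so each set of planted credentials can be tied to exactly one exit operator. A sketch, with a placeholder fingerprint and honeypot address:

    # torrc: force all circuits through one specific exit node
    ExitNodes $ABCDEF0123456789ABCDEF0123456789ABCDEF01
    StrictNodes 1

    # then log in to the matching honeypot over cleartext telnet via Tor
    torsocks telnet 203.0.113.50

If someone later logs in with a root password that only ever crossed one exit node, that points squarely at that node's operator.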