Before presenting the solution, let me first describe the issue.
Early this morning I received a request from a customer to check his servers, which he suspected had been hacked. He had complained about a similar issue a couple of weeks ago, when he suspected something was wrong with nginx: visitors from the US were apparently being redirected to a page containing malware. One of my coworkers tracked down the issue, and updating nginx seemed to fix the problem. As it turns out, the problem was much bigger than we expected, because the same customer received an email from his hosting company, Gorilla Servers, stating that two of his servers (one physical machine and one OpenVZ container) might be infected with the Ebury SSH rootkit.
I read the VERY detailed report, which actually originated from US-CERT and also provided information on how to debug and address the issue! This really got my attention, especially because it came from an organization that fights internet abuse.
The first step was to visit the link provided in the abuse report, which pointed to the CERT-Bund website. There you can find detailed information on how to detect whether you have been infected, along with plenty of background on the history of Ebury.
OK, now I had an idea of what this was all about, so I began to dig further and found a very interesting post on the ESET Ireland website, published just a day before, about the link between Ebury, Cdorked, Onimiki and Calfbot! It points to a very detailed technical report about “Operation Windigo”, the name given by the ESET team to the cybercriminal campaign that managed to distribute the malicious code to over 25,000 UNIX servers; it seems that 500,000 computers are affected every day because of this!
I spent some time reading the report, which is 69 pages long and even provides signatures for different versions of the malicious code, and got a clear picture of Ebury’s behavior and the files it targets.
There is no need to go into detail here, especially about inter-process communication and shared memory, since far more information than I could provide is available in the ESET technical report, so I’ll just list the steps needed to check whether you are infected.
First, check for suspicious shared memory entries by running:
[root@openvz ~]# ipcs -m
[root@openvz ~]# ipcs -p
Normally you should not see ssh listed here with write permissions, nor nginx; unfortunately for me, both were listed.
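If there are many segments to sift through, a small loop can match each segment’s creator PID against the owning process name. This is just a minimal sketch built around the check above; it assumes the util-linux ipcs column layout found on CentOS 6, and matching on sshd is illustrative:

# List shared memory segments with their creator PIDs and flag any created by sshd.
ipcs -m -p | awk '$1 ~ /^[0-9]+$/ {print $1, $3}' | while read shmid cpid; do
    comm=$(ps -o comm= -p "$cpid" 2>/dev/null)
    if [ "$comm" = "sshd" ]; then
        echo "suspicious: shared memory segment $shmid created by sshd (pid $cpid)"
    fi
done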
Another way to check whether Ebury is present is to examine the output of the “ssh -G” command. A clean system complains about an illegal option and then displays the ssh usage message:
[root@openvz ~]# ssh -G
ssh: illegal option -- G
usage: ssh [-1246AaCfgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
           [-D [bind_address:]port] [-e escape_char] [-F configfile]
           [-I pkcs11] [-i identity_file]
           [-L [bind_address:]port:host:hostport] [-l login_name] [-m mac_spec]
           [-O ctl_cmd] [-o option] [-p port]
           [-R [bind_address:]port:host:hostport] [-S ctl_path] [-W host:port]
           [-w local_tun[:remote_tun]] [user@]hostname [command]

Note: The infected systems did NOT print the above message NOR the ssh usage! There was actually no output, and the exit code was 0 (ZERO) and not 255!
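ESET’s indicators of compromise include a one-liner built around this same behavior; a variant of it, plus an explicit exit-code test, looks like this:

[root@openvz ~]# ssh -G 2>&1 | grep -e illegal -e unknown > /dev/null && echo "System clean" || echo "System infected"
[root@openvz ~]# ssh -G > /dev/null 2>&1; echo $?

On a clean system the second command prints 255; on the infected hosts it printed 0. Bear in mind that some ssh clients implement a legitimate -G option (see the comments below), and newer Ebury versions reportedly mimic the error message, so treat this check as a hint, not proof.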
The next step was to check whether /lib64/libkeyutils.so.1.3 had indeed been compromised. This was really simple for me to do, because the affected system had been built from an OpenVZ template and not updated recently; only nginx and php-fpm had been installed on top.
It took me a while to bring up another machine using the same OpenVZ template and compute the md5 checksum of every file, but I wanted to be sure that no other file had been changed. Indeed, after comparing the checksums between the two hosts, I found that only libkeyutils.so.1.3 differed. Not only that, but the size difference was huge: the genuine libkeyutils is ~10 KB, while the infected one was ~33 KB. I also compared the checksum of the infected library with the ones provided in the ESET report on Operation Windigo, but found NO match, meaning it must be a different version of the code!
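For reference, the comparison itself boils down to hashing the library directories on both hosts and diffing the two lists. A minimal sketch (the file names are illustrative):

# Run the same command on the suspect machine and on the clean template host:
find /lib /lib64 -type f -print0 | xargs -0 md5sum | sort -k2 > /tmp/libs.md5
# Copy the clean host's list over (e.g. as /tmp/clean-libs.md5), then compare:
diff /tmp/clean-libs.md5 /tmp/libs.md5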
Obviously, the next step is to check which package provides this library; you can use one of the following commands:
[root@openvz ~]# rpm -q --whatprovides /lib64/libkeyutils.so.1.3
[root@openvz ~]# yum whatprovides /lib64/libkeyutils.so.1.3
Both will list keyutils-libs-1.4-4.el6.x86_64 as the package providing the libkeyutils library (using yum for this is slower and makes little sense anyway, since the package is installed on the local system).
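Knowing the owning package, rpm can also verify the installed files against its database:

[root@openvz ~]# rpm -V keyutils-libs

A size (“S”), checksum (“5”) or mtime (“T”) flag next to libkeyutils.so.1.3 in the output would confirm the file no longer matches the package. Take an empty (clean) result with a grain of salt though: some Ebury installations reportedly doctored the RPM database as well, so this check alone is not conclusive.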
It was now time to take the machines offline and clean up the mess. First, grab a copy of the package from a trusted source (I chose a local CentOS mirror), then reinstall it, making sure the files and binaries are replaced:
[root@openvz ~]# wget ftp://ftp.iasi.roedu.net/pub/mirrors/centos.org/6.5/os/x86_64/Packages/keyutils-libs-1.4-4.el6.x86_64.rpm
[root@openvz ~]# rpm -ivh --replacefiles --replacepkgs keyutils-libs-1.4-4.el6.x86_64.rpm
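Since the machine has to be presumed compromised, it is worth checking the downloaded package’s GPG signature before installing it, and sanity-checking the library afterwards. A quick sketch, assuming the CentOS 6 signing key is already imported into the RPM keyring:

[root@openvz ~]# rpm -K keyutils-libs-1.4-4.el6.x86_64.rpm
[root@openvz ~]# ls -l /lib64/libkeyutils.so.1.3

The library should be back to its genuine ~10 KB size.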
Although this wasn’t strictly necessary, I followed the same procedure and reinstalled nginx, php-fpm, openssh and openssh-clients before rebooting the servers.
All the SSH keys were also removed and new ones generated, all passwords for all users on the machines were changed, and everyone who had accessed the machines was notified to change their passwords immediately.
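On CentOS 6 this is straightforward: the sshd init script regenerates any host keys that are missing, and chage can force a password change on next login (the username below is just a placeholder):

[root@openvz ~]# rm -f /etc/ssh/ssh_host_*
[root@openvz ~]# service sshd restart
[root@openvz ~]# chage -d 0 someuser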
For security reasons, access to these machines was restricted: SSH is now possible only over VPN, or through a secure gateway set up especially for this purpose.
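A minimal way to enforce this on CentOS 6 is a pair of iptables rules that accept SSH only from the VPN subnet; the 10.8.0.0/24 range below is an illustrative placeholder for whatever your VPN hands out:

[root@openvz ~]# iptables -A INPUT -p tcp --dport 22 -s 10.8.0.0/24 -j ACCEPT
[root@openvz ~]# iptables -A INPUT -p tcp --dport 22 -j DROP
[root@openvz ~]# service iptables save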
It’s still an open question how the attackers managed to gain access to these two machines. We have some ideas, but jumping to conclusions before we can produce evidence to support accusations is not our way of doing business.
At least for now, we’ll continue to monitor these machines and do our best to collect as much information as possible, so we can understand exactly when and how the systems were compromised.
On a side note, the attackers are pretty smart: there are a lot of systems that compare the checksums of binaries (for example, this is a standard feature of cPanel), but almost nobody checks the libraries.
Our ignorance is their power!
Hello,
on my cPanel server,
when I try ipcs -m and ipcs -p, they produce output,
but ssh -G gives the same output as in your post.
What does it mean?
Thank you
It means your ssh is not compromised; no need to worry.
You do have to worry, because the latest version of Ebury also shows “ssh: illegal option -- G” for the “ssh -G” command.
Check https://www.cert-bund.de/ebury-faq for more information about how to detect Ebury on your system.
Hi out there,
all this hype can also happen if you use OpenSSH.
This ssh client understands the -G option very well. If you go to their homepage, you will find the original homepage of the ssh command.
It declares the -G option to be used for listing the configuration for a host.
And please, never trust anything coming from US-CERT or CERT-Bund without double confirmation.
Keeping you in fear makes it easier to lead you where they want you to be.
Thanks for your attention
who walks like the Wolf
Pardon,
that should say manpage…
Getting older, I believe.