
All posts by Mark


  • Created: Jul 25, 2014 2:15 PM

Recent Exploit using Fake Magento Extensions

We are publishing this post in the hope that all Magento users can utilize this information to determine if their site has been compromised and take the steps required to correct the problem.

We were recently contacted by a client regarding a Common Point of Purchase investigation initiated by a credit card issuer. These investigations are used to pinpoint the source of fraudulent activity reported by cardholders. Our security team immediately began a comprehensive internal investigation into the root cause of the fraudulent activity on the client’s account and found evidence that Magento core files had been modified to skim credit card data during the checkout process. The skimmed data was logged to a fake image file (actually a text file) in the media folder, which the attacker would then download remotely.
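A quick way to look for this pattern is to check whether any files with image extensions in the media folder are actually text. This is only a sketch based on the behavior described above; the path and extensions are assumptions and may differ for your store:

```shell
# Flag "images" under media/ whose real contents are text -- the skimmer
# described above logged card data into exactly this kind of fake image.
find media -type f \( -name '*.jpg' -o -name '*.png' -o -name '*.gif' \) \
    -exec file {} + | grep 'text'
```

Any hit deserves a close look: a legitimate image should be reported as image data, not ASCII text.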

Next, our security team scanned our entire infrastructure to determine whether any other client sites were affected by the same exploit. We found that 39 of the Magento stores hosted with us (out of 15,000 Community and 1,500 Enterprise stores) were affected. We contacted all of the affected clients immediately, before their credit card processing companies had even detected a problem.

We have since cleaned all of the sites that were exploited and contacted all of the affected clients about the exploit.

PLEASE NOTE: If you are hosted with us and have not been contacted by our security team regarding this issue, then we believe your site has not been affected by this exploit. We are committed to the safety and security of your data and we take these issues very seriously. As a precaution, we are running hourly scans of our infrastructure to detect any further compromises.

Read more

Posted in: Nexcess

  • Created: Apr 4, 2014 4:31 PM

TLS 1.2 and openssl 1.0.1e Frequently Asked Questions

As we’ve upgraded many of our servers to openssl 1.0.1e, we’ve seen a handful of problems with APIs and payment gateways. The companies whose APIs are involved say they don’t support openssl 1.0.1e, or that TLSv1.2 is not supported and the server will have to use TLSv1.0. There seems to be a lot of confusion about openssl versions, TLS versions, and how they relate. This blog post will clear up the confusion and help explain how to deal with APIs that are having problems.

What is TLS?

Transport Layer Security (TLS) is a protocol that allows two computers to communicate securely over the internet using encryption. It is the successor to Secure Sockets Layer (SSL), and the two terms are often used interchangeably.

What is openssl?

openssl is an implementation of the TLS protocol that is very popular with Linux distros. There are other implementations of the TLS protocol: NSS is used by Firefox and Thunderbird, and Secure Channel (SChannel) is used by Microsoft Windows.

Read more
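A quick way to see which version a Linux server has installed (a sketch; the exact output string varies by distro):

```shell
# Print the installed openssl version; 1.0.1 is the first release
# with TLSv1.1 and TLSv1.2 support.
openssl version
```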

Posted in: Nexcess

  • Created: Jan 15, 2014 2:00 PM

Changing The Timezone In CentOS From The CLI


Red Hat’s documentation on how to change the timezone tells you to use a GUI. As a sysadmin, I’m not going to install a gigantic GUI just to change the timezone on a server.

The correct way to change the timezone without a GUI is:

1. edit /etc/sysconfig/clock to contain the timezone you want
2. run /usr/sbin/tzdata-update, which will update /etc/localtime
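Concretely, the two steps might look like this on a CentOS 5/6 box (the zone name here is just an example; pick yours from /usr/share/zoneinfo, and note this sketch overwrites the file, so preserve any existing UTC= line):

```shell
# 1. Set the zone in /etc/sysconfig/clock:
echo 'ZONE="America/Detroit"' > /etc/sysconfig/clock

# 2. Regenerate /etc/localtime from it:
/usr/sbin/tzdata-update
```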

There are a million sites telling you to copy or link /usr/share/zoneinfo/herp/derp to /etc/localtime, but if you’re on RHEL or CentOS and forget to also set the timezone in /etc/sysconfig/clock, you’ll find your clock is off the next time tzdata updates.

This happens because there is a trigger on glibc-common for the tzdata package to run /usr/sbin/tzdata-update, which copies the timezone set in /etc/sysconfig/clock to /etc/localtime. If you didn’t update your timezone in /etc/sysconfig/clock, you’ll find your server reverting to the old timezone, which is annoying.

I was perplexed why they used an RPM trigger to do this (I had never actually seen one used before), so I did some research: it’s to avoid a circular dependency where the tzdata package would require glibc while glibc requires tzdata. I thought the explanation in a Red Hat bug was good, so I’ll just link to it:
Red Hat Bugzilla – Bug 167787

I also thought a symlink would be a good idea, but they stopped doing that since the time would be incorrect until /usr is mounted:
Re: Making /etc/localtime a symlink?

Posted in: Nexcess

  • Created: Oct 16, 2013 2:00 PM

How Does a Server Know it Needs an fsck?


If you’ve ever had to hard restart a Linux server, you know that when it starts up you’ll see a message about the system having been shut down uncleanly, and it will run an fsck. But how does it know you shut it down uncleanly?

The short story is: if the server finds a file at /.autofsck on boot, it knows you didn’t perform a clean shutdown. The /.autofsck file is created by a startup script and removed when you do a clean shutdown using a command such as halt, poweroff, shutdown, or reboot. When you perform an unclean shutdown, the shutdown scripts never run and the /.autofsck file is never deleted, so an fsck is initiated on the next boot.

Read more
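The logic is easy to mimic with a throwaway sentinel file (this sketch only imitates the check; the real marker is /.autofsck and the real init scripts manage it):

```shell
marker=/tmp/demo.autofsck
touch "$marker"        # the startup script creates the marker
rm -f "$marker"        # a clean shutdown removes it again
if [ -f "$marker" ]; then
    echo "unclean shutdown: fsck needed"
else
    echo "clean shutdown"
fi
```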

Posted in: Linux, Nexcess

  • Created: Jun 19, 2013 2:00 PM

Sorting The Output Of du


du is a utility that tells you the disk usage of files or directories. Usually when people run it, they want the output in a human-readable format like 314M, 2.7K, and 161G, which are a lot easier to read than 314159265, 2718, and 161803398874 respectively.

When people are looking for the directories using the most storage, they usually only care about directories using a gigabyte or more, so they pipe the output of du to grep 'G' to see only those. This works OK, but you might miss a directory using 987M of storage, and the grep will also match any directory with ‘G’ somewhere in its name, which can be annoying.

After having done the grep 'G' trick enough times and seen its shortcomings, I figured there had to be a better way. I found two: one that is incredibly easy and useful, and one that is slightly more complex.

Read more
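One such easy way is GNU sort’s -h flag (coreutils 7.5 and later), which understands human-readable size suffixes. A sketch with canned du-style output so the effect is visible:

```shell
# sort -h orders K < M < G numerically, so 987M lands below 161G
# instead of being filtered out by grep 'G':
printf '314M\t./images\n987M\t./logs\n161G\t./backups\n2.7K\t./etc\n' | sort -h
```

In practice you would pipe real output, e.g. du -sh ./* | sort -h.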

Posted in: CentOS, Linux, Nexcess

  • Created: Apr 18, 2013 2:00 PM

Viewing A File With All The Special & Control Characters

Sometimes I want to view a file and see every tab marked with a visible character and every line ending clearly marked. This might be because I’m trying to parse the file somehow, writing a regex to match something, or debugging some problem with special characters.

For example, I’ve had to deal with the output of a program looking like:

id	domain	name
1	foobar.com	John Doe
2	longest-domain-you-can-possibly-imagine-even-if-it-violates-rfc-1035.com	John Q. Public

I was trying to process it with awk but, when looking at the output in a terminal, it wasn’t clear whether spaces or tabs were separating the fields. So, I found a few ways to print a file and show all the special characters.

cat has flags that control how non-printable and other special characters are shown. If you want to see them all, just use ‘-A’.
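For example, piping the tab-separated sample above through cat -A makes the separators unambiguous:

```shell
# cat -A shows tabs as ^I and marks each line ending with $:
printf 'id\tdomain\n1\tfoobar.com\n' | cat -A
# Output:
#   id^Idomain$
#   1^Ifoobar.com$
```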

Read more

Posted in: Nexcess

  • Created: Jan 30, 2013 2:52 PM

Running a Public Mirror


We run the Nexcess public mirror for the distros and software our servers and employees use. Running our own mirror reduces our bandwidth usage, since the traffic for updates and installations of our CentOS servers never leaves our network. It also makes installs and updates much faster; it’s never fun when you’re installing something and your server ends up using a slow mirror on the other side of the country. For our employees, being able to download an ISO for a new distro on a big fat pipe from a local mirror is a nice benefit.

You’d think running a mirror would be relatively simple: just set up an rsync cron job and call it a day. But having run our mirror server for a year, I’ve learned and done several things that have made it operate smoothly:
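The naive core of that cron job looks something like this fragment (the upstream host, module path, local path, and schedule are all placeholders, not our actual configuration):

```shell
# crontab fragment: sync from an upstream mirror every six hours,
# preserving hardlinks and sparse files, deleting removed packages
30 */6 * * * rsync -aqSH --delete rsync://mirror.example.org/centos/ /srv/mirror/centos/
```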

Read more


  • Created: Jul 19, 2012 12:00 PM

64 Bit CentOS Installing 32 Bit Packages


When you do a fresh install of x86_64 CentOS 5, you might be surprised and annoyed to find yum trying to install 32-bit packages on your 64-bit server. You’ve got a 64-bit processor and operating system, so why is it trying to install these unneeded 32-bit packages?

CentOS comes with multilib support: since your 64-bit processor can run 32-bit binaries, yum sees no issue with having 32-bit and 64-bit packages installed at the same time. If you look at an x86_64 repo, you’ll even see a bunch of i686 packages available in the x86_64 release. It seems like a feature most people would never need, but a good example of when you’d want a 32-bit package on a 64-bit OS is something like Flash support, which was only available as a 32-bit package. I’ve also seen some RPMs released exclusively as 32-bit packages.
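If you want to keep 32-bit packages off a 64-bit box entirely, one common approach is a yum.conf exclude (a configuration sketch; on newer yum versions multilib_policy=best has a similar effect):

```shell
# /etc/yum.conf -- keep 32-bit packages off a 64-bit CentOS box
[main]
exclude = *.i386 *.i686
```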

Read more

Posted in: CentOS, Linux

  • Created: May 29, 2012 12:00 PM

Puppet – Use It For More Than Just Servers


I use virtual machines at work and home to test and experiment with various things. After setting up a new virtual machine, I install some of the programs I use on a daily basis like emacs, gawk, ack, etc. But there are other programs I use less frequently that I always forget to install. It is quite annoying when you try to run mtr, nc, or nmap to debug a problem, only to discover they don’t exist since they were never installed. It’s easy to install the missing packages, but annoying to have to do it every time I set up a new virtual machine for the latest Fedora release.

With the last virtual server I set up at home, Fedora 16, I stopped installing missing packages by hand and started writing a puppet manifest to install the packages I needed. If you’re not familiar with the puppet language, it is easy to get started with simple stuff like installing packages; it looks like this:
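A minimal manifest for the packages mentioned above, in standard Puppet syntax, looks roughly like this (package names are taken from the post and may differ between distros):

```puppet
# packages.pp -- install the tools above if they are absent
package { ['emacs', 'gawk', 'ack', 'mtr', 'nc', 'nmap']:
  ensure => installed,
}
```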

Read more

Posted in: Unix

  • Created: Mar 26, 2012 4:35 PM

Configuring djb’s dnscache on Interworx Servers for Speed


The control panel we use, Interworx, comes with djbdns as its DNS server. djbdns is actually a suite made up of several separate servers, the two main ones being tinydns for authoritative (non-recursive) lookups and dnscache for recursive lookups. Other DNS servers, like BIND, support combining the authoritative and recursive roles into one, but djbdns requires them to be split apart for various reasons. This blog post is about improving DNS performance with dnscache, but if you’ve never looked at tinydns, you should check it out; it’s a lightweight DNS server that is easy to manage. It has its quirks, but is still a solid choice.

By default, dnscache acts as a full recursive resolver, which is bad for performance: every single server walks the DNS tree itself to get a record for www.example.com. It asks the root servers for the name servers of .com, then the .com servers for the name servers of example.com, then the example.com servers for the A record of www.example.com. All of this usually takes less than a second, and dnscache caches the results it gets, but every server is still incurring a delay to look up common records.
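One standard way to avoid that per-server recursion is to put dnscache into forward-only mode, pointing it at a central cache. A sketch, assuming a stock daemontools service directory (the IP is a placeholder; FORWARDONLY and root/servers/@ are standard dnscache configuration knobs):

```shell
echo 1 > /service/dnscache/env/FORWARDONLY         # forward queries instead of recursing
echo 10.0.0.53 > /service/dnscache/root/servers/@  # upstream cache to forward to
svc -t /service/dnscache                           # restart dnscache via daemontools
```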

Read more

Posted in: Nexcess