All posts by Sean

Dec 28

  • Created: Dec 28, 2010 3:34 PM

Managing hardware RAID with MegaCLI

Dell PERC
We use Dell PERC 5 and PERC 6 hardware RAID controller cards in quite a few of our servers, and the Linux configuration / management is a bit less than user friendly. The Dell PERC 5/i card is actually a re-branded LSI MegaRAID SAS 8408E and can actually be flashed with the LSI firmware, which has led to mixed success for some. Read more

Nov 21

  • Created: Nov 21, 2010 4:32 PM

Accessing Navigation Timing Data (Web Timing)

W3C Navigation Timing
Accurately measuring webpage performance across different browsers has always been a challenge. While there are benchmarks that are often quoted (such as the SunSpider JavaScript Benchmark and the ACID3 compliance test), measuring total page load time has always been problematic. Luckily for developers, and the users who will eventually benefit from all the optimization work, the W3C has a specification to remedy this. Their solution is called “Navigation Timing” and it’s currently a work in progress, but it should be quite helpful.

To understand the real problem with measuring page load time, we must first look at the traditional (inaccurate) methods being used to accomplish this. Right now, if you want to time how long it takes for your browser to “load” (a loosely defined term) a page, you can insert a JavaScript timer into the <head> element of a page, and use the onLoad event handler to stop the timer. The process is detailed here, if you’re interested.
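
A minimal sketch of that approach looks something like this (the variable names here are only illustrative):

<script>
// As early as possible in the <head>: start the clock
var pageStart = new Date().getTime();
window.onload = function () {
    // Stop the clock when the load event fires; anything that happened before
    // the <head> was parsed (DNS, connecting, redirects) is simply missed
    console.log('Page "loaded" in ' + (new Date().getTime() - pageStart) + ' ms');
};
</script>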

The issues with starting the timer at the <head> tag of a page are many. Some of the more prominent issues that affect the load time greatly are:

  1. Connection time: How long it takes to connect to the web server.
  2. DNS: Any DNS events; incorrectly configured DNS records can take a significant amount of time.
  3. Request: Some sites send a significant amount of data to a web app. This can take a while on a slow connection.
  4. Redirects: Redirects can be slow and force the browser to load more than one page.
  5. Fetch: This is the time it takes to actually download the page. On almost any connection, this is significant.
  6. Render: The time it takes the browser to actually “draw” the page on the screen.

Another issue which might not have a huge effect yet, but might in the future, is the analysis of web pages by the browser at render-time for further optimization. For example, an advanced implementation of WebGL might pre-process some objects on the page and delegate more intensive tasks to the GPU, such as playing a video. While Navigation Timing could take this into account, a normal JavaScript timer might not report these events accurately.

Navigation Timing, or more specifically, ‘window.performance.navigation’ and ‘window.performance.timing’, will probably be implemented in the major browsers (Mozilla Firefox, Internet Explorer 9, and WebKit [the engine in Chromium, Chrome, and Safari]). Although Firefox 4.0b8pre doesn’t include this functionality, Google Chrome 6 and higher (and the version of Chromium that I’m running, 9.0.584.0 [66224]) do, and apparently the latest preview of Internet Explorer 9 does as well. In Chrome / Chromium / IE9, to see Navigation Timing data, you can either take the blue pill and check out the Navigation Timing Demo, or take the red pill and hit ctrl+shift+J in Chrome / Chromium. Once your JavaScript console opens, you’ll want to enter the following:

// Grab the Navigation Timing objects, allowing for vendor prefixes and
// falling back to empty objects in browsers that don't support the API
var performance = window.performance || window.mozPerformance || window.msPerformance || window.webkitPerformance || {};
var timing = performance.timing || {};
var navigation = performance.navigation || {};

Then just enter ‘performance’ into the JS console. Enjoy.
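
The timing attributes map directly onto the problem areas listed above. As a rough sketch (run it after the page’s load event has fired, otherwise some of the timestamps, such as loadEventEnd, will still be 0):

// Per-phase breakdown; all values are in milliseconds
var t = timing;
console.log('DNS lookup:  ' + (t.domainLookupEnd - t.domainLookupStart) + ' ms');
console.log('TCP connect: ' + (t.connectEnd - t.connectStart) + ' ms');
console.log('Page fetch:  ' + (t.responseEnd - t.requestStart) + ' ms');
console.log('Total load:  ' + (t.loadEventEnd - t.navigationStart) + ' ms');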

This technology is very promising for developers as it will, at long last, allow us to accurately measure what end-users are seeing in terms of performance. I’m sure that this, along with all the other cutting-edge technology in modern browsers, will help shape the future of the web as we know it.

This specification is still a work in progress, but you can view the latest draft here: http://dvcs.w3.org/hg/webperf/raw-file/tip/specs/NavigationTiming/Overview.html

Oct 16

  • Created: Oct 16, 2010 10:05 PM

Increase WordPress performance with WPSuperCache

WordPress
Last week, we covered speeding up WordPress with Memcache, which was simple enough, but most shared hosting plans don’t allow you to run memcached. Even if you are allowed to run the memcache daemon on your server and you have the WordPress memcache plugin installed, configured, and working properly, you’re still serving dynamic pages, and every visitor equals at least one MySQL database query.

The solution, or the second half of the solution if you’re already using memcache in WordPress, is “static page caching”, an “old trick” that webmasters have been using for many years. What is newer, however, is the automatic generation and purging (called “garbage collection”) of cached files that WPSuperCache provides.

Normally, and this is just an overview, the following happens when a user visits your WordPress site:

  1. Apache uses mod_rewrite to check the .htaccess file to see if the user should be redirected or if any URL transformations should be applied (“rewrites”).
  2. Apache sees that index.php needs to be processed by PHP because of the configured MIME type.
  3. Apache tells PHP to process index.php based upon the requested URL and any .htaccess transformations (“rewrites”).
  4. PHP loads and parses index.php (the WordPress software) and loads any additional files specified. These are sometimes called “includes”.
  5. PHP runs any necessary MySQL queries specified in the PHP files, and returns the results back to PHP for more processing.
  6. PHP generates the final page and hands it back to Apache.
  7. Apache sends the generated (“dynamic”) page back to the requesting user.

Again, this is a simplified version of what is actually happening. For more details on the exact process, if you’re interested, check out: http://codex.wordpress.org/Query_Overview

While the above process works well enough for a decently-sized site, there are still a lot of steps, and a lot of work the server has to do to generate each page. Add any overhead incurred by plugins, statistics tracking, comments, and ads, and you can see how sites that are reasonably quick with a few visitors can slow down dramatically when they receive serious traffic.

WPSuperCache aims to remedy this by generating the dynamic pages only once, then storing the page for the next visitor who requests it. The second visitor to the page gets a “static” page, which is much easier on the server, since it just has to send the data instead of doing any processing via PHP.
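
Conceptually, static page caching boils down to something like the sketch below. This is not WPSuperCache’s actual code (the cache path and the bootstrap include are made up for illustration), but it shows why a cache hit is so cheap for the server:

<?php
// Conceptual sketch only -- not WPSuperCache's real implementation
$cacheFile = '/path/to/cache/' . md5($_SERVER['REQUEST_URI']) . '.html';

if (file_exists($cacheFile) && (time() - filemtime($cacheFile)) < 3600) {
    // Cache hit: send the stored copy; no PHP processing or MySQL queries needed
    readfile($cacheFile);
    exit;
}

// Cache miss: let WordPress build the page as usual and capture the output
ob_start();
require 'wordpress-bootstrap.php'; // hypothetical stand-in for the normal WordPress load
$html = ob_get_contents();
ob_end_flush();

// Save the generated page so the next visitor gets the static copy
file_put_contents($cacheFile, $html);
?>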

The diagram below provides an overview of this functionality:

[Diagram: Nexcess-WPCache — WordPress request flow with and without WPSuperCache]

As you can see, this can be much more efficient. For more details on how WPSuperCache actually works, you can check out the developer’s page at http://ocaoimh.ie/wp-super-cache/

WPSuperCache can be installed directly from WordPress by going to Administration -> Plugins -> Add New, then searching for ‘WPSuperCache’ and clicking “Install”. WordPress should take care of the rest of the installation.

Now we’ll need to configure WPSuperCache to actually cache our pages. A link to the WPSuperCache plugin configuration page should appear in a red box at the top of your page, but if it doesn’t, just go to Settings -> WP Super Cache in your WordPress admin menu on the left side of the page.

Here are the options that we typically recommend for all users:

  • Cache hits to this website for quick access. (Recommended)
  • Use mod_rewrite to serve cache files. (Recommended)
  • 304 Not Modified browser caching. Indicate when a page has not been modified since last requested. (Recommended)
  • Cache rebuild. Serve a supercache file to anonymous users while a new file is being generated. (Recommended)
  • Clear all cache files when a post or page is published

Now click the blue “Update Status” button at the bottom of the top section, under the settings you just changed. You should now see a message at the top of your page, something like “Rewrite rules must be updated”. Scroll down the page until you get to the “Mod Rewrite Rules” section. Find the blue button that says “Update Mod Rewrite Rules” and click it. The “Mod Rewrite Rules” section should now have a green background, indicating that the rewrite rules in your .htaccess file are correctly configured.
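
The rules the plugin writes are longer than this (they also check for login cookies, query strings, and mobile user agents), but the core idea is roughly the following. Treat it as a simplified illustration rather than the plugin’s exact output:

# If a supercache copy of the requested page exists, serve it directly
RewriteCond %{REQUEST_METHOD} GET
RewriteCond %{QUERY_STRING} ^$
RewriteCond %{DOCUMENT_ROOT}/wp-content/cache/supercache/%{SERVER_NAME}/$1/index.html -f
RewriteRule ^(.*) /wp-content/cache/supercache/%{SERVER_NAME}/$1/index.html [L]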

To test the cache, log out of your WordPress administration panel and visit your site. View the source of the page (ctrl+u in some browsers) and scroll down to the very bottom. You should see something like this:

<!-- Dynamic page generated in 0.208 seconds. -->
<!-- Cached page generated by WP-Super-Cache on 2010-10-16 19:13:06 -->
<!-- super cache -->

If you don’t see an HTML comment from WPSuperCache similar to the above, try holding down the “shift” key on your keyboard and clicking your browser’s refresh button to “hard refresh” the page. If that still doesn’t work, try clearing your browser’s cache, restarting your browser, and trying again. Note that the timestamp in the HTML comment is the time the page was cached, so you can tell when the page was generated by WPSuperCache.

Oct 10

  • Created: Oct 10, 2010 11:29 PM

Handle More Traffic in WordPress with Memcache

Memcache is a high-performance, distributed object caching system.

WordPress is a great piece of blogging / CMS software. If you’re running a WordPress site and you’re having growing pains, you can combine the two to handle increased traffic and, more than likely, get pages to load faster for everyone.

WordPress has built-in support for extensible object caching, but by default, the cache only lasts for a single request (it is not persistent). While this still helps speed up page loads and reduce server load, caching the objects in a persistent cache that is shared across requests and visitors has much more potential for performance improvements and generally scales better. (You can read more about the built-in WP_Cache mechanisms here: http://codex.wordpress.org/Function_Reference/WP_Cache)
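
For reference, the WP_Cache API that a persistent backend speeds up looks like this in plugin or theme code (a sketch; the expensive query is a hypothetical helper):

<?php
// Try the object cache first; only do the expensive work on a cache miss
$popular = wp_cache_get( 'popular_posts', 'my_plugin' );

if ( false === $popular ) {
    $popular = my_plugin_expensive_popular_posts_query();        // hypothetical
    wp_cache_set( 'popular_posts', $popular, 'my_plugin', 300 ); // cache for 5 minutes
}
?>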

We recommend and have successfully implemented the WordPress plugin “Memcached object cache” (http://wordpress.org/extend/plugins/memcached/), which is trivial to install / configure. By default, it assumes that you are running a single memcached server on the same server that PHP is running on (i.e. 127.0.0.1 or localhost). It also assumes the port is 11211, which is the “standard” memcached port.

You’ll need the PECL ‘memcached’ extension: http://pecl.php.net/package/memcached, as well as memcached itself installed, running, and accepting connections on whatever address and port you configure Memcached object cache to use (again, 127.0.0.1:11211 is the default).
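
If your memcached daemon isn’t on 127.0.0.1:11211, the plugin (at least the versions we’ve worked with) reads a global $memcached_servers array from wp-config.php; the host and port below are just examples:

// In wp-config.php -- adjust the address / port to match your memcached daemon
$memcached_servers = array(
    'default' => array( '10.0.0.5:11211' ),
);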

Installing the actual adapter that lets WP_Cache use memcached is just a matter of putting the plugin’s ‘object-cache.php’ file into your wp-content directory. If you’re set up using the defaults, it should just start working immediately. This, in combination with the fantastic WP-SuperCache-Plus plugin (http://wpscp.trac.armadillo.homeip.net/), will allow your site to handle much more traffic than would otherwise be possible.

Oct 3

  • Created: Oct 3, 2010 10:35 AM

Finding the status of Magento cron jobs / tasks

As covered in our last article, you should have a “cron job” (crontab) set up to run Magento’s cron.php file every so often (15 minutes or so is fine) via PHP directly on the server to take care of housekeeping tasks that keep Magento working well. Some other tasks, like updating tracking / inventory status, sending out newsletters, and other miscellaneous things also require that the crontab be properly set up, so if you haven’t taken care of that yet, please see the setup guide here, or contact support@nexcess.net and we can help you set it up.
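
For reference, the crontab entry usually ends up looking something like this (the path is only an example; adjust it to your Magento base directory):

# Run Magento's cron.php every 15 minutes, discarding the output
*/15 * * * * cd /home/username/public_html && php -f cron.php > /dev/null 2>&1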

Once your crontab is properly installed and configured, you might be curious as to what it’s actually doing behind the scenes, or you might want to verify that something did or did not happen, and when. Since Magento doesn’t expose this arguably critical information anywhere in the admin, we’ve whipped up a simple PHP script that can show you what’s scheduled, what’s running, and what already ran, along with all the other information hiding in the ‘cron_schedule’ table of your Magento database. Simply drop the script (linked to below) into your base HTML directory for Magento (usually this will be your “public_html” directory), change the file extension to “.php” instead of “.phps”, and load it up in your favorite browser.

You should see something like this:
[Screenshot: Mage Cron status table produced by the script]

All of the fields should speak for themselves.
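
If you would rather skip the script and query the table directly from the MySQL client, something along these lines (prepend your table prefix, if you use one) shows the most recent entries:

SELECT schedule_id, job_code, status, messages,
       created_at, scheduled_at, executed_at, finished_at
FROM cron_schedule
ORDER BY scheduled_at DESC
LIMIT 20;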

Copy and save the following PHP script.

<?php
 
// Parse magento's local.xml to get db info, if local.xml is found
 
if (file_exists('app/etc/local.xml')) {
 
$xml = simplexml_load_file('app/etc/local.xml');
 
$tblprefix = $xml->global->resources->db->table_prefix;
$dbhost = $xml->global->resources->default_setup->connection->host;
$dbuser = $xml->global->resources->default_setup->connection->username;
$dbpass = $xml->global->resources->default_setup->connection->password;
$dbname = $xml->global->resources->default_setup->connection->dbname;
 
}
 
else {
    exit('Failed to open app/etc/local.xml');
}
 
// DB Interaction
$conn = mysql_connect($dbhost, $dbuser, $dbpass) or die ('Error connecting to mysql');
mysql_select_db($dbname);
 
$result = mysql_query("SELECT * FROM " . $tblprefix . "cron_schedule") or die (mysql_error());
 
 
// CSS for NexStyle
echo '
<html>
<head>
<title>Magento Cron Status</title>
<style type="text/css">
html {
    width: 100%;
    font-family: Helvetica, Arial, sans-serif;
}
body {
    background-color:#00AEEF;
    color:#FFFFFF;
    line-height:1.0em;
    font-size: 125%;
}
b {
    color: #FFFFFF;
}
table{
    border-spacing: 1px;
    border-collapse: collapse;
    width: 300px;
}
th {
    text-align: center;
    font-size: 125%;
    font-weight: bold;
    padding: 5px;
    border: 2px solid #FFFFFF;
    background: #00AEEF;
    color: #FFFFFF;
}
td {
    text-align: left;
    padding: 4px;
    border: 2px solid #FFFFFF;
    color: #FFFFFF;
    background: #666;
}
</style>
</head>';
 
// DB info for user to see
echo '
<body>
<a href="http://nexcess.net">
<img src="http://static.nexcess.net/images/logoMainR2.gif" width="217" height="38" alt="Nexcess Beyond Hosting"></a>
 
 
 
<b>Table Prefix:</b> ' . $tblprefix . '<br />'
. '<b>DB Host:</b> ' . $dbhost . '<br />'
. '<b>DB User:</b> ' . $dbuser . '<br />'
. '<b>DB Name</b>: ' . $dbname . '<br /><br /></p>';
 
// Set up the table
echo "
        <table border='1'>
        <thead>
        <tr>
        <th>schedule_id</th>
           <th>job_code</th>
           <th>status</th>
           <th>messages</th>
           <th>created_at</th>
           <th>scheduled_at</th>
           <th>executed_at</th>
           <th>finished_at</th>
           </tr>
           </thead>
           <tbody>";
 
// Display the data from the query
while ($row = mysql_fetch_array($result)) {
           echo "<tr>";
           echo "<td>" . $row['schedule_id'] . "</td>";
           echo "<td>" . $row['job_code'] . "</td>";
           echo "<td>" . $row['<span style="background-color:#CCFF00;">status</span>'] . "</td>";
           echo "<td>" . $row['messages'] . "</td>";
           echo "<td>" . $row['created_at'] . "</td>";
           echo "<td>" . $row['scheduled_at'] . "</td>";
           echo "<td>" . $row['executed_at'] . "</td>";
           echo "<td>" . $row['finished_at'] . "</td>";
           echo "</tr>";
}
 
// Close table and last few tags
echo "</tbody></table></body></html>";
 
mysql_close($conn);
?>

PLEASE NOTE: This script is intended for use by site administrators only, so make sure access to it is properly restricted. If you have any questions, please e-mail support@nexcess.net

Posted in: Magento, PHP
Sep 24

  • Created: Sep 24, 2010 4:47 PM

Speed up Magento DataFlow (Import / Export)

For as long as we can remember, Magento has had issues with Import and Export profiles, especially regarding performance. We have tried many different solutions for speeding up DataFlow and dealing with other import / export related issues, and we’ve found one that seems to help in the majority of cases. First, we’d like to mention that the Magento import / export status page occasionally just shows a white screen (the page doesn’t load completely or loads blank), a 500 error, or some other random error when, in reality, the import / export job is still running in the background and will complete successfully.

Magento DataFlow has 4 main pieces

  • Adapter (Reads the external data source and allows the parser to access it)
  • Parser (Goes through the external data and translates it into something Magento can understand)
  • Mapper (Takes external data fields and associates them with the correct Magento data fields)
  • Validator (Ensures that data is correct before / after committing it)

A standard workflow looks something like this

Create an import / export profile (either via the Profile Manager or the Advanced Profile, which allows you to tweak the actual XML profile)

You probably want to select “Local File” for “File Information -> Type” when creating the profile. “Local File” means that the file will be saved to [Magento basedir]/var/export if you’re exporting data. It is critical that you ensure the file does not exist or that you manually specify a new filename for each export; sometimes Magento has trouble overwriting an existing file and this will cause cryptic errors / export failure. Your best bet is to use a new filename for each attempt.

Run the profile, either from the profile’s page in the Magento admin or via a custom script (the latter is not recommended or necessary in most cases, unless you’re running it via cron)

When DataFlow receives an XML request for an import or export, it will connect either to the database (in the case of an export) or to the external data source (in the case of an import) and, after parsing, mapping, and possibly validating, start building up the results (to be written out to a file or imported into the database later) in the following tables:

dataflow_batch_import
dataflow_batch_export

There is an issue with Magento where it does not truncate (empty the data from) the tables before starting an import / export. Also, having extra data in those tables or the logs will slow DataFlow down quite a bit.
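
You can inspect these tables (or, if a run is genuinely stuck, clear them) by hand from the MySQL client; the script further down simply automates this. Prepend your table prefix if you use one:

-- How many rows are currently queued?
SELECT COUNT(*) FROM dataflow_batch_export;
SELECT COUNT(*) FROM dataflow_batch_import;

-- WARNING: only if a DataFlow run is stuck; this aborts it
TRUNCATE TABLE dataflow_batch_export;
TRUNCATE TABLE dataflow_batch_import;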

Here is what we recommend for speeding up DataFlow

  1. Log into your Magento Dashboard (admin) and go to System > Configuration
  2. Go to Advanced > System -> Log Cleaning on the side menu
  3. Change “save log, days”. We recommend 14.
  4. Select “Enable log cleaning”
  5. Ensure your crontab is properly configured. Contact support@nexcess.net if you’re not sure or follow this guide: ( http://docs.nexcess.net/article/setting-up-a-cron-job.html )
    To make sure the logs get cleaned, run cron.php manually just before the import by loading http://yoursite.tld/cron.php (or wherever cron.php is located; it’s in your Magento base directory) in a web browser. There won’t be any output displayed in your browser, but you should get an HTTP 200 response code if everything ran OK.
  6. You can check the status of the export by running “SELECT COUNT(batch_export_id) FROM dataflow_batch_export” in MySQL. Alternatively, you can download the PHP script below, rename it “mage-dataflow.php”, upload it to your Magento base directory, and load it in your web browser. It will show you the number of rows in the import and export tables; if the row count goes up when you reload the page, more items are being processed. When the count drops back to zero, the DataFlow operation has started writing out the file (for an export) or importing the records (for an import), and should be done in less than 5 minutes. Additionally, the page lets you truncate the tables at the press of a button (WARNING: only use this if your DataFlow process is stuck, as it will abort the process).
<?php

function emptyTables() {

// Parse magento's local.xml to get db info, if local.xml is found

	if (file_exists('app/etc/local.xml')) {

		$xml = simplexml_load_file('app/etc/local.xml');

		$tblprefix = $xml->global->resources->db->table_prefix;
		$dbhost = $xml->global->resources->default_setup->connection->host;
		$dbuser = $xml->global->resources->default_setup->connection->username;
		$dbpass = $xml->global->resources->default_setup->connection->password;
		$dbname = $xml->global->resources->default_setup->connection->dbname;

	} 
	
	else {
	    exit('Failed to open app/etc/local.xml');
	}

	$conn = mysql_connect($dbhost, $dbuser, $dbpass) or die ('Error connecting to mysql');
	mysql_select_db($dbname);

	mysql_query("TRUNCATE " . $tblprefix . "dataflow_batch_export") or die (mysql_error());
	mysql_query("TRUNCATE " . $tblprefix . "dataflow_batch_import") or die (mysql_error());
}

// Get the name of this script
$myname = $_SERVER["SCRIPT_NAME"];

// Check to see if we're truncating tables
if (!empty($_REQUEST['clear'])) {
	emptyTables();
	echo "<h1>Tables truncated!</h1><br /><br />";
}


// DB Interaction
if (file_exists('app/etc/local.xml')) {

	$xml = simplexml_load_file('app/etc/local.xml');

	$tblprefix = $xml->global->resources->db->table_prefix;
	$dbhost = $xml->global->resources->default_setup->connection->host;
	$dbuser = $xml->global->resources->default_setup->connection->username;
	$dbpass = $xml->global->resources->default_setup->connection->password;
	$dbname = $xml->global->resources->default_setup->connection->dbname;

	} 
	
	else {
	    exit('Failed to open app/etc/local.xml');
	}

	$conn = mysql_connect($dbhost, $dbuser, $dbpass) or die ('Error connecting to mysql');
	mysql_select_db($dbname);



$exportresult = mysql_query("SELECT COUNT(batch_id) FROM " . $tblprefix . "dataflow_batch_export") or die (mysql_error());
$importresult = mysql_query("SELECT COUNT(batch_id) FROM " . $tblprefix . "dataflow_batch_import") or die (mysql_error());

$numexportrows = mysql_fetch_array($exportresult);
$numimportrows = mysql_fetch_array($importresult);

$numexport = $numexportrows[0];
$numimport = $numimportrows[0];



// CSS for NexStyle
echo '
<html>
<head>
<title>Magento DataFlow Status</title>
<style type="text/css">
html {
    width: 100%;
    font-family: Helvetica, Arial, sans-serif;
}
body {
    background-color:#00AEEF;
    color:#FFFFFF;
    line-height:1.0em;
    font-size: 125%;
}
b {
    color: #FFFFFF;
}
table{
    border-spacing: 1px;
    border-collapse: collapse;
    width: 300px;
}
th {
    text-align: center;
    font-size: 125%;
    font-weight: bold;
    padding: 5px;
    border: 2px solid #FFFFFF;
    background: #00AEEF;
    color: #FFFFFF;
}
td {
    text-align: left;
    padding: 4px;
    border: 2px solid #FFFFFF;
    color: #FFFFFF;
    background: #666;
}
</style>
</head>';

// DB info for user to see
echo '
<body>
<a href="http://nexcess.net">
<img src="http://static.nexcess.net/images/logoMainR2.gif" width="217" height="38" alt="Nexcess Beyond Hosting"></a>
<br />
<br />
<br />
<b>Table Prefix:</b> ' . $tblprefix . '<br />'
. '<b>DB Host:</b> ' . $dbhost . '<br />'
. '<b>DB User:</b> ' . $dbuser . '<br />'
. '<b>DB Name</b>: ' . $dbname . '<br /><br /></p>';

// Set up the Export table
echo "
	<h1>Export</h1>
	<h2>$numexport rows</h2>
	<br />
	<h1>Import</h1>
	<h2>$numimport rows</h2>";

echo '<INPUT type="button" value="Truncate import and export tables (runs in a new window)" onClick="window.open(\'' . $myname . '?clear=1\',\'mywindow\',\'width=400,height=510\')">';

mysql_close($conn);
?>
Posted in: Magento, PHP