Posts Tagged Backup

Upgrading your Ubuntu Linux filesystem from Ext3 to Ext4

With recent Linux distributions (based on the 2.6.28 kernel or later), the Ext4 file system is considered stable and provides many improvements over Ext3.

Ubuntu systems running 9.04 (Jaunty Jackalope) had the option to format Ext4 during install time, but Ext3 was the default until 10.04 (Lucid Lynx). However, it’s possible (and fairly easy) to upgrade Ext3 to Ext4 in-place and without reformatting or reinstalling. Nice, huh?

The following guide is based heavily on the Ubuntu documentation, with some added steps and clarification. If you’re going to do this, use your head and make sure you have backups of your important data before you start, just in case. Also, don’t skip any steps, and double-check yourself along the way. A typo could ruin your day.

Zero – Before you start

Verify your kernel version by running uname -r. Make sure it is 2.6.28 or higher. If it is not, STOP. Upgrade your kernel before attempting to proceed.
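The version check can be scripted if you prefer; this is a minimal sketch that uses version sort to compare the running kernel against 2.6.28:

```shell
# Fail fast if the running kernel is older than 2.6.28.
req=2.6.28
cur=$(uname -r | cut -d- -f1)
if [ "$(printf '%s\n' "$req" "$cur" | sort -V | head -n1)" = "$req" ]; then
  echo "kernel $cur is new enough"
else
  echo "kernel $cur is too old; upgrade before proceeding"
fi
```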

Also, check that you’re currently using Ext3 — this would be pointless if you’re already running ext4 — by using the mount command. You should see something like the following:

/dev/sda1 on / type ext3

This shows ext3 as the current filesystem type.
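Alternatively, df can report the type directly; this one-liner prints just the filesystem type of the root partition:

```shell
# Print only the filesystem type of / (second field of the second line).
df -T / | awk 'NR==2 {print $2}'
```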

One – Turn off any automatic updating

Go to System > Administration > Update Manager. Click Settings and set Automatic Updates to Only notify about available updates. You want to make sure that the system doesn’t start applying an update halfway through this and give you the potential worst-case of an unbootable system.

Two – Prepare to load the Ext4 driver

Ext4 is backwards compatible with Ext3, which makes this update process as easy as it is. We’ll start by telling the system to load the Ext4 driver instead of the Ext3 driver at boot. This will allow the system to boot and run normally for the steps that follow.

Edit the file /etc/fstab in your favorite text editor. gksu gedit /etc/fstab will do fine. Change any references for disks that you plan to convert from Ext3 to Ext4. Take a look at the example:

# /etc/fstab: static file system information.
#
# &lt;file system&gt; &lt;mount point&gt;   &lt;type&gt;  &lt;options&gt;       &lt;dump&gt;  &lt;pass&gt;
proc            /proc           proc    defaults        0       0
# /dev/sda1
UUID=327c1819-14e1-4b96-b9d2-d5e55e50f1ae /               ext3    defaults,errors=remount-ro,relatime 0       1
# /dev/sda5
UUID=900e39f2-ad49-42ee-a7f5-8e6807d6b35b none            swap    sw              0       0
/dev/scd0       /media/cdrom0   udf,iso9660 user,noauto     0       0

The above example shows an Ext3 filesystem on /dev/sda1. I’ll use /dev/sda1 as the file system for the rest of the guide for clarity. So, change ext3 to ext4 in the above file and save it.
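If you’d rather not hand-edit, here’s a hypothetical dry run: apply the substitution to a scratch copy with sed and review the diff before replacing the real file. Note that it rewrites every standalone “ext3”, including in comments.

```shell
# Apply ext3 -> ext4 to a scratch copy and show what changed.
cp /etc/fstab /tmp/fstab.new
sed -i 's/\bext3\b/ext4/g' /tmp/fstab.new
diff /etc/fstab /tmp/fstab.new || true
# When satisfied:
#   sudo cp /etc/fstab /etc/fstab.bak && sudo cp /tmp/fstab.new /etc/fstab
```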

Reboot. ( shutdown -r now )

Three – Update the filesystem options

After rebooting, the kernel will be using the Ext4 filesystem driver, even though your filesystem on disk is still natively ext3. That’s what you want; you’ll make the actual on-disk changes shortly.

Now, run the following command to make the changes to your filesystem that add the Ext4 features:
sudo tune2fs -O extents,uninit_bg,dir_index /dev/sda1

Note: There are no spaces after the commas in the option list. Copy and paste if you’re not sure.

This assumes /dev/sda1, like I stated above. Change it if it’s not correct for you, or if you’re doing multiple filesystems, run it for each one.

Reboot again ( shutdown -r now )

Four – Take a deep breath

Due to the changes that tune2fs made on your filesystem, when you reboot you’re going to get a message like:

“Errors were detected on your filesystem, press ‘F’ to fix…”

Press F and let it do its thing.

You may get the following error as well (I did):

The disk drive for /tmp is not ready yet or not present.
Continue to wait or ...

Just wait patiently. The filesystem changes are still being processed, and the message simply means the system is having trouble mounting or accessing /tmp while that work is running.

After a few minutes (depending on the size of your partitions and the amount of data on them), everything will go back to normal and you’ll be able to log in again.

There’s just a few more steps.

Five – Reinstall grub

Run the following command to reinstall grub. Note that the partition number is omitted from the device path — this is intentional and correct.

sudo grub-install /dev/sda

You should get a message indicating the operation was successful.

If you are upgrading from 8.04 LTS to 10.04 LTS, you will need to install grub2 as soon as possible, as grub1 is not Ext4-aware. (Users running 9.04 are not affected by this.)

sudo apt-get install grub2

followed by

sudo update-grub

Once it’s done, no more reboots are really necessary, though you can always reboot to make sure grub actually reinstalled correctly. Files written from now on will be written with full Ext4 structures. You can also turn automatic updates back on at this point.

You can verify all the filesystem features are set by running:

sudo tune2fs -l /dev/sda1 | grep features

This is the list of ext4 features you should have set:
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent sparse_super large_file uninit_bg
If you’re missing one or more features, go back to step three above and check that you entered the command correctly.
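If you’d rather script the check than eyeball the feature line, here’s a small sketch (the helper name check_features is my own); it prints nothing when all three required features are present:

```shell
# Report which of the three required ext4 features are missing from a
# "Filesystem features:" line.
check_features() {
  for f in extent uninit_bg dir_index; do
    case " $1 " in
      *" $f "*) ;;                # feature present
      *) echo "missing: $f" ;;    # feature absent
    esac
  done
}

# Feature line taken from the example output above:
check_features "has_journal ext_attr resize_inode dir_index filetype needs_recovery extent sparse_super large_file uninit_bg"
```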

Questions, comments, and feedback are welcome, as always.


Speed up Windows by disabling the “last access” time stamp

By default, Windows keeps track of the last time a file was accessed through the “last access” time stamp. If you use this time stamp for backup purposes, or you make frequent use of the Windows search function based on time stamps, then you may actually have a use for it.

In other cases, you can disable the updates and speed up Windows by avoiding a write to that time stamp every time a file is read.

There are a few different methods for disabling that time stamp:

Via the command line

Open an administrator-level command prompt and enter this command:

fsutil behavior set disablelastaccess 1

Replace the 1 with a 0 (zero) to turn the “last access” time stamp updating back on.
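You can also query the current setting at any time; fsutil reports the stored value (1 means last-access updating is disabled):

```shell
fsutil behavior query disablelastaccess
```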

Via regedit

Navigate to the following registry location:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem

Right-click the right-side panel and select New > DWORD Value. Call it NtfsDisableLastAccessUpdate and give it a value of 1.

To reenable, change the value to 0 (zero) or just delete it.

A reboot is required when the value is changed.

Via a registry file

Take the code from one of the following settings and create a new file ending in a .reg extension. Double-click to make the change, and reboot to make it take effect.

Enable

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"NtfsDisableLastAccessUpdate"=-

Disable:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"NtfsDisableLastAccessUpdate"=dword:00000001

Questions, comments, and feedback are appreciated, as always.


Correctly Recognize Alps Touchpad on Dell E6510 in Linux

Laptops which use newer Alps touchpad hardware may lack functionality as a result of a regression in the kernel psmouse driver — the touchpad is detected and works as a pointing device, but only with basic features. Scrolling, disabling tap-to-click, turning the touchpad off while typing, and multi-touch (on supported devices) are among the missing features. This appears to be the case with the E5510, E6410, M6400, and other Dell (potentially all E2) and some non-Dell models.

From Simon Dierl:

Apparently, newer ALPS touchpads use a new, undocumented and unsupported protocol. The touchpad falls back to a legacy emulation mode, resulting in faulty detections. The kernel.org bug lists some efforts to reverse-engineer the protocol and has some patches based on DELL contributions that enable ImPS/2 emulation (scrolling works). This, however, still does not allow for synaptics support (turn off when typing, horizontal scroll, etc.). Additionally, some people report problems on suspend/resume. [sic]

The easiest way to tell whether your machine is affected by this bug is to go to System > Preferences > Mouse and look for a Touchpad tab. If it’s absent, you are probably affected.

Another way to see if you are affected by this bug is to run lsinput and look for something like the following:

/dev/input/event9
bustype : BUS_I8042
vendor  : 0x2
product : 0x1
version : 0
name    : "PS/2 Generic Mouse"
phys    : "isa0060/serio1/input0"
bits ev : EV_SYN EV_KEY EV_REL

The above output shows the touchpad being identified and driven by the PS/2 driver.

This bug has been entered into Launchpad as bug #606238 and has its roots in Kernel bug #14660. Since it’s a mainline kernel bug, it’s likely to affect every Linux distribution. So far, it’s still a work-in-progress and there’s not been an accepted patch submitted to the Linux kernel team. There’s a discussion on ubuntuforums.org that this is a regression, and this was working in older kernel versions.

The steps below are based on a patch from cmg238 which, at the very least, causes the kernel to correctly recognize the device as a touchpad and enables some of the missing functionality. I have adjusted the instructions for clarity and added explanation.

Download the kernel source (into /usr/src):

sudo apt-get build-dep --no-install-recommends linux-image-$(uname -r)
cd /usr/src && sudo apt-get source linux-image-$(uname -r)

(Note: in Ubuntu Precise 12.04, do the following instead, based on this LaunchPad comment)

sudo apt-get build-dep --no-install-recommends linux-image-$(uname -r)
sudo git clone git://github.com/bgamari/linux.git
cd linux
sudo git checkout origin/alps
sudo cp /boot/config-$(uname -r) .config

Also note that on Ubuntu Precise 12.04, you will be asked a bunch of additional questions at make-time. Accept the defaults, unless you have a reason to do otherwise.

Read about how to “undo” an apt-get build-dep and uninstall previously installed packages here.

Patch drivers/input/mouse/alps.c by locating the alps_model_data array and adding the E6510 line shown below:

static const struct alps_model_info alps_model_data[] = {
	{ { 0x73, 0x02, 0x64 }, 0xcf, 0xcf, ALPS_PASS | ALPS_DUALPOINT | ALPS_PS2_INTERLEAVED }, /* Dell Latitude E6510 */

Compile psmouse.ko module

cd src/drivers/input/mouse
make -C /lib/modules/`uname -r`/build M=`pwd` psmouse.ko

(On Ubuntu Precise 12.04, use the following instead:)

cd /usr/src/linux/drivers/input/mouse
make -C /lib/modules/`uname -r`/build M=`pwd` psmouse.ko

The following steps will cause you to lose mouse functionality until the modprobe psmouse statement, so be prepared. Also, you may want to back up your existing /lib/modules/(kernel version)/kernel/drivers/input/mouse/psmouse.ko before doing this!

rmmod psmouse
cp psmouse.ko /lib/modules/`uname -r`/kernel/drivers/input/mouse/
modprobe psmouse
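A hedged variant of the three steps above: chaining them in a single root shell keeps the window without a working pointer as short as possible.

```shell
# Remove the old module, install the patched one, and reload, all in one go.
sudo sh -c 'rmmod psmouse &&
  cp psmouse.ko /lib/modules/$(uname -r)/kernel/drivers/input/mouse/ &&
  modprobe psmouse'
```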

The last thing to mention: if you update your kernel, you will receive the distribution’s psmouse.ko module again. If the new kernel does not include a fix for this bug, you will need to follow the directions in this post again to recompile the patched module.

Since this is a mainline kernel issue, I would ask that any reader who is able to please visit the links within this post and contribute wherever you can to help get this resolved. You are welcome and encouraged to share your thoughts and feedback in the comments below as well.


Backing up your server using JungleDisk Server Edition – part 2

In part 1, I told you how to set up JungleDisk backup for your Linux server. In this part 2, I’ll show you how to have it automatically dump and back up your MySQL databases (correctly)!

There are security implications if the permissions on these files are not set correctly: you’re storing your MySQL password in the script, and complete database dumps will be sitting on your hard drive. I will attempt to explain my solution as clearly as possible, but I’m not responsible if it doesn’t work for you.

So, start out by deciding where you want your database-dumping script to live. A few good spots are /root (the root user’s home directory) and /etc/jungledisk (with the configuration files). I’ve called my backup script prebackup.sh (you’ll see why below), and I’ll use that name the rest of the way through.

So create your prebackup.sh script as root, and set it to mode 700.

touch prebackup.sh && chmod 0700 prebackup.sh

This makes sure that root is the only user who can read, write, or execute it.

Now, using your favorite text editor, you can use the following sample script:

#!/bin/bash
# Dump all MySQL databases to a date-stamped file readable only by root.
date=$(date -I)
dir=/var/backups
file=sqlbackup-$date.sql
touch "$dir/$file" &&
chmod 600 "$dir/$file" &&
mysqldump --all-databases -p'__MySQLPassword__' >> "$dir/$file"
# Clean up: delete dump files older than 7 days.
find $dir -type f -ctime +7 -delete

Danny pointed out that creating gzipped MySQL dumps, as I had in my original script, is discouraged as it defeats the data de-dup feature of jungledisk. The above script has been changed from the original to make uncompressed dumps. Thanks, Danny!

The last line, with the find statement, is responsible for the cleanup of the directory. It will delete any file which is older than 7 days. If you want more or less time, simply change +7 to your desired number of days (keeping the + sign).

Warning: There’s very little (read: none) sanity-checking in this script. Replace __MySQLPassword__ with your root MySQL password.

Jay pointed out that there will likely be issues with handling special characters in the SQL password. If you have any suggestions, please feel free to post them in the comments below. 
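One hedged workaround for special characters: keep the credentials in a MySQL option file instead of embedding them on the command line. The mysqldump client reads the [mysqldump] group from the invoking user’s ~/.my.cnf; __MySQLPassword__ is the same placeholder as in the script above.

```shell
# Create ~/.my.cnf for the user running the backup (root, in this setup),
# readable only by its owner.
cnf="$HOME/.my.cnf"
cat > "$cnf" <<'EOF'
[mysqldump]
user=root
password=__MySQLPassword__
EOF
chmod 600 "$cnf"
```

With that in place, the mysqldump line in prebackup.sh no longer needs the -p flag at all: `mysqldump --all-databases >> $dir/$file`.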

After saving, you should have a 183-byte (give or take) file with 0700 mode:

root@host:/etc/jungledisk# ls -l prebackup.sh
-rwx------ 1 root root 183 Mar 24 17:08 prebackup.sh

You should make sure that the directory in $dir is owned by user and group root with mode 0700. This ensures no one else has access to your dumped databases. Now that you have your script, it’s time to automate it. You could schedule a cron job to run it, but it’s easier to have JungleDisk run it for you at the start of every backup job.

Start your management-side program, login, and go to your server’s backup configuration. Under Backup Options, check Enable Pre and Post Backup Scripts. Now, click Configure, and in the Pre-Backup Script, enter the full path and filename of your newly created script, i.e. /etc/jungledisk/prebackup.sh.

That’s it!

Now, on your assigned schedule, the server engine will run your database dumping script, and start your backup job. Of course, make sure whatever directory you’re dumping your databases to is included in your backup set.

The last thing to note is that this script is amazingly light: beyond the dump and the dated-file cleanup, it doesn’t do much else. You’re free to modify it, and I would greatly appreciate any feedback on your modifications.

Comments are welcome, as always.


Backing up your server using JungleDisk Server Edition – part 1

 

This guide assumes you’re using a Debian-based (Ubuntu, Debian) build of Linux, and we’ll be using the 64-bit download from JungleDisk. The instructions don’t really change for the 32-bit version, except for the installer file name.

The first steps:

Go over to the JungleDisk Business page and sign up for, and download, the server edition. Now, this edition comes in two very important parts: The server-side program, and the management-side program.

The server-side program is what runs on your server. That’s the “backup engine” if you will. You will download the program appropriate for your server environment.

The management-side program is the program that you remotely connect to the “server” to configure it. You will download the program appropriate for running on your desktop computer.

It is fine to download the server-side version for Linux and the management-side version for Windows, if that is your configuration. In this case, I’m downloading the Linux .deb server-side installer for 64-bit Linux, and the Windows management-side program. I can’t give you the actual download links; you’ll find them in your account page.

Now, on the server, I’m going to install the server-side engine as root. Navigate to the directory where you placed the downloaded file, and run:

sudo dpkg -i junglediskserver_315-0_amd64.deb

That will install the server; follow the directions. Since it’s a deb package, it automatically sets up init scripts so the JungleDisk engine runs on startup. However, you will notice that at the end of the setup you were prompted to copy and edit an XML file. Copy /usr/local/share/jungledisk/junglediskserver-license-EXAMPLE.xml to /etc/jungledisk/junglediskserver-settings.xml:

cp /usr/local/share/jungledisk/junglediskserver-license-EXAMPLE.xml /etc/jungledisk/junglediskserver-settings.xml

Now use your favorite editor to make a few changes to /etc/jungledisk/junglediskserver-settings.xml:

Between the opening and closing license-key tags, enter your license key (found in your JungleDisk account).

Now, restart the jungledisk service.

/etc/init.d/junglediskserver restart
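To confirm the engine actually came back up, a quick process check works (the process name here is an assumption based on the package name):

```shell
# List the running JungleDisk server process, or say so if it isn't running.
pgrep -af junglediskserver || echo "junglediskserver is not running"
```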

I also added my login username and password between the corresponding user and password tags, respectively, but I don’t think it’s necessary. Worth keeping in mind if something doesn’t work right.

That’s all for the server-side configuration.

Now, on your management-side program, simply run the program and log into your JungleDisk account and your server should appear in the list. Double-click on it and the configuration screen will appear, where you can create backup sets and schedule them as you wish.

Want to correctly backup your MySQL databases in your backup set? See part 2 of this article, coming soon!

Comments are welcome, as always!


Optimizing WordPress

So after my little fiasco with plug-ins and CPU throttling, I’ve been looking for ways to make WordPress at least a little lighter and faster. I’m not going to cover disabling plug-ins, I’m going to go over a few other ways, starting with …

Disabling revisions:

Every time a post is edited or published, a new revision is created. These stick around in the database (they’re never deleted) and not only grow the database, but can also lengthen query times. So, per MyDigitalLife and the WordPress Codex, here’s the quick-and-dirty:

…simply add the following line of code to wp-config.php file located in the root or home directory of WordPress blog.

define('WP_POST_REVISIONS', false);

If you would rather limit the number of revisions to something small, say 2 for example, just use the number instead of false:

define('WP_POST_REVISIONS', 2);

It should be added somewhere before the require_once(ABSPATH . 'wp-settings.php'); line. That’s it. Revisions will no longer be created. If you want to delete previously created revisions, read on…

Deleting revisions:

So now that you’ve disabled revisions, how do you delete all the old cruft laying around? MyDigitalLife has the answer on this one too.

…and then issue the following [SQL] command (it’s also recommended to backup the database before performing the deletion SQL statements):

DELETE FROM wp_posts WHERE post_type = "revision";

All revisions should now be deleted from the database.
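If you’d rather run the cleanup from a shell prompt than a MySQL console, the same statement can be issued through the mysql client (the database name “wordpress” is a hypothetical; you’ll be prompted for the password, and you should back up the database first):

```shell
mysql -u root -p wordpress -e 'DELETE FROM wp_posts WHERE post_type = "revision";'
```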

Caching:

Caching is a hot-button issue for sites that could potentially see high amounts of traffic (and since we would all like to be in that category…). The caching plug-in that I use and recommend is WP Super Cache. The UI is easy enough to work with, though it does require editing the .htaccess file.

Database queries:

Shared hosting providers get real upset when applications and scripts perform excessive and unoptimized database queries. Heavy themes, excessive numbers of widgets, and badly-written plug-ins all contribute to this. Fortunately, a post on CravingTech points to an easy method to check the number of queries happening on a single page load.

You can insert this snippet of code on your Footer.php file (or anywhere) to make it display the number of queries:

<?php echo $wpdb->num_queries; ?> <?php _e('queries'); ?>

After looking at the number of queries occurring on a page load, try changing themes, disabling plug-ins, and/or reducing the number of widgets on a page to reduce the query count. SQL Monitor looks like a useful plug-in for further examining SQL queries, but I haven’t used it, so I can’t comment on its usefulness (or lack thereof).

Also…

I’ve stumbled on some additional information while researching: apparently the “WordPress should correct invalidly nested XHTML automatically” setting (under Settings > Writing) can not only increase the load when a post is saved, but can also break some plug-ins. If you’re familiar enough with (X)HTML to correctly close tags yourself, you might actually be better off turning this off.

You can also find other settings for wp-config.php on the WordPress Codex page.


Whole-disk encryption

Two times recently I’ve had friends who have had data stolen from them physically; one had her house broken into and her laptop stolen, another had her external backup drive stolen.

It’s one thing to have a laptop or a hard drive stolen, but it’s much much worse to worry about the impact that the now-compromised data can have on your life — Stored passwords, confidential data, personal information, photos, the list goes on. It’s even possible that a thief could use the saved passwords and cookies saved on your computer to access your on-line accounts and do considerably more damage. On-line banking, email accounts, social-media accounts, etc.

Even if your laptop is damaged, it’s still possible for an attacker to take the hard drive out and hook it up to another computer to gain access to your information. Log-in passwords don’t protect against this if the OS is bypassed. Your data is completely accessible.

While it is possible to secure a large part of your data by encrypting your most private files, that still doesn’t cover areas like browser cookies, temp files, and the swap space. Data from secured areas can “leak” into those areas and still be viable for attackers. In addition, this requires effort, and I talked about this in my post about backups.

Hard drive passwords are one good tool, as they render the drive effectively useless to all but the most sophisticated attackers (read: all except police, government, and attackers with sophisticated tools). The hard drive is logically “locked” at the firmware level and cannot be unlocked without the correct password or some circumvention of this. Your data is still stored on the drive, but attempts to read the drive will fail. Most modern computers (especially laptops) and hard drives support hard drive passwords. This is a good tool, but if you’re at all concerned with the potential weaknesses of this, you might want something a little stronger.

Enter full-disk (or whole-disk) encryption. This is one of the strongest tools to protect against any type of attack on the hard drive. By storing the data on the drive in encrypted form, it becomes next-to-impossible to get anything useful off the drive. Full-disk encryption typically uses AES, which is well established to be secure.

There are several commercial solutions to full-disk encryption, but as a big supporter of free/open-source software, I’m only going to cover the free and cross-platform ones.

TrueCrypt (Windows, Mac, Linux) – This is an exceptional tool for encrypting both internal and external drives, and creating encrypted “containers” to store files in. Free and open-source, and from what I’ve seen, rock solid. I’ve used this under Windows to do full-disk encryption, and I still use it to keep my 1TB external hard drive encrypted. Setup is easy and doesn’t require you to reinstall the OS — encryption of your existing drive can be done on-the-fly and you won’t lose any data. (Though having a backup beforehand is always a good idea)

Ubuntu has a few options ingrained into the OS. Home directory encryption is a choice during installation, which protects your files when you’re not logged in. The encryption is very good, but there’s still the chance that file information will leak out into unencrypted areas of your drive. When you’re installing Ubuntu, and you’re at the part where you enter your chosen username and password, at the bottom of the screen you’ll see the option “Require my password to log in and decrypt my home directory.” That’s the option which enables home directory encryption.

The “alternate” installer CD gives a solution to this: Full-disk encryption using LVM/dmcrypt. Unfortunately, this option will require you to reinstall your OS as it requires the disk to be repartitioned as LVM and encrypted. Also, it’s a little harder to set up. Although the installer is guided (and some very good walkthroughs exist) there’s no fancy GUI. It’s also not easily reversible, but as far as I’m concerned, there’s no reason you’d want to. While installing using the alternate CD, choose “LVM with encryption” while you’re setting up partitions. It’s worth noting that this installation was markedly slower than a typical install (I think it took an hour-something) but considering the amount of disk I/O that was taking place, I’m really not surprised.

Performance versus an unencrypted drive is good in all cases: your system will take a performance hit, but it won’t be very noticeable except during disk thrashing (very heavy disk read/write activity). You’ll notice a bit of a slowdown in system performance then, but it won’t be much.

Thoughts or opinions on this? Please share them!


Why aren’t you backing up your data?

Would you only ever have one house key? Car key?

Would you only get one picture of your child? Your spouse?

Then why would you not treat your computer data the same way?

Being a computer technician, I can’t tell you how many times (a day!) I hear “Will this affect my data/hard drive/information/etc…” You know who I always hear that from? People who don’t have backups.

If you’ve ever lost an important file because of a system crash, hard drive failure, or mistakenly deleted it, or even worse — suffered at the hands of a theft or destruction from a computer virus or malware, then you’ve likely already learned this very important lesson (rather painfully, no doubt).

If you’re working on something that is so important you’re worried about it, why wouldn’t you keep a second copy of it?

I’ll tell you exactly why: Because it takes time and effort.

But for something so important, there really are very simple (and inexpensive) solutions.

You could burn a CD or DVD. CDs only hold about 700MB of data, and most people have far more than that. Dividing up folders and folders of pictures and music over 700MB CDs is frustrating at best. Download movies? Most won’t even fit on a 700MB CD. There are DVDs, sure. However, one of the biggest drawbacks to optical media is its shelf life (5 years or so, oftentimes much less). Optical media degrades with exposure to light and heat, and may warp if stored vertically. Rewritable media has an even shorter shelf life, as every write cycle “burns” the disc and degrades it further. That leaves you with a very real possibility that when you go to reach for your data, it won’t be there.

You could use an external hard drive. External hard drives are just as inexpensive (per MB/GB) as optical media (sometimes more so), and have a longer shelf life. They are a great backup destination for large amounts of data, and can be backed up to quickly and easily. Unfortunately, magnetic media can’t be exposed or stored near strong electrical or magnetic fields. They are also fragile while powered on, they too do degrade over time, and can sometimes fail without warning. You could spend some money on a RAID array and have a nearly fail-safe solution… but it doesn’t protect against fire or theft.

You could back up to a flash drive. Unfortunately, flash drives have the smallest capacity and the highest cost per gigabyte of any removable media. They are great for carrying around a small amount of data (some files back and forth from work, for example), but as a backup solution they are impractical.

I prefer the set-it-and-forget-it approach of online backups, and I really encourage you to try the same.

Online backups charge you a small fee (usually monthly or yearly) and store your files on a remote server in case of a disaster. All you need is a reasonably fast internet connection. Storage and retrieval are limited to the speed of your internet connection, but this really takes the effort out of it. Backups are done routinely in the background and happen automatically. If disaster ever strikes in the form of a lost file, you simply connect to the online service and re-download your file.

So here’s a few suggested services and the last pricing structure I recall them having and my thoughts on each:

CrashPlan (Windows, Mac, Linux)

Cost: Free if you’re backing up to an external drive or a friend’s computer (even off-site); $59/yr for one computer or $100/yr for all your computers to back up to their storage center (“CrashPlan Central”).

Pros: Inexpensive, unlimited storage space. Easy to use application. Supports local destinations for rapid backups and restores. Supports encryption. Cross-platform. Data de-duplication reduces upload size on changed files.

Cons: Requires payment for the service term up front. Minor display issues related to GDK_NATIVE_WINDOWS under Linux. Some features require additional “CrashPlan Pro” license.

My thoughts: If you’re a Linux user this is the service for you. Slightly cheaper than Mozy for a single computer for the year; much cheaper for multiple computers.

Mozy (Windows, Mac)

Cost: Free for the first 2GB of storage; $5/mo per computer for unlimited.

Pros: Inexpensive for a few PCs. Easy to use application. Option to display icons on files to show what is backed up and what is pending. Easy to use options. The option to order restore DVDs is available for disaster recovery, but it is expensive.

Cons: No plans for a Linux client. Slow transfer speeds.

My thoughts: For Windows-only users this is a great service. Automatic monthly payments make the cost easy to budget.

JungleDisk (Windows, Mac, Linux)

Cost: $2-5 per month and $0.15 per GB. Transfer rates apply with storage on Amazon S3, or no transfer fee with storage on Rackspace.

Pros: The price structure is fair — pay for what you use. A very reliable infrastructure in the two providers. Encryption. Multiple datacenters to assure your data is safe. They’ve been around for a while. Inexpensive for small amounts of data. Data de-duplication reduces storage space, cost, and upload size.

Cons: Can get expensive with large amounts of data. The application is somewhat confusing at first.

My thoughts: Another good cross-platform provider. Although a bit more costly than CrashPlan or Mozy, the thought of multiple data centers is appealing to those with mission-critical data.

Symform (Windows, Mac, Linux [Beta])

Cost: First 10GB free, $0.15/GB/Month each additional (or free if you contribute)

Pros: Generous amounts of free space, and no limits on the amount of space you can earn if you contribute storage. Contribution is not required. Interface is simple, and setup is easy. Support can be paid by contributing space as well.

Cons: No option yet to select files to exclude, or for single file restores. Contributing requires setting up port forwarding.

My thoughts: Symform is a good, spacious alternative to other backup providers, and especially appealing for users who have space to contribute.

Bottom line: There really are no “perfect fit” backup solutions, but the best practice is to use one or more different methods and keep at least one at a second location (“off-site”). Worst case, your home could burn to the ground or be broken into, and your optical discs and external hard drives would be forever gone. Online backups do alleviate that fear, but rely on an internet connection to recover your data. I’ve found it best to keep one backup copy on an external hard drive (for accessing large amounts of data quickly) and use an online provider for worst-case recovery (the backup hard drive crashes, or fire or theft claims the backup). It’s all about how valuable your data is to you.

Comments and feedback are welcome, as always.


CrashPlan : Troubleshooting real-time file backup on linux

CrashPlan on Linux depends on the inotify kernel module to know when files update in real-time.

Inotify was merged into the 2.6.13 Linux kernel, so if you’re running a kernel equal to or newer than this, it’s already installed. If not, you’ll have to install it yourself. If inotify is installed, you may need to increase the number of watches that can be created.

The inotify module is governed by a property called max_user_watches. If you attempt to exceed the maximum number of watches, you’ll get the following error in engine_error.log (but the process lives on).

inotify_add_watch: No space left on device

Any file not covered by a watch does not have real-time backup protection.

The default on my Ubuntu 11.04 box is 524288, which seems plenty sufficient for me. I have not experienced any issues, but if you find that you are, you may want to increase the watch value.

Updating the Watch Value

You can temporarily update the value with:

echo 1048576 > /proc/sys/fs/inotify/max_user_watches

You can update the value permanently by putting the following value in /etc/sysctl.conf and restarting:

fs.inotify.max_user_watches=1048576
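To sanity-check, you can read the limit currently in effect straight from /proc; after editing /etc/sysctl.conf, `sudo sysctl -p` applies the new value without waiting for a reboot:

```shell
# Show the inotify watch limit currently in effect.
cat /proc/sys/fs/inotify/max_user_watches
```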

For more information, see CrashPlan’s Forums.


Fix /etc/rc.d/init.d/functions: line 513: 1345 Aborted "$@" FAILED

Edit the file /etc/X11/xorg.conf and remove (or comment out) any references to Keyboard Input.

Create a backup of your xorg.conf file:

cd /etc/X11/
cp xorg.conf backup.xorg.conf

Delete the following lines in xorg.conf:

InputDevice    "Keyboard0" "CoreKeyboard"
Section "InputDevice"
# generated from data in "/etc/sysconfig/keyboard"
Identifier     "Keyboard0"
Driver         "keyboard"
Option         "XkbLayout" "us"
Option         "XkbModel" "pc105"
EndSection

Reboot your system:

shutdown -r now
