Posts Tagged CrashPlan
By default, CrashPlan backs up everything in your home folder, including all hidden directories (directories whose names start with a dot). This includes some directories you probably don’t want backed up, such as ~/.local/share/Trash (your trash) and a bunch of other hidden directories.
Fortunately CrashPlan’s file exclusion feature includes a way to specify exclusions by regular expression. Simply go to Settings > Backup and next to Filename Exclusions click the configure button.
Check the box for Regular Expression and enter this:
Click the plus sign, then OK, then Save again.
That will exclude all the dotted directories from your backups.
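The exact expression from the original post isn’t preserved here, but to illustrate the idea, a pattern along these lines (my own illustrative guess, not necessarily the one the post used) matches any path component that starts with a dot:

```shell
# Illustrative pattern only: matches any path containing a dot-directory or dot-file
pattern='(^|/)\.[^/]'
# Demonstrate which sample paths it would exclude (prints only the dotted ones)
printf '%s\n' \
  '/home/mike/.local/share/Trash/file' \
  '/home/mike/Documents/report.odt' \
  '/home/mike/.gnome2/settings' \
  | grep -E "$pattern"
```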
Have any filename exclusions that you use on your backups? Feel free to share your rationale in the comments below!
So today I was using smbmount to mount a network share from my Synology DiskStation to my Linux PC when I noticed a rather annoying file permissions issue that I couldn’t seem to fix. Why am I using smbmount and not Gnome’s GUI to mount? Because I need root to have access to the file system as well so that CrashPlan can back up to it.
Here’s what happened:
First, I mounted the share (as root):
smbmount //diskstation/mike /mnt/mynas -o credentials=/home/mike/mike.cred,uid=mike,gid=mike
(For more information on the smbmount or the mount.cifs credentials file, see the Ubuntu manpage for mount.cifs)
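For reference, a mount.cifs credentials file is just a plain-text file of key=value pairs (the password below is a placeholder; keep the file readable only by its owner, e.g. chmod 600):

```
username=mike
password=yourpasswordhere
```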
That worked great, except for when I do this (as root)…
ls -ld /mnt/mynas
… I get the following output:
drwxrwxrwx 17 mike mike 0 2011-05-20 09:25 mynas
I sure didn’t want the directory world-writable, so I tried specifying both file_mode and dir_mode as 0755 using the following (as root):
smbmount //diskstation/mike /mnt/mynas -o credentials=/home/mike/mike.cred,uid=mike,gid=mike,file_mode=0755,dir_mode=0755
Then I checked it:
ls -ld /mnt/mynas
… and got:
drwxrwxrwx 17 mike mike 0 2011-05-20 09:25 mynas
That didn’t do anything at all to help. Why? Because, as it turns out, the DiskStation is running a Samba server with the CIFS Unix extensions enabled and is passing the permissions through to smbmount (mount.cifs). The file_mode and dir_mode options are ignored if the remote server supports the CIFS Unix extensions.
If the server does not support the CIFS Unix extensions this overrides the default file mode.
If the server does not support the CIFS Unix extensions this overrides the default mode for directories.
Source: Ubuntu manpages.
So there are a couple of options here. First, I could mount it somewhere inside /home/mike, which would generally protect it. But I’d really like to know what’s up with the file permissions, so I did a little more Google-fu.
As it turns out, the CIFS Unix extensions on the DiskStation can be disabled; all it takes is editing one file. Lepoulpe posted the following on the Synology forums:
you can disable “unix extensions” in the DS106’s Samba server. To achieve this, you need to add the following line in the [global] section of /usr/syno/etc/smb.conf:
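The quoted line itself didn’t survive here, but the Samba parameter that controls this behavior is unix extensions, so the addition to the [global] section would look like this:

```
[global]
    unix extensions = no
```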
So, I SSH’d into my DiskStation as root (should be the same password as ‘admin’ if you’re having trouble) and used the vi editor to make the edit. Afterwards, I restarted samba on the DiskStation by doing this:
Then I remounted the Samba share as root…
smbmount //diskstation/mike /mnt/mynas -o credentials=/home/mike/mike.cred,uid=mike,gid=mike,file_mode=0750,dir_mode=0750
… and checked the permissions:
ls -ld /mnt/mynas
… and got the following output:
drwxr-x--- 17 mike mike 0 2011-05-20 09:25 mynas
So now I have /mnt/mynas mounted to my share on the DiskStation. If I wanted it to mount on boot, I could add something like the following to /etc/fstab:
//diskstation/mike /mnt/mynas smbfs auto,credentials=/home/mike/mike.cred,uid=mike,gid=mike,dir_mode=0750,file_mode=0750,user 0 0
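As a side note, the smbfs filesystem type was later deprecated in favor of cifs, so on newer distributions the equivalent fstab entry (assuming the same credentials file and mount point as above) would be:

```
//diskstation/mike /mnt/mynas cifs auto,credentials=/home/mike/mike.cred,uid=mike,gid=mike,dir_mode=0750,file_mode=0750,user 0 0
```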
Questions about my method? Have any feedback or alternate methods to share? Please feel free to do so in the comments below. Thank you!
For about a week now I’ve been wrestling with implementing a system where CrashPlan would back up to my network drive. I ran into a really big problem: when you mount a network location in GNOME using the GUI (gvfs), root can’t access it. Since the CrashPlan engine runs as root, that makes the network location unusable as a backup destination.
After working for a while on different ways to clear this rather large hurdle, I came up with the idea of simply mounting the network location using smbmount (mount.cifs). After some testing and tweaking, I got it working and added an entry to fstab to have it mount at boot time. I chose /mnt/mynas as the mount point.
See Synology DiskStation and Samba mount permissions for my method of getting it mounted with the correct file permissions.
With it set to mount at boot time, I can open the CrashPlan client and set /mnt/mynas as a destination folder, and now I have both local and off-site backups!
Feel free to share your thoughts and/or feedback in the comments below!
The official instructions for removing the CrashPlan Linux app are this:
Linux: Run the uninstaller shell script that comes in the installer package.
Unfortunately the uninstaller doesn’t do a very good job and leaves a lot lying around. So here are instructions on how to get rid of everything:
Stop the CrashPlan daemon task
sudo /etc/init.d/crashplan stop
Delete the files
sudo rm -rf /usr/local/crashplan
sudo rm -rf /var/lib/crashplan
sudo rm -rf /usr/local/var/crashplan
sudo rm -rf /etc/init.d/crashplan /etc/rc2.d/S99crashplan
(Yes, these could be combined into a single statement, but I broke them up for ease of reading.)
That should take care of it.
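If you want to double-check the result, here’s a quick sketch that prints any of those paths that still exist (it prints nothing on a clean system):

```shell
# Report any CrashPlan paths that survived the removal
for p in /usr/local/crashplan /var/lib/crashplan /usr/local/var/crashplan \
         /etc/init.d/crashplan /etc/rc2.d/S99crashplan; do
  if [ -e "$p" ]; then
    echo "still present: $p"
  fi
done
```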
Note that these instructions are based on an Ubuntu installation. And as always, exercise a little common sense with any command that begins with
sudo rm -rf. If you break it, you get to keep all the pieces.
Would you only ever have one house key? Car key?
Would you only get one picture of your child? Your spouse?
Then why would you not treat your computer data the same way?
Being a computer technician, I can’t tell you how many times (a day!) I hear “Will this affect my data/hard drive/information/etc…” You know who I always hear that from? People who don’t have backups.
If you’ve ever lost an important file to a system crash, hard drive failure, or accidental deletion, or even worse, suffered theft or destruction from a computer virus or malware, then you’ve likely already learned this very important lesson (rather painfully, no doubt).
If you’re working on something that is so important you’re worried about it, why wouldn’t you keep a second copy of it?
I’ll tell you exactly why: Because it takes time and effort.
But for something so important, there really are very simple (and inexpensive) solutions.
You could burn a CD or DVD. CDs only hold about 700MB of data, and most people have far more than that. Dividing folders upon folders of pictures and music across 700MB CDs is frustrating at best. Download movies? Most won’t even fit on a 700MB CD. There are DVDs, sure. However, one of the biggest drawbacks to optical media is its shelf life (5 years or so, often much less). Optical media degrades with exposure to light and heat, and may warp if stored vertically. Rewritable media has an even shorter shelf life, as every write cycle “burns” the disc and degrades it further. That leaves a very real possibility that when you reach for your data, it won’t be there.
You could use an external hard drive. External hard drives are just as inexpensive (per MB/GB) as optical media (sometimes more so), and have a longer shelf life. They are a great backup destination for large amounts of data, and can be backed up to quickly and easily. Unfortunately, magnetic media can’t be exposed to or stored near strong electrical or magnetic fields. Drives are also fragile while powered on, degrade over time, and can sometimes fail without warning. You could spend some money on a RAID array and have a nearly fail-safe solution… but it doesn’t protect against fire or theft.
You could back up to a flash drive. Unfortunately, flash drives offer the smallest capacity at the highest cost of any removable media. They are great for carrying around a small amount of data (some files back and forth from work, for example), but as a backup solution they are impractical.
I prefer the set-it-and-forget-it approach of online backups, and I really encourage you to try the same.
Online backups charge you a small fee (usually monthly or yearly) and store your files on a remote server in case of a disaster. All you need is a reasonably fast internet connection. Storage and retrieval are limited to the speed of your internet connection, but this really takes the effort out of it. Backups are done routinely in the background and happen automatically. If disaster ever strikes in the form of a lost file, you simply connect to the online service and re-download your file.
So here are a few suggested services, the last pricing structure I recall each having, and my thoughts on each:
CrashPlan (Windows, Mac, Linux)
Cost: Free if you’re backing up to an external drive or a friend’s computer (even off-site); $59/yr for one computer or $100/yr for all your computers to back up to their storage center (“CrashPlan Central”).
Pros: Inexpensive, unlimited storage space. Easy to use application. Supports local destinations for rapid backups and restores. Supports encryption. Cross-platform. Data de-duplication reduces upload size on changed files.
Cons: Requires payment for the service term up front. Minor display issues related to GDK_NATIVE_WINDOWS under Linux. Some features require additional “CrashPlan Pro” license.
My thoughts: If you’re a Linux user this is the service for you. Slightly cheaper than Mozy for a single computer for the year; much cheaper for multiple computers.
Mozy (Windows, Mac)
Cost: Free for the first 2GB of storage; $5/mo per computer for unlimited.
Pros: Inexpensive for a few PCs. Easy to use application. Option to display icons on files to show what is backed up and what is pending. Easy to use options. The option to order restore DVDs is available for disaster recovery, but it is expensive.
Cons: No plans for a Linux client. Slow transfer speed.
My thoughts: For Windows-only users this is a great service. Automatic monthly payments make the cost easy to budget.
JungleDisk (Windows, Mac, Linux)
Cost: $2-5 per month plus $0.15 per GB. Transfer fees apply with storage on Amazon S3; there is no transfer fee with storage on Rackspace.
Pros: The price structure is fair — pay for what you use. A very reliable infrastructure in the two providers. Encryption. Multiple datacenters to assure your data is safe. They’ve been around for a while. Inexpensive for small amounts of data. Data de-duplication reduces storage space, cost, and upload size.
Cons: Can get expensive with large amounts of data. The application is somewhat confusing at first.
My thoughts: Another good cross-platform provider. Although a bit more costly than CrashPlan or Mozy, the thought of multiple data centers is appealing to those with mission-critical data.
Symform (Windows, Mac, Linux [Beta])
Cost: First 10GB free, $0.15/GB/Month each additional (or free if you contribute)
Pros: Generous amounts of free space, and no limits on the amount of space you can earn if you contribute storage. Contribution is not required. Interface is simple, and setup is easy. Support can be paid by contributing space as well.
Cons: No option yet to select files to exclude, or for single file restores. Contributing requires setting up port forwarding.
My thoughts: Symform is a good, spacious alternative to other backup providers, and especially appealing for users who have space to contribute.
Bottom line: There really are no “perfect fit” backup solutions, but the best practice is to use one or more different methods and keep at least one at a second location (“off-site”). Worst case, your home could burn to the ground or be broken into, and your optical discs and external hard drives would be forever gone. Online backups do alleviate that fear, but rely on an internet connection to recover your data. I’ve found it best to keep one backup copy on an external hard drive (for accessing large amounts of data quickly) and use an online provider for worst-case recovery (the backup hard drive crashes, or fire or theft claims the backup). It’s all about how valuable your data is to you.
Comments and feedback are welcome, as always.
CrashPlan on Linux depends on the inotify kernel module to know when files update in real-time.
Inotify was merged into the 2.6.13 Linux kernel, so if you’re running that kernel version or newer, it’s already included. If not, you’ll have to install it yourself. Even if inotify is available, you may need to increase the number of watches that can be created.
The inotify module is governed by a property called max_user_watches. If you attempt to exceed the maximum number of watches, you’ll get the following error in engine_error.log (but the process lives on):
inotify_add_watch: No space left on device
Any file not covered by a watch does not have real-time backup protection.
The default on my Ubuntu 11.04 box is 524288, which seems plenty sufficient for me. I have not experienced any issues, but if you find that you are, you may want to increase the watch value.
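To see what your own system’s current limit is, the value is exposed through procfs:

```shell
# Read the current inotify watch limit; prints a number (524288 on many systems)
cat /proc/sys/fs/inotify/max_user_watches
```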
Updating the Watch Value
You can temporarily update the value (as root; note that plain sudo won’t help here, because your unprivileged shell, not the elevated command, performs the redirection) with:
echo 1048576 > /proc/sys/fs/inotify/max_user_watches
You can update the value permanently by putting the following line in /etc/sysctl.conf and restarting:
fs.inotify.max_user_watches = 1048576
For more information, see CrashPlan’s Forums.