Posts Tagged JungleDisk
The one bad thing I’ve come to notice about JungleDisk Server Edition is that, over time, it tends to hog a lot of memory, even when it’s not running backups. The author at geektank.net noticed this too, and cautioned that it may not be a good fit for low-memory VPS configurations.
But if JungleDisk is a good fit for your needs, and the memory usage is the only issue, here’s something to try. It’s either a clever solution or an ugly workaround. Call it what you will.
What we’re going to do is create a cron job that will restart jungledisk when it is done running the backup, which will free up any potentially wasted memory.
So, we’ll start by creating a postbackup.sh script to run after your backup job. For advice on how to create and schedule this script, see my previous article, Backing up your server using JungleDisk Server Edition – part 2.
Create your postbackup.sh file with the following line:
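Since the jd-check.sh script below looks for the flag file /etc/jungledisk/.restartjd, a minimal postbackup.sh only needs to create that file, something like:

```shell
#!/bin/bash
# Drop a flag file so the jd-check.sh cron job knows the backup has finished.
# This path must match the one jd-check.sh tests for.
touch /etc/jungledisk/.restartjd
```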
Now, create the following jd-check.sh script and make it executable. Since it removes a file under /etc and restarts a service, it needs to run as root; note that Linux ignores the setuid bit on shell scripts, so schedule it from root’s crontab rather than relying on setuid.
#!/bin/bash
if [ -e /etc/jungledisk/.restartjd ]
then
    rm /etc/jungledisk/.restartjd && /etc/init.d/junglediskserver restart
fi
That’s about as simple as it gets, right there.
The new script should be run on a cron job that will cause it to run often enough to restart jungledisk after a backup. A suggestion would be to have it run about a half-hour to an hour after your backups are scheduled to start.
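For example, assuming your backup job starts at 3:00 AM, a root crontab entry could look like this (the time and the script path are just placeholders; adjust them to your setup):

```
# Run jd-check.sh at 3:45 AM daily, about 45 minutes after the backup starts
45 3 * * * /etc/jungledisk/jd-check.sh
```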
There are some security implications to where you store your temp file, what you name it, and what permissions you give it, so use your head. If you carefully read part 2, you can get a good handle on how to be mindful of the security issues.
It’s also possible to simply restart junglediskserver on a cron job, but then you could restart it in the middle of a backup. That would either postpone the backup or cause it to resume immediately and leave stale memory allocations again, which defeats the point. What I’m aiming for here is to restart it as quickly as possible once the backup completes.
Do you have any thoughts on this approach? Know of a way that might work better? Feel free to share your thoughts in the comments below! Thank you.
In part 1, I told you how to set up JungleDisk backup for your Linux server. In this part 2, I’ll tell you how to have it automatically dump and back up your MySQL databases (correctly)!
There are security implications if the permissions are not set correctly on these files: you’re storing your MySQL password in the script, and you’re going to have complete database dumps sitting on your hard drive. I’ll explain my solution as clearly as possible, but I’m not responsible if it doesn’t work for you.
So, start out by deciding where you want your database-dumping script to live. A few good spots are /root (the root user’s home directory) and /etc/jungledisk (with the configuration files). I’ve called my backup script prebackup.sh. You’ll understand the name further below, and I’ll use it the rest of the way through.
So create your prebackup.sh script as root, and set it to mode 700.
touch prebackup.sh && chmod 0700 prebackup.sh
This makes sure that root is the only user who can read, write, or execute it.
Now, using your favorite text editor, you can use the following sample script:
#!/bin/bash
date=`date -I`
dir=/var/backups
file=sqlbackup-$date.sql
# Create the dump file with root-only permissions before writing any data into it
touch $dir/$file && chmod 600 $dir/$file && mysqldump --all-databases -p'__MySQLPassword__' >> $dir/$file
# Clean up dumps older than 7 days; match only our own files so nothing
# else in $dir gets deleted
find $dir -name 'sqlbackup-*.sql' -ctime +7 -delete
Danny pointed out that creating gzipped MySQL dumps, as I had in my original script, is discouraged as it defeats the data de-dup feature of jungledisk. The above script has been changed from the original to make uncompressed dumps. Thanks, Danny!
The last line, with the find statement, is responsible for cleaning up the directory: it deletes old dumps after 7 days. If you want more or less time, simply change +7 to your desired number of days (keeping the leading +).
Warning: There’s very little (read: none) sanity-checking in this script. Replace __MySQLPassword__ with your root MySQL password.
Jay pointed out that there will likely be issues with handling special characters in the SQL password. If you have any suggestions, please feel free to post them in the comments below.
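One possible workaround (my own suggestion, not something from the comments): keep the password out of the command line entirely by using a MySQL option file, say a hypothetical /root/.backup.cnf owned by root with mode 600:

```
# /root/.backup.cnf -- readable by root only (chmod 600)
[client]
user=root
password="__MySQLPassword__"
```

prebackup.sh would then call mysqldump --defaults-extra-file=/root/.backup.cnf --all-databases instead of passing -p (note that --defaults-extra-file must come before any other option). Since the password is read from the file rather than the shell, quoting special characters stops being a problem.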
After saving, you should have a 183-byte (give or take) file with 0700 mode:
root@ve:/etc/jungledisk# ls -l prebackup.sh
-rwx------ 1 root root 183 Mar 24 17:08 prebackup.sh
You should make sure the directory in $dir is owned by user and group root with mode 0700, so that no one else has access to your dumped databases. Now that you have your script, it’s time to automate it. You could schedule a cron job to run the script, but it’s easier to have JungleDisk run it for you at the start of every backup job.
Start your management-side program, log in, and go to your server’s backup configuration. Under Backup Options, check Enable Pre and Post Backup Scripts. Now, click Configure, and in the Pre-Backup Script, enter the full path and filename of your newly created script, e.g. /etc/jungledisk/prebackup.sh.
Now, on your assigned schedule, the server engine will run your database dumping script, and start your backup job. Of course, make sure whatever directory you’re dumping your databases to is included in your backup set.
The last thing to note is that this script is amazingly light: beyond dumping the databases and pruning old dumps, it doesn’t do much else. You’re free to modify it, and I would greatly appreciate any feedback on your modifications.
Comments are welcome, as always.
This guide assumes you’re using a Debian-based distribution of Linux (Debian, Ubuntu), and we’ll be using the 64-bit download from JungleDisk. The instructions don’t really change for the 32-bit version, except for the installer file name.
The first steps:
Go over to the JungleDisk Business page and sign up for, and download, the server edition. Now, this edition comes in two very important parts: The server-side program, and the management-side program.
The server-side program is what runs on your server. That’s the “backup engine” if you will. You will download the program appropriate for your server environment.
The management-side program is the program you use to remotely connect to the “server” and configure it. You will download the program appropriate for running on your desktop computer.
It is fine to download the server-side version for Linux and the management-side version for Windows, if that is your configuration. In this case, I’m downloading the Linux .deb server-side installer for 64-bit linux, and the Windows management-side program. I can’t give you the actual download links; you’ll find them in your account page.
Now, on the server, I’m going to install the server-side engine as root. Navigate to the directory where you placed the downloaded file, and run:
sudo dpkg -i junglediskserver_315-0_amd64.deb
That will install the server; follow the directions. Since it’s a deb package, it will automatically set up init scripts so the jungledisk engine runs on startup. However, you will notice that at the end of the setup you were prompted to copy and edit an xml file. Copy /usr/local/share/jungledisk/junglediskserver-license-EXAMPLE.xml to /etc/jungledisk/junglediskserver-settings.xml:
cp /usr/local/share/jungledisk/junglediskserver-license-EXAMPLE.xml /etc/jungledisk/junglediskserver-settings.xml
Now use your favorite editor to make a few changes to /etc/jungledisk/junglediskserver-settings.xml: in the license key element, enter your license key (found in your JungleDisk account).
Now, restart the jungledisk service:
/etc/init.d/junglediskserver restart
Myself, I also added my login user name and password in the corresponding elements, but I don’t think it’s necessary. Worth keeping in mind if something doesn’t work right.
That’s all for the server-side configuration.
Now, on your management-side program, simply run the program and log into your JungleDisk account and your server should appear in the list. Double-click on it and the configuration screen will appear, where you can create backup sets and schedule them as you wish.
Want to correctly back up your MySQL databases in your backup set? See part 2 of this article, coming soon!
Comments are welcome, as always!
Would you only ever have one house key? Car key?
Would you only get one picture of your child? Your spouse?
Then why would you not treat your computer data the same way?
Being a computer technician, I can’t tell you how many times (a day!) I hear “Will this affect my data/hard drive/information/etc…” You know who I always hear that from? People who don’t have backups.
If you’ve ever lost an important file to a system crash, a hard drive failure, or an accidental deletion, or even worse, suffered theft or destruction at the hands of a computer virus or malware, then you’ve likely already learned this very important lesson (rather painfully, no doubt).
If you’re working on something that is so important you’re worried about it, why wouldn’t you keep a second copy of it?
I’ll tell you exactly why: Because it takes time and effort.
But for something so important, there really are very simple (and inexpensive) solutions.
You could burn a CD or DVD. CDs only hold about 700MB of data, and most people have far more than that. Dividing folders upon folders of pictures and music across 700MB CDs is frustrating at best. Download movies? Most won’t even fit on a 700MB CD. There are DVDs, sure. However, one of the biggest drawbacks of optical media is its shelf life (5 years or so, often much less). Optical media degrades with exposure to light and heat, and may warp if stored vertically. Rewritable media has an even shorter shelf life, as every write cycle “burns” the disc and degrades it further. That leaves a very real possibility that when you go to reach for your data, it won’t be there.
You could use an external hard drive. External hard drives are just as inexpensive per GB as optical media (sometimes more so), and have a longer shelf life. They are a great backup destination for large amounts of data, and can be backed up to quickly and easily. Unfortunately, magnetic media shouldn’t be exposed to or stored near strong electrical or magnetic fields. Drives are also fragile while powered on, they too degrade over time, and they can fail without warning. You could spend some money on a RAID array and have a nearly fail-safe solution… but it doesn’t protect against fire or theft.
You could back up to a flash drive. Unfortunately, flash drives have the smallest capacity and the highest cost per GB of any removable media. They are great for carrying around a small amount of data (some files back and forth from work, for example), but as a backup solution they are impractical.
I prefer the set-it-and-forget-it approach of online backups, and I really encourage you to try the same.
Online backups charge you a small fee (usually monthly or yearly) and store your files on a remote server in case of a disaster. All you need is a reasonably fast internet connection. Storage and retrieval are limited to the speed of your internet connection, but this really takes the effort out of it. Backups are done routinely in the background and happen automatically. If disaster ever strikes in the form of a lost file, you simply connect to the online service and re-download your file.
So here are a few suggested services, the last pricing structure I recall them having, and my thoughts on each:
CrashPlan (Windows, Mac, Linux)
Cost: Free if you’re backing up to an external drive or a friend’s computer (even off-site); $59/yr for one computer or $100/yr for all your computers to back up to their storage center (“CrashPlan Central”).
Pros: Inexpensive, unlimited storage space. Easy to use application. Supports local destinations for rapid backups and restores. Supports encryption. Cross-platform. Data de-duplication reduces upload size on changed files.
Cons: Requires payment for the service term up front. Minor display issues related to GDK_NATIVE_WINDOWS under Linux. Some features require additional “CrashPlan Pro” license.
My thoughts: If you’re a Linux user this is the service for you. Slightly cheaper than Mozy for a single computer for the year; much cheaper for multiple computers.
Mozy (Windows, Mac)
Cost: Free for the first 2GB of storage; $5/mo per computer for unlimited.
Pros: Inexpensive for a few PCs. Easy-to-use application with simple options. Option to display icons on files showing what is backed up and what is pending. Restore DVDs can be ordered for disaster recovery, though they are expensive.
Cons: No plans for a linux client. Slow transfer speed.
My thoughts: For Windows-only users this is a great service. Automatic monthly payments make the cost easy to budget.
JungleDisk (Windows, Mac, Linux)
Cost: $2 to $5 per month plus $0.15 per GB. Transfer fees apply with storage on Amazon S3; no transfer fee with storage on Rackspace.
Pros: The price structure is fair — pay for what you use. A very reliable infrastructure in the two providers. Encryption. Multiple datacenters to assure your data is safe. They’ve been around for a while. Inexpensive for small amounts of data. Data de-duplication reduces storage space, cost, and upload size.
Cons: Can get expensive with large amounts of data. The application is somewhat confusing at first.
My thoughts: Another good cross-platform provider. Although a bit more costly than CrashPlan or Mozy, the thought of multiple data centers is appealing to those with mission-critical data.
Symform (Windows, Mac, Linux [Beta])
Cost: First 10GB free, $0.15/GB/Month each additional (or free if you contribute)
Pros: Generous amounts of free space, and no limits on the amount of space you can earn if you contribute storage. Contribution is not required. Interface is simple, and setup is easy. Support can be paid by contributing space as well.
Cons: No option yet to select files to exclude, or for single file restores. Contributing requires setting up port forwarding.
My thoughts: Symform is a good, spacious alternative to other backup providers, and especially appealing for users who have space to contribute.
Bottom line: There really are no “perfect fit” backup solutions, but the best practice is to use one or more different methods and keep at least one at a second location (“off-site”). Worst case, your home could burn to the ground or be broken into, and your optical discs and external hard drives would be forever gone. Online backups do alleviate that fear, but rely on an internet connection to recover your data. I’ve found it best to keep one backup copy on an external hard drive (for accessing large amounts of data quickly) and use an online provider for worst-case recovery (the backup hard drive crashes, or fire or theft claims the backup). It’s all about how valuable your data is to you.
Comments and feedback are welcome, as always.
If you’re a jungledisk user on linux and you put junglediskdesktop in your Startup Applications, you may receive unusual errors when you log in.
Such errors are:
- The jungledisk tray icon does not appear
- The jungledisk window floats and cannot be closed
- The jungledisk app gives unusual errors
The problem appears to be a race condition: junglediskdesktop starts before Gnome is ready to handle it as a tray app. The workaround:
- Create a text file with gedit (or your editor of choice)
- In the file, enter these two lines:
#!/bin/bash
sleep 3 && /usr/local/bin/junglediskdesktop
You’ll need to make your new script executable, so at a terminal do:
chmod +x filename
Now, in startup applications, use your new script instead of junglediskdesktop.
What this script does:
It ‘sleeps’ for 3 seconds before starting the junglediskdesktop application.
Doing that allows Gnome to be ready to handle junglediskdesktop correctly.
It’s my opinion that this is an issue with junglediskdesktop itself (not waiting for Gnome to be ready) rather than an issue with Gnome itself.