Backing up your server using JungleDisk Server Edition – part 3

This is the third part in a three-part series. Make sure to read part 1 and part 2!

The one bad thing I’ve noticed about JungleDisk Server Edition is that, over time, it tends to hog a lot of memory, even when it isn’t running backups. The author at geektank.net noticed this too, and suggested it may not be a good fit for low-memory VPS configurations.

But if JungleDisk is a good fit for your needs, and the memory usage is the only issue, here’s something to try. It’s either a clever solution or an ugly workaround. Call it what you will.

What we’re going to do is create a cron job that will restart jungledisk when it is done running the backup, which will free up any potentially wasted memory.

So, we’ll start by creating a postbackup.sh script to run after your backup job. For advice on how to create and schedule this script, see my previous article, Backing up your server using JungleDisk Server Edition – part 2.

Create your postbackup.sh file with the following contents:

#!/bin/bash
touch /etc/jungledisk/.restartjd

Now, create the following jd-check.sh script and make it executable. It needs to run with root privileges — note that the setuid bit is ignored on shell scripts on Linux, so the reliable way to do this is to run it from root’s crontab.

#!/bin/bash
if [ -e /etc/jungledisk/.restartjd ]; then
    rm /etc/jungledisk/.restartjd && /etc/init.d/junglediskserver restart
fi
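If you want to sanity-check the flag-file handoff before pointing it at the real service, here’s a small sketch that simulates both scripts against a temporary directory. The restart command is replaced with an echo; the real paths are the ones from the article.

```shell
# Simulate the flag-file mechanism in a throwaway directory.
tmpdir=$(mktemp -d)
flag="$tmpdir/.restartjd"

# What postbackup.sh does when the backup finishes:
touch "$flag"

# What jd-check.sh does on its next cron run:
if [ -e "$flag" ]; then
    rm "$flag" && echo "restarting junglediskserver"
fi

# The flag is consumed, so a second cron run is a no-op:
[ -e "$flag" ] || echo "flag cleared"
```

Because the flag is removed before the restart fires, jd-check.sh only ever restarts the service once per completed backup.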

That’s about as simple as it gets, right there.

Run the new script from a cron job scheduled so that it fires shortly after each backup completes. About a half-hour to an hour after your backups are scheduled to start is a reasonable starting point.
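For example, if your backup kicks off at 2:00 a.m., a root crontab entry along these lines would run the check at 3:00 a.m. (the script path and timing here are assumptions — adjust them to wherever you saved jd-check.sh and to your own backup schedule):

```shell
# In root's crontab (crontab -e as root), since restarting the service
# requires root. min hour day month weekday command
0 3 * * * /etc/jungledisk/jd-check.sh
```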

There are some security implications to where you store your temp file, what you name it, and what permissions you give it, so use your head. If you read part 2 carefully, you’ll have a good handle on how to be mindful of the security issues.

It’s also possible to simply restart junglediskserver on a cron job, but then there’s the potential you could restart it in the middle of a backup. That could cause JungleDisk to either postpone the backup or resume it immediately, leaving stale memory allocations in place again, which defeats the point. What I’m aiming for here is to have it restart as quickly as possible once the backup completes.

Do you have any thoughts on this approach? Know of a way that might work better? Feel free to share your thoughts in the comments below! Thank you.


  1. #1 by joey on June 9, 2011 - 5:16 pm

    great couple of posts. I’ve used the first two to get my JD setup going. i didn’t even see this third post, but when i noticed jungledisk taking 18% cpu in process manager I turned to google and found you again!

    So I guess I will go with the restart after backup completion. Is this a confirmed bug with Jungledisk? I hope they are aware of it and fixing it for the next release.

  2. #3 by Joey on June 27, 2011 - 11:44 am

    That was weird, i think my comment got erased somehow. Anyways, Mike, I was just curious what your reasoning was behind using two scripts as opposed to just calling the JungleDisk restart as the post-backup script? I’m assuming the client manager doesn’t run whatever you put in for post backup until the backup is complete. Is this incorrect?

    Thanks a ton Mike!

    • #4 by Mike on June 27, 2011 - 4:01 pm

      Joey,

      I went back and looked in the spam filters and I didn’t see any other comments from you. If you posted a previous comment and it didn’t make it, I deeply apologize.

      I’m using this method because I don’t know exactly how JD handles starting the postbackup.sh script — that is, whether it’s run directly as a child process or is forked as JD quits — and I was worried about the potential implications of killing a parent process through a child process, plus the implications of having postbackup.sh run suid root directly from JD.