Posts Tagged Backup
The following error appears if you try to include a system image in your backup using the Windows 7 File Recovery backup tool.
There was a failure in preparing the backup image of one of the volumes in the backup set.
Details: The mounted backup volume is inaccessible. Please retry the operation.
Error code: 0x807800C5
According to this forum…
For Win 8 only: the Win 7 backup program included with Win 8 does not support backing up an image file to any kind of NAS device (UNIX, Linux). Internally, the program gives an error that the NAS device has an incompatible sector mapping type. You can back up to a hard drive attached to a different Windows machine and then back up that file to your NAS. Convoluted, but it works.
So, backing up a system image to a Samba share is out of the question. To work around this, disable the creation of a system image in your backup.
I haven’t tried backing up to an NTFS-formatted iSCSI LUN, which might work. If anyone has tried that, I’d be interested to know the results.
If you use Windows 7 File Recovery to back up your system to a NAS device, you may receive the following error:
0x80070544: The specified network location cannot be used.
Verify the path points to a correct network location and that the supplied credentials can be used for write access to the folder.
The validation information class requested was invalid. (0x80070544).
The solution to this is rather simple. You have to prefix your username with the name of the machine where the Samba share is located. So, if you are backing up to \\diskstation\backups, prefix your username with diskstation.
In my case, my username on that device is mike. So instead of using mike as my username, I had to use diskstation\mike.
It works now. Enough said.
Symform is a cloud-based backup solution that gives you 10 GB of backup space for free; you can earn additional free space, as well as support, by contributing storage space of your own.
In order to contribute, you need to have a port forwarded to your Synology device. However, in my experience, I wasn’t able to choose the port (as it’s chosen randomly during installation). If the port number that the Symform service chooses is already taken, or you prefer to assign another port number, here’s how to do it.
To do this, you will already need to know how to set up port forwarding on your router, and install and set up the Symform service on your Synology NAS, as well as be familiar with how to SSH into your Synology NAS. This only shows you how to manually edit the contribution port number chosen by the Symform service.
Make sure the Symform service is stopped
Do this by logging into your Synology on the admin port (usually 5000 or 5001) and going to Package Center. Under Installed, you can stop the Symform service by clicking the stop button. Once the service is stopped (as shown below), you can continue.
SSH into your Synology NAS
If you haven’t already, turn on the SSH (or telnet) service by going to Control Panel > Terminal and enabling the desired service. Next, SSH (or telnet) into your Synology NAS. Once logged in, change to the Symform configuration directory and open node.config with the vi editor.
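A sketch of those commands; the package path is an assumption on my part and may differ by DSM version:

```
# Path is an assumption -- if it differs on your box, locate the file
# first with: find / -name node.config 2>/dev/null
cd /volume1/@appstore/symform
vi node.config
```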
Locate the line starting with <contribution enabled="True" fragmentStorePath= and scroll to the right; you will see port="43100" (or another port number). If you’re not familiar with the vi editor, carefully follow these steps to edit the file in place:
- Press the a key to enter append (insert) mode
- Cursor to the value and use the keyboard to edit it
- Press the ESC key to exit editing mode
- Type :w followed by enter to save the file
- Type :q followed by enter to quit the editor
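Alternatively, if you’d rather not edit the file in vi, the same change can be made non-interactively with sed. This is just a sketch; the port numbers are examples, and for safety it operates on a demo copy rather than the real node.config:

```shell
# Demonstration on a sample copy; on the NAS you would point sed at the
# real node.config instead. Port numbers are examples.
printf '<contribution enabled="True" port="43100" />\n' > node.config.demo
sed -i 's/port="43100"/port="43210"/' node.config.demo
cat node.config.demo
```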
Now go back to Package Center and start the Symform service.
You will be able to see the updated port number in your Symform control panel.
If you have any questions, comments, or thoughts to share, please do so in the comments below. Thank you!
If you have a lot of WordPress posts and want to find all posts containing a certain keyword, you can start with the following SQL query, which was taken from this post. I used it in phpMyAdmin against a MySQL database. Make sure you are in the correct database first!
You can substitute any keyword for ‘needle’ below, but you must have the single-quotes and percent signs around it.
SELECT ID FROM wp_posts WHERE post_content LIKE '%needle%';
Example: let’s say you use the NextGEN Gallery plugin, which has you add a tag to your posts to include a gallery. Now you want to find all posts which have that nggallery tag in them. The following query would work:
SELECT ID FROM wp_posts WHERE post_content LIKE '%nggallery%';
This can also be built upon for find-and-replace operations.
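For instance, a find-and-replace could be done with MySQL’s REPLACE() function. This is a sketch only; 'needle' and 'replacement' are placeholders, and you should preview the matches with the SELECT above (and take a backup) before running any UPDATE:

```
UPDATE wp_posts SET post_content = REPLACE(post_content, 'needle', 'replacement')
WHERE post_content LIKE '%needle%';
```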
Keep in mind, you can really muck things up using SQL. Make a backup first if you don’t know what you’re doing!
Questions, comments, and feedback are always welcome. Thanks!
In my quest for the perfect “in my dreams” backup solution for my Ubuntu VPS, I created this very simple script which can be run as a cron job and can be easily modified to backup any amount of data to any remote FTP or SFTP server.
You could very easily include a database backup by running mysqldump beforehand, but I’m not including it in this script.
This requires yafc to be installed; on Ubuntu you can easily install it by running
sudo apt-get install yafc
And now, for the script:
#!/bin/bash
# The format of the open command is proto://username:password@HOSTorIP/
# proto is either ftp or ssh
# Special characters in the username or password are not well tolerated
# Anything between the EOF tags is passed directly to yafc; test if unsure
DIR=$(date +%F)
yafc <<EOF
open ftp://username:password@host-or-ip/
cd backup-dir
mkdir $DIR
cd $DIR
put -p -r *
close
exit
EOF
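As mentioned above, a database dump could be added before the yafc upload runs. A sketch, where the user, database name, and path are all placeholders:

```
# Dump the database into the directory the script uploads;
# credentials, database name, and path are placeholders.
mysqldump -u dbuser -p mydatabase > /path/to/backup-dir/db-$(date +%F).sql
```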
Enjoy! Questions, comments, and feedback are welcome and appreciated. Thank you!
If you set up your Google account using your Android phone, or you added contacts to your phone but didn’t set them as Google account contacts, you will find they’re not synced to your Google account. This means that they’re not available as contacts when composing messages, and worse, you’re not using your Google account as a backup in case your phone is lost or damaged, or you swap phones.
You can easily fix this by doing the following steps from your Android phone:
- Open Contacts
- Hit menu > Import/Export
- Export to SD card, then hit OK to confirm.
After a few moments, your data will be exported.
Next, we delete all contacts to prevent confusion:
- In Contacts, hit menu > delete > Select All > delete
If you don’t have this option, try Delete All Contacts from Android Market.
(If you’re concerned about deleting your contacts before re-importing them, you can always import them, then resolve the duplicates manually, but you will have stale contacts in your phone. Deleting your contacts then re-importing them a second time will take care of that.)
Lastly, re-import all the contacts to your Google account
- Hit Import/Export again
- Select Import from SD Card
- Select Save contact to… (your Google account) NOT phone
After a few moments, your data will be re-imported, and synced with your Google account online. Note that it may take up to a few minutes for the contacts to start appearing in your Google account.
If you mistakenly import multiple times, you may end up with duplicates in your online Google account. To fix this, simply open more > Find and Merge duplicates from your Contact manager as shown below:
Yes, I really do have 329 contacts.
Note: It is not possible to preserve group information during an export/import. It’s not supported by Google.
Questions and comments are welcome below, thank you!
By default, CrashPlan backs up everything in your home folder, including all hidden directories (directories starting with a dot). This includes some directories you probably don’t want backed up, such as ~/.local/share/Trash (your trash) and a bunch of other hidden directories.
Fortunately CrashPlan’s file exclusion feature includes a way to specify exclusions by regular expression. Simply go to Settings > Backup and next to Filename Exclusions click the configure button.
Check the box for Regular Expression and enter a pattern that matches the dot directories.
Click the plus sign, then ok, then save again.
That will exclude all the dotted directories from your backups.
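A quick way to sanity-check a pattern before trusting your backups to it is grep. The pattern below is an assumption on my part (CrashPlan’s regex dialect may differ slightly); it matches any path component that begins with a dot:

```shell
# Assumed pattern, not CrashPlan's documented one: matches any path
# component beginning with a dot (i.e. hidden files and directories).
pattern='/\.[^/]+'
echo "/home/mike/.local/share/Trash" | grep -Eq "$pattern" && echo "excluded"
echo "/home/mike/Documents/report.odt" | grep -Eq "$pattern" || echo "kept"
```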
Have any filename exclusions that you use on your backups? Feel free to share your rationale in the comments below!
For about a week now I’ve been wrestling with getting CrashPlan to back up to my network drive. I ran into a really big problem: when you mount a network location in GNOME using the GUI (gvfs), root can’t access it. Since the CrashPlan engine runs as root, this makes the network location unusable as a backup destination.
After a while of working on different ways to solve this rather large hurdle, I came up with the idea of simply mounting the network location using smbmount (mount.cifs). After some testing and tweaking, I was able to get it successfully working and added an entry to fstab to have it mount at boot time. I chose /mnt/mynas as the mount point.
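For reference, the fstab entry looks roughly like this; the share name, mount options, and credentials file are placeholders, and the permission-related options I actually used are covered in the Synology DiskStation and Samba mount permissions post:

```
# Hypothetical /etc/fstab line -- share name, mount point, uid/gid, and
# credentials file are placeholders:
//mynas/backup  /mnt/mynas  cifs  credentials=/root/.smbcredentials,uid=1000,gid=1000  0  0
```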
See Synology DiskStation and Samba mount permissions for my method of getting it mounted with the correct file permissions.
Once it was set to mount at boot-time, I can now open the CrashPlan client and set /mnt/mynas as a destination folder, and now I have both local and off-site backups!
Feel free to share your thoughts and/or feedback in the comments below!
Having been frustrated by some of the recent regressions in Firefox 4, particularly those involving Flash graphs, I’ve picked up Chrome and so far couldn’t be happier.
Moving my bookmarks over wasn’t too hard either. Here’s how to do it.
In Firefox 4, click Bookmarks > Show All Bookmarks (or press Ctrl-Shift-O)
Then choose Import and Backup > Export HTML…
Save that file somewhere you can find it for the next step.
Now, in Chrome, open the Bookmark Manager. You can find it by clicking the wrench icon, then Bookmark Manager.
Now choose Organize > Import Bookmarks from the Bookmark Manager and import that HTML file you just exported from Firefox.
Readers may also want to consider trying the free service Xmarks, which features automatic bookmark syncing across multiple browsers using a plug-in. It supports Firefox, Chrome, Internet Explorer, and Safari (Mac OS).
This was done using Firefox 4.0.1 and Chromium Browser 10.0.648.205 (81283) on Ubuntu 11.04. Questions, comments, and feedback are welcome and appreciated!
The one bad thing I’ve come to notice about JungleDisk Server Edition is that, over time, it tends to hog a lot of memory, even when it’s not running backups. The author at geektank.net noticed this too, and suggested it may not be a good fit for low-memory VPS configurations.
But if JungleDisk is a good fit for your needs, and the memory usage is the only issue, here’s something to try. It’s either a clever solution or an ugly workaround. Call it what you will.
What we’re going to do is create a cron job that will restart jungledisk when it is done running the backup, which will free up any potentially wasted memory.
So, we’ll start by creating a postbackup.sh script to run after your backup job. For advice on how to create and schedule this script, see my previous article, Backing up your server using JungleDisk Server Edition – part 2.
Now create your postbackup.sh script; all it needs to do is drop a flag file for the check script below to find.
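A sketch of postbackup.sh, inferred from the check script below (which looks for and removes /etc/jungledisk/.restartjd):

```
#!/bin/bash
# Drop a flag file; the cron-driven jd-check.sh looks for this file and
# restarts JungleDisk when it finds it.
touch /etc/jungledisk/.restartjd
```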
Now, create the following jd-check.sh script and make it executable. It needs to run as root (for example, from root’s crontab), since it restarts the JungleDisk service.
#!/bin/bash
if [ -e /etc/jungledisk/.restartjd ]
then
    rm /etc/jungledisk/.restartjd && /etc/init.d/junglediskserver restart
fi
That’s about as simple as it gets, right there.
The new script should run from a cron job often enough to restart JungleDisk soon after a backup finishes. A good choice is about a half-hour to an hour after your backups are scheduled to start.
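For example, a root crontab entry along these lines; the schedule and script path are placeholders, assuming backups that start at 2:30 AM:

```
# Hypothetical /etc/crontab line: check for the restart flag at 3:30 AM.
30 3 * * *  root  /usr/local/bin/jd-check.sh
```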
There are some security implications around where you store your temp file, what you name it, and what permissions you give it, so use your head. If you carefully read part 2, you can get a good handle on the security issues.
It’s also possible to simply restart junglediskserver on a cron job, but then you could restart it in the middle of a backup. That would cause JungleDisk to either postpone the backup or resume it immediately, leaving stale memory allocations again, which defeats the point. What I’m aiming for here is to have it restart as soon as possible once the backup completes.
Do you have any thoughts on this approach? Know of a way that might work better? Feel free to share your thoughts in the comments below! Thank you.