
Posts Tagged VPS

Basic Ubuntu VPS server backup via FTP or SSH SFTP

In my quest for the perfect “in my dreams” backup solution for my Ubuntu VPS, I created this very simple script which can be run as a cron job and can be easily modified to backup any amount of data to any remote FTP or SFTP server.

You could very easily include a database backup by running mysqldump beforehand, but I’m not including it in this script.
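
If you did want to add one, a sketch like the following, placed before the yafc call, would work; the use of ~/.my.cnf for credentials and the output filename are assumptions to adapt to your own setup:

# hypothetical example: dump all databases into the directory being uploaded
# assumes MySQL credentials are stored in ~/.my.cnf
mysqldump --all-databases | gzip > ./mysql-$(date +%F).sql.gz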

This requires yafc to be installed. On Ubuntu, you can install it easily by running

sudo apt-get install yafc

And now, for the script:

#!/bin/bash
# format of the open command is proto://username:password@HOSTorIP/
# proto is either ftp or ssh
# special characters in the username or password are not well tolerated
# anything between the EOF tags is passed directly to yafc. Test if unsure
# run this script from (or cd to) the directory you want backed up
DIR=`date +%F`
yafc <<EOF
open ftp://username:password@ftp.example.com/
cd backup-dir
mkdir $DIR
cd $DIR
put -p -r *
close
exit
EOF
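
To run it as a cron job, as mentioned above, a crontab entry along these lines would do; the directory and script path here are just examples:

# back up /var/www/example.com every night at 2:30 AM
30 2 * * * cd /var/www/example.com && /usr/local/bin/ftp-backup.sh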

Enjoy! Questions, comments, and feedback are welcome and appreciated. Thank you!


Quick list of useful SPF DNS records

Sender Policy Framework (SPF) is a DNS-based mechanism for authenticating who is allowed to send mail appearing to come from a specific domain. It helps prevent email spamming and spoofing by publishing a list of the domains, mail servers, and IP addresses authorized to send mail for a domain, along with what receiving servers should do with mail that does not match those rules.

SPF is published as a DNS TXT record, added to the DNS records for the domain that matches the part after the @ sign in the email address. For example, for @example.com addresses, the SPF TXT record should be added to the example.com domain.

I’m not going to cover every possible SPF setup, simply the ones I use most often and the rationale behind them. You can check the documentation links below my examples if you want to build more elaborate or specific SPF records.

In the examples below, put your web server's IP address after ip4: in dotted-quad format with no space, e.g. ip4:10.1.2.3. You can also specify a CIDR range, such as ip4:10.1.2.3/20.

Allow mail from the domain's IP, its listed mail servers, and a specific IP; soft-fail all others (messages that return a SOFTFAIL are accepted but tagged). This is the recommended configuration for most dedicated/VPS web server environments. Use it when you send and receive mail at your domain and software on your server may send mail out as you, but no other mail server or mail exchanger will send mail as you. Users of shared hosting environments will probably want to ask their web hosting provider for the recommended SPF record to use.

v=spf1 a mx ip4: ~all

Include Google's SPF records if you use Google Apps for your domain's mail. Add include:_spf.google.com. The rationale is similar to the above, but applies when you use Google Apps for email and software on your web server may also send mail as you.

v=spf1 a mx ip4: include:_spf.google.com ~all

Fail all mail. Used only if you send no mail. Example: a parked domain or a domain that is not used in email at all.

v=spf1 -all

In all the examples above except the last, I use soft-fail (~all) instead of fail (-all). This is because you may inadvertently make a mistake or misconfiguration, and soft-failing will not prevent mail from being delivered; it will simply be flagged in the email headers. You can also specify neutral (?all) as an alternative.
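
Once the record is published, you can check what is actually being served by querying the domain's TXT records, for example with dig:

dig +short TXT example.com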

Here’s an example email header from Gmail which includes the SPF record’s lookup result. I’ve edited the email address and IP, of course.

Received-SPF: pass (google.com: domain of email@example.com designates IP as permitted sender) client-ip=IP;
Authentication-Results: mx.google.com; spf=pass (google.com: domain of email@example.com designates IP as permitted sender) smtp.mail=email@example.com

From this example, you can see that the SPF check matched and passed.

SPF records are a good tool for many reasons. They give mail servers the ability to authenticate your email to your domain, which helps keep it out of recipients' spam folders, and they help prevent others from spoofing your domain in email, which could cause serious trouble.

Also, SPF records do not decide whether or not to accept mail for delivery — they only serve as an authentication mechanism for who is allowed to send mail appearing to come from that domain.

Further reading:

Questions, comments, or feedback about the above SPF records or how they’ve been explained? Please share your thoughts in the comments below! Thank you.


Backing up your server using JungleDisk Server Edition – part 3

This is the third part in a three-part series. Make sure to read part 1 and part 2!

The one bad thing I've noticed about JungleDisk Server Edition is that, over time, it tends to hog a lot of memory, even when it's not running backups. The author at geektank.net noticed this too, and suggested it may not be a good fit for low-memory VPS configurations.

But if JungleDisk is a good fit for your needs, and the memory usage is the only issue, here’s something to try. It’s either a clever solution or an ugly workaround. Call it what you will.

What we’re going to do is create a cron job that will restart jungledisk when it is done running the backup, which will free up any potentially wasted memory.

So, we’ll start by creating a postbackup.sh script to run after your backup job. For advice on how to create and schedule this script, see my previous article, Backing up your server using JungleDisk Server Edition – part 2.

Create your postbackup.sh file with the following line:

touch /etc/jungledisk/.restartjd

Now, create the following jd-check.sh script and make it executable. It needs to run as root; note that the setuid bit is ignored for shell scripts on Linux, so run it from root's crontab rather than relying on setuid.

#!/bin/bash
# if the backup left the flag file behind, remove it and restart JungleDisk
if [ -e /etc/jungledisk/.restartjd ]
then
    rm /etc/jungledisk/.restartjd && /etc/init.d/junglediskserver restart
fi

That’s about as simple as it gets, right there.

The new script should be run from cron often enough that JungleDisk gets restarted shortly after each backup. A good rule of thumb is to schedule it for half an hour to an hour after your backups are scheduled to start.
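
For example, if your backup kicks off at 1:00 AM, a root crontab entry like this would check for the flag file an hour later (the script location is just an example):

# restart JungleDisk shortly after the 1:00 AM backup finishes
0 2 * * * /usr/local/bin/jd-check.sh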

There are some security implications to where you store your temp file, what you name it, and what permissions you give it, so use your head. If you carefully read part 2, you can get a good handle on how to be mindful of the security issues.

It's also possible to simply restart junglediskserver on a cron job, but then you could restart it while it's in the middle of a backup. That would cause JungleDisk to either postpone the backup or resume it immediately and leave stale memory allocations again, which defeats the point. What I'm aiming for here is to have it restart as soon as possible once the backup completes.

Do you have any thoughts on this approach? Know of a way that might work better? Feel free to share your thoughts in the comments below! Thank you.


Problems installing PECL libraries when you have noexec /tmp and /var/tmp

At first I had all sorts of issues installing software on my VPS, until I realized that my /tmp and /var/tmp were both mounted noexec. It's a great security precaution, but if you're not aware of it, it can cause all sorts of headaches, especially when trying to install using pecl. This is usually the case when your VPS is a Parallels Virtuozzo container: /tmp and /var/tmp are set noexec by Virtuozzo, and even if you run mount, they will not show up that way.
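
A quick way to confirm that a directory is silently mounted noexec is to copy a known-good binary into it and try to run it:

cp /bin/true /tmp/exectest
/tmp/exectest && echo "exec works" || echo "probably mounted noexec"
rm -f /tmp/exectest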

Here are a few example errors indicating that you have a silently mounted noexec /tmp or /var/tmp:

#pecl install zip
.
.
.
checking whether the C compiler works… configure: error: cannot run C compiled programs.
If you meant to cross compile, use `--host'.
See `config.log' for more details.
ERROR: `/tmp/tmpqZc37R/zip-1.8.0/configure' failed

Another:

#pecl install zip
.
.
.
/usr/local/bin/phpize: /tmp/tmpnkcW3i/zip-1.8.0/build/shtool: /bin/sh: bad interpreter: Permission denied
Cannot find autoconf. Please check your autoconf installation and the $PHP_AUTOCONF
environment variable is set correctly and then rerun this script.

ERROR: `phpize' failed

Here's the solution: rebind the mount points. As root, execute the following commands.

cd ~
mkdir tmp
mkdir vartmp
mount --bind ~/tmp /tmp
mount --bind ~/vartmp /var/tmp

This mounts /root/tmp over /tmp and /root/vartmp over /var/tmp, which gets around the noexec restriction. Now you can run your pecl install. When you're done, make sure you unmount the bind mounts.

umount /tmp
umount /var/tmp

All done.

Questions, comments, and feedback on this are welcome and appreciated!


WordPress, suPHP, and Ubuntu Server 10.04

If you have WordPress running under an unprivileged user account, you may have noticed that when you try to install or delete a plugin, it prompts you for FTP information. This is due to a rather unintuitive way that WordPress checks for file access:

The following code is from the get_filesystem_method() function in the wp-admin/includes/file.php file:

if( function_exists('getmyuid') && function_exists('fileowner') ){
    $temp_file = wp_tempnam();
    if ( getmyuid() == fileowner($temp_file) )
        $method = 'direct';
    unlink($temp_file);
}

This code creates a temporary file and confirms that the file just created is owned by the same user that owns the script currently being run. In the case of installing plugins, the script being run is wp-admin/plugin-install.php.

This may seem a little counter-intuitive, since the only thing WordPress really needs to be able to do is write to the wp-content/plugins directory.

If you're on your own server (i.e. your own box or a VPS) and not worried about the security implications, you can simply make the files owned by your web server process (usually www-data or nobody). WordPress' check will then succeed, and it will no longer ask for your FTP information.
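
On Ubuntu with Apache, for example, that would look something like this (the WordPress path is an assumption; adjust it to your install):

# give the Apache user ownership of the WordPress tree (path is an example)
sudo chown -R www-data:www-data /var/www/wordpress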

If you’re on your own server and running a shared hosting environment, or just care about the security implications, you should install suPHP.

What are the security implications? If all web files are owned by the web server process, it’s extremely easy for someone to introduce malicious php code which can affect other sites on the server. Since the web server process has access to all of the web server files across the server, malicious code would have no problem gaining access to other files and directories on the server.

suPHP, configured correctly, causes all php scripts under a defined directory (usually /home) to run as the user account they are owned by. It also enforces other security measures, such as requiring that directories and files do not have write permissions for anyone other than the user.

I could go on and on about what it does, but my biggest struggle has been getting it to work. Installation is easy, but it's painfully clear it does not work out of the box. After dozens of searches I found many different ways of making it work, but most were drastic and neither clean nor easy, few avoided recompiling something (which I wasn't going to do), and none of them seemed to work for me.

After more than a day of searching and testing, I finally came up with a simple, elegant, working solution. Note that this was written and based on Ubuntu Server 10.04 64-bit, and libapache2-mod-suphp 0.7.1-1 and may or may not work for other platforms.

Install suPHP:

apt-get install suphp-common libapache2-mod-suphp

Edit the sites-enabled/xxxx.conf file for your VirtualHost

Inside your <VirtualHost> directive, add:

php_admin_flag engine off
AddHandler application/x-httpd-php .php .php3 .php4 .php5 .phtml
suPHP_AddHandler application/x-httpd-php
suPHP_Engine on

Lastly, edit /etc/suphp/suphp.conf and under ;Handler for php-scripts (at the bottom) change:

application/x-httpd-suphp="php:/usr/bin/php-cgi"

to

application/x-httpd-php="php:/usr/bin/php-cgi"

Restart apache and all should be well.

/etc/init.d/apache2 restart

Note: You might get an error message like the following:

Syntax error on line 7 of /etc/apache2/sites-enabled/example.com.conf:
Invalid command 'php_admin_flag', perhaps misspelled or defined by a module not included in the server configuration

In this case, check that you actually have the Apache PHP module installed and enabled. It can get uninstalled or disabled on occasion when upgrading Apache. Here's how to reinstall/re-enable it:

sudo apt-get install libapache2-mod-php5
sudo a2enmod php5

Checking that it’s working

Create a phpinfo.php file with the following contents:

<?php phpinfo(); ?>

Call it via your browser and check the Server API line near the top: CGI / FastCGI means suphp is working. Anything else means it’s not.

Suphp is slow!

Yes, unfortunately suPHP is slow. It runs PHP scripts in CGI mode, which reportedly causes them to run slower. I would argue that the security advantages outweigh the need for fast scripts, but each situation is unique; you have to decide for yourself.

500 Internal Server Error

If you’re getting the 500 Internal Server Error, it means that suphp is probably working, but for some reason it won’t allow the script to run.

Check that you don't have any PHP opcode caching (APC, etc.) running. If you are running any type of PHP opcode cache, suPHP will never work; you must disable your opcode caching. If you're using APC, you can disable it system-wide by editing /etc/php5/conf.d/apc.ini and commenting the line out with a semicolon, as follows:

;extension=apc.so

Another element of importance is file permissions. SuPHP will fail (with a 500 Internal Server Error) any file that has permissions which are not allowed, as defined in /etc/suphp/suphp.conf. For example:

; Security options
allow_file_group_writeable=false
allow_file_others_writeable=false
allow_directory_group_writeable=false
allow_directory_others_writeable=false

Any file or directory matching an attribute that is set to false will fail. Based on the configuration above, any file that is group- or world-writable will automatically fail, and the same goes for directories. It's best to leave these options alone (instead of changing them) and change the permissions on your scripts instead.
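
A common way to do that is to strip group and world write permissions under the site root, for example (the path is an assumption):

find /var/www/example.com -type d -exec chmod 755 {} \;
find /var/www/example.com -type f -exec chmod 644 {} \;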

However, it is supposedly possible to disable it on a per-VirtualHost basis. I haven’t tested this.

Also check that your /var/log/suphp/suphp.log file isn’t over 2GB. If it is, rotate it or delete it.

If all else fails, check /var/log/suphp/suphp.log and /var/log/apache2/error.log for hints.

Many thanks to all of the blogs and articles that each held a piece of this puzzle. :)


Bad robots

When you're on a VPS, bandwidth is limited. One of the things you have to watch for is bots, crawlers, and scrapers coming along and stealing your content and bandwidth.

Some of these bots are good and helpful, like the Google, Yahoo, and Bing crawlers. They index your site so it will appear in the search engines. Others, like the Yandex bot, crawl and index your pages for a Russian search engine. If you have an English-only site targeting US visitors, you might want to consider blocking the Yandex bot.

In my searches I also came across the Dotbot, which seems to crawl your pages just to get your response codes. I’m not sure what they do with the data, but in my opinion it’s better to block them.

So how does one block these bots? The Robots Exclusion Protocol states that a file, called robots.txt, can be put in your DocumentRoot with directives for bots to follow. For example, if your domain is example.com, your robots.txt should be at the following URL:

http://example.com/robots.txt

The robots.txt directives can tell bots which files they are allowed to index and which they are not. Well-behaved web robots will look at this file before attempting to crawl your site and obey the directives within. The directives are matched against each bot's User-agent string. A couple of examples:

Block the Dotbot robot from crawling any pages:

User-agent: dotbot
Disallow: /

Block all robots from crawling anything under the /foo/ directory:

User-agent: *
Disallow: /foo/

Google Webmaster Tools has an excellent tool for checking your robots.txt file (a Google account is required).

However, not all bots obey (or even look at) the robots.txt file. Those that don’t need special treatment in the .htaccess file, which I’ll describe in another post.
