Backup a Linux machine with LVM Snapshots and rdiff-backup

Here is the completed script I wrote on Episode 461. Make sure you check out the full episode for details on how to make this work for you.

#!/bin/sh

# Create a snapshot of the root logical volume (with 10GB of copy-on-write
# space) and mount it at /mnt/snapshot so we back up a consistent filesystem.
/sbin/lvcreate -L10G -s -n lvm_snapshot /dev/ubuntu-mate-vg/root
/bin/mount /dev/ubuntu-mate-vg/lvm_snapshot /mnt/snapshot

# Back up only the included paths from the snapshot; the backup target is
# excluded first, and --exclude '**' drops everything not explicitly included.
/usr/bin/rdiff-backup -v5 --print-statistics \
  --exclude /mnt/backup/ \
  --include /mnt/snapshot/home/ \
  --include /mnt/snapshot/etc/fstab \
  --include /mnt/snapshot/var/log/ \
  --exclude '**' \
  / \
  /mnt/backup/

# Unmount and discard the snapshot now that the backup is done.
/bin/umount /mnt/snapshot
/sbin/lvremove -f /dev/ubuntu-mate-vg/lvm_snapshot
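
Because the source is / and the includes live under /mnt/snapshot, those paths are mirrored inside the repository, so the snapshot's home folder ends up at /mnt/backup/mnt/snapshot/home. If you ever need something back, rdiff-backup can restore that path as of any backup it holds; a minimal sketch (the restore destination is just an example):

# Restore home as it existed at the most recent backup.
/usr/bin/rdiff-backup -r now /mnt/backup/mnt/snapshot/home /tmp/restored-home

# Or as it existed ten days ago.
/usr/bin/rdiff-backup -r 10D /mnt/backup/mnt/snapshot/home /tmp/restored-home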

Make it so a mountpoint can’t be written to if not mounted.

Have you ever accidentally saved files to a Linux mountpoint when the drive wasn’t mounted, and then couldn’t mount the drive thereafter? Or worse, had a backup run when the backup drive wasn’t mounted, only to fill your filesystem and crash the server?

These problems can be avoided by simply making your mountpoint immutable! What this means is that the mountpoint (the folder itself) cannot be written to. However, an immutable folder can still be used as a mount target, and once a drive is mounted there, the mounted filesystem controls the permissions of the folders therein.

It’s a simple Linux command. We’ll pretend our mountpoint is simply /mountpoint. Here’s all you have to do:

chattr +i /mountpoint

Brilliant! And oh, so simple.
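
Should you ever need to undo it (say, to remove the mountpoint), the flag is just as easy to drop again, and lsattr shows whether it’s set:

lsattr -d /mountpoint    # an 'i' in the attribute list means the immutable flag is set
chattr -i /mountpoint    # remove the immutable flag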

Here’s a sample of what happens when I do this as root. Note that ‘mymountpoint’ is set up for me in my /etc/fstab file so it normally auto-mounts.

root@server:/# umount mymountpoint
root@server:/# chattr +i mymountpoint
root@server:/# cd mymountpoint
root@server:/mymountpoint# touch test
touch: cannot touch `test': Permission denied
root@server:/mymountpoint# mount -a
root@server:/mymountpoint# touch test
root@server:/mymountpoint#

Enjoy that little tidbit!

As a side note, you might also want a notification if your drive isn’t mounted… so you could use the mountpoint command to send you an email if there’s a problem. Just add something like this to your backup script:

mountpoint -q /mymountpoint || mail -s "/mymountpoint is not mounted for the backup" [email protected]

That simply checks whether /mymountpoint is a mounted mountpoint. If it is, nothing happens. If it isn’t, it sends you an email.
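
Taken one step further, the same check can guard the whole backup script so nothing runs against an unmounted target. A minimal sketch (the address and paths are placeholders):

# Bail out (and complain by email) if the backup drive isn't mounted.
if ! mountpoint -q /mymountpoint; then
    echo "Backup skipped: /mymountpoint is not mounted." | \
        mail -s "/mymountpoint is not mounted for the backup" admin@example.com
    exit 1
fi

# ...the actual backup commands go here...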

-Robbie

Convert video to several JPG images on Linux without ffmpeg.

These days I just use this command and hit CTRL-C when the video frame counter (the V: field in mplayer’s status line) stops moving:

mplayer -vo jpeg:outdir=screenshots -sstep 10 filename.mp4
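
If you’d rather not babysit it with CTRL-C, the same -endpos trick the old script below uses works on the command line too. For example, to pull stills from only the first five minutes:

# Grab one frame every 10 seconds from the first 300 seconds of the video.
mplayer -vo jpeg:outdir=screenshots -sstep 10 -endpos 300 filename.mp4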

But, this post remains for the sake of historical record – lol!


I admit… I do love PHP in the command line. Does that make me a bad person? 😉

Here’s a tiny little script that I wrote to create many JPG screenshots of a video file. I use this each week to create a bunch of stills from our broadcast so I can use them as thumbnails and so-on. I didn’t want it to depend on ffmpeg since I don’t have that on any of my modern systems.

It requires just three packages: mplayer, mediainfo and PHP 5 (the CLI).

Save it as whatever.php and run it like this: php whatever.php file.wmv

It will create a folder called file-Screenshots/ and will save one picture per 10 seconds for any video source. Just change “file.wmv” to the name of your video. Include the path if it’s not in the current folder.

<?php
  // Depends: mplayer mediainfo
  // Does not need ffmpeg (deprecated)

  // Accept the filename from the command line, or from ?file= if run via a web server.
  if ($_GET) {
    $file = $_GET['file'];
  } else {
    $file = $argv[1];
  }

  if (strlen($file) < 3) exit('Need a proper filename for input.' . PHP_EOL);

  // Output folder: the filename minus its extension, plus "-Screenshots".
  $parts = explode('.', $file);
  $dir = array_shift($parts) . '-Screenshots';

  $duration = duration($file);
  echo 'Duration in Seconds: ' . $duration . PHP_EOL;
  echo 'Saving to folder:    ' . $dir . PHP_EOL;
  echo 'Creating ' . ceil($duration / 10) . ' JPG images from source...';

  // One frame every 10 seconds, stopping a couple of seconds before the end.
  exec('mplayer -vo jpeg:outdir=' . escapeshellarg($dir) . ' -sstep 10 -endpos ' . ($duration - 2) . ' ' . escapeshellarg($file) . ' > /dev/null 2>&1');
  echo ' Done.' . PHP_EOL;

  // Returns the video length in seconds by parsing mediainfo's "Duration" line,
  // which is expected in "Xh Ymn" form (i.e. videos an hour or longer).
  function duration($file) {
    if (file_exists($file)) {
      exec('mediainfo -Inform="Audio;%ID%:%Format%:%Language/String%\n" ' . escapeshellarg($file) . ' | grep -m1 Duration | cut -d\':\' -f2', $result);
      $tmp = explode('h', $result[0]);
      $seconds = (intval($tmp[0]) * 60 + intval($tmp[1])) * 60;
      return $seconds;
    } else {
      exit('File ' . $file . ' not found.' . PHP_EOL);
    }
  }
?>

Hope it helps you out.

-Robbie

Preventing rsync from doubling–or even tripling–your S3 fees.

Using rsync to upload files to Amazon S3 over s3fs?  You might be paying double–or even triple–the S3 fees.

I was observing the file upload progress on the transcoder server this morning, curious how it was moving along, and I noticed something: the currently uploading file had an odd name.

My file, CAT5TV-265-Writing-Without-Distractions-With-Free-Software-HD.m4v was being uploaded as .CAT5TV-265-Writing-Without-Distractions-With-Free-Software-HD.m4v.f100q3.

I use rsync to upload the files to the S3 folder over S3FS on Debian, because it offers good bandwidth control.  I can restrict how much of our upstream bandwidth is dedicated to the upload and prevent it from slowing down our other services.

Noticing the filename this morning, and understanding the way rsync works, I knew that temporary filename would be renamed the instant the upload completed.

In a normal disk-to-disk operation, or when rsync’ing over something such as SSH, that’s fine, because a simple mv uses next to no resources and certainly doesn’t cost anything: it’s just a rename operation. So why did my antennae go up this morning? Because I also know how S3FS works.

A rename operation over S3FS means the file is first downloaded to a file in /tmp, renamed, and then re-uploaded.  So what rsync is effectively doing is:

  1. Uploading the file to S3 with a random filename, with bandwidth restrictions.
  2. Downloading the file to /tmp with no bandwidth restrictions.
  3. Renaming the /tmp file.
  4. Re-uploading the file to S3 with no bandwidth restrictions.
  5. Deleting the temp files.

Fortunately, this is 2013 and not 2002.  The developers of rsync realized at some point that writing a file directly in place may be desired in some cases.  I don’t think they had S3FS in mind, but it certainly fits the bill.

The option is --inplace.

Here is what the manpage says about --inplace:

This option changes how rsync transfers a file when its data needs to be updated: instead of the default method of creating a new copy of the file and moving it into place when it is complete, rsync instead writes the updated data directly to the destination file.

It’s that simple!  Adding --inplace to your rsync command will cut your Amazon S3 transfer fees by as much as 2/3 for future rsync transactions!
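
For example, an upload along these lines (the paths and bandwidth cap are just placeholders) writes each file straight into place on the s3fs mount, so nothing ever comes back down just to be renamed:

# Throttled upload straight into the s3fs mount; --inplace avoids the
# temporary-file-then-rename dance that s3fs turns into extra transfers.
rsync -av --progress --bwlimit=2000 --inplace /var/transcoded/episodes/ /mnt/s3/episodes/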

I’m glad I caught this before the transcoders transferred all 314 episodes of Category5 Technology TV to S3.  I just saved us a boatload of cash.

Happy coding!

– Robbie

Running phpcs against many domains to test PHP5 Compatibility.

Running a shared hosting service (or otherwise having a ton of web sites hosted on the same server) can pose challenges when it comes to upgrading.  What’s going to happen if you upgrade something to do with the web server, and it breaks a bunch of sites?

That’s what I ran into this week.

For security reasons, we needed to knock PHP4 off our Apache server and force all users onto PHP5.

But a quick test showed us that this broke a number of older sites (especially sites running on old code for things like OS Commerce or Joomla).

I can’t possibly scan through billions of lines of client code to see if their site will work or break, nor can I click every link and test everything after upgrading them to PHP5.

So automation takes over, and we look at PHP_CodeSniffer with the PHPCompatibility standard installed.

Making it work was a bit of a pain, and you’ll need some know-how to get it going.  There are inconsistencies in the documentation and even some incorrect instructions on getting it running.  However, a good place to start is http://techblog.wimgodden.be…..

Running the command on a specific folder (e.g. phpcs --extensions=php --standard=PHP53Compat /home/myuser/domains/mydomain.com/public_html) works great.  But as soon as you try to run it against many, many domains, it craps out.  It literally just hangs, and usually not until it’s been running for a few hours, so what a waste of time.

So I wrote a quick script to help with this issue.  It (in its existing form – feel free to mash it up to suit your needs) first generates a list of all public_html and private_html folders found recursively under your /home folder.  It then runs phpcs against everything it finds, but does it one site at a time (so no hanging).

I suggest you run phpcs against one domain first to ensure that you have phpcs and the PHPCompatibility standard installed and configured correctly.  Once you’ve successfully tested it, then use this script to automate the scanning process.

You can run the script from anywhere, but it must have a tmp and results folder within the current folder.

E.g.:
mkdir /scanphp
cd /scanphp
mkdir tmp
mkdir results

And then place the PHP file in /scanphp and run it like this:
php myfile.php (or whatever you ended up calling it)

Remember, this script is to be run through a terminal session, not in a browser.

<?php
  // Build lists of every public_html and private_html folder under /home.
  exec('find /home -type d -iname \'public_html\' > tmp/public_html');
  exec('find /home -type d -iname \'private_html\' > tmp/private_html');

  // Read both lists, dropping trailing newlines so the shell commands below
  // stay on one line, then scan one site at a time.
  $folders = array_merge(
    file('tmp/public_html', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES),
    file('tmp/private_html', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES)
  );

  foreach ($folders as $folder) {
    // Paths look like /home/USER/domains/DOMAIN/public_html, so pull the
    // user and domain out of the path components.
    $tmp = explode('/', $folder);
    $domain = $tmp[(count($tmp) - 2)];
    if ($domain == '.htpasswd' || $domain == 'public_html' || $domain == 'private_html') $domain = $tmp[(count($tmp) - 3)];
    $user = $tmp[2];
    $log = 'results/' . $user . '_' . $domain . '.log';

    echo 'Running scan: ' . $folder . ' ' . $user . '->' . $domain . '... ';
    exec('echo "Scan Results for ' . $folder . '" >> ' . $log);
    exec('phpcs --extensions=php --standard=PHP53Compat ' . escapeshellarg($folder) . ' >> ' . $log);
    exec('echo "" >> ' . $log);
    exec('echo "" >> ' . $log);
    echo 'Done.' . PHP_EOL . PHP_EOL;
  }
?>

See what we’re doing there?  Easy breezy, and it solves the problem of having to run phpcs against a massive number of domains.
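
Once it finishes, each site’s report lands in results/USER_DOMAIN.log, and a quick grep gives a rough sense of which sites need the most attention (assuming phpcs’s default full report, which labels each violation line ERROR or WARNING):

# Count flagged lines per site, worst offenders first.
grep -c "ERROR" results/*.log | sort -t: -k2 -nr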

Let me know if it helped!

– Robbie