Farewell, Space Fish

Space Fish: August 16, 2011 – October 4, 2013

Having made his first international appearance during Episode 204 of Category5 TV on Tuesday, August 16, 2011, our studio mascot Space Fish, Major Tom, passed away on this day, Friday, October 4, 2013.  He spent 2 years, 1 month and 19 days with us in-studio.

Major Tom will always be remembered fondly for his colorful appearance. He also had a distinct talent for stinking up the studio despite our futile efforts to keep his habitat clean.

Major Tom’s final appearance on the live broadcast took place during Episode 283, Tuesday February 19, 2013.

Preventing rsync from doubling, or even tripling, your S3 fees.

Using rsync to upload files to Amazon S3 over s3fs?  You might be paying double, or even triple, the S3 fees.

I was watching the file upload progress on the transcoder server this morning, curious how it was moving along, and I noticed something: the file currently being uploaded had an odd name.

My file, CAT5TV-265-Writing-Without-Distractions-With-Free-Software-HD.m4v was being uploaded as .CAT5TV-265-Writing-Without-Distractions-With-Free-Software-HD.m4v.f100q3.

I use rsync to upload the files to the S3 folder over S3FS on Debian, because it offers good bandwidth control.  I can restrict how much of our upstream bandwidth is dedicated to the upload and prevent it from slowing down our other services.
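
For example, something like this (the paths are illustrative, and --bwlimit is specified in KBytes per second):

rsync -av --progress --bwlimit=1024 /path/to/episodes/ /mnt/s3/episodes/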

Seeing that filename this morning, and understanding the way rsync works, I knew the randomly-named temporary file would be renamed the instant the upload completed.

In a normal disk-to-disk operation, or when rsync’ing over something such as SSH, that’s fine: a mv from the temporary name to the final name uses virtually no resources, and certainly doesn’t cost anything, because it’s a simple rename operation. So why did my antennae go up this morning? Because I also know how S3FS works.

A rename operation over S3FS means the file is first downloaded to a file in /tmp, renamed, and then re-uploaded.  So what rsync is effectively doing is:

  1. Uploading the file to S3 with a random filename, with bandwidth restrictions.
  2. Downloading the file to /tmp with no bandwidth restrictions.
  3. Renaming the /tmp file.
  4. Re-uploading the file to S3 with no bandwidth restrictions.
  5. Deleting the temp files.

Fortunately, this is 2013 and not 2002.  The developers of rsync realized at some point that direct uploading may be desired in some cases.  I don’t think they had S3FS in mind, but it certainly fits the bill.

The option is --inplace.

Here is what the manpage says about --inplace:

This option changes how rsync transfers a file when its data needs to be updated: instead of the default method of creating a new copy of the file and moving it into place when it is complete, rsync instead writes the updated data directly to the destination file.

It’s that simple!  Adding --inplace to your rsync command will cut your Amazon S3 transfer fees by as much as 2/3 for future rsync transactions!
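
So the earlier example simply becomes (paths again illustrative):

rsync -av --progress --inplace --bwlimit=1024 /path/to/episodes/ /mnt/s3/episodes/

With --inplace, rsync writes straight to the final filename on the S3 mount, so each file crosses the wire exactly once.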

I’m glad I caught this before the transcoders transferred all 314 episodes of Category5 Technology TV to S3.  I just saved us a boatload of cash.

Happy coding!

– Robbie

The new transcoder is proving itself.

Well, we’ve been on the new transcoders for one week now, and I’m excited to see the impact.

Last night was the first night where I was able to initiate an automated transcode of an episode shortly after we signed off the air.

There are still some things I need to work out.  For example, I could not initiate the conversion until I had imported the photos, because the transcoder uses the episode’s image for the ID3 cover art on MP3 transcodes.  So after I finished choosing and uploading the images for last night’s show, I fired up the transcoder.
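
Incidentally, for anyone wondering how cover art ends up inside an MP3: here’s one common ffmpeg recipe (illustrative only; the filenames are made up, and our transcoder’s actual command line differs):

ffmpeg -i episode.mp3 -i episode-art.jpg -map 0:a -map 1:v -c copy \
  -id3v2_version 3 -metadata:s:v title="Album cover" \
  -metadata:s:v comment="Cover (front)" episode-with-art.mp3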

Here’s the log output:

Episode 314 Begin:  Tue Sep 24 20:49:01 EDT 2013
  Create Thumbnail Files Begin:  Tue Sep 24 20:49:01 EDT 2013
  Create LD File Begin:  Tue Sep 24 20:49:01 EDT 2013
  Create HD File Begin:  Tue Sep 24 20:49:01 EDT 2013
  Create WEBM File Begin:  Tue Sep 24 20:49:01 EDT 2013
  Create MP3 File Begin:  Tue Sep 24 20:49:01 EDT 2013
  Create SD File Begin:  Tue Sep 24 20:49:01 EDT 2013
  Create Thumbnail Files Complete:  Tue Sep 24 20:49:57 EDT 2013 (0d 0h 0m 56s)
  Create MP3 File Complete:  Tue Sep 24 20:56:55 EDT 2013 (0d 0h 7m 54s)
  Create LD File Complete:  Tue Sep 24 21:37:23 EDT 2013 (0d 0h 48m 22s)
  Create WEBM File Complete:  Tue Sep 24 22:39:23 EDT 2013 (0d 1h 50m 22s)
  Create SD File Complete:  Tue Sep 24 22:40:07 EDT 2013 (0d 1h 51m 6s)
  Create HD File Complete:  Tue Sep 24 23:20:05 EDT 2013 (0d 2h 31m 4s)
  Move Master File Begin:  Tue Sep 24 23:20:05 EDT 2013
  Create Master File Complete and Finish Job:  Tue Sep 24 23:20:05 EDT 2013 (0d 2h 31m 4s)

It was less than 8 minutes after I initiated the transcoder that the MP3 RSS feeds received the new episode.  Just a little more than 48 minutes after initiating the transcoder, the Low Definition (LD) file completed.  The show went up on the web site almost immediately after that (the files first get sync’d to our CDN and then added to the database, automatically).

All files (MP3, LD, SD, HD and WEBM) were complete in just 2 hours 31 minutes 4 seconds, including all distribution, even cross-uploading to Blip.TV (also automated now).

From 17 hours to only 2.5 hours.  This thing is incredible.

And that means we’ll be able to transcode nearly 10 episodes per day on average, almost double the turnaround of our first week.  The job which was estimated to take 72 days on our main server alone has been cut to only a day or two longer than one month.  Just one month from now, all back episodes (six years’ worth of Category5 TV) will be transcoded.

Why am I so excited about this “transcoder” thing?

There’s something I’ve been really excited about the past little while, and some may not understand why.

It’s the new Category5 Transcoders.

Transcoding is the direct analog-to-analog or digital-to-digital conversion of one encoding to another, such as for movie data files or audio files. This is usually done in cases where a target device (or workflow) does not support the format or has limited storage capacity that mandates a reduced file size, or to convert incompatible or obsolete data to a better-supported or modern format. [Wikipedia]

Here is what I wanted to achieve in building a custom transcoding platform for Category5:

  1. Become HTML5 video compliant.
  2. Provide screaming fast file delivery via RSS or direct download.
  3. Provide instant video loading in browser embeds, with instant playback when seeking to specific points in the timeline.
  4. Provide Flash fallback for users with terrible, terrible systems.
  5. Ensure our show is accessible across all devices, all platforms, and in all nations.
  6. Make back-episodes available, even ones which are no longer available through any other means.
  7. Reduce the file size of each version of each episode in order to keep costs down for us as well as improve performance for our viewers.
  8. Ensure our video may be distributed by web site embeds, popup windows, RSS feed aggregators, iTunes, Miro Internet TV, Roku, and more.
  9. Ensure our videos are compatible with current monetization platforms such as Google AdSense for Video.

In the past, we’ve been limited to third-party services from Blip and YouTube.  Both of these services are huge parts of what we do, but relying on them exclusively has had some issues:

  1. Both Blip and YouTube services are blocked in Mainland China, meaning our viewers there have trouble tuning in.
  2. Both services, in their default state, require manual labour in order to place episodes online in a clean way (e.g., appropriate titles, descriptions and playlist integration).
  3. Blip does not monetize well.
  4. YouTube monetizes well on their site, but they restrict advertising on embeds (so if people watch the show through our site rather than directly on YouTube, we don’t get paid).

The process of transcoding the files and making them available to our viewers has been an onerous task since the get-go.  We grew so quickly during Season 1 that we didn’t really have the infrastructure to provide the massive amount of video that was to go out each month.  In one month in 2012, for example, we served nearly 125 Terabytes of video.

It takes me many hours each week just to make the files available to our viewers, and the new transcoder has been developed to cut that task down to only a few minutes, while simultaneously pumping out the video much, much faster.

I’ll try to explain how this happens in a mockup:

Old vs. New Transcoding Process

The new transcoder not only does things faster: it does things simultaneously.

While transcoding the files for the RSS feeds, it has already placed a web-embedded copy of the show on our web site, in as little as 45 minutes.  Not only that, but once it’s all said and done, the transcoder server then automatically uploads the file to Blip.
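
To illustrate the simultaneous part with a bare-bones sketch (the real transcoder’s commands, filenames and encoder settings are far more involved than this):

#!/bin/bash
# Illustrative sketch: start every output format as a background job
# so the formats are produced simultaneously rather than one at a time.
MASTER="episode-master.m4v"    # hypothetical master filename
ffmpeg -i "$MASTER" -s 1280x720 episode-hd.m4v &
ffmpeg -i "$MASTER" -s 640x360 episode-sd.m4v &
ffmpeg -i "$MASTER" episode.webm &
ffmpeg -i "$MASTER" -vn episode.mp3 &
wait    # block here until every background job has finished
echo "All formats complete"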

The new transcoder consists of two servers at two different locations sharing the task itself, and then the files are distributed through two of our CDNs (one which is powered by Amazon, the other is our own affordable solution based on the old “alt” feed model).

We have been working with the team at Flowplayer, who are soon to introduce a public transcoding and hosting / distribution service for content providers.  With this new relationship, we will be able to serve up ads in a friendly way to help offset distribution costs.  This also means we now have our own embed player, no longer relying on YouTube or Blip’s embedded player.

This means viewers in Mainland China can now watch Category5 directly through our main web site.  No more workarounds!

As long as we can offset the added expense of self-hosting video, this could lead to some great things.  I’ll be keeping an eye on it over the next while, and encourage you to submit your feedback.  I love the idea of Category5 finally being accessible to everyone, everywhere, and very quickly following each show.  I also love that my Tuesday nights will no longer be so arduously long.

Transcoders are a difficult thing to explain, and the way we’re doing it is even harder to explain, but to me, it’s exciting.  Just know that it means “everything is better than ever”, with fast video load time through our site, RSS feeds that are more than 10x faster than before, global access (even in Mainland China), and room to grow.

I’m currently running the system through countless tests, but the transcoders are live.  They will be working their way (automatically) through back-episodes, so you’ll start to see the YouTube player disappearing from the site, replaced with our own player.  Eventually, all 312+ episodes will be available.

Thanks for growing with us!

– Robbie

RSS feeds have been migrated!

As per my last post, our video and audio RSS feeds have been migrated to our [formerly known as] alternate servers.

The alternate servers were originally built to allow viewers in Mainland China to view Category5 Technology TV.  They were the “alternate” servers because Blip.TV and YouTube are blocked in Mainland China.

However, through my tests, I discovered that these servers were in fact substantially faster than pulling video from Blip.TV, so I wrote a migration script to automatically merge all files to the alternate servers upon their release, and deploy them via our RSS feeds.

This means you’ll now receive our files faster (even fast enough to stream to your browser directly).  It also means our files are now available everywhere, including Mainland China, directly from our main feeds.

But it means we generate $0 ad revenue from our feeds.  GASP!

The next step is to give viewers in China an opportunity to disable YouTube on our main web site and embed a streaming player which utilizes our new servers’ files.  Again, the catch-22 in doing this is simple: YouTube helps pay the bills.

The changes mean we’re incurring more cost while potentially generating less ad revenue.  It’s completely backwards to anyone trying to make money.  Fortunately for you, my goal is to make our service as good as possible, and I believe with all my heart that viewers and advertisers will choose to support us.  Watch for an announcement soon: you could be part of our ad sales team and even make yourself some extra cash monies while supporting the show you love!

Enjoy the new feeds!  If you have the means to do so, please consider donating, or subscribe as a monthly donor.  You can do so at cat5.tv/c.

Thanks!

– Robbie

Category5 RSS feeds could be 5500% faster, with better worldwide accessibility, by switching away from Blip.tv for RSS distribution.

I’ve been hearing for a while that Blip.tv is slow.

It’s never seemed bad to me, but I didn’t really have anything to compare it to.  I have to be honest, I really love the features Blip.tv gives to its producers.  Not so much to its viewers.  But to the producers.  The automated file conversion from FTP-uploaded masters is an exceptional time saver in post, and the automated upload to YouTube, while not perfect, also saves some redundant work for me after the show each Tuesday night.

So, to hear that Blip.tv is slow seemed backward to me; it is a real time saver.  To me, a show producer.

Last July, we launched a syndication system in China, because Blip.tv is blocked there and our viewers were crying out (in particular, Mainland China residents who had traveled to places like Germany for school and had fallen in love with the show, which is very popular there).

So for kicks, I thought I’d test the speed difference between Blip.tv and our China syndication system.

For this little experiment, I used the exact same file from 3 sources (Blip.tv, Amazon S3 and our China syndication system).

Here are the shocking results:

File:  Episode 295, H.264 SD Quality, 246,378,147 bytes (235 MB)
Blip.TV:  34m 25s, 117 KB/s
Amazon S3:  52s, 4.48 MB/s
Our China Syndication System:  37s, 6.19 MB/s

I’m sorry, what?  Blip.tv took nearly 35 minutes to download the episode, whereas our syndication system into Mainland China, which is housed at our datacentre in California, took only 37 seconds!  That’s basically one second for every minute it took through Blip.tv.  I did not expect that!  I’m also impressed that the little syndicating system (which I designed) outperformed S3.

No, we are not going to drop Blip.tv.  It has its place, and that place is as I described.  They’re a big part of our distribution chain.  But perhaps it’s best to retire them as the source for our RSS feeds and let them stick to what they do best: the encoding, and the distribution to YouTube.

So I have a feeling our system which was built to help viewers in China watch the show may soon become our world-wide source for RSS files.  What do you think?  Want to receive Category5 episodes 5500% faster?

Now, to git’r done!

Comment below.

Running phpcs against many domains to test PHP5 Compatibility.

Running a shared hosting service (or otherwise having a ton of web sites hosted on the same server) can pose challenges when it comes to upgrading.  What’s going to happen if you upgrade something to do with the web server, and it breaks a bunch of sites?

That’s what I ran into this week.

For security reasons, we needed to knock PHP4 off our Apache server and force all users onto PHP5.

But a quick test showed us that this broke a number of older sites (especially sites running on old code for things like osCommerce or Joomla).

I can’t possibly scan through billions of lines of client code to see if their site will work or break, nor can I click every link and test everything after upgrading them to PHP5.

So automation takes over, and we look at PHP_CodeSniffer with the PHPCompatibility standard installed.

Making it work was a bit of a pain in the first place, and you’ll need some know-how to get it going.  There are inconsistencies in the documentation and even some incorrect instructions on getting it running.  However, a good place to start is http://techblog.wimgodden.be…..

Running the command on a specific folder (e.g. phpcs --extensions=php --standard=PHP53Compat /home/myuser/domains/mydomain.com/public_html) works great.  But as soon as you decide to try to run it through many, many domains, it craps out.  Literally just hangs.  And usually not until it’s been running for a few hours, so what a waste of time.

So I wrote a quick script to help with this issue.  It (in its existing form; feel free to mash it up to suit your needs) first generates a list of all public_html and private_html folders found recursively under your /home folder.  It then runs phpcs against everything it finds, but does it one site at a time (so no hanging).

I suggest you run phpcs against one domain first to ensure that you have phpcs and the PHPCompatibility standard installed and configured correctly.  Once you’ve successfully tested it, then use this script to automate the scanning process.

You can run the script from anywhere, but it must have a tmp and results folder within the current folder.

Eg.:
mkdir /scanphp
cd /scanphp
mkdir tmp
mkdir results

And then place the PHP file in /scanphp and run it like this:
php myfile.php (or whatever you ended up calling it)

Remember, this script is to be run through a terminal session, not in a browser.

<?php
  // Build a list of every public_html and private_html folder under /home.
  exec('find /home -type d -iname \'public_html\' > tmp/public_html');
  exec('find /home -type d -iname \'private_html\' > tmp/private_html');

  // FILE_IGNORE_NEW_LINES strips the trailing newline from each entry;
  // without it, the newline ends up inside the shell commands below and breaks them.
  $public_html = file('tmp/public_html', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
  $private_html = file('tmp/private_html', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

  // Scan each folder in the list, one site at a time, logging per user/domain.
  function scan_folders($folders, $html_dir) {
    foreach ($folders as $folder) {
      // Paths look like /home/user/domains/domain.com/public_html, so the
      // user is the second path element and the domain is second-from-last.
      $tmp = explode('/', $folder);
      $domain = $tmp[(count($tmp)-2)];
      if ($domain == '.htpasswd' || $domain == $html_dir) $domain = $tmp[(count($tmp)-3)];
      $user = $tmp[2];
      $log = 'results/' . $user . '_' . $domain . '.log';
      echo 'Running scan: ' . $folder . ' ' . $user . '->' . $domain . '... ';
      exec('echo "Scan Results for ' . $folder . '" >> ' . $log);
      exec('phpcs --extensions=php --standard=PHP53Compat ' . escapeshellarg($folder) . ' >> ' . $log);
      exec('echo "" >> ' . $log);
      exec('echo "" >> ' . $log);
      echo 'Done.' . PHP_EOL . PHP_EOL;
    }
  }

  scan_folders($public_html, 'public_html');
  scan_folders($private_html, 'private_html');
?>

See what we’re doing there?  Easy breezy, and it solves the problem of running phpcs against a massive number of domains.
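
Once it finishes, something like this gives a quick overview of which sites actually have compatibility problems (this assumes phpcs’s standard report, which summarizes each problem file with a line beginning with FOUND):

grep -l "FOUND" results/*.log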

Let me know if it helped!

– Robbie

Create links to specific points in any Category5 TV episode.

The new Timestamp feature allows you to start each episode at any point in the video.

New Feature:

Do you run a blog and want to link to specific portions of a Category5 Technology TV episode?  Or just want to share a specific clip with your family or friends?

Now you can!  Just append the timestamp to the URL as follows:

  • Go to www.Category5.tv
  • Find the episode you’re looking for and open its show notes page
  • Scrub to the point in the video where you want to start and make note of the time (for example, 8 minutes, 19 seconds)
  • Add a slash to the URL in your address bar, and then the timestamp in mm:ss format (for example, /8:19)

Give it a try:  http://www.category5.tv/episodes/291.php/8:19
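
For the curious: everything after the .php filename reaches the page as PATH_INFO (assuming your web server passes it along, as a typical Apache + PHP setup does), so a timestamp like /8:19 is easy to pick up server-side.  Here’s a minimal sketch of the idea; it’s illustrative only, not our actual player code:

<?php
  // Hypothetical example: turn a /mm:ss suffix such as /8:19 into seconds.
  $start = 0;
  if (!empty($_SERVER['PATH_INFO'])) {
    $parts = explode(':', ltrim($_SERVER['PATH_INFO'], '/'));
    if (count($parts) == 2 && is_numeric($parts[0]) && is_numeric($parts[1])) {
      $start = ((int)$parts[0] * 60) + (int)$parts[1]; // 8:19 becomes 499
    }
  }
  // $start would then be handed to the embedded player as its seek offset.
?>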

Make Your Site Faster – Cloudflare’s CDNJS vs. Google Hosted Libraries – SHOCKING Results

I have used Google Hosted Libraries for as long as I can remember, and it’s what we use on Category5.TV to accelerate the javascript end of our site.  For all the javascript and CSS (plus images and so-on) we use that aren’t available through Google’s hosted solution, I use Amazon S3 and distribute it through Cloudflare to make it load quickly for our viewers.

I’ve been fast falling in love with Cloudflare’s CDNJS.

CDNJS boasts that it is in fact much faster than Google Hosted Libraries.

Neah… that can’t be true!  Google’s the “big dog”… Cloudflare is still relatively new.

So I took a look.  The first thing that shocked me was the sheer number of javascript tools available through CDNJS.  Gone is the need to (for example) load jQuery from Google Hosted Libraries but then have to download and deploy a copy of Fancybox 2 locally or on your own CDN.  CDNJS seems to have it all.  Or at least a great selection, plus the ability to add a library yourself via GitHub.

Sorry, what?  Yeah, baby.

So I thought, let’s run the world’s simplest test: how fast does wget receive the jQuery library on Linux?  It may not be a realistic benchmark in all cases, but it gives us a bit of a look at how quickly each service delivers the js.

Here are those simple (but amazing) results from my location in Ontario, Canada:

Google Hosted Libraries
robbie@robbie-debian:/tmp$ wget http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js
--2013-03-22 13:50:47--  http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js
Resolving ajax.googleapis.com... 74.125.133.95, 2607:f8b0:4001:c02::5f
Connecting to ajax.googleapis.com|74.125.133.95|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/javascript]
Saving to: `jquery.min.js.1'

    [ <=> ] 92,629      --.-K/s   in 0.1s

2013-03-22 13:50:47 (798 KB/s) - `jquery.min.js.1' saved [92629]

CDNJS
robbie@robbie-debian:/tmp$ wget http://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.min.js
--2013-03-22 13:49:57--  http://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.min.js
Resolving cdnjs.cloudflare.com... 141.101.123.8, 190.93.243.8, 190.93.242.8, ...
Connecting to cdnjs.cloudflare.com|141.101.123.8|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/x-javascript]
Saving to: `jquery.min.js'

    [ <=> ] 92,629      --.-K/s   in 0.04s

2013-03-22 13:49:58 (2.21 MB/s) - `jquery.min.js' saved [92629]

Note the filesize (92,629) is exactly the same; we’re dealing with the same version of jQuery here: identical files.  Also note that I’ve used a non-secure (http) connection for each.  The difference in speed is incredible.
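
If you want to be certain the two downloads are identical and not just the same size, compare their checksums; both should print the same hash:

md5sum jquery.min.js jquery.min.js.1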

Now, for a basic site, the fraction-of-a-second difference may not matter to you.  But for a big site like mine, this kind of difference could mean a full second off the load time, possibly more!  That’s unheard of for a simple copy-and-paste change in code.
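
Here’s the whole change, using the same jQuery version as in the test above:

<!-- Before: jQuery from Google Hosted Libraries -->
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>

<!-- After: the identical file from CDNJS -->
<script src="http://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>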

Time to update Category5.TV.  What about your site?  Please comment below.

Update:  Garbee made a great point in our IRC room:  You’re only seeing results from my location.  Fair enough.  We want to make sure this isn’t just me that is experiencing such a massive difference.  Therefore, please run this exact test yourself, and post your results below in a comment.  I’m in Ontario, Canada.  Where are you?  Thanks!
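
To reproduce the test exactly, these two commands are all you need (-O /dev/null simply discards the download so nothing is left behind; wget reports the transfer speed at the end of each run):

wget -O /dev/null http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js
wget -O /dev/null http://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.min.js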

First Day Results Extracted from Reader Comments:

  • Me:
    Ontario Canada – Google @ 798 KB/s, CDNJS @ 2.21 MB/s, CDNJS is 2.77x the speed of Google.
    Brea California – Google @ 2.27 MB/s, CDNJS @ 14.5 MB/s, CDNJS is 6.39x the speed of Google.
  • Garbee:
    Virginia USA – Google @ 429 KB/s, CDNJS @ 496 KB/s, CDNJS is 1.16x the speed of Google.
    New Jersey USA – Google @ 104 KB/s, CDNJS @ 2.60 MB/s, CDNJS is 25x the speed of Google.
  • Chris Neves:
    Montana USA – Google @ 123 KB/s, CDNJS @ 300 KB/s, CDNJS is 2.44x the speed of Google.
  • Alan Pope:
    Farnborough UK – Google @ 1.26 MB/s, CDNJS @ 1.16 MB/s, Google is 1.08x the speed of CDNJS.
    London England – Google @ 6.79 MB/s, CDNJS @ 4.72 MB/s, Google is 1.44x the speed of CDNJS.
  • steve5:
    Leeds UK – Google @ 153 KB/s, CDNJS @ 178 KB/s, CDNJS is 1.16x the speed of Google.
  • Bryce:
    Seattle Washington – Google @ 1.83 MB/s, CDNJS @ 659 KB/s, Google is 2.78x the speed of CDNJS.
  • Calvin:
    Massachusetts USA (unsecured connection) – Google @ 810 KB/s, CDNJS @ 876 KB/s, CDNJS is 1.08x the speed of Google.
    Massachusetts USA (secure connection) – Google @ 721 KB/s, CDNJS @ 1.08 MB/s, CDNJS is 1.5x the speed of Google.

LogMeIn lost all my accounts.

As a technical support company, we have used LogMeIn for years to help us remotely administer client systems.  Many of those clients have 20-30 computers, or more, and we had loaded them all into our LogMeIn account for easy access by our technical support team.

We have many “free” accounts connected to it, and many “paid” accounts.  Some of our customers needed the “paid” features such as printer support, so we set them up with a paid account.

So our account, over the years, became a well-organized assortment of both paid and free LogMeIn accounts.  And we had a lot of them.

And then on March 5, 2013, LogMeIn sent the following email (excerpt):

“For nearly a decade, LogMeIn Free has provided unlimited free remote access to users on as many computers as they wish. In order to ensure that we can continue providing this free service and make meaningful improvements to it, we will be limiting the number of accessible Free computers in all remote access accounts to 10.”

We stopped reading around there.  But it goes on…

“Should you choose not to upgrade, only the first 10 Free computers in your account, according to alphabetical order, will be shown as available” … “These changes will take effect in just a few weeks, so act now to take advantage of our special rate.”

Well, we acted.  We moved all our customer systems (including the paid ones) onto our own hosted support solution and left LogMeIn as a distant memory.  We didn’t have to think twice.  LogMeIn effectively pulled the plug on our business-customer relationship.

As a business owner, it’s important not to forget your customers.  They’re the ones who make your business work, after all.  In LogMeIn’s case, they made a stupid move, and unfortunately a lot of it comes down to communication.  I now know they are offering a reasonably priced “Central” service to allow continued use, but their email didn’t mention anything about that in the first paragraph, and in big bold characters it simply stated, and I quote:  “Important message: Your account will soon be limited to 10 Free computers.”  We didn’t read any further before taking action.

So, in an effort to reduce the number of “free” accounts in use on their system, LogMeIn has also lost all our paid accounts.

It reminds me of when Neighbours (a coffee drive-thru) stopped taking debit as a form of payment.  Their focus was entirely on the wrong thing: the fees to run a debit machine.  Here’s the ripple effect: I used to get my coffee there each morning, and quite often a breakfast sandwich, but when they made that change, I didn’t waste any time (because I don’t carry cash)… I just drove across the road to Tim Hortons.  Stupid move on their part.  They’ve since re-introduced debit at their drive-thru.  Perhaps someone at the company woke up and realized they just cut out a large chunk of their business to save a few pennies per transaction.  Which costs more?

But where does that leave LogMeIn?  Their focus is obviously in the wrong place in the same way.  And we’ve gone elsewhere.

Own a business?  Think about your customer first, and then figure out how to make money while taking good care of your customer.  If you can’t be good to your customer, they’ll just go across the road and leave you wondering where all the business went.

– Robbie