Bad user experience…

My VMware support engineer forwarded on the current VMware knowledge base weekly digest, and one of the new KB article titles caught my eye:

Upgrading VMware Tools using PowerCLI or vSphere API (2147641)

Hey!  That sounds like something to look over and possibly provide to my operations team to help them upgrade older VMs now running on newer VMware hosts!  Clicking through to the KB article (KB 2147641) brought up the usual Details and Solution sections, but they were strangely lacking.

The “Details” section usually explains the subject of the KB article in detail.  This one just said:

VMware Tools can be upgraded using vSphere PowerCLI, vSphere API and other methods. For optimum performance, upgrade VMware Tools in not more than five virtual machines simultaneously.

And the “Solution” section was even less helpful:

None

Talking with my support engineer, he thinks the article may have been posted simply to note that it is possible to upgrade the VMware Tools using either of those methods – the actual steps were never documented.

I know from a quick search that there are plenty of examples on non-VMware.com sites showing how to do this:

Ironically, there are some links to VMware.com pages addressing this:

I’ve seen odd VMware KB articles in the past – hopefully the addition of a “Tip of the Week” flag, or at least a sentence in the Solution field noting that the article is not a fully fleshed-out solution, would save a lot of confusion.

Bad administrator, no cookie…

Well, it had to happen.  I finally got my new site up and planned to restore the blog posts to it.  My backup files from various past sites were all in place – I had set up a backup script to dutifully collect the data monthly (I didn’t update the sites all that often) and to clean up after itself, keeping only three months of backups.

The script ran, the backup files appeared, and old ones were automatically cleaned up after 90 days.  When I first ran it I verified that the files were complete – I didn’t restore them anywhere, but the blog text was there.  Success!  Add it to cron on my desktop and let it run.

And run.   And run.   And run.  Unattended.  For the past couple years.
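A sketch of the kind of job it was – the names, URL, and retention window below are my illustration, not the original script:

```shell
#!/bin/sh
# Monthly site-backup sketch: save a dated export, then silently prune
# anything older than ~90 days.
BACKUP_DIR=${BACKUP_DIR:-"$HOME/site-backups"}
mkdir -p "$BACKUP_DIR"

# Fetch the export -- the URL is a placeholder for whatever your host
# provides, so the line stays commented here:
# wget -q -O "$BACKUP_DIR/site-$(date +%Y-%m-%d).tar.gz" "$EXPORT_URL"

# The cleanup that eventually bit me: it prunes good and bad backups alike.
find "$BACKUP_DIR" -name 'site-*.tar.gz' -mtime +90 -delete
```

A crontab entry like `0 3 1 * * $HOME/bin/site-backup.sh` runs it monthly – and, as I learned, keeps running it long after the site itself is gone.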

I had been lax and wasn’t blogging much, so I let my SquareSpace site go away a year ago.  Recently I decided to resume my ramblings… er, um, blogging, so I installed WordPress on my site and went looking through the backups.

The good news: my cron job dutifully continued backing up all the blog posts.  The bad news: when I canceled my SquareSpace account, it kept “backing up” the site – except now the files it saved were essentially “the site does not exist” messages.  (Insert sad face here…)  Thankfully I was able to restore some of the older text using Archive.org, and I’m still combing through other old sources.  But much of it is a loss.

So, what did I learn (or re-learn) from all this?

  1. A single copy of a backup is not a backup.  Use the “Three/Two/One” rule.
  2. Don’t clean up archives when it’s not necessary.  The backup files were small (less than a megabyte each after compression), so I could have kept many years’ worth in a couple gigabytes on my server.
  3. Keep track of services and the processes associated with them – I didn’t need to keep the backup script running after cancelling the service.  This one didn’t have a real expense attached, but how often have we looked at our budget and realized we’ve continued to pay for something well after we stopped using it?

Subversion to Git

I have a Subversion project that I’m migrating to Git, but I don’t want to lose the history if possible.  I found these steps, and they mostly worked for me with one exception (below):

https://git-scm.com/book/en/v2/Git-and-Other-Systems-Migrating-to-Git

The only problem was during the export I got the error message:

Author: (no author) not defined in authors file

After a bit of searching I found a workaround.

In short I had to add this line to my users.txt file:

(no author) = no_author <no_author@no_author>
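In context, the whole fix looks like this – the repository URL and project name are placeholders, not from my actual migration:

```shell
# Append the pseudo-user mapping to the authors file:
cat >> users.txt <<'EOF'
(no author) = no_author <no_author@no_author>
EOF

# Then re-run the import; git-svn now has a mapping for revisions that
# Subversion recorded with no author (URL is an example):
# git svn clone --stdlayout --authors-file=users.txt \
#     https://svn.example.com/repo my-project
```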

Successful backups in three, two, one…

Let me start off by saying that I didn’t come up with this backup mnemonic; to my knowledge, Peter Krogh first wrote it up in this blog post.

As I recently re-learned, even backups done right are hard to do well.  In my case there’s a chance I still would have lost my data – no strategy accounts for human error in every case.

The “Three, Two, One” backup strategy is pretty simple:

  • Three – A file isn’t backed up until at least three copies of it exist: the original and two other copies that are not on the same machine.
  • Two – The backups must be on two different media types – for example, a hard drive and a DVD, or a tape.
  • One – Finally, one of those copies should be stored off-site, or at least off-line – a cloud storage service such as Carbonite, Amazon Cloud Drive, or Google Drive, or even a copy stored at a friend’s house.

In my case (backing up my website), one copy would have been the site itself, a second would have been stored on my home computer, and a third would have gone to DVD (not every month, but perhaps once every six months) or up to my Google Drive.
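That plan can be sketched in a few lines – every path here is illustrative:

```shell
#!/bin/sh
# Copy 1 is the live site's export; copy 2 lands on the home machine;
# copy 3 goes off-site or off-line.
SITE_EXPORT=${SITE_EXPORT:-"$HOME/site-export.tar.gz"}
LOCAL_COPY=${LOCAL_COPY:-"$HOME/backups"}
mkdir -p "$LOCAL_COPY"

# Second copy, on a different machine/disk than the live site:
if [ -f "$SITE_EXPORT" ]; then
    cp "$SITE_EXPORT" "$LOCAL_COPY/"
fi

# Third copy, different medium or location -- e.g. a DVD burn twice a year,
# or a cloud remote (rclone shown as one option, assumed configured):
# rclone copy "$SITE_EXPORT" gdrive:site-backups/
```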

Sadly, I didn’t take those precautions and now I’m paying the price (thankfully a small one).

And I’ll add one more thing – be sure to VERIFY the backup you created periodically.  It does you no good if the restore process fails or isn’t documented for someone else to perform.
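One hedged way to do that periodic verification (paths are examples): record checksums when a backup is written, then re-check them later.  Note this catches bit-rot and truncated copies; it would not have caught my “backed up an error page” mistake – only an occasional test restore does that.

```shell
#!/bin/sh
BACKUP_DIR=${BACKUP_DIR:-"$HOME/backups"}
mkdir -p "$BACKUP_DIR"
cd "$BACKUP_DIR" || exit 1

# At backup time, after the archives land:
ls *.tar.gz >/dev/null 2>&1 && sha256sum *.tar.gz > backups.sha256

# At verify time -- any "FAILED" line (and a non-zero exit) means damage:
if [ -s backups.sha256 ]; then
    sha256sum -c --quiet backups.sha256
fi
```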

An upgraded Apple for sale.

This article hit home for me:

https://medium.com/charged-tech/apple-just-told-the-world-it-has-no-idea-who-the-mac-is-for-722a2438389b#.6q18so27a

I haven’t been a Mac user for a long time.  I worked on early (1990s-vintage) Macintosh “box” computers, but never really wanted much to do with them until Apple ditched the “System” operating system and went to “OSX” (now “macOS”).  I’ve been a long-time Unix guy – I really wanted a NeXT computer back in the day but didn’t have anywhere near the cash for it – and once OSX paired a nice user interface with the power of a Unix command line for all the tools and automation/scripting goodness, I really wanted one.

I was lucky that my sister gave my dad her previous 15″ MacBook Pro (Mid 2010) when she upgraded a couple years later.  He liked it, but he is a Windows guy through-and-through, so when Boot Camp started giving him fits he was about to toss it in the dumpster.  I offered to trade him my 2-year-old HP laptop (which runs Windows just fine) and he took me up on it in a heartbeat.

In the past 18 months I’ve really grown fond of the MBP and OSX.  The software upgrades have gone smoothly, the battery life is still excellent, and the hardware fit and finish is still solid – it continues to look “current” even at six years old.

Knowing that the entire system is getting a bit long in the tooth and has the occasional unexplained power issue, I was waiting for the next Apple MacBook Pro announcement.

In a word, I was disheartened.

My 6-year-old 15″ MBP has an Intel Core i7 @ 2.66GHz, 8GB RAM, and a dedicated NVidia GeForce GT 330M video card.  The latest MBP has an upgraded i7 CPU, but the performance difference between the two is barely noticeable in regular use.  Same goes for video – I don’t do any high-end editing or video processing, nor do I do much gaming anymore.  What I was looking forward to was a system that officially supported 32GB RAM and had an SSD I could upgrade over time.  The primary tasks on my current MBP involve playing with virtual machines (VirtualBox or VMware), and RAM is the biggest constraint.

Instead, the big “new things” the MBP brings us are:

  • “Thinner” – Really?  You couldn’t have made it the same thickness and given us 25-50% more battery life?  And kept at least one USB 3 port?
  • “Touch function row buttons” – OK, neat, but it’s not a good solution if you touch-type (which I do) or if you’re vision-impaired.
  • And if touch is such a good thing, why not make a touch-screen?  After all, it’s been such a failure for everyone else… not.
  • Removed the “MagSafe” power connector – Really?  As a parent, that was one of the things I liked most about my current MBP.  I’ve fixed a few laptop screens after a child or pet ran by, caught the power cord, and sent the unit flying to the floor.
  • And to those knowledgeable about USB-C: the ports themselves are good, but the clean lines of the Apple brand are lost when I need multiple dongles for my other devices.  I’ll have a clean-looking laptop, but my laptop bag will be a jumble of adapters.

The good thing is that there are enough people who will jump on the “latest, greatest” train and resell their older MBPs.  I just hope I can luck into one for a decent price, especially once they realize their new devices need all those adapters.

Maybe I’ll be lucky and my sister will want to dump hers on me again.  No, she’s too smart to fall for that again.

(Edited Nov 1, 2016)

Resurrecting a lost hard disk… The Sequel.

In a previous post I documented using ddrescue to recover the data from a failing hard drive.  A follow-up a few months later noted that the second drive had started failing too, but that time I was able to copy the data before needing to resort to the rescue tools.  As promised, here’s another follow-up.

After using the second replacement drive for just a couple weeks, I started noticing the same errors creeping into the “dmesg” output.  I know manufacturers occasionally ship a bad batch, but I’ve never experienced drives failing that rapidly, especially when the production runs – and the drives themselves – were substantially different.

My first thought was that the motherboard might be failing.  Unfortunately I wasn’t able to find an inexpensive SATA disk controller, so I did the next best thing and moved the disk to a different SATA port.  The move helped a bit, but the errors still came back after a bit of hard disk activity.  On a whim I decided to swap the SATA cable for a different one from my collection.  Neither cable was especially “high quality” compared to the other, but since I put the new cable in on Dec 2 I haven’t seen an error.

I’m at a loss as to what happened to the old cable – the drive and cable are well inside the case and not touching anything, so I don’t think it’s a wear problem, but it’s possible there is some oxidation/rust on the connectors that I can’t see.

I hope this is my last update on this issue, but I’ll continue the saga if it’s necessary.

Programming the ATTiny chips using an Arduino Duemilanove and the Arduino IDE.

My two girls and I are making personalized home-made “Arduino Blinkies” this year.  We’re making the “64 pixels” display that is written up here:

http://tinkerlog.com/howto/64pixels/

This project only requires three components:

  • An Atmel ATTiny2313 micro controller
  • An 8×8 LED grid
  • A two-AA battery holder and two batteries

Up to now, all of my Arduino experience has been playing with a Duemilanove with the Atmel ATMega328 in the socket.  I have seen descriptions of how to use the chip “bare”, but at $3-$5 each I didn’t really feel like experimenting with them that much.  (Plus, if I did use one in a project, I would have to flash the bootloader onto its replacement, and I haven’t tackled that yet either.)

While poking around on the Internet looking for a fun project to introduce my girls to the other side of computers and how they work, I came across the 64pixels project, and that introduced me to the ATTiny2313.  This chip (also by Atmel) is on the small end of their line of compatible chips, and costs a whopping $0.95 per chip!  The entire cost of the 64pixels project is below $5 each, so I can afford to let the girls experiment a bit and not break the bank.

So, the first thing I had to do was determine how to program the ATTiny chips using my Duemilanove.  The pins on the ATTiny aren’t the same as the ATMega, so I can’t just plug it in.  Terms such as ISP (In-System Programming) and JTAG (Joint Test Action Group) were tossed around, and friends on my mailing lists offered to loan me their programmers – but that was like loaning a pair of snow skis to a Texan.  I didn’t know how to use one, or whether I even really needed it.

Thankfully a few nights of searching the Internet found people had documented bits and pieces of it.  Through a lot of reading and trial-and-error, I’ve put together my notes on how to flash a common Arduino Processing-based program onto any Atmel AVR-based chip.

  1. Downloaded latest Arduino IDE (1.0.3).
    1. On my system, I’m running Linux, so I extracted it in $HOME/arduino-1.0.3/.  On a Windows system, you will install it as normal (presumably to the C:\Program Files\ directory).
    2. The “Arduino IDE” is the “Integrated Development Environment” that can be used to write, debug, and upload Arduino programs (called “sketches”) to the chips.
  2. Downloaded latest “arduino-tiny” library files to add the necessary support files to the Arduino IDE so it knows how to create the proper code for the ATTiny line of processors.
    1. http://code.google.com/p/arduino-tiny/
    2. Followed readme.txt in …/tiny/readme.txt
      1. Extracted ZIP file into ~/arduino-1.0.3/hardware/
      2. Confirmed the boards.txt “upload.using” lines all read “arduino:arduinoisp”
  3. Set up the Duemilanove to act as an ISP, which forwards the programming done by the IDE over to the ATTiny processor.
    1. http://hlt.media.mit.edu/?p=1706
    2. Basic steps:
      1. Connect the Duemilanove to my computer
      2. Start the Arduino IDE
        1. For me I ran “~/arduino-1.0.3/arduino”
      3. Confirmed the Duemilanove was seen and communicating with the Arduino IDE
      4. Opened the “ArduinoISP” example program
        1. File -> Examples -> ArduinoISP
      5. Uploaded this program to my Duemilanove
      6. Leave the ATMega chip in the Duemilanove
        1. This step wasn’t clear in many on-line tutorials.  Given that you have to upload a bit of code to the ATMega328 chip, leaving it in the Duemilanove programming board makes sense.
  4. Chose the correct ATTiny chip you wish to program from the Tools -> Board menu within the IDE.
    1. I tried both 8MHz and 1MHz, both with success.
  5. Connected the header pins on the Duemilanove to the pins of the ATTiny2313
    1. This is another step that wasn’t clear in the other on-line tutorials.  Most walked you through which jumper wires went where for a specific chip, but no-one ever really explained what each wire was for.  In short, there are four programming pins (plus GND and VCC) on the ATTiny chips that need to be connected: SCK, MISO, MOSI, and Reset.  If you have a different chip (“Introducing the NEW ATTiny9876”), as long as you match the “SCK” port from the Duemilanove to the SCK port on the new chip, and do the same for MISO, MOSI, and Reset, these steps should work.  (Assuming the “arduino-tiny” library has been updated, too.)
    2. Here’s a quick grid showing the connection from the Duemilanove header ports to the ATTiny2313 pins:
      • The text on the right is from the ATTiny2313 data sheet describing the pinouts of the chip.  The bolded words should match the pins on the Duemilanove headers.
      • Duemilanove header <–> ATTiny2313 pins
      • Pin 13 (SCK) <–> Pin 19 (USCK/SCL/SCK/PCINT7)
      • Pin 12 (MISO) <–> Pin 18 (MISO/DO/PCINT6)
      • Pin 11 (MOSI) <–> Pin 17 (MOSI/DI/SDA/PCINT5)
      • Pin 10 (Reset) <–> Pin 1 (PCINT10/RESET/dW)
      • 5v <–> Pin 20 (VCC)
      • Ground <–> Pin 10 (GND)
    3. Again, for non-ATTiny2313 chips, find the SCK/MISO/MOSI/Reset pins and connect them the same way.
  6. Upload a test program
    1. For my first test I used the basic Arduino “blink” program.  The program performs a digitalWrite to output #13, but pin 13 on the ATTiny didn’t blink my LED.  After some poking around on the chip, I found the output was actually on “pin 16” of the ATTiny2313.  Based on my testing I made a quick map of the “pinMode()” pin number to the actual pin on the chip.
      1. Outputs 0 through 7 map to pin (output+2)
        1. ex: output 3 -> pin 5
      2. Outputs 8 through 16 map to pin (output+3)
        1. ex: output 11 -> pin 14
  7. To run the chip standalone, supply appropriate voltage and ground to pins 20 and 10 of chip
    1. It may need to have the reset pin (pin 1) pulled high.
Sample test code:
int pin = 8;
int value = HIGH;

// the setup routine runs once when you press reset:
void setup() {                
  // initialize the digital pin as an output.
  pinMode(pin, OUTPUT);     
}

// the loop routine runs over and over again forever:
void loop() {
    digitalWrite(pin, value);
    if (value == HIGH) {
      value = LOW;
    } else {
      value = HIGH;
    }
    delay(20);
}
I was also interested in fading an LED in/out using PWM output.  From what I can deduce, the Arduino standard “analogWrite(pin, value)” only works on the specific PWM pins marked “OCxx” on the datasheet.  On the ATTiny2313, these are pins 14, 15, and 16.
Sample test code:
// Define the pin to test for analog features
int anapin = 13;
// Define a digital pin to flash each time the 0..255 analog cycle has completed.
int digipin = 2;
int value = 0;

// the setup routine runs once when you press reset:
void setup() {                

  // initialize the digital pin as an output.
  pinMode(anapin, OUTPUT);     
  pinMode(digipin, OUTPUT);
}

// the loop routine runs over and over again forever:
void loop() {
    analogWrite(anapin, value);
    value += 2;
    digitalWrite(digipin, LOW);
    if (value >= 255) {
      value = 0;
      digitalWrite(digipin, HIGH);
      delay(5);
    }
    delay(1);
}

Cell phone companies double-dipping into my personal life.

Dear Verizon – I know my monthly $150 donation is barely adequate for you to resolve the spotty reception and poor data connection quality I experience, so please make additional money by selling my private calling and location information.

I don’t mind companies making a profit, even when they are profiting from my personal information.  Case in point: Facebook, Google search, GMail, YouTube, Yahoo, CNN, HotMail, etc.  All of these “free” sites have a hidden cost – when we enter our information (name, age, email, address) or even just use the site (supplying our “usage pattern” information, possibly location, etc.), they can aggregate that data to infer surprisingly detailed facts about us.  For example, “Dan checks his e-mail and Facebook over lunch while sitting in Burger King 90% of the time, so let’s put ads for ‘weight loss’ and ‘Subway’ along the side.”

But there is also some more devious information contained within our on-line check-ins.  “At 12:15, Dan logged into his personal e-mail and Facebook pages from the Burger King at 114th and Dodge, and will probably be there for the next 40 minutes.  He is currently 20.1 miles and 24 minutes away from home.”  Based on that bit of information, it would be extremely easy to break into our house and be highly certain I wouldn’t return.  Thankfully, this type of location information is restricted to the sites’ marketing departments… yeah, right.  Google and Facebook sell this information en masse – it’s a big portion of their business model.

As I said, I don’t mind the free sites I use paying their bills by selling ad space – in this case, I’m the ‘product’ being sold.  But, when I pay for a service, I don’t want them double-dipping and selling my personal information on top of charging me for their services.  Case in point, the cellular telephone industry.

Quick question: who wants to sign up to let a large company track their every move, 24 hours a day, for two years?  This may include our web browsing history, private communications via voice, text message and e-mail, exact location, etc.  Sounds like a dictator-state dream situation?  Me too, but I signed up anyway… and I see you’ve joined too.  You’re carrying the only piece of equipment necessary to do this – your cell phone.  In my case, I’ve even opted into the advanced photo-documenting feature, since I take most of my family photos with the integrated camera – each is geo-tagged with the location, and Google does a good job of facial recognition.

Again, I’m OK with Google doing this, since I use their service to store and share the photos with friends and family far away.  I’m sure they could scan for a child in a birthday hat and put up ads for toys to send.  Now to my main beef and the subject of this post: I feel that a service I pay for shouldn’t be reselling my private information, too.  Case in point, the industry’s ‘Customer Proprietary Network Information’ (CPNI).

For my family, we pay Verizon Wireless over $150 a month for three phones (two smartphones and a feature phone for our daughter).  Contained within our CPNI are nuggets of valuable information such as whom we contacted (via voice and text), how long we talked, where we were when we used these services, etc.  Unfortunately the CPNI information pages at Verizon and AT&T aren’t specific about the exact details, but one can surmise there is additional information that would be valuable for marketers to better “know us”.  (And by “know us” I don’t mean they want to give us gifts…)

I’d suggest everyone with a cellphone go to their provider’s site and update their CPNI options so the information is kept private.  Here are the links I’ve been able to dig up:

Verizon: http://www22.verizon.com/about/privacy/policy/
* See the section titled “How to Limit the Sharing and Use of Your Information”.  You’ll have to call the CPNI phone number for your state from each phone that you want to opt out.

AT&T: http://www.att.com/gen/privacy-policy?pid=2566
* See the section titled “Restricting our use of your CPNI” for the contact number to call.

T-Mobile: e-mail privacy@t-mobile.com (Please reply if you find a better opt-out URL.)

Remember, the “C” in CPNI stands for Customer – remind your carrier that you’re paying for the services and believe their re-sale of your information is irresponsible.

Podcasts from the command line.

I work from my home office, so I don’t have to listen to what the guy in the next cubicle likes.  That’s good and bad, but in my case it’s a moot point – my office in the basement can barely pick up any local radio stations.  Just a few short years ago I would have had to resort to a collection of CDs or tapes (or running a long set of speaker wires from the living room radio down to the office).  Thankfully, technology came about and rescued me from the boredom of the same CDs on endless repeat – enter the podcast.

From the Wikipedia entry, the term came about in early 2004.  I must have been right on the cusp, because it wasn’t much after that that I was finishing our basement and ran into the entertainment problem.  Somehow I came across some tech-related podcasts (DailySourceCode, TWiT), so I downloaded a few and played them through my laptop.  That all worked well, but it meant that each time I finished one, I had to take the laptop back up to the network connection (the WiFi router had died and hadn’t been replaced) and download the next one.  A podcast is nothing more than an MP3 file, so copying the files to the laptop is quick – but it was still another manual step, along with making sure I didn’t re-download a show I had already listened to.  After a couple evenings of this I started searching for a way to download them in the background while I was at work, so I could have hours of uninterrupted geek-talk while working in the basement.

A quick bit of Googling led me to BashPodder.  Since I was running Linux on my home system, this was a great fit.  (Though the BashPodder website says it runs on many other OSes, including MacOSX, Windows, etc.)  There are only three files you need to make it all work:

1: The bashpodder.shell script – this is the main program that retrieves the requested podcast files.

2: The parse_enclosure.xsl file – this is used by the script to extract the podcast file names and download URLs.

3: The bp.conf file – This is a simple text file containing a list of URLs pointing to some website feeds for their podcasts.

Download these files from the BashPodder website, or you’re welcome to use my tweaked version here.
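For reference, bp.conf is just one RSS feed URL per line.  A minimal example – the URLs below are placeholders, substitute the real feeds you follow:

```shell
cat > bp.conf <<'EOF'
http://example.com/feeds/techshow.xml
http://example.org/podcasts/daily.rss
EOF
```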

Finally, to listen to them from the command line I wrote a script I cleverly call “Play And Delete” or “pad” for short.

Here’s the bashpodder.shell script I am currently using:

#!/bin/bash
# By Linc 10/1/2004
# Find the latest script at http://lincgeek.org/bashpodder
# Revision 1.21 12/04/2008 - Many Contributers!
# If you use this and have made improvements or have comments
# drop me an email at linc dot fessenden at gmail dot com
# and post your changes to the forum at http://lincgeek.org/lincware
# I'd appreciate it!
QUIET=-q
#QUIET=-v

#
if [ -e /var/tmp/bashpodder.FAIL ] ; then
	echo "Will not run - /var/tmp/bashpodder.FAIL exists." ; exit 1
fi

# Make script crontab friendly:
cd $(dirname $0)

# datadir is the directory you want podcasts saved to:
datadir=$(date +%Y-%m-%d)

# create datadir if necessary:
mkdir -p $datadir

# Delete any temp file:
rm -f temp.log

# Read the bp.conf file and wget any url not already in the podcast.log file:
date >> ordered.log
while read podcast
	do
	file=$(xsltproc parse_enclosure.xsl $podcast 2> /dev/null | sed 's# #%20#g' || wget -q $podcast -O - | tr '\r' '\n' | tr \' \" | sed -n 's/.*url="\([^"]*\)".*/\1/p')
	for url in $file
		do
		echo $url >> temp.log
		if ! grep "$url" podcast.log > /dev/null ; then
			name=$(echo "$url" | awk -F'/' {'print $NF'} | awk -F'=' {'print $NF'} | awk -F'?' {'print $1'})
			# Fixes for different URLs that parse to incorrect file names.
			# Buzz Out Loud has the name first but it's a redirect URL...
			if echo "$url" | grep -q 'dl_dlnow$' ; then
				name=$(echo $url | awk -F? '{ print $1 }' | awk -F'/' '{ print $NF }')
				#echo FIXING: $url
				#echo NEWNAME: $name
			fi

			wget -t 10 -U BashPodder -c $QUIET -O $datadir/$name "$url"

			touch $datadir/$name
			echo "$url" >> ordered.log
		fi
		done
	done < bp.conf
EC=0
# Move dynamically created log file to permanent log file:
cat podcast.log >> temp.log || EC=1
cp podcast.log podcast.log.previous || EC=1
sort temp.log | uniq > podcast.log || EC=1
rm temp.log || EC=1
if [ $EC -gt 0 ] ; then
	echo FAILED to update podcast.log file. > /var/tmp/bashpodder.FAIL
	touch /var/tmp/bashpodder.FAIL
	exit 9
fi
# Create an m3u playlist:
ls $datadir | grep -v m3u > $datadir/podcast.m3u

# Misc cleanup
mv */*JPG /home/dan/Pictures/Backgrounds/

Most of the changes I have made were to fix problems on my system.  One update was to better handle a filled-up hard drive – that really got the BashPodder script confused about what to download.  The script writes a “podcast.log” file that it reads on each run to determine whether it needs to download a podcast: if a podcast URL doesn’t exist in podcast.log, the script downloads it and adds the URL to the file.  That works great until the drive fills up and the script can’t update the file.  In my case the log file got erased, so when I did free up space, BashPodder had to start over and tried to re-download everything.  (Some day I’ll document how I fixed that, but not today.)
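That download-once bookkeeping is small enough to show in miniature – the URL below is a made-up example:

```shell
#!/bin/sh
url="http://example.com/shows/episode-42.mp3"
touch podcast.log

# Fetch only URLs we have never seen before, then record them:
if ! grep -qF "$url" podcast.log ; then
    # wget -t 10 -c "$url"     # the actual fetch, elided in this sketch
    echo "$url" >> podcast.log
fi
```

Run it twice and the second pass downloads nothing – unless the `echo` fails (full disk) or podcast.log is lost, which is exactly how my re-download storm happened.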

My changes start at line 13. If the ‘magic’ bashpodder.FAIL file exists, it means there was a problem in a previous run and the system needs human intervention.

Line 30 adds a simple date to my log file named “ordered.log”.  I wanted to keep track of when a file was downloaded, so this helped me track that for later review.

Lines 38 through 50 are a mixture of original and new code.

  • Line 38 tries to pull out the file name that will be used later.  Some podcast URLs confuse the parsing done by the parse_enclosure.xsl template, so this helps lines 41 through 45 fix the name if necessary.
  • Line 47 was modified slightly to use the new name if necessary
  • Line 49 makes sure the date of the file matches the current system time.  The ‘pad’ script sorts the files by their timestamp so this keeps them accurate.

Lines 56 through 64 have a lot of additional error checking done on them.  If any one fails, the script creates the bashpodder.FAIL file mentioned earlier, then exits to let a human fix what’s wrong.

Line 69 is a hack, but it works for me.  Some URLs I have BashPodder monitor have backgrounds uploaded to them.  I have these files moved to my Backgrounds folder rather than manually moving them myself.  (I’m lazy, so sue me!)

The parse_enclosure.xsl file I use is un-changed from the official BashPodder version.

My listening is also done at the command line, using VLC to play the video or audio files.  After listening to a night’s worth of 20-30 minute podcasts, I can have a number of files and directories to clean up.  I wrote my “Play And Delete” script to take care of that for me.

#!/bin/sh
# VLC Options:
OPTIONS="--zoom=2"

clear
if [ -z "$(which vlc)" ] ; then
	echo Could not find vlc: `which vlc`
	exit 1;
fi

FILE=$1
EXT=`echo $FILE | rev | awk -F\. '{ print $1 }' | rev`
echo Playing: $FILE \($EXT\)

# Set the size of the new VLC we open.
#
# Note: if file ends in .mp4, use a different size.
if [ "$EXT" = "mp3" ] ; then
  echo Resizing screen for $EXT extension.
  SIZING='0,0,1100,950,100'
  (sleep 2.0 ; wmctrl -i -r `wmctrl -l | grep VLC | awk '{ print $1 }'` -e $SIZING ) &
else
  echo "Not resizing an $EXT file."
fi

echo RUNNING: vlc $OPTIONS $FILE vlc://quit
vlc $OPTIONS $FILE vlc://quit 2> vlc.err
EC=$?
echo Exit code: $EC
if [ $EC -eq 0 ] ; then 
    echo Deleting $FILE
    sleep 2
    rm $FILE
fi
rm -f `dirname $FILE`/*.m3u
rm -f `dirname $FILE`/.directory
rmdir --ignore-fail-on-non-empty `dirname $FILE`/../* 2>/dev/null

PAD basically takes a path/filename and tries to play the file with VLC.

Line 6 tries to confirm you have VLC installed and available in the path, otherwise it exits.

Line 12 gets the extension of the file (mp3, mp4, avi, etc) so lines 19-21 can move and re-size the vlc GUI to the lower-left corner of my screen.  I don’t resize video files, so if it isn’t an MP3 I don’t do anything.
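As an aside, the rev/awk pipeline on line 12 can also be written with POSIX parameter expansion – same result, no extra processes (the filename below is just an example):

```shell
FILE="2013-01-15/some-show.mp3"    # example path
EXT="${FILE##*.}"                  # strip everything through the last dot
echo "$EXT"                        # prints: mp3
```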

Line 27 calls the VLC command to play the file.

Lines 28 through 34 check the exit code from VLC; if it exited normally (i.e. it got to the end of the podcast), the script deletes the podcast from the disk.  This auto-cleanup is great, especially for some of the larger video podcasts that can be 200+ MB in size.

Lines 35 through 37 try to do some additional cleanup.  Since I don’t use an MP3 player, I don’t need the M3U files, and I also try to remove all of the empty directories.  (BashPodder saves the files into directories named for the year/month/day the download was performed.)

My bp.conf has a lot of additional entries.  I won’t clutter up this page with it, but if you’re interested in what I’m pulling down you’re welcome to contact me for a copy.  (I’ll give you a hint – I’m a big TWiT.tv fan – Hi Leo, Tom, Iyaz, Sarah, and Steve!)

A big thanks to Linc and his work on the initial BashPodder script.  Once I had that framework I was able to add and tweak it to fit my needs – I hope it helps others too.