Free Ansible training videos from Red Hat

My Red Hat rep sent me a link to this Ansible on-line training.  It’s not the standard 60-90 minute live walkthrough; instead, it’s a set of pre-recorded training videos.

Title: “Ansible Essentials: Simplicity in Automation Technical Overview”

Link: https://www.redhat.com/en/services/training/do007-ansible-essentials-simplicity-automation-technical-overview

As I just received this information today I haven’t had time to look at them, but the chapter titles look like a good overview of the entire Ansible suite.

Bad user experience…

My VMware support engineer forwarded on the current VMware knowledge base weekly digest, and one of the new KB article titles caught my eye:

Upgrading VMware Tools using PowerCLI or vSphere API (2147641)

Hey!  That sounds like something to look over and possibly provide to my operations team to help them upgrade older VMs that were now running on newer VMware hosts!  Clicking on the URL for that KB article (KB 2147641) brought up the usual Details and Solution sections, but they were strangely lacking.

The “Details” section usually explains the subject of the KB article in detail.  This one just said:

VMware Tools can be upgraded using vSphere PowerCLI, vSphere API and other methods. For optimum performance, upgrade VMware Tools in not more than five virtual machines simultaneously.

And the “Solution” section was even less helpful:

None

My support engineer’s take is that the article may have been posted simply to note that it is possible to upgrade VMware Tools using either of those methods; the actual steps for doing so just weren’t documented.

I know from a quick search that there are plenty of examples on non-VMware.com sites showing how to do this, and ironically, there are even some links to VMware.com pages addressing it.
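Since the KB itself doesn’t show the steps, here is a minimal PowerCLI sketch of the general approach those examples take.  The vCenter address and VM name are placeholders, and Update-Tools behavior should be verified in a test environment before handing it to an operations team:

# Connect to vCenter (server name is a placeholder)
Connect-VIServer -Server vcenter.example.com

# Upgrade VMware Tools on a single test VM without forcing a reboot
Update-Tools -VM (Get-VM -Name "TestVM01") -NoReboot

# Per the KB's advice, upgrade no more than five VMs simultaneously,
# so work through larger lists in small batches.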

I’ve seen odd VMware KB articles in the past – hopefully adding a “Tip of the Week” flag, or at least a sentence in the Solution field noting that the article is not a fully fleshed-out solution, would save a lot of confusion.

Bad administrator, no cookie…

Well, it had to happen.  I finally got my new site up and planned to restore the blog posts to the new site.  My backup files from various past sites were all in place – I had set up a backup script to dutifully collect the data monthly (I didn’t update the sites all that often) and to clean up after itself, keeping only three months of backups.

The script ran, the backup files appeared, and older copies automatically cleaned themselves up after 90 days.  When I first ran it I verified that the files were complete – I didn’t restore them anywhere, but the blog text was there.  Success!  Add it to cron on my desktop and let it run.

And run.   And run.   And run.  Unattended.  For the past couple years.

I had been lax and wasn’t blogging much so I let my SquareSpace site go away a year ago.  Recently I decided to resume my ramblings..er, um, blogging, so I installed WordPress on my site and went looking through the backups.

The good news: my cron job continued to work, dutifully backing up all the blog posts.  The bad news: after I canceled my SquareSpace account, it kept “backing up” the site – except now the files it saved were essentially “the site does not exist” messages.  (Insert sad face here…)  Thankfully I was able to restore some of the older text using Archive.org, and I’m still combing through other old sources.  But much of it is a loss.

So, what did I learn (or re-learn) from all this?

  1. A single copy of a backup is not a backup.  Use the “Three/Two/One” rule.
  2. Don’t clean up archives when it’s not necessary.  The backup files were small enough (less than a megabyte after compression), so I could have kept many years’ worth in a couple of gigabytes on my server.
  3. Keep track of services and the processes associated with them – I didn’t need to keep the backup script running after cancelling the service.  This didn’t have a real expense associated with it, but how often have we looked at our budget and realized that we’ve continued to pay for something well after we stopped using it?

Subversion to Git

I have a Subversion project that I’m migrating to Git, but I don’t want to lose the history if possible.  I found these steps, and they mostly worked for me with one exception (below):

https://git-scm.com/book/en/v2/Git-and-Other-Systems-Migrating-to-Git

The only problem was that during the export I got this error message:

Author: (no author) not defined in authors file

After a bit of searching I found a workaround.

In short I had to add this line to my users.txt file:

(no author) = no_author <no_author@no_author>
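For reference, the export step from the git-scm guide looked roughly like this on my end; the repository URL and project name below are placeholders for my actual setup, and this is the command that produced the authors-file error above:

git svn clone http://svn.example.com/svn/my-project \
    --authors-file=users.txt --no-metadata -s my-project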

Successful backups in three, two, one…

Let me start off by saying that I didn’t come up with this backup mnemonic; rather, Peter Krogh first wrote it up (to my knowledge) in this blog post.

As I recently re-learned, even backups done “right” are hard to do well.  In my case there’s a chance that I still would have lost my data; there’s no accounting for human error in every case.

The “Three, Two, One” backup strategy is pretty simple:

  • Three – A file isn’t backed up until there are at least three copies of it, the original and two other copies not on that machine.
  • Two – The backups must be on two different media types.  For example, a hard drive and a DVD, or a tape backup.
  • One – Finally, one of those copies should be stored off-site or at least off-line.  For example, a cloud storage service such as Carbonite, Amazon Cloud Drive, or Google Drive, or even a copy stored at a friend’s house.

In my case (backing up my website), one copy would have been the site itself, a second would have been a copy stored on my home computer, and a third would have been burned to a DVD (not every month, but perhaps once every six months or so) or copied up to my Google Drive.

Sadly, I didn’t take those precautions and now I’m paying the price (thankfully a small one).

And I’ll add one more thing – be sure to VERIFY the backup you created periodically.  It does you no good if the restore process fails or isn’t documented for someone else to perform.
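As an example of what that verification could look like, here is a minimal shell sketch of the kind of sanity check my backup cron job was missing.  The paths, size threshold, and “expected content” string are all placeholders for illustration:

#!/bin/sh
# Sanity-check the most recent backup archive instead of trusting it blindly.
BACKUP="$HOME/backups/site-$(date +%Y%m).tar.gz"

# Flag an archive that is suspiciously small (a "site does not exist"
# page compresses down to almost nothing).
SIZE=$(stat -c%s "$BACKUP" 2>/dev/null || echo 0)
if [ "$SIZE" -lt 100000 ]; then
    echo "WARNING: $BACKUP is only $SIZE bytes" >&2
fi

# Flag an archive that no longer contains content we know should be there
# ("wp-content" is just a placeholder for something the site must include).
if ! tar -tzf "$BACKUP" 2>/dev/null | grep -q "wp-content"; then
    echo "WARNING: $BACKUP is missing expected content" >&2
fi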

An upgraded Apple for sale.

This article hit home for me:

https://medium.com/charged-tech/apple-just-told-the-world-it-has-no-idea-who-the-mac-is-for-722a2438389b#.6q18so27a

I wasn’t a Mac user for a very long time.  I worked on early (1990s vintage) Macintosh “box” computers, but never really wanted much to do with them until they ditched the “System” operating system and went to “OSX” (now “macOS”).  I’ve been a long-time Unix guy – I really wanted a NeXT computer back in the day but didn’t have nearly the cash for it, so once OSX paired a nice user interface with the power of a Unix command line for all the tools and automation/scripting goodness, I really wanted one.

I was lucky that my sister gave my dad her previous 15″ MacBook Pro (Mid 2010) when she upgraded a couple of years later.  He liked it, but he is a Windows guy through and through, so when Boot Camp started giving him fits he was about to toss it in the dumpster.  I offered to trade him my 2-year-old HP laptop (which runs Windows just fine) and he took me up on it in a heartbeat.

In the past 18 months I’ve really grown fond of the MBP and OSX.  The software upgrades have gone fine for me, the battery life is still excellent, and the hardware fit and finish is still solid and continues to look “current” even at six years old.

Knowing that the entire system is getting a bit long in the tooth and it has the occasional unexplained power issue, I was waiting for the next Apple MacBook Pro announcement.

In a word, I was disheartened.

My 6-year-old 15″ MBP has an Intel Core i7 @ 2.66GHz, 8GB of RAM, and a dedicated NVIDIA GeForce GT 330M video card.  The latest MBP has an upgraded i7 CPU, but the performance difference between the two is barely noticeable in regular use.  Same goes for video – I don’t do any high-end editing or video processing, nor do I do much gaming anymore.  What I was looking forward to was a system that officially supported 32GB of RAM and had an SSD I could upgrade over time.  My primary task on my current MBP is playing with virtual machines (VirtualBox or VMware), and RAM is the biggest constraint.

Instead, the big “new things” the latest MBP brings us are:

  • “Thinner” – Really?  You couldn’t have made it the same thickness and given us 25-50% more battery life?  And kept at least one USB 3 port?
  • “Touch function row buttons” – OK, neat, but it’s not a good solution if you touch type (which I do), or if you’re visually impaired.
  • And if touch is such a good thing, why not make a touch-screen?  After all, it’s been such a failure for everyone else…not.
  • Removed the “MagSafe” power connector – Really?  As a parent, that’s one of the great things I like about my current MBP.  I’ve fixed a few laptop screens after a child or pet ran by, caught the power cord, and sent the unit flying to the floor.
  • And apparently those knowledgeable about USB-C say the ports themselves are good, but the clean lines of the Apple brand are lost when I need multiple dongles for my other devices.  I’ll have a clean-looking laptop, but my laptop bag will be a jumble of adapters.

The good thing is that there are enough people who will jump on the “latest, greatest” train and resell their older MBPs.  I just hope I can luck into one for a decent price, especially once they realize their new devices need all those adapters.

Maybe I’ll be lucky and my sister will want to dump hers on me again.  No, she’s too smart to fall for that again.

(Edited Nov 1, 2016)

Resurrecting a lost hard disk… The Sequel.

In a previous post I documented the use of ddrescue to recover the data from a failing hard drive.  A follow-up post a few months later noted that the second drive had started failing, but that time I was able to copy the data before needing to resort to the rescue tools.  As promised, here’s another follow-up.

After using the second replacement drive for just a couple of weeks, I started noticing the same errors creeping into the “dmesg” output.  Though I know some manufacturers occasionally ship a bad line of drives, I’ve never seen any fail that rapidly, especially when the drives came from different production runs with substantial differences between them.

My first thought was that the motherboard might be failing.  Unfortunately I wasn’t able to find an inexpensive SATA disk controller, so I did the next best thing and moved the disk to a different SATA port.  The move helped a bit, but the errors still came back after a bit of hard disk activity.  On a whim I decided to swap the SATA cable for a different one from my collection.  Neither cable was especially “high quality” compared to the other, but since I put the new cable in on Dec 2, I haven’t seen an error.

I’m at a loss as to what has happened to the old cable – the drive and cable are well inside the case and not touching anything so I don’t think it’s a problem with wear, but it’s possible there is some oxidation/rust on the cable that I can’t see.

I hope this is my last update on this issue, but I’ll continue the saga if it’s necessary.

Programming the ATTiny chips using an Arduino Duemilanove and the Arduino IDE.

My two girls and I are making personalized home-made “Arduino Blinkies” this year.  We’re making the “64 pixels” display that is written up here:

http://tinkerlog.com/howto/64pixels/

This project only requires three components:

  • An Atmel ATTiny2313 micro controller
  • An 8×8 LED grid
  • A two-AA battery holder and two batteries

Up to now, all of my Arduino experience has been playing with a Duemilanove with the Atmel ATMega328 in the socket.  I have seen descriptions of how to use the chip “bare”, but at $3-$5 I didn’t really feel like experimenting with them that much.  (Plus if I did use one in a project, I would have to flash the bootloader onto its replacement, and I haven’t tackled that yet, either.)

While poking around on the Internet looking for a fun project to introduce my girls to the other side of computers and how they work, I came across the 64pixels project, and that introduced me to the ATTiny2313.  This chip (also by Atmel) is on the small end of their line of compatible chips, and costs a whopping $0.95 per chip!  The entire cost of the 64pixels project is below $5 each, so I can afford to let the girls experiment a bit and not break the bank.

So, the first thing I had to do was determine how to program the ATTiny chips using my Duemilanove.  The pins on the ATTiny aren’t the same as the ATMega, so I can’t just plug it in.  Terms such as ISP (In-System Programming) and JTAG (Joint Test Action Group) were tossed around, and friends on my mailing lists offered to loan me their programmers – but that was like loaning a pair of snow skis to a Texan.  I didn’t know how to use one, or whether I even really needed one.

Thankfully a few nights of searching the Internet found people had documented bits and pieces of it.  Through a lot of reading and trial-and-error, I’ve put together my notes on how to flash a common Arduino Processing-based program onto any Atmel AVR-based chip.

  1. Downloaded latest Arduino IDE (1.0.3).
    1. On my system, I’m running Linux, so I extracted it in $HOME/arduino-1.0.3/.  On a Windows system, you will install it as normal (presumably to the C:\Program Files\ directory).
    2. The “Arduino IDE” is the “Integrated Development Environment” that can be used to write, debug, and upload Arduino programs (called “sketches”) to the chips.
  2. Downloaded latest “arduino-tiny” library files to add the necessary support files to the Arduino IDE so it knows how to create the proper code for the ATTiny line of processors.
    1. http://code.google.com/p/arduino-tiny/
    2. Followed readme.txt in …/tiny/readme.txt
      1. Extracted ZIP file into ~/arduino-1.0.3/hardware/
      2. Confirmed the boards.txt “upload.using” lines all read “arduino:arduinoisp”
  3. Set up the Duemilanove to act as an ISP, which will forward the programming commands from the IDE to the ATTiny processor.
    1. http://hlt.media.mit.edu/?p=1706
    2. Basic steps:
      1. Connect the Duemilanove to my computer
      2. Start the Arduino IDE
        1. For me I ran “~/arduino-1.0.3/arduino”
      3. Confirmed the Duemilanove was seen and communicating with the Arduino IDE
      4. Opened the “ArduinoISP” example program
        1. File -> Examples -> ArduinoISP
      5. Uploaded this program to my Duemilanove
      6. Leave the ATMega chip in the Duemilanove
        1. This step wasn’t clear in many on-line tutorials.  Given that you have to upload a bit of code to the ATMega328 chip, leaving it in the Duemilanove programming board makes sense.
  4. Chose the correct ATTiny chip you wish to program from the Tools -> Board menu within the IDE.
    1. I tried both 8MHz and 1MHz, both with success.
  5. Connected the header pins on the Duemilanove to the pins of the ATTiny2313
    1. This is another step that wasn’t clear in the other on-line tutorials.  Most walked you through which jumper wires went where for a specific chip, but no one ever really explained what each wire was connecting to.  In short, there are four programming pins (plus GND and VCC) on the ATTiny chips that need to be connected: SCK, MISO, MOSI, and Reset.  If you have a different chip (“Introducing the NEW ATTiny9876”), as long as you match the “SCK” port from the Duemilanove to the SCK pin on the new chip, and do the same for MISO, MOSI, and Reset, these steps should work.  (Assuming the “arduino-tiny” library has been updated, too.)
    2. Here’s a quick grid showing the connection from the Duemilanove header ports to the ATTiny2313 pins:
      • The text on the right is from the ATTiny2313 data sheet describing the pinouts of the chip.  The bolded words should match the pins on the Duemilanove headers.
      • Duemilanove header <–> ATTiny2313 pins
      • Pin 13 (SCK) <–> Pin 19 (USCK/SCL/SCK/PCINT7)
      • Pin 12 (MISO) <–> Pin 18 (MISO/DO/PCINT6)
      • Pin 11 (MOSI) <–> Pin 17 (MOSI/DI/SDA/PCINT5)
      • Pin 10 (Reset) <–> Pin 1 (PCINT10/RESET/dW)
      • 5v <–> Pin 20 (VCC)
      • Ground <–> Pin 10 (GND)
    3. Again, for non-ATTiny2313 chips, find the SCK/MISO/MOSI/Reset pins and connect them the same.
  6. Upload a test program
    1. For my first test I used the basic Arduino “blink” program.  The program performs a digitalWrite to output #13, but pin 13 on the ATTiny didn’t blink my LED.  After some poking around on the chip, I found that it was actually pin 16 on the ATTiny2313.  Based on my testing I made a quick map of the “pinMode()” pin number to the actual pin on the chip.
      1. Outputs 0 through 7 map to pin (output+2)
        1. ex: output 3 -> pin 5
      2. Outputs 8 through 16 map to pin (output+3)
        1. ex: output 11 -> pin 14
  7. To run the chip standalone, supply appropriate voltage and ground to pins 20 and 10 of the chip
    1. It may need to have the reset pin (pin 1) pulled high (typically via a pull-up resistor, on the order of 10 kΩ, to VCC).
Sample test code:
int pin = 8;
int value = HIGH;

// the setup routine runs once when you press reset:
void setup() {                
  // initialize the digital pin as an output.
  pinMode(pin, OUTPUT);     
}

// the loop routine runs over and over again forever:
void loop() {
    digitalWrite(pin, value);
    if (value == HIGH) {
      value = LOW;
    } else {
      value = HIGH;
    }
    delay(20);
}
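To make the output-to-pin mapping from step 6 explicit, here is a small helper that could be dropped into a sketch.  It simply encodes the mapping I observed on the ATTiny2313 and is not part of the official Arduino core:

// Map an Arduino "output" number (as used with pinMode()/digitalWrite())
// to the physical DIP pin on the ATTiny2313, per the mapping observed above.
// Outputs 0-7 land on pins 2-9; outputs 8-16 land on pins 11-19
// (physical pin 10 is GND, so it gets skipped).
int outputToPhysicalPin(int output) {
  if (output <= 7) {
    return output + 2;
  }
  return output + 3;
}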
I was also interested in fading an LED in/out using PWM output.  From what I can deduce, the Arduino standard “analogWrite(pin, value)” only works on specific PWM pins that are marked with “OCxx” on the datasheet.  On the ATTiny2313, these are pins 14, 15, and 16.
Sample test code:
// Define the pin to test for analog features
int anapin = 13;
// Define a digital pin to flash each time the 0..255 analog cycle has completed.
int digipin = 2;
int value = 0;

// the setup routine runs once when you press reset:
void setup() {                

  // initialize the digital pin as an output.
  pinMode(anapin, OUTPUT);     
  pinMode(digipin, OUTPUT);
}

// the loop routine runs over and over again forever:
void loop() {
    analogWrite(anapin, value);
    value += 2;
    digitalWrite(digipin, LOW);
    if (value >= 255) {
      value = 0;
      digitalWrite(digipin, HIGH);
      delay(5);
    }
    delay(1);
}

Cell phone companies double-dipping into my personal life.

Dear Verizon – I know my monthly $150 donation is barely adequate for you to resolve the spotty reception and poor data connection quality I experience, so please make additional money by selling my private calling and location information.

I don’t mind companies making a profit, even when they are profiting from my personal information.  Case in point: Facebook, Google search, GMail, YouTube, Yahoo, CNN, HotMail, etc.  All of these “free” sites have a hidden cost – when we enter our information (name, age, email, address) or even just use the site (supplying them with our “usage pattern” information, possibly location, etc.), they can collect that information and start inferring some fairly specific facts about us.  For example, “Dan checks his e-mail and Facebook over lunch while he’s sitting in Burger King 90% of the time, so let’s put ads for ‘weight loss’ and ‘Subway’ along the side.”

But there is also some more devious information contained within our on-line activity.  “At 12:15, Dan logged into his personal e-mail and Facebook pages from the Burger King at 114th and Dodge, and will probably be there for the next 40 minutes.  He is currently 20.1 miles and 24 minutes away from home.”  Based on that bit of information, it would be extremely easy to break into our house and be highly certain that I wouldn’t return.  Thankfully, this type of location information is restricted to the sites’ marketing departments… yeah, right.  Google and Facebook sell this information en masse – it’s a big portion of their business model.

As I said, I don’t mind the free sites I use paying their bills by selling ad space – in this case, I’m the ‘product’ being sold.  But, when I pay for a service, I don’t want them double-dipping and selling my personal information on top of charging me for their services.  Case in point, the cellular telephone industry.

Quick question: Who wants to sign up to let a large company track our every move 24 hours a day for two years?  This may include information about our web browsing history, private communications via voice, text messages, and e-mail, our exact location, etc.  Sounds like a dictator-state dream scenario?  Me too, but I signed up anyway… and I see you’ve joined too.  You’re carrying the only piece of equipment necessary to do this – your cell phone.  In my case, I’ve even opted into the advanced photo-documenting feature, since I take most of my family photos with the integrated camera – each of them is geo-tagged with the location, and Google does a good job of facial recognition.

Again, I’m OK with Google doing this, since I use their service to store and share photos with friends and family far away.  I’m sure they could scan for a child in a birthday hat and put up ads for toys to send.  Now to my main beef and the subject of this post: I feel that a service I pay for shouldn’t be reselling my private information, too.  Case in point: the industry’s ‘Customer Proprietary Network Information’ (CPNI).

For my family, we pay Verizon Wireless over $150 a month for three phones (two smart-phones and a feature phone for our daughter).  Contained within our CPNI are nuggets of valuable information such as who we contacted (via voice and text), how long we talked, where we were when we used these services, etc.  Unfortunately the CPNI information pages at Verizon and AT&T aren’t specific about the exact details, but one can surmise that there is additional information that would be valuable for getting to “know us” better for marketing purposes.  (And by “know us” I don’t mean they want to give us gifts…)

I’d suggest everyone with a cell phone go to your provider’s site and update your CPNI options so that this information is kept private.  Here are the links I’ve been able to dig up:

Verizon: http://www22.verizon.com/about/privacy/policy/
* See the section titled “How to Limit the Sharing and Use of Your Information”.  You’ll have to call the CPNI phone number for your state from each phone that you want to opt out.

AT&T: http://www.att.com/gen/privacy-policy?pid=2566
* See the section titled “Restricting our use of your CPNI” for the contact number to call.

T-Mobile: e-mail privacy@t-mobile.com (Please reply if you find a better opt-out URL.)

Remember, the “C” in CPNI stands for Customer – remind your carrier that you’re paying for their services and that you believe their re-sale of your information is irresponsible.