My resolution

So I promised myself that I would start updating my blog on a weekly basis. Nothing major, no earth-shaking thoughts or deep introspective writings, just a basic update to get the habit started.

At least that is the plan…

For now, I’ll start out simply by noting that my **plan** is to add an entry every Sunday night, and of course this first one is being written on a Monday. 🙂 I won’t cop out and schedule this for next Sunday, though – I’ll be transparent about my failings, too.

So, we’re continuing to help clean up our parents’ house, getting it ready to sell. Kate has been going out there nearly every weekend (quite often both days), taking Dad and Nick with her and occasionally getting a couple of friends to help. To her I tip my hat with a big note of thanks. I’ve tried to keep up with her on those weekends, but have had to beg forgiveness and sit out a few times. And of course Amanda has been doing what she can remotely – she’s the backup for both Kate and me. Kate is helping navigate all the various health-related events with Mom and Dad, and I’m keeping their bills paid and going through all the paperwork, either keeping the important pieces or getting the rest to a shredder.

This past weekend we hit the garage attic and got it cleaned out. Many of those boxes had been packed and never opened when they moved from Columbus. Dad was proactive and put labels on each box, then noted in a document what was in each box. Unfortunately, the boxes and the document are both 20 years old, and we’re unable to find the list anymore. It was fun to open them and dive into the contents, but at the same time it was borderline overwhelming. Some of the findings did bring back some happy memories; Jilli found a small bottle of gemstones. They were (we presume) some natural garnets that her Great Grandfather Linder (William Linder, father of Gary) found when he, Dad, and I were hiking one summer in the Rocky Mountains (either near Estes Park, CO or in the Medicine Bow, Wyoming area). He suspected they were garnet, and since that is my gemstone he gave me the bottle of them to keep. Jilli might take them back to SDSM and see if one of the mineralogy professors might be able to determine what they are.

What this work has impressed upon me is the need to clean out my office a bit. I really doubt that anyone will ever crack open the “Windows 95 Unleashed” book, let alone many of the other related books. If I really get an urge to read it, I’ll find a PDF version; for now the paper version can go into the recycling. (I might donate it, but I don’t think Salvation Army or Goodwill would have a need for them either.)

A good friend of ours gave us a DeeBot robotic vacuum cleaner as a family Christmas gift. It is amazing what volume of pet hair it picks up, as well as bits of paper, small stones, miscellaneous cat/dog food, etc. Our main floor is mostly wood with only a small bit of carpet, and it doesn’t have a problem with either surface. What it does have a problem with is going under the couch and chairs. It just barely fits, but occasionally it gets stuck and can’t make it out. I’m brainstorming how to add a small “guard rail” underneath the furniture; the alternative is to add little 3/4″ risers under each leg. Since I’m the one with the long legs, I don’t think anyone else wants the chair raised, so the guard rail is probably the best option.

Git branching basic workflow

To better handle multiple people working on a project in a Git repository, using branches and reviewing pull requests before merging to the master branch is strongly suggested.

Basic steps:

  1. Change to your home directory and clone the git repository
    1. cd $HOME
    2. git clone git@github.com:<your-account>/MyRepo.git
  2. Change into the new project directory
    1. cd ./MyRepo
  3. Create a branch to work on the new code
    1. git checkout -b MyNewBranch
  4. Verify you are working in the branch
    1. git branch
    2. Note: The branch will have a “*” to the left of the branch name denoting the active branch
  5. Update code, test, repeat
  6. Review and add any missing files
    1. git status
    2. git add <file_name>
  7. Commit the code and push it into the repository
    1. git commit -v {list of changed files}
    2. git push --set-upstream origin MyNewBranch
      1. The “--set-upstream origin MyNewBranch” option is only necessary for the first ‘git push’
      • Note the response from the system:
      • remote: Create a pull request for 'MyNewBranch' on GitHub by visiting:
      • remote:
  8. Open the pull request (PR) in GitHub
    1. Add other repository contributors to request a code review before merging.
  9. Repeat the edit/test/PR cycle as necessary until merge is accepted
    1. edit code … test … git status … git add … git commit … git push
  10. When it has been accepted, clean-up your work area:
    1. cd $HOME/MyRepo/
    2. git checkout master
    3. git pull
    4. git branch --delete MyNewBranch
  11. Celebrate a successful pull request!
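
Strung together, the whole cycle looks like this in a shell session (a sketch using the same placeholder names as the steps above; the “example” account and “mychange.txt” file are hypothetical):

```shell
# Clone, branch, commit, push -- the workflow above in one pass.
cd "$HOME"
git clone git@github.com:example/MyRepo.git   # "example" is a placeholder account
cd MyRepo
git checkout -b MyNewBranch       # create and switch to the work branch
git branch                        # the "*" marks the active branch
# ...edit code, test, repeat...
git status                        # review what changed
git add mychange.txt              # stage each new/changed file
git commit -v                     # commit locally with a verbose diff
git push --set-upstream origin MyNewBranch   # later pushes only need "git push"
```

Once the PR is merged, switch back to master, pull, and delete the branch as in step 10.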

IBM Model-M and the Windows Key

I really love my IBM Model-M keyboard, but one frustration is the increasingly frequent need for the Windows key. Even some Linux desktops use it. Oh, I get it: it’s a handy “meta” key that helps differentiate keyboard tasks, so its lack is annoying at times.

For the few times I am in Windows 10 using my Model-M keyboard, I found an answer on how to re-map the Caps Lock key to the Windows key with a simple registry hack:

Just in case that link goes away, here’s the text:

Anyway, using SharpKeys I found the correct map for CAPS LOCK to Win is this:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
"Scancode Map"=hex:00,00,00,00,00,00,00,00,02,00,00,00,5b,e0,3a,00,00,00,00,00
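
If you’re curious what those hex bytes mean, they follow the layout Windows documents for the “Scancode Map” value: a header, an entry count, the mapping entries, then a null terminator. Here’s a sketch of this particular value decoded (my annotations, not from the original answer):

```
00,00,00,00 00,00,00,00   header: version and flags, always zero
02,00,00,00               entry count: one mapping plus the null terminator
5b,e0,3a,00               send Left Win (E0 5B) when Caps Lock (00 3A) is pressed
00,00,00,00               null terminator marking the end of the list
```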

Just save those lines as presented into a file, “c:\WinKeyRemap.reg”, then use File Explorer to find it and double-click to open it with the Registry Editor. (You’ll need to accept the warning.) Assuming the file is correct, Registry Editor will report that the values have been added to the registry.

Reboot to make the changes take effect.

Ansible and third-party Python modules

A co-worker wanted a third-party Python module installed onto the Ansible Tower servers that I maintain.  I don’t like installing any and all packages that people ask for since this is a shared system, but I had to bring myself up to speed on how he could get his module installed for use by his playbooks.

  • The machine that the Ansible playbook is executed from only needs to have Ansible and a set of pre-requisite Python modules installed on it.  For this document, we’ll refer to that system as the “Ansible Server” or just “the server”.
  • The machine(s) that the Ansible playbook works on and performs changes to need ssh and a small sub-set of Python modules (usually the core Python packages).  We’ll refer to those systems as “the clients”.

To make things more confusing there are two types of “modules” that will be referenced:

  • An “Ansible module” is a package that Ansible uses on the server to execute steps on the clients.  These are usually written in Python and the “core” Ansible modules are included and maintained by the Ansible developers.
    • There are some Ansible modules that may be included with Ansible but are maintained by the community.  These are usually specialized modules specific to a hardware or software vendor, maintained by the vendor or by others interested in automating that vendor’s tools.
  • The other “module” referenced in this document are add-ons to Python and are called “Python modules”.
    • A Python module may perform some low-level task (e.g. network connection, DNS lookup, etc) and is NOT Ansible specific.

Documentation for the Ansible modules is located here:

The request mentioned the need for the “” Python module so the “github_release” Ansible module would work.  The documentation for the “github_release” module has a requirements section that notes this as well.  The documentation page also notes that this is a preview module (“This module is not guaranteed to have a backwards compatible interface.”) and that it is maintained outside of the Ansible core developers (“This module is maintained by the Ansible Community.”).

So, how do we add this module?  I’m glad you asked!

The first thing to understand is that all the requirements for this module have to be installed on the clients, not on the Ansible servers.  While this sounds like more work, it really isn’t, and it keeps the Ansible servers free from conflicts between different users requiring different Python module versions.  The key to all this is the use of Python “Virtual Environments” (or “venvs”).  These virtual environments are walled-off areas that have their own Python executable and associated modules; it’s even possible to have different versions of Python installed in different venvs for testing.
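
To see what a venv amounts to, here’s the same idea done by hand (a sketch; modern Python bundles this as “python3 -m venv”, where the RHEL-era hosts in this article used the separate python-virtualenv package):

```shell
# Build a walled-off environment under /tmp and poke at it.
python3 -m venv /tmp/test01.venv
/tmp/test01.venv/bin/python --version   # this interpreter sees only the venv's modules
ls /tmp/test01.venv/bin                 # its own python, pip, and activate scripts
```

The playbook steps below automate exactly this, plus populating the venv with modules.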

In the playbook that needs to use an Ansible module that has special Python module dependencies, there are a few steps to take that we’ll go over in detail below:

  1. Ensure pip is installed
  2. Install Python virtual environment packages
  3. Setup the virtual environment
  4. Install base Tower packages into venv
  5. Install the Python module specifically needed into venv
  6. Use the new venv to execute the Ansible module

Step 1 – Ensure pip is installed

This is a basic step and will vary by OS, but the “pip” package is needed for the “pip:” module tasks later.

  - name: "Ensure pip is installed"
    package:            # OS-generic package module; on RHEL "yum:" works here too
      name: python2-pip
      state: installed

Step 2 – Install Python virtual environment packages

This is also OS dependent, but it installs the “python-virtualenv” package so Python can build venvs.

  - name: "Install Python virtual environment packages"
    package:            # installs the OS virtualenv tooling
      name:
        - python-virtualenv
      state: installed

Step 3 – Setup the virtual environment

This step does the initial work to build the virtual environment.  The venv is just a directory structure (in this case “/tmp/test01.venv”) that contains helper files and some wrapper scripts and configuration defaults.

  - name: "Setup the initial virtual environment"
    pip:                # the pip module builds the venv when "virtualenv:" is set
      name:
        - setuptools
      extra_args: --upgrade
      virtualenv: "/tmp/test01.venv"

Step 4 – Install base Tower packages into venv

Strictly speaking, this step is not necessary if you’re not using this venv with Ansible Tower, but it will make the playbook usable in more places.

  - name: "Install base Tower packages"
    pip:
      name:
        - python-memcached
        - psutil
      umask: "0022"
      state: present
      virtualenv: "/tmp/test01.venv"
      virtualenv_site_packages: yes
      extra_args: --ignore-installed

Step 5 – Install the Python module specifically needed into venv

Finally we’re at the step where we install the Python module we need.  This Python module (like the others earlier) is only installed into the venv directory structure.

  - name: "Install module"
    pip:                # the requested module goes under "name:" here
      virtualenv: "/tmp/test01.venv"

Step 6 – Put the new venv with the Python module to work

The key at this step is the “vars:” section that tells the Ansible execution environment to use the “python” binary found in the “venv” on the remote system, “/tmp/test01.venv/bin/python” in this case.

  - name: "Download latest release of ISOwithKS"
    vars:
      ansible_python_interpreter: "/tmp/test01.venv/bin/python"
    github_release:
      user: "dglinder"
      repo: "ISOwithKS"
      action: latest_release

PLEASE NOTE: The “github_release:” example above does NOT work due to something unrelated to the venv created.

How does this work?

When the playbook runs it connects to all of the clients and makes sure the “python2-pip” and “python-virtualenv” packages are installed.  It then builds the bare virtual environment in “/tmp/test01.venv/”, populates that venv with additional modules, and installs the Python modules necessary to execute the Ansible module.  The Ansible module is then executed using the “python” executable in the newly built venv.

Note that ALL of these steps are performed on the Ansible clients; no changes are made to the Ansible server.  In testing, the initial execution of these steps took about 40-50 seconds to get to the final step – most of that time was spent downloading packages from the Pip repository (on the Internet).  Subsequent runs that were able to re-use the venv directory took 20-25 seconds to get to the same point.


One big shortcoming of this process is that the Ansible clients need access to download the packages from an Internet location.  If the clients are shielded from the Internet, it may be necessary to set up a proxy server they can use (if permitted).

It might be necessary to perform the venv build on a single server with Internet access, then replicate that venv directory structure to each of the clients.  (These workarounds have not been validated, so test and report back any success or failures.)

Docker on Windows Subsystem for Linux to play with RedHat 8

Ok, so this is kind of long but neat too!

A co-worker asked about using a Docker image for a project he’s working on, and I suggested that he use the RedHat 7/8 based “Universal Base Image” that they announced at Summit. (Our company has a large installed base of RedHat, so there is a big advantage in being able to tap into that internal knowledge.)


If you have a machine with Docker set up, then doing a pull of “” will pull down the RHEL-8 version.

$ docker run --rm -it /bin/bash

“But I don’t have a Docker system, I only have Windows 10!” No fear, you can install Docker on Windows:


From there you can kick off Docker from PowerShell or the command prompt with the exact same command as shown above.

“But I want to do this in a Linux environment on my Windows workstation!”  Use the “Windows Subsystem for Linux” feature of Windows 10:


Here’s a screen shot of a RHEL-8 container running under WSL showing that “yum install …” works as expected:

And here it is running under PowerShell:

When is a disk space problem not a disk space problem?

A co-worker set up an Ansible playbook to update some packages but it kept erroring out. The error that Ansible reported from “yum” was “No space left on device”. He had jumped onto the system and saw that the partition had plenty of space left, so he asked if I could look into it.

I got on and confirmed that when I ran a simple “yum update” it showed this:

# echo n | yum update

Loaded plugins: product-id, rhnplugin, search-disabled-repos, security, subscription-manager

[Errno 28] No space left on device: ‘/var/run/rhsm/’

This system is receiving updates from RHN Classic or RHN Satellite.

Could not create lock at /var/run/ [Errno 28] No space left on device: ‘/var/run/’

Hmm, no disk space still. Looking at the “df /var” output looks good:

# df /var

Filesystem           1K-blocks   Used Available Use% Mounted on


                       2514736 914948   1468716  39% /var

Suspecting other resource issues, I checked the inode availability using “df -i”:

# df -i /var

Filesystem           Inodes  IUsed IFree IUse% Mounted on


                     163840 163840     0  100% /var

A-ha! No inodes left. I’ll let you use your favorite search engine to look up the details, but an easy way to think of “inodes” is as space on the first few pages of a book dedicated to the table of contents. If you have a book with a few chapters, you only need a single page for the table of contents (the inodes). If you have a book with lots of chapters and sub-chapters, you might need a lot of pages (more inodes). By default, Unix systems use a formula to decide how much of the filesystem to dedicate to inodes and how much is left for actual data storage. Usually this is fine for most systems.

To find them, we want to look for the directories which have chewed up those 163K inodes:

for i in /var/*; do echo "$i"; find "$i" | wc -l; done

This pointed to the “/var/spool/app01/” directory – it has over 160K small files.  The owner of the system was able to clean up some old files there and the “yum update” worked as expected.
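
A slightly fancier variant of that loop sorts the counts so the worst offender lands at the bottom (a sketch; point DIR at whichever mount point is out of inodes):

```shell
# Every file and directory consumes one inode, so counting names per
# top-level directory shows where they all went.
DIR=/var
for d in "$DIR"/*/; do
  printf '%8d %s\n' "$(find "$d" | wc -l)" "$d"
done | sort -n | tail -5    # the five biggest inode consumers
```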

It’s possible to override the inode settings when the filesystem is formatted, so if you know about this need ahead of time you can plan for it. If you run into this after the fact, the usual resolution is to back up the data, reformat the filesystem with more inodes allocated, then restore from backup.
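
For ext4 filesystems, mke2fs exposes the knobs for that reformat (a sketch; “/dev/sdXN” is a placeholder device – double-check the target before formatting anything):

```shell
# More inodes at format time: either shrink the bytes-per-inode ratio
# (smaller ratio = more inodes) or request an absolute inode count.
mkfs.ext4 -i 4096 /dev/sdXN       # one inode per 4 KiB instead of the 16 KiB default
mkfs.ext4 -N 500000 /dev/sdXN     # or: ask for ~500K inodes outright
dumpe2fs -h /dev/sdXN | grep -i 'inode count'   # verify the result
```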

SELinux and NFS $HOME directories

Recently we re-installed a common server with RHEL-7 and that went well.  But after a couple of days I noticed that I was unable to log in with my personal ssh key as I had before. It was a minor annoyance and I didn’t pursue it … until today.

It turns out that the /home/ directory on this system is an NFS mount, and in RHEL-7 we have set SELinux to default to enforcing.  There is an SELinux boolean flag, “use_nfs_home_dirs”, that needed to be set to “1” (true).  Running “setsebool -P use_nfs_home_dirs 1” on this system was the fix, and now we can resume logging in with SSH keys instead of typing in a password each time.

Some were reluctant to fix this as they always typed in their password. While a password typed over the SSH login connection is encrypted in transit, it does present the possibility that your password could be copied by a compromised endpoint. Plus, we are trying to use longer passwords, so typing one in multiple times per day was frustrating and slowed workflow.  Using SSH keys eliminates this risk and provides other features such as scheduled/scripted command execution and file transfers.

Delete yourself from Medium

I’ve been on the Internet for years. The openness of the whole environment was refreshing. Want to find out something obscure or highly technical? You could head over to a university website and find an article, or over to a vendor’s site to get details on whatever you were looking for.

Then came web-rings and blogs. A webring was a simple “next page” link on a site that would take users from one person’s page to another; usually the pages had a common theme (a hobby, an interest, etc). Later there were blogs (like this WordPress site) that were more dynamic. You could write multiple pages on varying topics, or you could work with others to share access and create a source of information about your topics.

This wasn’t free, but many of us kept our sites up out of love for the art we were discussing, or out of the feeling of giving back to the wider world. For times when sites got too big to support on one person’s budget, there were advertisers who would trade some of the blog page real estate to host ads in return for a small bit of money back to the owner of the site. And some sites turned to user-supported options, so that people who paid a small periodic fee could in turn get access to articles earlier, or to in-depth pieces that weren’t public. Many newspapers have turned to this – the general public gets the first paragraph of a story, but subscribers get the entire article and additional features.

But over time the rise of the web as a “social medium” platform took root. After a while, the need to drive more and more eyeballs to a website took on a more “closed off” approach. I’m sure there are many more out there, but the one that finally got to me and made me say “enough is enough” is the Medium website.

They are a hosted blogging site, and that’s nice. They also have good search-engine-optimization features, so a well-written article gets found easily on Google and other search engines. Many times when I’m searching for additional information on a news topic, I will come across something hosted on a Medium page and click the link. But you can only read up to a threshold of free posts per month; then you have to wait until the next month, or sign up for a monthly subscription to access more articles.

All that is good, and I won’t begrudge them collecting a fee to offset the designers and staff who keep the website running smoothly, as well as the curation of articles they perform. But I can’t justify spending another $5 a month on yet another blogging site.

And something about the whole “pay to see anything” mentality seems antithetical to what really makes the Internet ‘work’, and honestly to what laid the groundwork that permitted sites such as Medium, Facebook, Google, Amazon, etc. to thrive. It feels like all the good works that were poured into the initial Internet (open networking standards, operating systems, email, web servers and browsers) and kept free all this time are being clear-cut by these new digital locusts.

But they aren’t listening. Instead they are using the subscription fees to lock more and more content behind their closed doors. If this continues, we’ll have an entire Internet made up of toll roads and not the wide-open digital universe we have today.

If you agree, please take some time and remove yourself from the Medium website. It’s very easy: under your profile, the very bottom option is “delete from Medium”. I don’t need their constant barrage of articles that don’t give back to the greater good – at least not without a fee, and without tracking that serves up my time and attention to their writers for another piece of fluff.

Do it – it felt good!

The humble check-list

Driving home last night I listened to the next episode of the NPR “Hidden Brain” podcast, titled “Check Yourself”. The topic was the humble “checklist” that we’ve all made but never given much thought to.

Two parts of the story surprised me:

  • They came to be a requirement in the airline industry after the crash of a new and highly complex aircraft (the Boeing Model 299, prototype of the B-17) in 1935.
  • The very recent addition of checklists to hospital procedures.

Link to text of this podcast:

General podcast link with option to listen to MP3 in browser or download: