Git branching basic workflow

When multiple people work on a project in a Git repository, it is strongly suggested to use branches and to review pull requests before merging into the master branch.


Basic steps:

  1. Change to your home directory and clone the git repository
    1. cd $HOME
    2. git clone git@git.repo.example.org:MyRepo.git
  2. Change into the new project directory
    1. cd ./MyRepo
  3. Create a branch to work on the new code
    1. git checkout -b MyNewBranch
  4. Verify you are working in the branch
    1. git branch
    2. Note: The branch will have a “*” to the left of the branch name denoting the active branch
  5. Update code, test, repeat
  6. Review and add any missing files
    1. git status
    2. git add <file_name>
  7. Commit and push the code to the repository
    1. git commit -v {list of changed files}
    2. git push --set-upstream origin MyNewBranch
      1. The --set-upstream option is only necessary for the first ‘git push’
      • Note the response from the system:
      • remote: Create a pull request for 'MyNewBranch' on GitHub by visiting:
      • remote: https://git.repo.example.org/MyRepo/pull/new/MyNewBranch
  8. Open the pull request (PR) in git.repo.example.org
    1. Add other repository contributors to request a code review before merging.
  9. Repeat the edit/test/PR cycle as necessary until merge is accepted
    1. edit code … test … git status … git add … git commit … git push
  10. When the PR has been accepted, clean up your work area:
    1. cd $HOME/MyRepo/
    2. git checkout master
    3. git pull
    4. git branch --delete MyNewBranch
  11. Celebrate a successful pull request!
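
For reference, here is the whole cycle as one condensed shell session (same example repository and branch names as above; the clone URL is inferred from the pull-request URL shown in step 7):

  cd $HOME
  git clone git@git.repo.example.org:MyRepo.git
  cd ./MyRepo
  git checkout -b MyNewBranch
  # ... edit and test ...
  git status
  git add <file_name>
  git commit -v <file_name>
  git push --set-upstream origin MyNewBranch   # --set-upstream only needed on the first push
  # after the PR is merged:
  git checkout master
  git pull
  git branch --delete MyNewBranch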

IBM Model-M and the Windows Key

I really love my IBM Model-M keyboard, but one frustration is the increasingly frequent need for a Windows key, which the Model-M predates. Even some Linux desktops use it. I get it: it’s a handy “meta” key that helps differentiate keyboard tasks, so its lack is annoying at times.

For the few times I am in Windows 10 using my Model-M keyboard, I found this answer on SuperUser.com to re-map the caps-lock key to the Windows key with a simple registry hack: https://superuser.com/a/1228990/101577

Just in case that link goes away, here’s the text:

Anyway, using SharpKeys I found the correct map for CAPS LOCK to Win is this:

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
"Scancode Map"=hex:00,00,00,00,00,00,00,00,02,00,00,00,5b,e0,3a,00,00,00,00,00

Just save those lines as presented into a file, “c:\WinKeyRemap.reg”, then find it in File Explorer and double-click it to open it with the Registry Editor. (You’ll need to accept the warning.) Assuming the file is correct, Registry Editor will report that the values have been added to the registry.
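
If you’d rather skip the clicking, the same import can be done from an elevated Command Prompt with the built-in reg tool (a one-liner sketch, using the example path above):

  reg import c:\WinKeyRemap.reg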

Reboot to make the changes take effect.

Ansible and third-party Python modules

A co-worker wanted a third-party Python module installed onto the Ansible Tower servers that I maintain.  I don’t like installing any and all packages that people ask for since this is a shared system, but I had to bring myself up to speed on how he could get his module installed for use by his playbooks.

  • The machine that the Ansible playbook is executed from only needs to have Ansible and a set of pre-requisite Python modules installed on it.  For this document, we’ll refer to that system as the “Ansible Server” or just “the server”.
  • The machine(s) that the Ansible playbook works on and performs changes to need ssh and a small sub-set of Python modules (usually the core Python packages).  We’ll refer to those systems as “the clients”.

To make things more confusing there are two types of “modules” that will be referenced:

  • An “Ansible module” is a package that Ansible uses on the server to execute steps on the clients.  These are usually written in Python and the “core” Ansible modules are included and maintained by the Ansible developers.
    • There are some Ansible modules that are included with Ansible but are maintained by the community.  These are usually specialized modules specific to a hardware or software vendor, maintained by the vendor or by others interested in automating that vendor’s tools.
  • The other “modules” referenced in this document are add-ons to Python and are called “Python modules”.
    • A Python module may perform some low-level task (e.g. network connection, DNS lookup, etc.) and is NOT Ansible specific.

Documentation for the Ansible modules is located here: https://docs.ansible.com/ansible/latest/modules/modules_by_category.html

The request mentioned the need for the “GitHub3.py” Python module so the “github_release” Ansible module would work.  The documentation for the “github_release” module has a requirements section that notes this as well.  The documentation page also notes that this is a preview module (“This module is not guaranteed to have a backwards compatible interface.”) and that it is maintained outside of the Ansible core developers (“This module is maintained by the Ansible Community.”).

So, how do we add this module?  I’m glad you asked!

The first thing to understand is that all the requirements for this module have to be installed on the clients, not on the Ansible servers.  While this sounds like more work, it really isn’t, and it keeps the Ansible servers free from the conflicts that would arise from different users requiring different Python module versions.  The key to all this is the use of Python “Virtual Environments” (or “venvs”).  These virtual environments are walled-off areas that have their own Python executable and associated modules; it’s even possible to have different versions of Python installed in different venvs for testing.
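
To make the “walled-off” idea concrete, here is a minimal shell sketch of a venv being built and used by hand (the /tmp/test01.venv path matches the playbook examples below; “requests” is just an arbitrary example module):

  # Build an empty venv with its own python and pip
  virtualenv /tmp/test01.venv
  # The venv's python is separate from the system python
  /tmp/test01.venv/bin/python --version
  # Modules installed this way land only inside the venv directory
  /tmp/test01.venv/bin/pip install requests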

In the playbook that needs to use an Ansible module that has special Python module dependencies, there are a few steps to take that we’ll go over in detail below:

  1. Ensure pip is installed
  2. Install the Python virtual environment packages
  3. Set up the virtual environment
  4. Install base Tower packages into the venv
  5. Install the Python module specifically needed into the venv
  6. Use the new venv to execute the Ansible module

Step 1 – Ensure pip is installed

This is a basic step and will vary by OS, but the “pip” package is needed by the “pip:” Ansible module tasks used later.

  - name: "Ensure pip is installed"
    yum:
      name: python2-pip
      state: installed

Step 2 – Install Python virtual environment packages

This is also OS dependent, but it installs the “python-virtualenv” package so Python can build venvs.


  - name: "Install Python virtual environment packages"
    yum:
      name:
        - python-virtualenv
      state: installed

Step 3 – Set up the virtual environment

This step does the initial work to build the virtual environment.  The venv is just a directory structure (in this case “/tmp/test01.venv”) that contains helper files and some wrapper scripts and configuration defaults.

  - name: "Setup the initial virtual environment"
    pip:
      name:
        - setuptools
      extra_args: --upgrade
      virtualenv: "/tmp/test01.venv"

Step 4 – Install base Tower packages into venv

Strictly speaking, this step is not necessary if you’re not using this venv with Ansible Tower, but it will make the playbook usable in more places.


  - name: "Install base Tower packages"
    pip:
      name:
        - python-memcached
        - psutil
      umask: "0022"
      state: present
      virtualenv: "/tmp/test01.venv"
      virtualenv_site_packages: yes
      extra_args: --ignore-installed

Step 5 – Install the Python module specifically needed into venv

Finally we’re at the step where we install the Python module we need.  This Python module (like the others earlier) is only installed into the venv directory structure.

  - name: "Install github3.py module"
    pip:
      name:
        - github3.py
      virtualenv: "/tmp/test01.venv"

Step 6 – Put the new venv with the Python module to work

The key at this step is the “vars:” section that tells the Ansible execution environment to use the “python” binary found in the “venv” on the remote system, “/tmp/test01.venv/bin/python” in this case.

  - name: "Download latest relase of ISOwithKS"
    # https://github.com/dglinder/ISOwithKS.git
    vars:
      ansible_python_interpreter: "/tmp/test01.venv/bin/python"
    github_release:
      user: "dglinder"
      repo: "ISOwithKS"
      action: latest_release

PLEASE NOTE: The “github_release:” example above does NOT work, due to something unrelated to the venv we created.

How does this work?

When the playbook runs, it connects to all of the clients and makes sure the “python2-pip” and “python-virtualenv” packages are installed.  It then builds the bare virtual environment in “/tmp/test01.venv/”, populates that venv with additional modules, and installs the Python modules necessary to execute the Ansible module.  The Ansible module is executed using the “python” executable in the newly built venv.

Note that ALL of these steps are performed on the Ansible clients; no changes are made to the Ansible server.  In testing, the initial execution of these steps took about 40-50 seconds to get to the final step – most of that time was spent downloading packages from the Pip repository (on the Internet).  Subsequent runs that were able to re-use the venv directory took 20-25 seconds to get to the same point.

Caveats

One big shortcoming of this process is that the Ansible clients need Internet access to download the packages.  If the clients are shielded from the Internet, it may be necessary to set up a proxy server they can use (if permitted).

It might be necessary to perform the venv build on a single server with Internet access, then replicate that venv directory structure to each of the clients.  (These workarounds have not been validated, so test and report back any success or failures.)
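
For example, the replication approach might look something like this (untested, as noted above; it assumes each client can be reached over ssh and that the venv lives at the same absolute path everywhere, since venvs hard-code their paths):

  # On a host with Internet access: build the venv and install the module
  virtualenv /tmp/test01.venv
  /tmp/test01.venv/bin/pip install github3.py
  # Copy the finished venv to each client at the identical path
  rsync -a /tmp/test01.venv/ client01:/tmp/test01.venv/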

Docker on Windows Subsystem for Linux to play with RedHat 8

Ok, so this is kind of long but neat too!

A co-worker asked about using a Docker image for a project he’s working on, and I suggested that he use the RedHat 7/8 based “Universal Base Image” that was announced at Summit. (Our company has a large installed base of RedHat, so there is a big advantage in being able to tap into that internal knowledge.)

–> https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image

If you have a machine with Docker setup, then doing a pull of “registry.access.redhat.com/ubi8/ubi:latest” will pull down the RHEL-8 version.

–> $ docker run --rm -it registry.access.redhat.com/ubi8/ubi:latest /bin/bash

“But I don’t have a Docker system, I only have Windows 10!” No fear, you can install Docker on Windows:

–> https://docs.docker.com/docker-for-windows/install/

From there you can kick off Docker from PowerShell or the command prompt with the exact same command as shown above.

“But I want to do this in a Linux environment on my Windows workstation!”  Use the “Windows Subsystem for Linux” feature of Windows 10:

–> https://medium.com/@sebagomez/installing-the-docker-client-on-ubuntus-windows-subsystem-for-linux-612b392a44c4

Here’s a screen shot of a RHEL-8 container running under WSL showing that “yum install …” works as expected:

And here it is running under PowerShell:
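
In plain text, both sessions boil down to something like this (“httpd” is just an arbitrary example package):

  $ docker run --rm -it registry.access.redhat.com/ubi8/ubi:latest /bin/bash
  [root@0a1b2c3d4e5f /]# yum install -y httpd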

When is a disk space problem not a disk space problem?

A co-worker set up an Ansible playbook to update some packages but it kept erroring out. The error that Ansible reported from “yum” was “No space left on device”. He had jumped onto the system and saw that the partition had plenty of space left, so he asked if I could look into it.

I got on and confirmed that when I ran a simple “yum update” it showed this:

[root@host ~]# echo n | yum update
Loaded plugins: product-id, rhnplugin, search-disabled-repos, security, subscription-manager
[Errno 28] No space left on device: '/var/run/rhsm/cert.pid'
This system is receiving updates from RHN Classic or RHN Satellite.
Could not create lock at /var/run/yum.pid: [Errno 28] No space left on device: '/var/run/yum.pid'

Hmm, still no disk space. Yet the “df /var” output looks good:

[root@host ~]# df /var
Filesystem           1K-blocks   Used Available Use% Mounted on
/dev/mapper/rootvg-varlv
                       2514736 914948   1468716  39% /var

Suspecting other resource issues, I checked the inode availability using “df -i”:

[root@host ~]# df -i /var
Filesystem           Inodes  IUsed IFree IUse% Mounted on
/dev/mapper/rootvg-varlv
                     163840 163840     0  100% /var

Aha! No inodes left. I’ll let you use your favorite search engine to look up the details, but an easy way to think of “inodes” is as the first few pages of a book dedicated to being the “table of contents.” If you have a book with a few chapters, you only need a single page for the table of contents (the inodes). If you have a book with lots of chapters and sub-chapters, you might need a lot of pages (more inodes). By default, Unix systems use a formula to decide how much of the filesystem to dedicate to “inodes” and how much is left for actual data storage. Usually this is fine for most systems.

To find the culprits, we want to look for the directories holding the 163K files that chewed up the inodes:

for i in /var/*; do echo $i; find $i |wc -l; done

This pointed to the “/var/spool/app01/” directory – it has over 160K small files.  The owner of the system was able to clean up some old files there and the “yum update” worked as expected.
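
As an aside, on systems where the culprit isn’t as obvious, sorting the per-directory counts makes the worst offenders float to the top (a minor refinement of the loop above):

  for i in /var/*; do echo "$(find "$i" | wc -l) $i"; done | sort -rn | head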

It’s possible to override the inode settings when the filesystem is formatted, so if you know about this ahead of time you can plan for it. If you run into it after the fact, the usual resolution is to back up the data, reformat the filesystem with more inodes allocated, then restore from backup.
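
With the ext family of filesystems, for example, the inode count is fixed at mkfs time, so the override looks something like this (a sketch; the device and numbers are illustrative):

  # Ask for an explicit number of inodes ...
  mkfs.ext4 -N 400000 /dev/mapper/rootvg-varlv
  # ... or lower the bytes-per-inode ratio to get more of them
  mkfs.ext4 -i 8192 /dev/mapper/rootvg-varlv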

SELinux and NFS $HOME directories

Recently we re-installed a common server with RHEL-7, and that went well.  But after a couple of days I noticed that I was unable to log in with my personal ssh key as I had before. It was a minor annoyance and I didn’t pursue it … until today.

It turns out that the /home/ directory on this system is an NFS mount, and in RHEL-7 we have set SELinux to default to enforcing.  There is an SELinux boolean flag, “use_nfs_home_dirs”, that needed to be set to “1” (true).  Running “setsebool -P use_nfs_home_dirs 1” on this system was the fix, and now we can resume logging in with SSH keys instead of typing in a password each time.
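
For anyone hitting the same symptom, the check and the fix are each one command (run as root; the -P flag makes the change persist across reboots):

  getsebool use_nfs_home_dirs       # reports "use_nfs_home_dirs --> off" when this is the problem
  setsebool -P use_nfs_home_dirs 1  # turn it on and persist it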

Some were reluctant to fix this since they always typed in their password. While a password typed over an SSH login connection is encrypted, it does present the possibility that your password could be captured by a compromised endpoint. Plus, we are trying to use longer passwords, so typing one in multiple times per day was frustrating and slowed workflow.  Using SSH keys eliminates this risk and enables other features such as scheduled/scripted command execution and file transfers.

Delete yourself from Medium

I’ve been on the Internet for years. The openness of the whole environment was refreshing. Want to find out something obscure or highly technical? You could head over to a university website and find an article, or over to a vendor’s site to get details on whatever you were looking for.

Then came web-rings and blogs. A webring was a simple “next page” link on a site that would take users from one person’s page to another; usually the pages had a common theme (a hobby, an interest, etc.). Later there were blogs (like this WordPress site) that were more dynamic. You could write multiple pages on varying topics, or you could work with others to share access and create a source of information about your topics.

This wasn’t free, but many of us kept our sites up out of love for the art we were discussing, or out of the feeling of giving back to the wider world. When sites got too big to support on one person’s budget, there were advertisers who would trade some of the blog’s page real estate to host ads in return for a small bit of money back to the owner of the site. And some sites turned to user-supported options, where people who paid a small periodic fee could in turn get access to articles earlier, or to in-depth articles that weren’t public. Many newspapers have turned to this – the general public gets the first paragraph of a story, but subscribers get the entire article and additional features.

But over time, the growth of the web as a “social medium” platform took root. After a while, the need to drive more and more eyeballs to a website led to a more “closed off” approach. I’m sure there are many more out there, but the one that finally got to me and made me say “enough is enough” is the website Medium.com.

They are a hosted blogging site, and that’s nice. They also have good search-engine-optimization features, so a well-written article gets found easily on Google and other search engines. Many times when I’m searching for additional information on a news topic, I will come across something hosted on a Medium.com page and click the link. But you only get a threshold of free posts per month; after that you have to wait until the next month, or sign up for a monthly subscription to access more articles.

All that is good, and I will not begrudge them collecting a fee to pay the designers and staff who keep the website running smoothly, as well as for the curation of articles they perform. But I can’t justify spending another $5 a month on yet another blogging site.

And something about the whole “pay to see anything” mentality seems antithetical to what really makes the Internet ‘work’, and honestly to the groundwork that permitted sites such as Medium, Facebook, Google, Amazon, etc. to thrive. It feels like all the good works that were poured into the initial Internet (open networking standards, operating systems, email, web servers and browsers) and kept free all this time are being clear-cut by these new digital locusts.

But they aren’t listening. Instead they are using the subscription fees to lock more and more content behind their closed doors. If this continues, we’ll have an entire Internet made up of toll roads and not the wide-open digital universe we have today.

If you agree, please take some time and remove yourself from the Medium website. It’s very easy – under your profile, the very bottom option is “delete from Medium”. I don’t need their constant barrage of articles that don’t give back to the greater good, at least not without a fee and the tracking that serves up my time and attention to their article writers for another piece of fluff.

Do it – it felt good!

The humble check-list

Driving home last night I listened to the next episode of the NPR “Hidden Brain” podcast, titled “Check Yourself”. The topic was the humble “checklist” that we’ve all made but never given much thought to.

Two parts of the story surprised me:

  • They came to be a requirement in the airline industry after the crash of a new and highly complex aircraft in 1935.
  • The very recent addition of checklists to hospital procedures.

Link to text of this podcast: https://www.npr.org/2018/08/27/642310810/you-2-0-check-yourself

General podcast link with option to listen to MP3 in browser or download: https://www.npr.org/podcasts/510308/hidden-brain

C’mon RedHat!

C’mon RedHat – give us an MSDN-style offering for home labs and training.

I recently signed up for the RedHat Developer Program and have set up my lab system with a fully updated RedHat Enterprise Linux 7 operating system with all of the bells-and-whistles the OS provides.

The Developer Program not only gives you a fully licensed RHEL system, it also provides access to JBoss, and the RedHat Container Development Kit which contains a set of development tools and additional resources (Python, PHP, Ruby, OpenJDK, etc).  All great stuff, especially for developers looking to hone their OpenSource development practices.

Unfortunately, for those of us on the systems side of the world (not hard-core developers) the package omits a couple of great RedHat products: Cloudforms and RedHat Virtualization.

I know that I could attempt to knit together the upstream versions of each of these (ManageIQ and oVirt), but there is a reason Cloudforms and RHV exist: to save us from the complexity of configuring all of the components together.  As a learning exercise, the effort of researching numerous blog posts and reams of documentation for each project (and their pre-requisites) might be worthwhile, but for someone like me who is interested in learning more about Cloudforms/ManageIQ itself (automations, hooking into a lab VM environment, etc.) it skews severely away from the objective I’m trying to educate myself on.

Microsoft has their MSDN – Microsoft Developer Network – which, for a nominal fee (at the low end), provides licensing for Windows OS installations (five licenses, many versions, both standard and Server), SQL, Exchange, SharePoint, Office and Office365, Azure, etc.  Windows Server has Hyper-V as an option – its answer to RHV/oVirt.  (I don’t believe they have a Cloudforms/ManageIQ alternative, though one might exist.)

I understand that MSDN is not free ($0.00 expense) like the RedHat Developer Program is, so it’s not a perfect comparison, but Microsoft has a range of offerings and price points.  If a small ($99/year) cost would expand the RedHat option to include Cloudforms, RHV, and multiple server subscriptions (not just the single RHEL instance), I could greatly expand my experience with the options RedHat provides.

So RedHat, what do you say?  Expand the program for those of us who want to learn about the full breadth of RedHat products!