Applying design patterns to write maintainable Ansible

Writing Ansible code typically involves a procedural mindset. However, as the codebase expands, the lack of adequate abstractions can slow down modifications. Drawing from design pattern lessons can enhance the codebase’s resilience and adaptability, allowing for quicker responses to business requirements. In this article, I’ll discuss how adopting the principles of three design patterns has improved our Ansible codebase, accommodating an increasing diversity of hardware, operating systems, network configurations, and external services.
First, let’s review three design patterns for better understanding and application.

Strategy Pattern

The Strategy Pattern enables the dynamic selection of algorithms at runtime. For example, consider reaching the fourth floor of a building with both staircases and an elevator. This pattern offers the flexibility to choose the elevator on days when your legs are sore, or opt for the stairs to get some cardio in if you’ve skipped the gym recently.

Dependency Injection

Dependency Injection is a technique where dependencies are provided (injected) at runtime rather than being hardcoded within the component itself. This promotes loose coupling, making the software easier to extend, maintain, and test. A relatable example is home renovation; instead of directly managing individual contractors for electrical, plumbing, and carpentry work, you entrust a general contractor with the task. This contractor then coordinates all the necessary resources, akin to how dependency injection manages component dependencies.

Dependency Inversion

Dependency Inversion emphasizes that high-level modules should not depend on low-level modules, but both should rely on abstractions. Moreover, abstractions should not be dependent on details, but rather, details should depend on abstractions. To illustrate, consider the electrical system in a house: the power outlets or the appliances you plug in do not need to be modified if you change your electricity provider. The outlets are designed to a universal standard (an abstraction), not to the specifics of any single provider.

Applying Design Pattern Lessons in Ansible

Let’s envision a scenario where we’re tasked with provisioning a web development stack. Initially, we create a straightforward Ansible playbook to install Apache and MySQL:

- name: Setup a webserver
  hosts: localhost
  tasks:
    - name: Install Apache
      ansible.builtin.debug:
        msg: "Apache will be installed"
      tags: apache

    - name: Install MySQL
      ansible.builtin.debug:
        msg: "MySQL will be installed"
      tags: mysql

Later, a request arrives to support a project using a MERN stack, necessitating the setup of MongoDB and Nginx. One approach could be to create an additional playbook:

- name: Setup a webserver
  hosts: localhost
  tasks:
    - name: Install Apache
      ansible.builtin.debug:
        msg: "Apache will be installed"
      when: web_server_type == "apache"
      tags: apache
      # Additional task details...

    - name: Install Nginx
      ansible.builtin.debug:
        msg: "Nginx will be installed"
      when: web_server_type == "nginx"
      tags: nginx

However, as projects evolve to support multiple operating systems, data centers, software versions across various environments, and the management of numerous roles, it becomes clear that our approach needs refinement.

Refactoring with Design Patterns in Mind

Before we proceed with refactoring, let’s consider how design pattern lessons can be applied in Ansible:

  • Reduce Specificity: Instead of relying on detailed checks, aim to build abstractions that encapsulate the variability.
  • Depend on Abstractions: Ensure that abstractions do not hinge on specific details, but rather, that details derive from these abstractions.
  • Runtime Flexibility: Allow for the selection of specific implementations at runtime to accommodate varying requirements.
  • Externalize Dependencies: Move dependency management from tasks or roles to a higher level, utilizing variables for greater control and flexibility.
  • Component Swappability: Enable easy replacement of components with alternatives, minimizing the need for extensive refactoring.

Leveraging Ansible Constructs for Design Patterns

Let’s delve into how Ansible’s features support the application of design pattern principles, making our automation solutions more adaptable, maintainable, and scalable.

Ansible Inventory

Ansible Inventory enables the organization of your infrastructure into logical groups and distributes configuration data hierarchically through group or host variables. This structure allows for precise control without the need to specify conditions for each usage explicitly.

Consider the following inventory structure as an example:

all:
  children:
    lamp_stack:
      hosts:
        lamp_server_1:
          ansible_user: ubuntu
          ansible_ssh_private_key_file: /path/to/ssh/key
    mean_stack:
      hosts:
        mean_server_1:
          ansible_user: ubuntu
          ansible_ssh_private_key_file: /path/to/ssh/key
        mean_server_2:
          ansible_user: ubuntu
          ansible_ssh_private_key_file: /path/to/ssh/key

For each group, we define a corresponding variable file. Note that the variable names are consistent across different implementations, promoting abstraction and reusability.

# mean_stack.yml
web_server_type: nginx
web_server_version: 9.1
db_server_type: mongodb
db_server_version: 11

# lamp_stack.yml
web_server_type: apache
web_server_version: 2.1
db_server_type: mysql
db_server_version: 9.1

By using the ansible-inventory command, we can observe how Ansible parses and merges these variables, providing a clear, unified view of the configuration for each host within the specified groups:

(venv) ➜  homelab ansible-inventory -i inventory/dev --list -y
all:
  children:
    lamp_stack:
      hosts:
        lamp_server_1:
          ansible_ssh_private_key_file: /path/to/ssh/key
          ansible_user: ubuntu
          db_server_type: mysql
          db_server_version: 9.1
          web_server_type: apache
          web_server_version: 2.1
    mean_stack:
      hosts:
        mean_server_1:
          ansible_ssh_private_key_file: /path/to/ssh/key
          ansible_user: ubuntu
          db_server_type: mongodb
          db_server_version: 11
          web_server_type: nginx
          web_server_version: 9.1
        mean_server_2:
          ansible_ssh_private_key_file: /path/to/ssh/key
          ansible_user: ubuntu
          db_server_type: mongodb
          db_server_version: 11
          web_server_type: nginx
          web_server_version: 9.1

The Limit Flag

The limit flag (-l) of the ansible-playbook command is an effective way to specify which host groups a playbook run should target. In my view, this represents a shift of control from the code to the operator, streamlining the execution process. It removes the need for additional conditional statements such as when within the code, instead letting the data defined in the inventory dictate behavior.

Here’s an example of using the -l flag to target a specific high-level group:

ansible-playbook -i inventories/homelab/dev -l lamp_stack deploy_stack.yml
ansible-playbook -i inventories/homelab/prod -l lamp_stack deploy_stack.yml

ansible-playbook -i inventories/homelab/dev -l mean_stack deploy_stack.yml
ansible-playbook -i inventories/homelab/prod -l mean_stack deploy_stack.yml

Note that we are applying the same playbook each time; the limit flag and the inventory control which set of variables Ansible picks and passes to the playbook and roles.

include_tasks and include_role

The include_tasks directive in Ansible allows for the segmentation of playbooks into smaller, more focused components, facilitating the separation of concerns. Similarly, include_role enables the construction of higher-level abstractions.

Consider the following example of a deploy_stack.yml playbook:

- name: deploy stack
  hosts: all
  roles:
    - role: webserver
    - role: dbserver

This playbook is designed to be generic, capable of deploying a stack without specifying the particular technologies—such as which database or web server to use. The selection of specific technologies is driven by the -l limit flag and the corresponding data in the inventory, which determines the applicable variables.

For instance, we can define a high-level webserver role that remains agnostic of the specific web server being implemented. The dbserver role follows a similar pattern. Below is an example where the webserver role dynamically includes a specific web server setup based on the web_server_type variable:

# roles/webserver/tasks/main.yml
- name: Include web server setup role
  ansible.builtin.include_role:
    name: "{{ web_server_type }}"
  when: web_server_type in ['nginx', 'apache']
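The dbserver role can follow the same shape. Here is a minimal sketch, assuming concrete roles named mysql and mongodb exist to match the db_server_type values defined in the inventory:

```yaml
# roles/dbserver/tasks/main.yml (hypothetical sketch)
- name: Include database server setup role
  ansible.builtin.include_role:
    name: "{{ db_server_type }}"
  when: db_server_type in ['mysql', 'mongodb']
```

The guard keeps a typo in the inventory from pulling in an unintended role, while the abstraction itself stays ignorant of any particular database.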

Moving on to a concrete implementation, let’s examine an nginx role. The roles/nginx/tasks/main.yml file might contain tasks like the following, demonstrating the role’s specific actions and the inclusion of additional tasks:

# roles/nginx/tasks/main.yml
- name: Task in nginx role
  ansible.builtin.debug:
    msg: nginx will be installed
  tags: nginx

- name: A group of tasks separated by duty
  ansible.builtin.include_tasks: demo.yml
  tags: nginx

# roles/nginx/tasks/demo.yml
- name: a task in a playbook
  ansible.builtin.debug:
    msg: included to help installation of nginx

This structure allows for modular playbook design, where roles and tasks can be dynamically included based on the deployment’s requirements, enhancing flexibility and maintainability.

Putting it all together

├── bin
│   └──
├── deploy_stack.yml
├── inventory
│   └── dev
│       ├── group_vars
│       │   ├── lamp_stack
│       │   └── mean_stack
│       ├── host_vars
│       └── hosts
└── roles
    ├── apache
    │   └── tasks
    │       └── main.yml
    ├── nginx
    │   └── tasks
    │       ├── demo.yml
    │       └── main.yml
    └── webserver
        └── tasks
            └── main.yml

13 directories, 9 files

(venv) ➜  homelab ansible-playbook -i inventory/dev -l mean_stack deploy_stack.yml

PLAY [setup a webserver] ***************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************************
ok: [mean_server_2]
ok: [mean_server_1]

TASK [Include web server setup role] ***************************************************************************************

TASK [nginx : Task in nginx role] ******************************************************************************************
ok: [mean_server_1] => {
    "msg": "nginx will be installed"
}
ok: [mean_server_2] => {
    "msg": "nginx will be installed"
}

TASK [nginx : include_tasks] ***********************************************************************************************
included: /Users/t/projects/ansible/homelab/roles/nginx/tasks/demo.yml for mean_server_1, mean_server_2

TASK [nginx : a task in a playbook] ****************************************************************************************
ok: [mean_server_1] => {
    "msg": "included to help installation of nginx"
ok: [mean_server_2] => {
    "msg": "included to help installation of nginx"

PLAY RECAP *****************************************************************************************************************
mean_server_1              : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
mean_server_2              : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

(venv) ➜  homelab ansible-playbook -i inventory/dev -l lamp_stack deploy_stack.yml

PLAY [setup a webserver] ***************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************************
ok: [lamp_server_1]

TASK [Include web server setup role] ***************************************************************************************

TASK [apache : Task in apache role] ****************************************************************************************
ok: [lamp_server_1] => {
    "msg": "apache will be installed"
}

PLAY RECAP *****************************************************************************************************************
lamp_server_1              : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 

Honorable mention

--tags and --skip-tags are two flexible options for selecting tasks across your roles and playbooks from the ansible-playbook command. A practice that has proven useful is to tag every task in a role with the name of the role. The ansible-playbook command does not let you run a role ad hoc, but if the tasks in your role are tagged with the role name, you can run multiple roles by providing a list of tags.

ansible-playbook -i inventory/dev --tags nginx,mongodb deploy_stack.yml

Concluding remarks

Design patterns are language neutral, and the lessons we learn from them can be useful even beyond object-oriented design paradigms. Although Ansible is a procedural configuration management tool, being aware of those lessons helps us write cleaner, maintainable code that can be changed easily in response to changing business needs. In this article I reviewed some of the lessons I have learned while trying to adopt the spirit of a few useful design patterns in Ansible code. While the example is a bit contrived, I hope it proves useful. Leave a comment if there are other patterns you have come across that make it easier to write cleaner Ansible.

Three suggestions for daily git workflow

In our quest for productivity, it’s easy to be lured by the allure of sophisticated tooling. However, complexity often comes at a cost, sometimes sidelining our actual projects for the sake of toolchain tweaks. That’s why I try to keep my setup minimalistic, aiming for the sweet spot where simplicity meets functionality. In this spirit, I want to share three adjustments I recently integrated into my workflow, offering a blend of improved practices without much overhead.

Start with the Commit Messages

Let’s face it, a bad commit message can be a real downer, especially when it’s your own! It’s crucial not just to document what was changed but to capture the why behind it. Despite the abundance of advice on crafting meaningful commit messages, many of us still fall short. Why? It often boils down to timing – we start writing the commit messages too late in the game.

What if I did the opposite:

  1. Draft Beforehand: Start by drafting your commit reason in a file (like CHANGELOG or .git_commit) using your favorite IDE, not a cramped text editor.
  2. Keep It Private: Add this file to your .gitignore to ensure it stays out of version control.
  3. Code With Purpose: With your intentions clearly outlined, proceed with your changes, then add your files.
  4. Commit With Clarity: Use git commit -F CHANGELOG to pull your polished message into the commit, enhancing both documentation and your focus on the task at hand.

This method not only improves your commit messages but also primes your mindset for the changes ahead.
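As a concrete sketch of the steps above (the file names are only the suggested ones; adjust to taste):

```shell
# 1. Draft the "why" before touching code, and keep the draft out of version control.
echo "CHANGELOG" >> .gitignore
cat > CHANGELOG <<'EOF'
Reset upload backoff after each successful chunk

The counter never reset on success, so long sessions eventually stalled.
EOF

# 2. Make the actual code changes, then stage them.
#    (.gitignore keeps CHANGELOG itself out of the staging area.)
git add -A

# 3. The polished draft becomes the commit message.
git commit -F CHANGELOG
```

The first line of the draft becomes the commit subject, and the rest becomes the body, so the usual subject-plus-body conventions carry over unchanged.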

Streamlining Commits with Git Alias

It is unlikely you can get your changes working on the first go. Your linting, testing, etc. will point out your gaps. It is also unlikely your coworkers will appreciate a review request with multiple commits in it saying “bug fixes”. And then, if we forget to squash the commits before merging… there goes the project’s git history.

To simplify, consider git commit --amend. It has the useful --no-edit and -a flags to help tidy up your follow-up edits beyond the first commit. However, to keep your remote in sync, you need to force push your changes. Summing it up, every effort to fix your change is followed by

git commit -F CHANGELOG -a --amend --no-edit
git push origin BRANCH -f

This is where the git alias comes in. Run the following command

git config --global alias.cp '!f(){ \
    git commit -F CHANGELOG -a --amend --no-edit && \
    branch=$(git rev-parse --abbrev-ref HEAD) && \
    git push --force-with-lease origin "$branch":"$branch"; \
}; f'

This gives us a new git command, or alias: git cp, short for commit and push. A little reduction of friction between your iteration cycles that keeps your pull requests tidy.
P.S. A pitfall to avoid: if you have added a new file in the middle of your change, you’ll need to git add it yourself, since -a only stages changes to already-tracked files. But hopefully your command prompt is configured to show the git branch and status, so you catch that real quick.

➜  cryptogpt git:(main) ✗ git status
On branch main
Your branch is behind 'origin/main' by 13 commits, and can be fast-forwarded.
  (use "git pull" to update your local branch)

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
	modified:   requirements.txt

Untracked files:
  (use "git add <file>..." to include in what will be committed)

no changes added to commit (use "git add" and/or "git commit -a")
➜  cryptogpt git:(main) ✗ 

The ✗ tells me that I have uncommitted changes. Most popular prompt setups (the one above is oh-my-zsh) will do this right out of the box for you.

Git pre-commit hooks

And lastly, we have everyone’s favorite: pre-commit hooks. The pre-commit framework allows easy management of your commit hooks, has great documentation, and offers hooks to satisfy the needs of a polyglot code base. It is a great way to minimize noise in code reviews and standardize the whole team on best practices. Sometimes you have needs that are custom to your projects, and pre-commit makes it really easy to introduce and distribute such custom hooks.

Custom hook example – Ansible Inventory generator

Ansible inventories provide a great way of abstracting your implementations in a manner similar to dependency injection. They let us organize configuration data from higher-level groups of servers down to more host-specific configurations. This eliminates unnecessary when conditions (equivalent to if-else checks) and encourages abstractions that are resilient to changing business needs.
However, this distributes variable values across multiple files (group_vars/, host_vars/), making it difficult to get a complete view of the configuration values for each server until runtime. If you are deploying to a big fleet, incorrect values can get very costly. A shift-left solution here is to use the ansible-inventory command to generate all the values for each server in your inventory. Here is an example command that dumps the values to a file that can be reviewed whenever you refactor the inventory variables.

### contents of bin/ 
ansible-inventory -i inventories/dev --list -y 2>&1 | sed 1d > dev_inventory

### example output
            server-01: &id001
              variable:
              - a
              - b
            server-02: *id001

Checking the file into the git repo helps set a base level which you can compare anytime the variables are moved around. This is where we bring in a custom pre-commit hook.

To call the script at the pre-commit stage, add a section to your .pre-commit-config.yaml. This will regenerate and overwrite the dev_inventory file every time we try to commit.

 - repo: local
   hooks:
     - id: inventory
       name: Ansible inventory generation
       entry: bin/
       language: system
       pass_filenames: false

If the generated file differs from the checked-in one, git will see an unstaged change, preventing the commit. This gives you the opportunity to review the diff and get early feedback before executing the code. Shift left and catch the drift or bugs before they even reach CI.
If you want multiple teams or projects to share this, you can push the hook to a git repository and point the repo key to it.
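A shared version might then look like the following in each consuming project’s .pre-commit-config.yaml (the repository URL and tag here are placeholders):

```yaml
 - repo: https://example.com/your-org/ansible-pre-commit-hooks  # placeholder URL
   rev: v1.0.0  # pin a tag or commit for reproducibility
   hooks:
     - id: inventory
```

The hook’s id, entry, and language then live once in the shared repository’s .pre-commit-hooks.yaml instead of being copied into every project.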

Sometimes you need an escape hatch to just commit without evaluation; git commit --no-verify will bypass the hooks and let you commit.

Wrapping Up

Retooling can become a constant chase for productivity. Instead, an approach could be evidence based: asking “does this help us shift left, fail early, and fail fast?” is a powerful guiding principle for evaluating whether an investment of time will compound over time to help us ship software faster. I hope this helps someone, and that they leave behind what they have found useful in improving their git workflow. Here’s to continuous improvement and the joy of coding!

Using Cloudflare’s Free SSL Certificate

In the year 2024, there can be only one reason for not using SSL everywhere: laziness. Laziness has sabotaged the migration of my blog from an overpriced shared VPS to AWS. But this time, when the cert expired and Cloudflare stopped routing traffic, the SRE inside me had had enough of the downtime. While still in bed, I started clickops-ing this cert into an SSL Labs poster child.

I soon found out that since I do not use the hosting provider as my domain registrar, my site is not eligible for a proper free certificate; all they can offer is a self-signed certificate. Cloudflare’s Full (strict) SSL mode requires a valid SSL certificate on the origin server for end-to-end encryption. And since there is no root (admin) access on a shared VPS, you cannot install certbot and Let’s Encrypt yourself out of this.

Luckily, generating an Origin Certificate from Cloudflare is easy, even with clickops! Origin Certificates provide a validated path between Cloudflare and the origin server without the need for a publicly trusted certificate on the origin.

Step 1: Generating a Cloudflare Origin Certificate

  1. Log into your Cloudflare dashboard.
  2. Navigate to the SSL/TLS section and select the Origin Server tab.
  3. Click on Create Certificate. For most users, the default settings provided by Cloudflare will suffice. These include a wildcard certificate that covers your domain and any subdomains.
  4. When prompted, choose the validity period for your certificate. Cloudflare allows you to select a duration of up to 15 years, providing long-term coverage.
  5. Cloudflare will generate a private key and a certificate. Copy both, as you will need them for the next steps.

Step 2: Importing the Certificate through cPanel

  1. Log into your cPanel account on your hosting provider
  2. Scroll down to the Security section and click on SSL/TLS.
  3. Under the SSL/TLS section, find and click on Manage SSL sites under the Install and Manage SSL for your site (HTTPS) menu.
  4. Select the domain you’re installing the certificate for from the drop-down menu.
  5. In the fields provided, paste the certificate and private key that you copied from Cloudflare.
  6. Click on Install Certificate. This process installs the Cloudflare Origin Certificate on your InMotion Hosting server, enabling it to pass Cloudflare’s Full (strict) SSL verification.

Step 3: Verifying the Installation

  1. Once the installation is complete, it’s crucial to verify that everything is working as intended.
  2. You can use SSL verification tools like SSLLabs to test your site’s SSL status.
  3. Additionally, check your website’s loading behavior to ensure there are no SSL errors or warnings.


Finally, set your SSL/TLS mode on Cloudflare to Full (strict), and select TLS 1.3 as the minimum supported version for the Edge certificate.


After a couple of hours of clicking through cPanel and Cloudflare menus, I finally feel vindicated.

The main reason I wanted to write this down is, firstly, that I strongly detest remembering UI navigation, but sometimes clickops buys you time while you work on automating the process. The other reason is that I do not want a clickops post to remain at the top of my blog, so it will serve as a push for migrating off this fleecing VPS to the terraformed AWS nirvana.

Soul of the New Apps

The apps that dominate our daily lives have been smoothly sliding LLMs into our DMs. ChatGPT, Bard, Microsoft’s Copilots, WhatsApp’s Meta AI, Snapchat’s My AI, X Premium+’s Grok; meanwhile, Alexa and Siri are now the boomers in the family. It would not be an overstatement to say we are witnessing a major paradigm shift in the soul of our app-driven lifestyle, and in this post I want to document my thoughts, observations, and predictions on the changing landscape.

Conversation – The ultimate App UI

At the core of any human-computer interaction we have Input -> Compute -> Output. It has been a long road to reduce the friction at the boundaries of compute: making computers comprehend our intent and respond back meaningfully. First there were command line interfaces; then for decades we were stuck iterating on the graphical user interface. In the absence of natural language comprehension, these were the only tools to narrow down a user’s expressions for computation. Take a look at “The Mother of All Demos”, in which Douglas Engelbart demonstrated for the first time what a point-and-click interface would look like and ushered in the era of personal computing.

Fast forward to Steve Jobs’s 2007 demo of the first iPhone: a giant leap that showed us a way to touch and manipulate a computation with our fingers that was actually usable. It introduced a new way to express our intent to the computer, and we have been building on top of that for fifteen years.

Numerous attempts along the way to make the computer understand natural language paved the way for what came in November 2022. A year after ChatGPT, we now have a functioning talking computer. LLMs can now see, hear, and understand our natural language instructions. We no longer need to wrestle to make the results of computation comprehensible; we can get a natural language description of the compute at the right level of compression, reducing the cognitive load of interpretation.
Here’s a screenshot of the ChatGPT app on my phone, which would have been considered an impossible dream in 2010.

Non-conversational businesses like banks, brokerage firms, and utility companies are now rushing to add LLM-based chatbots to their sites.

Support chat windows on sites have been around for a long time, but they hardly worked, either handing us over to a human or just giving up with a bad user experience. With LLM-powered chatbots, we have entered a new era.

Jokes aside, my prediction is they will gradually move out of one corner of the screen and take over the whole UI, relieving users of the pain of clicking through navigation menus and buttons and filling in lengthy forms. All the GUI elements, which are really proxies for natural language, are going to melt away in the face of direct conversation.

Messaging apps that put conversation first, like Slack, Teams, Twitter, WhatsApp, and Meta’s Messenger, have a huge advantage in building new platforms for businesses and industries. Back in the day, businesses had only the physical platform on which to build brick-and-mortar shops. During the dot-com era, they moved to the internet when they realized the convenience to the customer. The next platforms were the App Store and Play Store, as businesses realized customers were carrying a smartphone in their pocket while the desktop gathered dust. In 2024, we stand at the cusp of the GPT Store. Businesses need to build RAG (Retrieval Augmented Generation) apps and ditch the click-driven UI for a conversation-first UI.
But RAG apps are just the beginning. The reason a human-to-human conversation is magical is not only that we can recall information, but that we can re-evaluate our positions and adapt to the changed state of the world. This takes me to the wilder part of my prediction: just-in-time reprogramming.

Reprogramming – Apps Adapting to the world

Software has already eaten the world. But the world keeps changing. To keep up with the changes, software needs to change. Today, we have teams of developers doing elaborate design, build, test, ship cycles to build complex software. Business analysts, developers, testers, product managers collectively try to understand, implement and evolve the software to meet the demands of the changing world.

ChatGPT’s Code Interpreter and Data Analysis have already shown us what just-in-time computing can look like. Stretch your imagination a little further, to where code generation, testing, and execution all get done in real time in response to your prompt. The fact that LLMs today can generate code at a speed acceptable for many real-time scenarios is the second big change in the soul of the machine. Niche business logic that earlier took months to code, test, and ship could get autogenerated, compiled, tested, and finally executed on the fly, based on a user prompt. Take a look at the End of Programming talk by Dr. Matt Welsh, which concludes that today the “(Large Language) Model is the computer”.

My prediction is that dynamic code generation and just-in-time execution, with continuous re-authoring of complex business logic based on user feedback, will become how apps function in the future. As the landscape goes through tumultuous disruption, the pace at which we write code and ship software today will no longer be sustainable. Current best practices say development, testing, etc. need to be shifted left. But as LLMs get better, we might see an absolute paradigm shift where the entire software development life cycle moves all the way to the right, toward the edge. The models will eat the whole software stack, not just code generation. The removal of humans from writing and maintaining code will push the software stack to evolve around quality, safety, and security. “Conversation first & LLM inside” becomes the new stack.

The Future of our App centric life

From Gutenberg’s printing press to the modern web and mobile app stores, pivotal technology changes have given rise to platforms on which the next chapters of human civilization were written. Unfortunately, in modern times, pivotal technological changes have only widened the economic gap between the rich and the poor. The promise of trickle-down economics remains a distant dream that never delivers, as gig economy workers get strangled servicing multiple apps. It is evident that success in the AI arms race is biased toward deep pockets, and the super-wealthy tech giants have all the unfair advantages. Our only hope is that, in the past, we have been successful in building open technologies that benefit the whole civilization as well. Take the Internet, which triumphed over the proprietary Information Superhighway that Microsoft wanted to establish in the 90s, or open source software like the LAMP stack that got the world to its digital adolescence. We need open standards, protocols, weights, regulations, and software for sustainable AI, so that the next generation of computing is not owned by a multibillion-dollar corporation but is a level playing field that rewards our unique perspectives and helps us progress as a species.

Audacious Aspirations of an AI Optimist

According to a report last month (May 2023) from Challenger, Gray & Christmas, Inc., 3,900 jobs were lost to AI. That makes it the seventh highest contributing factor for layoffs and 5% of the month’s total job cuts. This is also the first time AI has been cited as a contributing factor in job loss.
Doomsday predictions of AI eating all jobs dominate the mainstream media. Our news feeds are overflowing with predictions of societal collapse and existential crisis. There is no way to leapfrog the misery of job losses in the coming years. Nineteenth-century definitions of worker, job, and career have reached their expiry date.
But there is another side to the story. New definitions of jobs and careers for the next hundred years are being written today. And they spell an era of unprecedented economic boom beyond the temporary misery.

The last century

With every breakthrough in technology, we have seen old jobs and careers vanish, while new jobs and careers open doors of opportunity by orders of magnitude. When goods hand-crafted by artisans were replaced by those from the mills and factories, it brought thousands from rural areas to fast-growing cities. When internal combustion engines replaced horses, the coachman gave way to thousands of truck drivers who criss-cross the country every day.

With software eating the world, we have seen people in a variety of new jobs and careers that were unimaginable to their parents. Those with the economic affluence to afford a good education saw a path to flourish as knowledge workers. Those less fortunate became mechanical turks in the gig economy, serving the just-in-time needs of the affluent at narrow margins.

Unfortunately, during these years of progress our view of economic growth became centered on the success of big corporations. This twentieth-century model succeeded by leveraging centralized capital at the cost of human resources. Those in possession of capital enriched themselves, shaped the values of our society, and made participation and success a struggle for the less fortunate. Thus, despite these advances, the economic rift that bisects our society kept widening.

In the last decade, three new forces have emerged. They are on the verge of alignment to bring the biggest change our civilization has ever experienced.

1. Influencers

Influencers are independent individuals who, through their originality and creativity, are able to impact the outlook of thousands, sometimes millions. True fans in niche communities evaluate their content and crown the deserving creators. They understand how to leverage new technology and media, and have ditched the 9-to-5 grind of their parents’ era.
While this rise of individuals was promising, it did not loosen the grip of the big companies. These influencers are chained to the algorithms of the platforms where they publish their product or content. The algorithms control every aspect of a creator’s success and are written behind closed doors to maximize profit for the companies. The creators and their fans are trapped in a walled garden.

2. Decentralization

Advances in decentralized networks and payment technologies have made it possible to break free of the network effects of these platforms. These solutions can operate beyond the control of a single company or the censorship of a government.

Recent debacles in the cryptocurrency world have surely shattered confidence and led many to abandon ship. But beyond the hype cycle, the technological breakthroughs we have already achieved are here to stay. Once they mature to provide safety for financial assets and protection against censorship, they will give creators the freedom to focus on their craft and community.

3. Copilots

The third and final force is AI copilots. The sharper your knife, the finer the cut. With generative AI, we are in the early days of creating general-purpose compilers for building new tools.

The current generation of AI tools has given millions the power to rapidly remix the knowledge curated by humanity over the ages. These tools excel in visual, linguistic, or computational expression and can create something that shocks even the most proficient practitioner of the craft. Never before has such a technology been made accessible to humankind at such a scale.

As AI supercharges our tools, we will create higher-quality results in record time. Our personal AI copilots will evolve with us from every meeting we attend, every email we are cc’d on, every new book or article we read. They will act as a mentor and a career coach, keep our inboxes at zero, and maintain a prioritized list of where to focus to maximize our goals.

Humanoid robots managing manufacturing, self-driving distribution fleets delivering goods, a gigafactory a natural-language API call away – the grunt work will take a back seat. The focus will be on creative work.

We can imagine four stages of this creative work:

Stage 1: identifying a problem and imagining the solution for a community

Stage 2: prompt-engineering the implementation

Stage 3: reviewing, gathering feedback, and course-correcting until a steady state is shipped

Stage 4: maintaining steady-state cruise control

Employee to entrepreneur

In the future, a typical workday of an employee will probably look similar to that of today’s entrepreneur. They’ll act as the conductor orchestrating the autonomous agents by providing higher level decisions and strategies.

Employees thinking and acting like entrepreneurs will change everything. More and more people will find their niche community of true fans and realize their true potential outside the confines of a company. The promise of a lifelong job and career with the same company will no longer be the norm. Freelancing, contracting, and consulting will become the dominant models of how people earn their livelihood. Serial entrepreneurship will become the de facto career description in the coming days. The transformation from salary to equity will help bring more people out of the chasm of poverty.
Quality, originality, and creativity will be rewarded over winning the race to be the cheapest. People will be paid for managing the tension of establishing steady-state control over any problem facing disruptive forces. But the rate of disruption will only accelerate, setting off a flywheel effect of unprecedented economic boom.

Weakening of Big Corporations

The success of open source AI development shows how this dream can become a reality. Closed-source models have exhausted billions of dollars to define the state of the art. In the open source world, however, new models get published every day. Their active performance leaderboard is an existential threat to the big companies – they have no moat. Open source attracts motivated individuals who can learn, teach, build, change, and share without permission. That pace of innovation is impossible in the high-stakes, rigid environment of a big company. This development is backed by an organized global community of true fans, contributing human, computing, and financial resources to foundational research, implementation, and real-world problem solving. Such development will make it harder for traditional big companies to compete and sustain profits.

Open Regulations for AI

Great power in the hands of the irresponsible is a recipe for disaster.

Regulating AI and figuring out its real-world consequences is the challenge of our generation. This is why leaders from big companies and countries are holding emergency meetings. They are scrambling to come up with a playbook that will allow them to dictate the ecosystem.
But we cannot rely on them to call the shots with this powerful technology. We have seen too many examples of how society and nature have suffered when power is centralized behind closed doors. We have to demand that laws and regulations be written, reviewed, and modified in the open, driven by the community. Progress in blockchain technologies has laid the foundations for decentralized governance, policy making, and enforcement. Cryptocurrencies and smart contracts have shown that successful implementation is possible. Whether we create an economically just world will depend on our success in effectively democratizing AI regulation.

AI Optimist

If you have made it this far, I am grateful for your attention to this dream.

I hope the rise of individuals and the weakening of profit-driven mega-corporations will see the demise of other old societal structures that keep us from achieving our true potential. Decades down the line, we will find the rift between the rich and the poor has shrunk as we continue to move towards a more just society driven by service rather than greed. An era will finally begin in which policies are written to help the human species flourish on this planet and beyond, staying in harmony with nature.

Data pipeline for vacation photos

I take pictures when I am on vacation. Then I throw away 90% of them; some make the cut and end up on Instagram. Instagram is a great platform, but without an official API to upload images, it makes things tough for lazy amateurs who want to publish their content. There is a popular unofficial API, which I intend to give a try.

But even before I get there, I need a pipeline for getting the data off the memory card of my DSLR and finding the shots worthy of posting. I don’t do any editing whatsoever – and proudly tag everything with #nofilter. The real reason is that image editing is tough and time-consuming. I doubt anyone else has such a workflow, but the existing tooling frustrates me by forcing boring manual jobs – and that too on vacation.

The workflow

Typically when I am on vacation, I take pictures all day, and as soon as I reach the hotel I want to get the pictures off my camera, group them into keep, discard, and maybe buckets, upload them to the cloud, post them to Instagram, and finally have the memory card cleaned up for the next day. If I get about an hour to do all this, I’m lucky. The most time-consuming part of the workflow is looking at the images and deciding which bucket each belongs in. The rest is easy to automate, so I wanted to tackle that part of the workflow first.

This stage of sorting a few hundred images into keep, maybe, and discard buckets needed a tool more flexible than Photos on the Mac. Sometimes there are multiple shots of the same subject that need to be compared side by side.

After some digging, I found feh. It is a lightweight image browser that offers productivity and simplicity. Installing feh is fairly simple – if you are on a Mac, just use Homebrew:

brew install feh

Feh acts like any other simple fullscreen image browser:

feh -t # thumbnails 
feh -m # montage

Other useful keyboard shortcuts

/       # auto-zoom
Shift+> # rotate clockwise
Shift+< # rotate anti-clockwise

There are tons of other options, and good-enough mouse support as well.

Extending with custom scripts

However, the real power is unleashed when you bind arbitrary Unix commands to the number keys. For example:

mkdir keep maybe discard
feh --scale-down --auto-zoom --recursive --action "mv '%f' discard" --action1 "mv '%f' keep" --action2 "mv '%f' maybe" . &

Here is what is going on in the two commands above. First we create three directories. Next we bring up feh in the directory where we have copied the images from the memory card (., the current directory in this case) and use the left and right arrow keys to cycle through the images.

The recursive flag takes care of descending into subdirectories. The scale-down and auto-zoom flags handle sizing the images properly. The action flag lets you associate arbitrary Unix commands with the keys 0-9. And that is incredible!

In the example above, hitting the 0 key moves the current image to the discard directory. This is for two reasons – I am right-handed, and my workflow is to discard aggressively rather than keep. The keeps are fewer in number and easy to decide, so they are bound to 1. The maybes are time sinks, so I bind them to 2. I might do a few more passes on each folder before the keep bucket is finalized.
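As a quick sanity check between passes, the buckets can be tallied with plain shell; a minimal sketch (the directory names match the ones created above):

```shell
# count how many images landed in each bucket after a review pass
for d in keep maybe discard; do
    printf '%s: %s\n' "$d" "$(ls -1 "$d" 2>/dev/null | wc -l)"
done
```

If the keep count is still high after a pass, that is a hint another discard pass is worth the time.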

Taking it to the next level

But to take it to the next level, let’s bind our 1 (keep) key to an aws s3 mv command, so we can instantly start uploading images to S3 with one keystroke. Here’s the basic idea:

bucket=$(date "+%Y-%m-%d-%s")
aws s3 mb s3://${bucket} --region us-west-1   # mb creates the bucket itself, no key prefix
aws s3 mv '%f' s3://${bucket}/'%f' &          # used as a feh action; %f is feh's placeholder for the current file

Note the ampersand at the end of the command – it puts the upload command in the background. That way the upload is not blocking and you can keep going through the images.

This is reasonably stable – even if feh crashes in the middle of your workflow, the upload commands are queued up and continue in the background.

Here is what the final command looks like. You can put this in a script and add it to your path for quick access.

feh --scale-down --auto-zoom --recursive --action "aws s3 mv '%f' s3://${bucket}/'%f' &" --action1 "mv '%f' keep" --action2 "mv '%f' maybe" . &

This workflow is not for those who do a lot of editing of their pictures. Overall, feh is fast at loading images and provides a lot of extensibility.

Next Steps

The next step would be to hook a Lambda function up to the S3 upload event and have the unofficial Instagram API post the image to Instagram. One remaining step would be including the individual hashtags before the S3 upload. That way, memory card to Instagram can be reduced to just a few keystrokes.

Beyond that, I intend to move the feh part of the pipeline to a Raspberry Pi. I can plug the Raspberry Pi into the TV of the hotel I am staying at and cut my computer out of the loop. Here’s a short post I wrote up on setting up my Raspberry Pi with a TV. It will probably take a few weeks to get everything together. Till then, enjoy a very reticent feed from my Instagram.



A few pictures from my first Italy trip. It most certainly will not be the last. The mesmerizing beauty of the land, the layers of history at every corner, the warmth of the people, and the sumptuous delicacies make you fall in love with the country the moment you first set foot in it. Hope to be back soon!

ChiPy Python Mentorship Dinner March 2015

The Chicago Python Users Group mentorship program for 2015 is officially live! It is a three-month-long program in which we pair a new Pythonista with an experienced one to help them improve as developers. Encouraged by the success of last year, we decided to do it on a grander scale this time. Last night ChiPy and Computer Futures hosted a dinner for the mentors at Giordano’s Pizzeria to celebrate the kickoff – deep dish, Chicago style!

The Match Making:

Thanks to the brilliant work by the mentors and mentees from 2014, we got a massive response as soon as we opened the registration process this year. While the number of mentee applications grew rapidly, we were unable to get enough mentors and had to limit the mentee applications to 30. Of them, 8 were Python beginners, 5 were interested in web development, 13 in data science, and the rest in advanced Python. After some interwebs lobbying and some arm-twisting mafia tactics, we finally managed to get 19 mentees hooked up with their mentors.

Based on my previous experience pairing mentors and mentees, the relationship works out only if there is a common theme of interest between the two. To make the matching process easier, I focused on getting a full-text description of their backgrounds and end goals as well as their LinkedIn data. From what I heard last night from the mentors, the matches have clicked!

The Mentors’ Dinner:
As ChiPy organizers, we are incredibly grateful to these 19 mentors, who are devoting their time to help the Python community in Chicago. Last night’s dinner was a humble note of thanks to them. Set in the relaxed atmosphere of the pizzeria, stuffed with pizza and beer, it gave us an opportunity to talk and discuss how we can make the process more effective for both mentors and mentees.

Trading of ideas and skills:
The one-to-one relationship between mentor and mentee gives the mentee enough comfort to say, “I don’t get it, please help!” It takes away the fear of being judged, which is a problem in traditional classroom-style learning. But to be fair to the mentor, it is impossible for any one person to be a master of everything Python and beyond. That is why we need to trade ideas and skills. Last time, when one of the mentor/mentee pairs needed help designing an RDBMS schema, one of the other mentors stepped in and helped them complete it much faster. Facilitating such collaboration brings out the best resources in the community. Keeping this in mind, we have decided to use ChiPy’s discussion threads to keep track of the progress of our mentor and mentee pairs. Here is the first thread introducing what the mentors and mentees are working on.

Some other points that came out of last night’s discussion:

  • We were not able to find mentors for our advanced Python track. Based on the feedback, we decided to rebrand it as Python Performance Optimization for next time.
  • Each mentor/mentee pair will be creating their own curriculum. Having a centralized repository of those will make them reusable.
  • We should reach out to Python shops in Chicago for mentors. The benefit of this is far-reaching: if a company volunteers its experienced developers as mentors, it could serve as a free apprenticeship program and pave the way to recruiting interns, contractors, and full-time hires. Hat-tip to Catherine for this idea.

Lastly, I want to thank our sponsor, Computer Futures, for being such a gracious host. They are focused on helping Pythonistas find the best Python jobs out there. Thanks for seeing the value in what we are doing; I hope we can continue to work together to help the Python community in Chicago.

If you are interested in learning more about being a mentor or a mentee, feel free to reach out to me. Join ChiPy’s community to learn more about what’s next for the mentors and mentees.

Chicago Python User Group Mentorship Program

If you live in Chicago and have some interest in programming, you must have heard about the Chicago Python Users Group, or ChiPy. Founded by Brian Ray, it is one of the oldest tech groups in the city and a vibrant community that welcomes programmers of all skill levels. We meet on the second Thursday of every month at a new venue, with awesome talks, great food, and a lot of enthusiasm about our favorite programming language. Besides talks on various language features and libraries, we have had language shootouts (putting Python on the line against other languages), programming puzzle nights, etc.

ChiPy meetups are great for learning new things and meeting a lot of very smart people. Beginning this October, we are doing a one-on-one, three-month mentorship program. It’s completely free and totally driven by the community. By building these one-to-one relationships through the mentorship program, we are trying to build a stronger community of Pythonistas in Chicago.

We have kept it open on how the M&M pairs want to interact, but as an overall goal we wanted the mentors to help the mentees with the following:

1. Selection of a list of topics that is doable in this time frame (October 2014 – January 2015)
2. Help the mentee with resources (pair programming, tools, articles, books, etc.) when they are stuck
3. Encourage the mentee to do more hands-on coding and share their work publicly

It has been really amazing to see the level of enthusiasm among the M&M-s. I have been fortunate to play the role of matchmaker – looking into the background, level of expertise, topics of interest, and availability of all M&M-s and trying to find ideal pairs. I’ve been collecting data at every juncture so that we can improve the program in later iterations.

Here are some aggregated data points till now:

# of mentors signed up: 15
# of mentees new to programming: 2
# of mentees new to Python: 16
# of mentees with Advanced Python: 5
Total: 37

# of mentors with a mentee: 13
# of mentees new to programming with an assigned mentor: 1
# of mentees new to Python with an assigned mentor: 11
# of mentees with Advanced Python with an assigned mentor: 1
# of mentors for newbie mentees without an assignment: 2
# of mentees unreachable: 4
# of mentees new to programming without an assigned mentor: 1 (unreachable)
# of mentees new to Python without an assigned mentor: 2 (unreachable)
# of mentees with Advanced Python without an assigned mentor: 4 (1 unreachable, 3 no advanced mentors)

Other points:
– Data analysis is the most common area of interest.
– # of female developers: 6
– # of students: 2 (1 high-school, 1 grad student)

All M&M pairs are currently busy figuring out what they want to achieve in the next three months and preparing a schedule. The advanced mentees are forming a focused hack group to peer-coach on advanced topics.
We are incredibly grateful to the mentors for their time and to the mentees for the enthusiasm they have shown for the program. While this year’s mentoring program is completely full, if you are interested in getting mentored in Python, check back in December. Similarly, if you want to mentor someone with your Python knowledge, please let me know. If you have any tips you would like to share on mentoring or on being a smart mentee, please leave them in the comments – I’ll share them with the mentors and mentees. And lastly, please feel free to leave any suggestions on what I can do to make the program beneficial for everyone.

Data loss protection for source code

Scope of data loss in the SDLC
In a post-WikiLeaks age, software engineering companies should probably start sniffing their development artifacts to protect their customers’ interests. From the requirement analysis document to the source code and beyond, the different software artifacts contain information that clients will consider sensitive. The traditional development process has multiple points of potential data loss – external testing agencies, other software vendors, consulting agencies, etc. Most software companies have security experts and/or business analysts redacting sensitive information from documents written in natural language. Source code is a bit different, though.

A lot of companies do have people looking into the source code for trademark infringements, copyright statements that do not adhere to established patterns, and checking that previous copyright/credit notices are maintained where applicable. Black Duck and Coverity are nice tools to help you with that.

Ambitious goal

I am trying to do a study on data loss protection in source code – sensitive information and quasi-identifiers that might have seeped into the code in the form of comments, variable names, etc. The ambitious goal is to detect such leaks, automatically sanitize the source code (probably replacing all occurrences is enough), and retain code comprehensibility at the same time.

To formulate a convincing case study with motivating examples, I need to mine a considerable amount of code and requirement specifications. But no software company would actually give you access to such artifacts. Moreover, the (academic) people who would evaluate the study are also expected to lack such facilities for reproducibility. So we turn towards free/open source software. GitHub, Bitbucket, Google Code – huge archives of robust software written by the sharpest minds all over the globe. However, there are two significant issues with using FOSS for such a study.

Sensitive information in FOSS code?

Firstly, what can be confidential in open source code? The majority of FOSS projects develop and thrive outside corporate firewalls, without the need to hide anything. So we might be looking for the needle in the wrong haystack. However, if we can define WHAT sensitive information is, we can probably get around this.

There are commercial products like Identity Finder that detect information like Social Security Numbers (SSNs), credit/debit card information (CCNs), bank account information, or any custom pattern of sensitive data in documents. Some more regex foo should be good enough for detecting all such stuff …

for i in $(cat sensitive_terms_list.txt); do
        for j in $(ls "$SRC_DIR"); do
                grep -EHn --color=always "$i" "$SRC_DIR/$j"
        done
done
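For what it’s worth, grep can do the looping itself; a sketch of an equivalent one-liner, assuming the terms file holds one extended-regex pattern per line:

```shell
# -r recurses into subdirectories, -E treats patterns as extended
# regexes, -n prints line numbers, and -f reads one pattern per line
# from the terms file
grep -rEn -f sensitive_terms_list.txt "$SRC_DIR"
```

This also matches in files inside subdirectories, which the nested loop above misses.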

Documentation in FOSS

Secondly, the ‘release early, release often’ ethos of FOSS makes a structured software development model somewhat redundant. Who would want to write requirements docs and design docs when you just want to scratch the itch? The nearest thing in terms of design or specification documentation would be projects that have adopted an Agile model (say, Scrum) of development – in other words, a model that mandates extensive requirements documentation be drawn up, user stories and their ilk being a trivial example.

Still Looking
What are some famous free/open source projects with considerable documentation closely resembling a traditional development model (or models accepted in closed-source development)? I plan to build a catalog of such software projects so that it can serve as a reference for similar work involving traceability between source code and requirements.

Possible places to look into: (WIP)
* Repositories mentioned above

I would sincerely appreciate it if you left your thoughts, comments, or poison fangs in the comments section … 🙂