My code snippets

Setting up a wildcard certificate with dnsimple in Kubernetes

To set up a wildcard certificate with Let's Encrypt it's necessary to use a DNS01 challenge, as opposed to the simpler HTTP01 challenge. This becomes a… challenge! because it requires support from your DNS provider. Luckily, DNSimple has API support for this, and there is a webhook helm chart to get it up and running.

The cert-manager-webhook-dnsimple project lets you set this up automatically, which avoids any problems with an expired certificate since it will renew itself as long as the setup keeps working. Sadly, while configuring it I ran into a few issues myself; here are the details:

Ingress setup

I set up an ingress to generate the certificate (as opposed to the example in the webhook project):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/secure-backends: "true"
  labels:
    chart: "-"
    release: ""
    heritage: ""
spec:
  rules:
  - host: '*.emailpref.com'
    http:
      paths:
      - path: /
        backend:
          serviceName: wizzy
          servicePort: http
  tls:
  - hosts:
    - '*.emailpref.com'
    - emailpref.com
    secretName: pewpew-cert # < cert-manager will store the created certificate in this secret.

Certificate Generation

This automatically generated the certificate, certificate request and certificate challenges; inspecting them helped me debug the issue very well:

kubectl get cert
kubectl describe cert

kubectl get certificaterequests
kubectl describe certificaterequests

kubectl get challenges
kubectl describe challenges

At this point I got an important error: the TXT challenge record from DNSimple could not be read. These tools helped me greatly in debugging that:

letsdebug.net A website that checks whether your domain is set up properly to be verified by Let's Encrypt.

crt.sh Here you can check the certificates that have been requested for your domain.

Another guide that should help on debugging is here


Configuring DNSSEC properly did the trick; after that the letsdebug.net tool gave me the thumbs up and everything started working properly.

nice networking tools to remember

nslookup resolves the IPs of a given DNS entry. This is quite handy when you are modifying DNS records and want to check whether they have updated; combined with watch it is quite nice.

watch -n2 "nslookup mysite.example.com"

telnet is the classic way to check whether you can reach a service on a specific port. For example, to check if you can reach a redis machine:

telnet redis.example.com 6379

traceroute traces and measures hops across the internet. For example, here is a traceroute for www.google.com from Spain, which clearly hops over Google/Telefónica routers to reach the destination.

traceroute to www.google.com (, 64 hops max, 52 byte packets
 1 (  3.364 ms  2.850 ms  4.378 ms
 2  * * *
 3  * * *
 4  150.red-81-46-66.customer.static.ccgg.telefonica.net (  16.320 ms *
    158.red-81-46-66.customer.static.ccgg.telefonica.net (  7.479 ms
 5  * 17.red-81-46-0.customer.static.ccgg.telefonica.net (  9.564 ms *
 6 (  7.814 ms * *
 7  google-be4-grcmadno1.net.telefonicaglobalsolutions.com (  6.385 ms (  5.863 ms
    google-be4-grcmadno1.net.telefonicaglobalsolutions.com (  5.482 ms
 8 (  6.664 ms * *
 9 (  7.615 ms (  7.333 ms
    mad06s25-in-f132.1e100.net (  6.360 ms
shell linux networking 2019-11-03

Python count calls without mocking

Sometimes when you mock something in a unit test, you only do it because you want to verify it was called, but you don't want the object's behavior to be replaced in any way. This can be done with the wraps option of the Python mock library:

with patch.object(my_obj, 'func', wraps=my_obj.func) as func_mock:
    ...  # the real my_obj.func still runs; func_mock records the calls
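A self-contained sketch of the wraps pattern (the Calculator class here is made up for illustration):

```python
from unittest.mock import patch

class Calculator:
    def add(self, a, b):
        return a + b

calc = Calculator()
# wraps keeps the real implementation: the mock records the call,
# then delegates to the original bound method.
with patch.object(calc, "add", wraps=calc.add) as add_mock:
    result = calc.add(2, 3)

add_mock.assert_called_once_with(2, 3)
print(result)  # the real add() ran, so this is 5
```

After the with block, calc.add is the original method again, untouched.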

Rabbitmq cluster setup guideline

Here is a general guideline of how I set up a rabbitmq cluster to scale horizontally properly. Even if the need is to scale one specific queue, this will work.

To scale a specific queue I used the rabbitmq_consistent_hash_exchange plugin. It routes tasks based on a hash of the routing key; to make the distribution round-robin-like, I use a fresh UUID as the routing key for each task. For more information check out the documentation.
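To see why a fresh UUID per task gives a roughly even spread, here is a small simulation. Python's built-in hash stands in for the exchange's real consistent-hash function, so this is only an illustration of the idea, not the broker's algorithm:

```python
import uuid
from collections import Counter

queues = ["tasks.1", "tasks.2", "tasks.3", "tasks.4"]
counts = Counter()
for _ in range(10_000):
    routing_key = str(uuid.uuid4())             # one fresh UUID per task
    counts[queues[hash(routing_key) % 4]] += 1  # stand-in for the broker's hash

print(dict(counts))  # each queue ends up with roughly 2500 tasks
```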

With the issue of horizontal queue scaling solved, we can move on to setting up a cluster in rabbit. My main concerns were:

Here is how to do it via the rabbitmqadmin

sudo rabbitmqadmin declare exchange -V rabbit name=<exchange-name> type=x-consistent-hash -u <user> -p <pwd>
for i in $(seq 4); do sudo rabbitmqadmin declare queue -V rabbit name=<queue-name>.$i  -u <user> -p <pwd>; done
for i in $(seq 4); do sudo rabbitmqadmin declare binding -V rabbit source=<exchange-name> destination=<queue-name>.$i routing_key="1" -u <user> -p <pwd>; done

The Erlang cookie is required for nodes to be able to connect to each other. It needs to be set at /var/lib/rabbitmq/.erlang.cookie and for all users that may want to use the CLI.

Load balancer behind cluster

To be able to spread traffic across the nodes, a load balancer is necessary for the cluster. HAProxy or an AWS ELB is more than enough for the task.

Scalable queue Mirroring

If we mirror every queue to all nodes, we are not really scaling. To scale it up I set up an odd number of nodes (3, 5, 7…) and mirror each queue to exactly 2 nodes; this way we can actually scale the system without touching all nodes for mirrored queues. To set this up, we add a policy:

rabbitmqctl set_policy ha-two "^two\." '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'

Queue balancing between nodes

There is a plugin to balance queues across nodes automatically, but that is not my aim right now since it shouldn't be required while everything is stable. To move them manually I can simply add a policy:

rabbitmqctl set_policy --apply-to queues --priority 100 my-queue '^my-queue$' '{"ha-mode":"nodes", "ha-params":["rabbit@new-master-node"]}' 
# wait for queues to migrate
rabbitmqctl clear_policy my-queue

Node discovery

Each node needs to be able to contact the other nodes. This may sound obvious, but some work is needed. There are plugins to handle it, like the AWS plugin, but I like the DNS discovery option best: set RABBITMQ_USE_LONGNAME=true in the rabbitmq-env.conf file so all nodes are named rabbit@<fullhostname>; if the hostname resolves, each node will detect the others without issues.
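For illustration, a minimal rabbitmq-env.conf sketch (the example hostname in the comment is made up; the file usually lives at /etc/rabbitmq/rabbitmq-env.conf):

```shell
# use fully-qualified node names, e.g. rabbit@node1.rabbit.internal
RABBITMQ_USE_LONGNAME=true
```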

rabbitmq cluster 2019-05-08

how to optimize uwsgi for python

Load test your application as “real” as possible. I like to use siege for this but there are many tools for the job.

Monitor your server; to manage anything you need to measure things! For this I loved the uwsgitop tool, which gives very descriptive live data on the state of the server.


If you are able to set up an instrumentation tool like Datadog, Nagios or Prometheus, even better!

Set a somaxconn size that makes sense; a size of 1024 did it in my case. I wouldn't advise a huge number here unless you have a good reason: a fast service is king, and a huge backlog won't help with that.

To measure this during your tests, you can inspect the backlog in a top-like fashion:

watch -n1 "ss -l | grep <socket_name>"

This shows the current status of the somaxconn backlog.

Then I played with processes, threads, ugreen and async to find the best bang for my buck.
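A hedged uwsgi.ini sketch of the knobs discussed above (the values are just examples of the kind that worked for me, not recommendations):

```ini
[uwsgi]
# socket backlog; keep it <= net.core.somaxconn
listen = 1024
processes = 4
threads = 2
```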

python uwsgi profiling 2019-02-02

git merges - ours or theirs

There are ways to make some merge or rebase decisions simpler with git: take either our changes or their changes wholesale. This is not recommended unless you are really sure. Other times, your file is encrypted and you can't really handle diffs in it properly; for that case I like this approach:

  1. Copy the current decrypted file contents into a new file
  2. Rebase or merge the wanted branch
  3. Execute git checkout --ours <file>
  4. View the file (decrypting it) and diff it against the copy
  5. Resolve the differences

If you are very sure you just want what the other branch has, there are git strategy options now, for example git rebase --strategy-option=theirs. Note that during a rebase the meaning of ours and theirs is swapped relative to a merge.
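Here is a throwaway-repo sketch of the wholesale option in the merge direction (branch and file names are made up):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email you@example.com
git config user.name you
main=$(git symbolic-ref --short HEAD)

echo base > file.txt && git add file.txt && git commit -qm base
git checkout -qb other && echo theirs > file.txt && git commit -qam other
git checkout -q "$main" && echo ours > file.txt && git commit -qam ours

# -X theirs resolves every conflicting hunk by taking the other branch's side
git merge -q -X theirs other -m merge
cat file.txt   # now contains "theirs"
```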

git merge checkout 2018-09-06

Mock request errors

A common use case in unit tests is to test request exceptions. I admit this is a weird one, but it works very well:

from mock import Mock, patch
from requests.exceptions import HTTPError

response_mock = Mock(status_code=406, message='Revoked Token')
error_mock = HTTPError(response=response_mock)
with patch('accounts.models.Profile._refresh_credentials', side_effect=error_mock):
    # Do your test here that raises a request exception
python unit_test mock 2018-08-06

execute parallel commands in the shell

Replace all files in parallel

find . -type f -print0 | parallel -q0 perl -i -pe 's/FOO BAR/FUBAR/g'

It works similarly to xargs: each file found is sent through the pipe and picked up by parallel to be processed (the -0 flag matches find's null-separated -print0 output).

Also, sometimes you want to run commands with many arguments, for example, an ansible release

cat release
my-playbook1.yml -i my_inventory
my-playbook2.yml -i my_inventory

cat release | parallel --colsep ' ' ansible-playbook
# parallel execution happening here

If we don't add --colsep, parallel will treat each whole line as a single argument, which is not what we want here.

shell unix 2018-07-19

Get all your pip dependencies

There is a great tool to get pip package dependencies called pipdeptree.

To see the output for my whole requirements file I wrote this script:

cat requirements.txt | cut -d "=" -f 1 | cut -d "[" -f 1 | xargs -I{} pipdeptree -p {}

Here is an example result

  - billiard [required: >=,<3.6.0, installed:]
  - kombu [required: >=4.2.0,<5.0, installed: 4.2.1]
    - amqp [required: >=2.1.4,<3.0, installed: 2.3.2]
      - vine [required: >=1.1.3, installed: 1.1.4]
  - pytz [required: >dev, installed: 2018.5]
  - marshmallow [required: >=2.7.0, installed: 2.15.3]
  - read-env [required: >=1.1.0, installed: 1.1.0]
python pip 2018-07-04

Adding nice fonts to vim on terminal

Adding nicer fonts to your terminal vim is not as easy as one would think, but here is a good path to follow to make it happen on a mac:

brew tap caskroom/fonts
brew cask install font-hack-nerd-font

On your .vimrc

set encoding=utf8
let g:airline_powerline_fonts = 1

And then change it on iTerm2


This has been shamelessly ripped off from here

Now look how beautiful it looks with airline!


vim 2018-05-24

Paste huge payload from clipboard on vim

Pasting a huge payload into a vim buffer can take a very long time if you are not careful. To manage this easily we need:

to have vim compiled with +clipboard

Check it by running :version to see if you have it

magic paste

Use the "*p or "*P and it will instantly paste :)

vim clipboard 2018-03-27

Shoveling tasks in rabbitmq

This curious case happened to me where a lot of tasks ended up in a queue called celery because of a misconfiguration during a release.

Honestly I got very lucky at the time because no tasks were lost, they were just on the wrong queue. That is where the rabbitmq shovel plugin shined!

In a real world use case the plugin is used to move tasks reliably between WAN-separated clusters and the like, but in my case it fit like a glove.

rabbitmq-plugins enable rabbitmq_shovel
rabbitmqctl set_parameter shovel my-temp-shovel '{"src-uri": "amqp://", "src-queue": "celery", "dest-uri": "amqp://user:password@localhost:5672/rabbit", "dest-queue": "destiny"}'

Boom! Done here!

rabbitmq 2018-02-28

Fix branch created from a rebased branch

Let’s say you have setup these branches:

   B --> C

Sometimes someone coding in branch C will finish their development, merge to B and rebase interactively, creating a new branch of itself (B') and kind of leaving D without a real parent branch (not exactly, since B still exists in the reflog):



The issue being faced here is that D's parent should now be B'. If we want to be able to merge in a fast-forward manner and avoid unnecessary conflicts, we need to fix this. Here is a way to do it:

git branch -m D D-old
git checkout -b D B'
git cherry-pick <All-commits-from old D>
git push -f origin D

We are just getting our commits from old-D into a new D which is based on B'. This may sound unnecessary, but when you have a pull request open you don't want to close it and lose all the comments/history; this does the trick.

git rebase 2018-01-09

Removing Django signal for tests

I was caught up in a situation where an object had a signal that created a connection in a 3rd-party API every time the object was saved to our database. For automated tests this is not ideal, since we are not (always) there to test the 3rd-party API but our own functionality. I ran into some issues when disconnecting the signal though. Here is what I ended up figuring out to fix it:

class MyObj(models.Model):
    @receiver(post_save, sender="MyObj")
    def create_obj_on_third_party_api(sender, instance, created, **kwargs):
        # Do your thing
        pass


from mock import patch

from django.db.models.signals import post_save

from factory import Sequence, SubFactory
from factory.django import DjangoModelFactory

class ObjFactory(DjangoModelFactory):
    class Meta:
        model = MyObj

    user = SubFactory(UserFactory)
    email = "bububibu@gmail.com"

    @classmethod
    def _create(cls, model_class, *args, **kwargs):
        post_save.disconnect(model_class.create_obj_on_third_party_api.__func__, sender=model_class)
        with patch.object(MyObj, "is_refresh_token_valid", return_value=True):
            obj = super(ObjFactory, cls)._create(model_class, *args, **kwargs)
        post_save.connect(model_class.create_obj_on_third_party_api.__func__, sender=model_class, weak=False)
        return obj

Django signals are handled using the Python id function, which compares object identity. That means disconnect will not remove the signal if we pass a non-identical object; I realised the identical object here was the underlying __func__ function.
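The identity pitfall is plain Python, not Django magic: every attribute access builds a new bound-method object, so only __func__ is stable (class and method names below are made up):

```python
class MyObj:
    def create_obj_on_third_party_api(self):
        pass

obj = MyObj()
# two accesses, two distinct bound-method objects:
print(obj.create_obj_on_third_party_api is obj.create_obj_on_third_party_api)  # False
# but the underlying function is one identical object:
print(obj.create_obj_on_third_party_api.__func__ is MyObj.create_obj_on_third_party_api)  # True
```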

Also, my object checked whether a refresh_token was valid before saving to the DB; I ended up mocking that function to always return True.

EXTRA: If you don’t use FactoryBoy for your testing, I highly recommend it!

python django tests 2017-12-13

Count duplicate strings

When you have a file full of duplicates and you want to count them for any reason, unix-like scripts are always there to help! Let's say we have a file named emails.txt.


Now, applying unix magic

sort emails.txt | uniq -c
   1 boom@gmail.com
   1 jhon.mcduck@hotmail.com
   3 lewl@gmail.com
shell unix 2017-11-30

Query mongodb by creation date based on objId

var objIdMin = ObjectId(Math.floor((new Date('2017/10/01'))/1000).toString(16) + "0000000000000000")

Based on this post in stackoverflow.
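The same construction works in plain Python (with pymongo, bson.ObjectId.from_datetime does this for you): the first 4 bytes of an ObjectId are a unix timestamp, so a hex timestamp padded with 16 zero digits is the smallest possible ObjectId of that moment. Note this sketch uses UTC, whereas the shell snippet above uses the server's local time:

```python
from datetime import datetime, timezone

dt = datetime(2017, 10, 1, tzinfo=timezone.utc)
# 8 hex digits of timestamp + 16 zeros = a 24-char ObjectId hex string
obj_id_min = format(int(dt.timestamp()), "x") + "0" * 16
print(obj_id_min)
```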

mongodb query 2017-11-04

Run flake8 on recent changes

To verify manually all your recent changes compared to the master branch by name you can run

git diff --name-only master

We can then run any checker over those files to verify our syntax is proper:

git diff --name-only master | xargs flake8
package/file1.py:2:1: F401 'logging' imported but unused
other/file3.py:358:9: F821 undefined name 'logger'
git flake8 2017-09-21

Print data from multiple servers

To check whether my servers have any extra processes that shouldn't be running, I wrote a quick script:

for i in `seq 1 10`; do ssh my-machine$i.tomtom.com -t "pgrep -f celery -c"; done

Let’s break it down

  • for i in `seq 1 10` generates a for loop for us from 1 to 10
  • ssh my-machine$i.tomtom.com -t will ssh into each machine from 1 to 10; the -t flag allocates a terminal to execute the next command
  • pgrep -f celery -c counts all the processes whose command line contains the word celery and outputs the number.
shell ssh 2017-09-14

Mongodb index stats

The easiest way I have found to verify which indexes to clean up is to use the indexStats command

mongo mydb --port 10000  # connect to mongod process
db.collection.aggregate( [ { $indexStats: { } } ] ).pretty()
mongodb index 2017-09-14

Postgres 9.4 systemd OS tuning

Tweaking the OS for a database is pretty common, and we are used to doing a full OS tweak. With systemd this changed (for the better, I believe): now you need to set limits per process instead. PostgreSQL 9.6 is ready for these changes but PostgreSQL 9.4 is not; here is how to handle it for the older version.

Apply the change to postgresql@.service

When you install postgres 9.4 on ubuntu 16.04, two files will be created for systemd:

  • /lib/systemd/system/postgresql.service
  • /lib/systemd/system/postgresql@.service

The second file runs before the first one and is the one that actually applies the configuration changes. In fact, if you check postgresql.service's status, you will notice it is in an active (exited) state, meaning systemd doesn't manage the process; it just knows it executed a start for it.

~$ sudo systemctl status postgresql
● postgresql.service - PostgreSQL RDBMS
   Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
   Active: active (exited) since Fri 2017-09-01 15:44:28 EDT; 1h 26min ago
  Process: 30546 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 30546 (code=exited, status=0/SUCCESS)

Sep 01 15:44:28 postgres.db.int systemd[1]: Starting PostgreSQL RDBMS...
Sep 01 15:44:28 postgres.db.int systemd[1]: Started PostgreSQL RDBMS.

When set in postgresql@.service, the limit is applied to the main PID, which properly propagates to all child processes.

[Unit]
Description=PostgreSQL Cluster %i

[Service]
# @: use "postgresql@%i" as process name
ExecStart=@/usr/bin/pg_ctlcluster postgresql@%i --skip-systemctl-redirect %i start
ExecStop=/usr/bin/pg_ctlcluster --skip-systemctl-redirect -m fast %i stop
ExecReload=/usr/bin/pg_ctlcluster --skip-systemctl-redirect %i reload
# prevent OOM killer from choosing the postmaster (individual backends will
# reset the score to 0)
# restarting automatically will prevent "pg_ctlcluster ... stop" from working,
# so we disable it here. Also, the postmaster will restart by itself on most
# problems anyway, so it is questionable if one wants to enable external
# automatic restarts.
# (This should make pg_ctlcluster stop work, but doesn't:)
#RestartPreventExitStatus=SIGINT SIGTERM

# set NOFILE to the maximum amount  <<<<< HERE IS WHERE WE TWEAK OUR OS
LimitNOFILE=infinity

Now that we have properly set up the OS tweak, let's verify it was applied.

A process's limit values can be checked with cat /proc/<PROCESS PID>/limits or with prlimit --pid <PROCESS PID>.

I would get this info by following these steps:

~# lsof -i -P | grep LISTEN | grep postgr
postgres  12423 postgres    6u  IPv4 880722124      0t0  TCP *:5432 (LISTEN)
postgres  12423 postgres    7u  IPv6 880722125      0t0  TCP *:5432 (LISTEN)

~# prlimit --pid 12423
RESOURCE   DESCRIPTION                             SOFT      HARD UNITS
AS         address space limit                unlimited unlimited bytes
CORE       max core file size                         0 unlimited blocks
CPU        CPU time                           unlimited unlimited seconds
DATA       max data size                      unlimited unlimited bytes
FSIZE      max file size                      unlimited unlimited blocks
LOCKS      max number of file locks held      unlimited unlimited
MEMLOCK    max locked-in-memory address space     65536     65536 bytes
MSGQUEUE   max bytes in POSIX mqueues            819200    819200 bytes
NICE       max nice prio allowed to raise             0         0
NOFILE     max number of open files               65536     65536  <<<< BOOM!
NPROC      max number of processes                64125     64125
RSS        max resident set size              unlimited unlimited pages
RTPRIO     max real-time priority                     0         0
RTTIME     timeout for real-time tasks        unlimited unlimited microsecs
SIGPENDING max number of pending signals          64125     64125
STACK      max stack size                       8388608 unlimited bytes

The main process has the changes set up, but do the child processes have them too? Let's check!

~# ps aux | grep postgres | grep writer
postgres 12427  0.0  0.1 3414332 31908 ?       Ss   15:54   0:00 postgres: writer process
postgres 12428  0.1  0.1 3414200 21108 ?       Ss   15:54   0:14 postgres: wal writer process
~# prlimit --pid 12427
RESOURCE   DESCRIPTION                             SOFT      HARD UNITS
AS         address space limit                unlimited unlimited bytes
CORE       max core file size                         0 unlimited blocks
CPU        CPU time                           unlimited unlimited seconds
DATA       max data size                      unlimited unlimited bytes
FSIZE      max file size                      unlimited unlimited blocks
LOCKS      max number of file locks held      unlimited unlimited
MEMLOCK    max locked-in-memory address space     65536     65536 bytes
MSGQUEUE   max bytes in POSIX mqueues            819200    819200 bytes
NICE       max nice prio allowed to raise             0         0
NOFILE     max number of open files               65536     65536  <<<< BOOM!
NPROC      max number of processes                64125     64125
RSS        max resident set size              unlimited unlimited pages
RTPRIO     max real-time priority                     0         0
RTTIME     timeout for real-time tasks        unlimited unlimited microsecs
SIGPENDING max number of pending signals          64125     64125
STACK      max stack size                       8388608 unlimited bytes

There we go, all set!

postgres systemd kernel 2017-08-31

zsh history tweaking

One of the things I love about tools like zsh, tmux and vim is the amount of configuration that can be set on them to suit your needs. Today I spent some time enhancing my history configuration in zsh, and it has paid off.

Let’s start with how much I want to save:

HISTSIZE=1000000       # Set the amount of lines you want saved
SAVEHIST=1000000       # This is required to actually save them, needs to match with HISTSIZE
HISTFILE=~/.zhistory   # Save them on this file

This is a very big history file; be sure it matches your computer's capabilities, since all of it is kept in memory.

At first this was fine, but I noticed a few things:

  • My history was not written to the file until I closed the zsh shell.
  • There was a large number of duplicate commands.

To fix this I added these variables:

setopt EXTENDED_HISTORY          # Write the history file in the ":start:elapsed;command" format.
setopt INC_APPEND_HISTORY        # Write to the history file immediately, not when the shell exits.
setopt SHARE_HISTORY             # Share history between all sessions.
setopt HIST_EXPIRE_DUPS_FIRST    # Expire duplicate entries first when trimming history.
setopt HIST_IGNORE_DUPS          # Don't record an entry that was just recorded again.
setopt HIST_IGNORE_ALL_DUPS      # Delete old recorded entry if new entry is a duplicate.
setopt HIST_FIND_NO_DUPS         # Do not display a line previously found.
setopt HIST_IGNORE_SPACE         # Don't record an entry starting with a space.
setopt HIST_SAVE_NO_DUPS         # Don't write duplicate entries in the history file.
setopt HIST_REDUCE_BLANKS        # Remove superfluous blanks before recording entry.

All of these options should be saved in your ~/.zshrc file for them to work.

Finally, I wanted to comment on plugins that do a lot for you, like the famous oh-my-zsh: because they come with so much setup already, you will probably end up using only a very small part of it. I personally prefer the “I have a problem, let's search for/build a fix” approach, with which you take the time to learn and understand what suits you best.

shell zsh history 2017-08-08

Github prune merged branches

When working on bigger teams with multiple branches, it is very common for old branches to linger. To detect those branches easily, git comes with the command:

git fetch -p
git branch --merged

* master

Be sure to update your local branches! That’s why we are running git fetch -p first.

You can also run the same command for branches on the git server

git branch -r --merged

origin/HEAD --> master

Since we know which branches are already merged, we can prune them easily. For this I created two scripts to do it automatically:

git branch --merged | grep -v "master" | parallel -I{} git branch -d {}
git branch -r --merged | grep -v "master" | sed -e "s/origin\\///" \
    | parallel -I{} git push origin :{}

Let’s break the second one up:

  • git branch -r --merged will list all the merged branches on the git server
  • grep -v "master" will remove the master branch from the list, pretty sure we don’t want to delete that one.
  • sed -e "s/origin\///" will remove the origin/ string from the branches so we can delete them.
  • parallel -I{} git push origin :{} will delete them on Github; if you need to use any other command for your git server, change it here.
git github gitconfig 2017-07-26

Reclaim disk space

When you think about reclaiming space on a machine, the first thing that comes to mind is to delete files, but what if you delete files and the space is not yet reclaimed?

This may happen if those files are still open by a process, therefore not releasing the space until they close them. To find these files you can run:

sudo find /proc/*/fd -ls | grep  '(deleted)'

This will show all files that have been deleted but haven't been released by the system. In my most recent case, rsyslog was the one not releasing files; restarting it did the job.

shell linux ubuntu 2017-07-16

Auto configuration for apt packages on ansible

When installing packages on debian/ubuntu, there is a chance the package installation will include some questions to be answered.

On ansible you shouldn't set up any terminal interaction, since that would defeat the purpose of automation. To provide automatic answers for apt, you should add them to debconf like this:

- name: debconf pre-selection for postfix install
  debconf:
    name: postfix
    question: postfix/main_mailer_type
    vtype: string
    value: "Internet Site"
  tags: postfix

To read more about debconf, please check this stackoverflow answer.

Smart renaming of multiple files

There are times when you want to rename multiple files by applying a small modification to all of them. For this the rename script is quite handy.

rename 's/tag_50_name_ypthon/tag_50_name_python/' tag_50_name_*

Let’s break it down:

  • The first argument follows a replacement syntax familiar from many editors: it will find the given text tag_50_name_ypthon and replace it with tag_50_name_python.

  • The second argument matches the files with the given glob and applies the previous command to them.

There are many other things rename can do, changing text to upper case, lower case and even sanitizing it are supported.

shell rename 2017-06-24

Debug ansible jinja2 template variables loaded

Debugging variables in Ansible can be a pain sometimes, to make it easier there are debug modules to help out with it.

ansible -m debug -a "msg={{hostvars[inventory_hostname]}}" -i \
  inventories/vagrant/hosts mahmachine.localhost

This will print all the variables belonging to the pointed host. We can also set the debug strategy in a playbook:

# playbook.yml
- hosts: mahmachine.localhost
  strategy: debug
  roles:
    - base

And if there is any issue with the playbook, it will open a pdb/gdb-like prompt so you can investigate interactively!

Apply command to all found files

If you want to execute a command on all filtered files, it can be very easily done with the command find.

For example, we could change all our files to have the yml extension:

find . -type f -exec mv '{}' '{}'.yml \;

Deleting all files with a given name is extremely easy and useful for project cleanups

find . -name "<filename>" -delete

Or the equivalent

find . -name "<filename>" -exec rm '{}' \;

I find this to be very close to the xargs command but simpler.

shell bash zsh 2017-05-17

Find why a process was killed on Ubuntu

When a process has been killed by the kernel, the logs can be found straight away by running the command dmesg.

shell ubuntu linux 2017-05-16

Find the ruby source file of a method

When you are looking for the source code of a ruby method, the easiest way to find it is from the irb/pry shell!
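For example, Method#source_location returns the file and line where a Ruby-defined method lives (it returns nil for C-implemented methods); in pry, show-source goes one step further and prints the code. Set#add is used here just because it is a stdlib method defined in Ruby:

```ruby
require "set"

# Set#add is defined in Ruby, so we get a [file, line] pair back
file, line = Set.instance_method(:add).source_location
puts "#{file}:#{line}"
```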

ruby irb pry 2017-05-16

Git complex aliases

When setting up aliases on git at ~/.gitconfig, most of them are quite simple to speed up your typing

  aa = add --all
  br = branch

But with time I wanted to automate repetitive commands and started to build alias functions

  dt = "!f(){ : git branch ; git pull ; git branch -d $1 ; git push origin :$1 ; }; f"

These types of aliases are pretty cool! I can delete a branch locally and on origin with just one command. The leading : git branch part tells git to autocomplete as if I were typing a git branch command; the autocompletion becomes very handy.

What I don't like about these functions is how unreadable they become for more complex functionality. For those, git scripts are best; for example, I use my git-nbr every day!

#!/bin/sh
# Create new branch and setup upstream right away
set -e

git fetch -p
git co -b "$1"
git push --set-upstream origin "$1"

Just be sure to call it git-<COMMAND_NAME> and set it up in your shell path and it will work by calling it as git <COMMAND_NAME>.

Double ssh automation

When your data center has a single entry door, you need to ssh into the door (proxy) machine and then ssh to the machine you actually want. To ease the pain, you can execute just one command instead:

$ ssh -tt door-machine.com ssh the-real-deal-machine.com

Let’s break it down:

  • the -t flag forces a pseudo-terminal allocation; adding multiple t's forces tty allocation even if ssh has no local tty, giving us a terminal on the door machine.
  • Once the terminal is created, you execute another ssh as if you would normally.

Because this is still painful, we can configure it into our ~/.ssh/config file instead:

Host door-machine
  User tomas
  Hostname door-machine.com

Host the-real-deal-machine
  User tomas
  IdentityFile ~/.ssh/id_rsa_for_real_machine
  ProxyCommand ssh door-machine nc the-real-deal-machine.com 22

I had a special issue when setting this up and needed the -v flag to debug it: my door machine has a tomas user with a different identity file than the one I normally use. To fix it I had to add that file locally and point to it in the configuration. Let's break the configuration down:

  • First we create the door-machine configuration which is pretty basic.
  • IdentityFile points to the id_rsa from the door machine.
  • ProxyCommand is the real magic, it enters into the door machine and extends the connection with the nc command to the target machine.

If you want to read more about it, please check this and this post.

shell linux ssh 2017-05-07

Delete Rabbitmq queues/exchanges by api

When you have a lot of unwanted queues/exchanges in rabbitmq, whether legacy or spawned by a bad configuration, deleting them by hand is very painful. The rabbitmq management plugin offers an API with which you can delete all those queues automatically; here is the command I ended up with:

rabbitmqctl list_queues -p rabbit |\
grep -v "top\|medium\|low" |\
tr "[:blank:]" " " |\
cut -d " " -f 1 |\
xargs -I{} curl -i -u guest:guest -H "content-type:application/json" -XDELETE http://localhost:15672/api/queues/rabbit/{}

Let’s break it down:

  • rabbitmqctl list_queues -p <VHOST_NAME> lists the queues existing on the specified vhost.
  • grep -v "queue\|another_queue\|etc" filters some queues that we don’t want to delete.
  • tr "[:blank:]" " " |\ normalizes the delimiter in the list_queues print.
  • cut -d " " -f 1 |\ picks the queue name (first column) removing the other data we don’t need.
  • xargs -I{} curl -i -u <user>:<password> -H "content-type:application/json" -XDELETE http://localhost:15672/api/queues/<VHOST_NAME>/{} is the actual call to delete the queue; -I lets us pick where the queue name goes in the command (at the end here).

This same command can also be applied to delete exchanges by changing the initial command to rabbitmqctl list_exchanges.
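The text-processing part of the pipeline can be tried without a broker. This sketch fakes the `rabbitmqctl list_queues` output (queue name, a tab, message count — these are sample names, not real queues) and shows which names survive the filter:

```shell
# Fake `rabbitmqctl list_queues` output: queue name <TAB> message count.
printf 'top_priority\t42\nmedium_priority\t7\nstale_queue\t0\nold_queue\t3\n' |
grep -v "top\|medium" |  # drop the queues we want to keep
tr "[:blank:]" " " |     # normalize tabs to spaces
cut -d " " -f 1          # first column: the queue name
# prints:
# stale_queue
# old_queue
```

Only the surviving names get piped into xargs/curl, so it pays to check this list before pointing it at the real DELETE endpoint.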

rabbitmq amqp api shell bash zsh 2017-04-27

Check all changes on a file

If you want to look for a specific change in an area of a file you are working on, or even in the whole file, you can do it with fugitive, a vim plugin for git, which makes this extremely easy.

To do this, use the Glog command; it will gather all the changes to the file and load them into your quickfix list.


To move from change to change, the plugin unimpaired.vim adds some default key binds that are very useful

unimpaired  vim      action
[q          :cprev   Jump to previous quickfix item
]q          :cnext   Jump to next quickfix item
[Q          :cfirst  Jump to first quickfix item
]Q          :clast   Jump to last quickfix item

For more information look here

git fugitive unimpaired 2017-04-23

Signing your data for secure sharing with gpg2

The first step is to create your secret/public key pair; to do this just run gpg2 --gen-key and start answering all the questions.

$ gpg2 --gen-key
gpg (GnuPG/MacGPG2) 2.0.30; Copyright (C) 2015 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection?
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 2y
Key expires at Sat Apr 20 17:12:01 2019 -04
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: Tomas Henriquez
Email address: xxxxx@xx.com
Comment: gpg2 remember
You selected this USER-ID:
    "Tomas Henriquez (gpg2 remember) <xxxx@xx.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key C1F666CC marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   3  signed:   1  trust: 0-, 0q, 0n, 0m, 0f, 3u
gpg: depth: 1  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 1f, 0u
gpg: next trustdb check due at 2019-04-20
pub   4096R/C1F666CC 2017-04-20 [expires: 2019-04-20]
      Key fingerprint = 72A6 BE3E 6D6B 927F 82F1  02E2 3D2D 1039 C1F6 66CC
uid       [ultimate] Tomas Henriquez (gpg2 remember) <xxxxxx@xxx.com>
sub   4096R/4A71231A 2017-04-20 [expires: 2019-04-20]

I don’t mind sharing this output because I have already invalidated the key, and it’s good to see all the options I picked:

  • Please select what kind of key you want: I chose the default algorithm; unless you have specific requirements, that should do.
  • What keysize do you want? - 4096 bits long, why the hell not?
  • Key is valid for? - I like 2 years, but again, it depends on whether there are any special requirements.
  • Finally there will be a pop-up to pick your passphrase; when picking one, take these tips into account:
    • You should never, ever forget your passphrase.
    • A passphrase can be as long as you want; it’s asking for a phrase and not a word, after all.
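All those interactive answers can also be scripted: gpg2 supports unattended key generation from a parameter file (a sketch based on GnuPG’s unattended key generation format; the name and email below are placeholders, and depending on your version you may need a Passphrase: line or %no-protection):

```
%echo Generating the key pair
Key-Type: RSA
Key-Length: 4096
Subkey-Type: RSA
Subkey-Length: 4096
Name-Real: Your Name
Name-Email: you@example.com
Expire-Date: 2y
%commit
%echo Done
```

Run it with gpg2 --batch --gen-key params.txt.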

After we have created our own key, we need other people’s public keys to share our messages with them. To do this, we download their public keys:

$ gpg2 --search-keys my-coworker-email@bububibu.com
gpg: searching for "my-coworker-email@bububibu.com" from hkps server hkps.pool.sks-keyservers.net
(1)     Co Worker <my-coworker-email@bububibu.com>
          4096 bit RSA key XXXXXXXX, created: 2017-04-13, expires: 2021-04-13
(2)     Co Worker <my-coworker-email@bububibu.com>
          4096 bit RSA key XXXXXXXX, created: 2017-04-07, expires: 2021-04-07
Keys 1-2 of 2 for "my-coworker-email@bububibu.com".  Enter number(s), N)ext, or Q)uit > q

In this case, it’s common to just pick the most recent one.

OPTIONAL: It’s a good idea to sign their key, but only if you are sure they are the person they claim to be.

d:dev $ gpg2 --edit-key CoWorker

gpg (GnuPG/MacGPG2) 2.0.30; Copyright (C) 2015 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

pub  4096R/XXXXXXXX  created: 2017-04-11  expires: 2021-04-11  usage: SC
                     trust: full          validity: unknown
sub  4096R/XXXXXXXX  created: 2017-04-11  expires: 2021-04-11  usage: E
[ unknown] (1). CoWorker <xxxxxxx>

gpg> sign

pub  4096R/XXXXXXXX  created: 2017-04-11  expires: 2021-04-11  usage: SC
                     trust: full          validity: unknown

     My Coworker <xxxxxxxxxxxxxxxxx>

This key is due to expire on 2021-04-11.
Are you sure that you want to sign this key with your
key "Tomas Henriquez <xxxxxxxxxxxxxxxxxxxxx>" (XXXXXXXX)

Really sign? (y/N) y

You need a passphrase to unlock the secret key for
user: "Tomas Henriquez <xxxxxxxxxxxxxxxxxxxxx>"
4096-bit RSA key, ID XXXXXXXX, created 2017-04-19

gpg> trust
pub  4096R/XXXXXXXX  created: 2017-04-11  expires: 2021-04-11  usage: SC
                     trust: full          validity: unknown
sub  4096R/XXXXXXXX  created: 2017-04-11  expires: 2021-04-11  usage: E
[ unknown] (1). My Coworker <xxxxxxxxxxxxxxxxx>

Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision? 4

pub  4096R/XXXXXXXX  created: 2017-04-11  expires: 2021-04-11  usage: SC
                     trust: full          validity: unknown
sub  4096R/XXXXXXXX  created: 2017-04-11  expires: 2021-04-11  usage: E
[ unknown] (1). My Coworker <xxxxxxxxxxxxxxxxx>

gpg> quit
Save changes? (y/N) y
d:dev $ gpg2 --list-keys
pub   4096R/192A9CD8 2017-04-11 [expires: 2021-04-11]
uid       [  full  ] CoWorker <xxxxxxx>
sub   4096R/98289174 2017-04-11 [expires: 2021-04-11]

Now that we have their public key and have signed it for trustworthiness, we can sign and encrypt our message to them. Pick the recipient public keys you want and you are done:

# -se equals sign + encrypt message
$ gpg2 -se -r my-coworker-email@bububibu.com lewl.pw

You need a passphrase to unlock the secret key for
user: "Tomas Henriquez <thenriquez@ebates.com>"
4096-bit RSA key, ID XXXXXXXX, created 2017-04-19

A file called <file>.gpg will be created (gpg2’s default suffix), which you can share with the specified user as you wish.

If you want to read more, the official docs are pretty good! Please check here and here for more information.

shell security gpg2 2017-04-23

Repeat every X seconds a query in postgresql shell

Sometimes you want to repeat a query constantly to see updated data; here is a way to do it in the psql shell:

-- Execute the query once
SELECT count(*) FROM my_table;

-- Then tell psql to repeat it every X seconds
\watch 5

And that’s it!

postgres query 2017-04-23

Detach users in tmux

Sometimes on a server there are many users attached to the same tmux session, and because some of their terminals are smaller, this tends to happen:


You can detach the users causing this by issuing the command <PREFIX> D; it will show each attached client’s screen size and let you detach them.

screen size

Or you could be a full fledged __ and kick everyone out when you are attaching to the session with tmux a -d.

shell tmux 2017-04-23

Exit insert mode on a `norm!` command in vim

When you want to do some format changes in vim via the norm! command, instead of doing them in 2 different commands, you can actually exit insert mode inside norm! by doing Ctrl-V and then <ESC>.

Let’s say I want to get all these emails into a list so I can do a query with them:

pewpew.lazor@gmail.com
null.personality@outlook.com
your.grandpa@aol.com
angry.at.life@hotmail.com

The general format is :[range]g/<pattern>/<cmd>. By default the range is the whole file.

We can execute this command :g/^/norm!I"^[A", to get what we want (the ^[ is the literal escape character, entered with Ctrl-V then <ESC>); let’s break it down:

:g This specifies that we want to execute a command on every matching line.

/^/ Match all lines (since all lines do have a beginning of line).

norm! Means we will execute a normal! command: vim takes the keystrokes you would type while editing a file and replays them on each matched line.

I"^[A", I means go to the beginning of the line in insert mode, where we insert the " character; then comes the Ctrl-V + <ESC>, which shows up as ^[, to exit insert mode; A goes to the end of the line and enters insert mode, and finally we add the ", characters.

Now the file would look like this:

"pewpew.lazor@gmail.com",
"null.personality@outlook.com",
"your.grandpa@aol.com",
"angry.at.life@hotmail.com",

To finish, just press J on all the lines to join them, add the brackets at the beginning and end of the line, and you are ready to do that query!

("pewpew.lazor@gmail.com", "null.personality@outlook.com", "your.grandpa@aol.com", "angry.at.life@hotmail.com",)
vim command 2017-04-23

Delete all s3 buckets in aws

With awscli you can access your aws account and manage everything from the command line. I needed to remove all buckets from S3, so I did this:

aws s3 ls | cut -d " " -f 3 | xargs -I{} aws s3 rm s3://{} --dryrun --recursive

Let’s break it down:

  • aws s3 ls will show all the bucket names with their creation time (i.e. “2011-10-18 17:48:34 mah_bucket”)
  • cut -d " " -f 3 will split the result and get the 3rd column, which is the name
  • xargs -I{} aws s3 rm s3://{} --dryrun --recursive recursively deletes every object inside each bucket

In this case, notice I added the --dryrun flag so we can check that it will do exactly what we want; remove the flag to actually execute it.
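The parsing half of the pipeline can be checked without touching AWS; this sketch fakes the `aws s3 ls` output (sample bucket names, not real ones) and extracts the name column:

```shell
# Fake `aws s3 ls` output: creation date, time, bucket name.
printf '2011-10-18 17:48:34 mah_bucket\n2012-01-02 09:15:00 other_bucket\n' |
cut -d " " -f 3  # third space-separated column: the bucket name
# prints:
# mah_bucket
# other_bucket
```

As a shortcut, aws s3 rb s3://<bucket> --force empties and removes a bucket in a single step.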

After deleting all objects in the buckets, let’s delete the buckets!

aws s3 ls | cut -d " " -f 3 | xargs -I{} aws s3 rb s3://{}

Boom! Done!

shell aws s3 2017-04-23

Git diff file against another branch

:Gdiff <branch>:% -- folder/

% expands to the current file name in vim. -- folder/ restricts the diff to changes inside that folder.

git diff fugitive 2017-04-23

Python list flattening

To flatten a list (one level deep) in a simple way we can use operator.concat and a reduce function (updated here for Python 3, where reduce lives in functools):

In [1]: import operator

In [2]: from functools import reduce

In [3]: my_list = [[1, 2], [3, 4, 5]]

In [4]: print(reduce(operator.concat, my_list))
[1, 2, 3, 4, 5]
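For many or long sublists, itertools.chain.from_iterable does the same one-level flattening while avoiding the repeated list concatenation that reduce(operator.concat, …) performs:

```python
from itertools import chain

my_list = [[1, 2], [3, 4, 5]]

# chain.from_iterable lazily yields every element of every sublist, in order
flat = list(chain.from_iterable(my_list))
print(flat)  # -> [1, 2, 3, 4, 5]
```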
python algorithm 2017-04-23

Vim un-match

Sometimes I want to delete lines that don’t match a particular pattern; for that you can use a regex like this (WORD being the pattern you want to keep):

/^\(.*WORD\)\@!.*$

Breaking it down

^ means start of the line; \(.*WORD\) is the atom being searched; \@! is the important command that negates the atom (it’s not exactly a negation, please do a :help \@! for more info); .*$ matches everything else until the end of the line.

Now I could do this to delete them:

:g/^\(.*WORD\)\@!.*$/d

There are other very interesting use cases for this; for example, you could match words that are not followed by some other pattern:

/foo\(bar\)\@!

This will find all cases of foo not followed by bar.

vim search regex 2017-04-23

Biggest disk space offenders

When you want to clean up your server you usually want to find the biggest offenders in your system; on Linux, the command line tool ncdu is just the tool for that!

On a mac, the built-in storage management tool is also amazing.

shell linux ubuntu mac 2017-04-23

Delete up to a character in vim

By using dt<char> you can delete up to (but not including) a character in vim. So

I am here to Stay!

by using dtS from the start of the line it will end up as

Stay!
vim keystroke 2017-04-23

Show alias definition

If you run the command alias with no arguments it will print all the alias definitions from your .bashrc or .zshrc file.

shell bash zsh 2017-04-23

Use previous IPython command

By using the _ variable you can reuse the result of the previous expression!

In [1]: 5 * 5
Out[1]: 25

In [2]: _ + 5
Out[2]: 30

python ipython 2017-04-23

IPython immediate execution

When I use IPython I sometimes want to just press enter and execute my code instead of creating a new line. To do that, simply press Alt-<Enter> instead.


python ipython 2017-04-22