Bug Troubleshooting w/ New Tools

Tech: Ruby, TigerConnect, Splunk, SSH, CSSHX, grep, curl, HTTParty

Challenge: an application is configured to log an event in Splunk after a successful send of a TigerConnect HIPAA-compliant alert message. The entry point is working; however, the alerts are not being sent AND the event is not being logged. After some troubleshooting to rule out what was NOT wrong, I turned to CSSHX and grep to scour the logs. Why CSSHX? Our production instances run on 4 servers concurrently, and CSSHX lets me drive all 4 from 1 terminal tab with 4 windows and grep the logs on every server at once!


CLI (for 4 instances) => csshx username@dserver.location.extension username@dserver.location.extension username@dserver.location.extension username@dserver.location.extension


grep for various strings => grep -A 10 "error" log_file.log

-A n prints n lines of context after each match

-B n prints n lines before each match


I also wanted to check my HTTParty gem code that transmits the event from app to Splunk. I used a curl statement to mimic the HTTParty call.

CLI => curl https://url.extension:####/services/collector -k -H 'content-type: application/json' -H 'authorization: XXXXXX' -d '{"event":{"app": "data"}, "sourcetype": "_json"}'

-k (--insecure) allows insecure server connections (skips TLS certificate verification)

-H sets a request header (key: value)
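The curl call above can also be mirrored from Ruby without HTTParty, using the stdlib Net::HTTP. A minimal sketch — host, port, and token below are placeholders, and Splunk HEC expects the token sent as `Splunk <token>`:

```ruby
# Sketch of the Splunk HEC call the curl command makes, via stdlib Net::HTTP.
# splunk.example.com, port 8088, and the token are placeholder values.
require 'net/http'
require 'json'
require 'uri'

uri = URI('https://splunk.example.com:8088/services/collector')
req = Net::HTTP::Post.new(uri)
req['Content-Type']  = 'application/json'
req['Authorization'] = 'Splunk XXXXXX'  # placeholder HEC token
req.body = { event: { app: 'data' }, sourcetype: '_json' }.to_json

# To actually send it (the curl -k equivalent is VERIFY_NONE — avoid in prod):
# http = Net::HTTP.new(uri.host, uri.port)
# http.use_ssl = true
# http.verify_mode = OpenSSL::SSL::VERIFY_NONE
# res = http.request(req)
```

Building the request object first (without sending) is a handy way to eyeball exactly what headers and body the app would transmit.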


I was quite proud to implement the use of these tools when troubleshooting. I have used them in different contexts and it was cool to bring everything together to discover the issue. However (and sadly), the issue was much simpler.

Ruby ENV variables are strings — even when the string looks like a “boolean”.

in .env … VAR=false

if ENV['VAR'] == false then <do something> end

=> never runs… ENV['VAR'] is the string "false", which is truthy and never equal to the boolean false. Compare against the string instead: ENV['VAR'] == 'false'.
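A quick irb-style demonstration of the gotcha:

```ruby
# Everything in ENV is a String, so compare against the string "false",
# never the boolean false.
ENV['VAR'] = 'false'

ENV['VAR'] == false      # => false (a String never equals a boolean)
ENV['VAR'] == 'false'    # => true

# A common guard pattern:
enabled = ENV['VAR'] != 'false'
```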


Reference: Git

Challenge: sticky post for Git!


git tag -a v1.4 -m "my version 1.4"

git show v1.4

git push origin v1.4 OR git push origin --tags

View Diff in Past Commits

git show <commit_hash>

Create & push repo locally to remote

git init

git commit -m "msg"

git remote add origin https://github.com/git_user/repo_name.git

git push -u origin master

Update author of last commit

git commit --amend --author="Name <email>"

What remotes do I have?

git remote -v

Grab all branches from remote

git fetch --all

Reset the index and working tree

git reset --hard *branch*

Redo commit message (prior to push to remote)

git commit --amend -m "new msg"

View # of commits and author

git shortlog -sn

Grab remote branch and force overwrite locally

git fetch origin

git reset --hard origin/branch_name


{{ng}} Fork & Deploy

Tech: Angular, Kinvey, AWS S3, NPM, Gulp, CLI, Git

Challenge: take an existing Angular 1.5 repo (with Kinvey Data Store, hosted by AWS S3), fork it, and deploy! Luckily, I forked this project approximately 10 months ago. Some of the work will just be a memory jog. The challenge here is connecting the fork to Kinvey & AWS, which was previously done by another developer.


First, we use git and create a branch (git checkout -B new_app_instance master)  for the new project instance. I used the term ‘fork’ above incorrectly. In order to simplify the maintenance of multiple live, hosted instances of this “To Do List” application, we branch off of master and deploy the various instances separately to unique AWS S3 buckets.

Once on the new branch, let’s add in environment files. Three in this instance: local, dev, prod. Here is the sample format env.local.js.example:


The local file allows localhost to connect to whatever instance you’d like, without having to muddle with official dev or prod files.


For dynamic elements, we use env variables and conditionals in Angular to display the appropriate text/div per instance.

ENV Variable:

<span flex>{{$ctrl.$state.current.data.title}}</span>


<div ng-if="$ctrl.envConfig.env == 'appInstance2' || $ctrl.envConfig.env == 'appInstance3'">


Next, we set up the AWS S3 bucket. Pretty simple because we clone an existing instance. The key here is to turn on static hosting! The S3 bucket will look for Angular’s index.html file as the entry point and Angular will take over.



Ok, the env variables, Kinvey credentials, and AWS instance are in place. Let’s add an npm command to our package.json file for easier deployment (and for this specific app instance):

"deploy:prod": "gulp build --environment prod && pushd dist && aws s3 cp --recursive . s3://to-do-app --acl public-read && popd"


Ready to deploy from the cli… npm run deploy:prod … aaaannnndddd error!



Looks like there is a region mismatch. I want to check which region I am trying to deploy to:

aws s3api get-bucket-location --bucket to-do-app


And the same error rolls in! Something is amiss… time for some googling… I think I have to upgrade my AWS CLI version:

aws --version

pip install awscli --upgrade --user


Ok, once the awscli is updated, I rerun my bucket location command and realize I am trying to deploy to the wrong region. Location:

{ "LocationConstraint": "us-east-2" } (and I want to deploy to us-east-1)


So, I need to vim into the AWS config file and make an update. Since I have never done this before, I wonder where the file is!

find ~ -name '.aws'


vi into the ~/.aws/config file, make the region update, and deploooooooooooooyy!


Note: spinning the instance up took 2-3 hours, and this post attempts to hit the highlights for easier fork & deploy in the future. 

Error! Is port 5432 allocated?

Tech: PostgreSQL, Docker, Rails, Ruby, grep

Challenge: error messages when launching PostgreSQL. I have seen this in a few different scenarios; one is when Docker is/was running a PG container and the computer has slept or been closed.

LOG:  could not bind IPv4 socket: Address already in use

HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.

WARNING:  could not create listen socket for "localhost"

FATAL:  could not create any TCP/IP sockets

LOG:  database system is shut down


In the case of a Docker/Rails Server conflict, I have been grepping for PG instances and shutting them down.

ps -ef | grep postgres

What’s going on here? …

ps = process status

-e = every process, -f = full-format output

Let’s take out PG and start over…

sudo pkill -u postgres
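The “Address already in use” error is easy to reproduce from Ruby — only one process can bind a given TCP port at a time. A small sketch, using an ephemeral port instead of 5432:

```ruby
# Demonstrate the same "Address already in use" error PG logs on port 5432:
# a second bind on an already-listening port raises Errno::EADDRINUSE.
require 'socket'

first = TCPServer.new('127.0.0.1', 0)  # bind any free port
port  = first.addr[1]

begin
  TCPServer.new('127.0.0.1', port)     # second bind on the same port
  in_use = false
rescue Errno::EADDRINUSE
  in_use = true                        # what Postgres hits on 5432
end
```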


Challenge: another time, a similar error appeared when the Rails database.yml file was not fully configured.

could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?


I tried the above, encountered the same problem, and then looked to the database.yml file:

simply add… host: localhost






Reference: Command Line Interface (CLI)

Challenge: quick reference for useful CLI commands. (Note: in creation of this post, many of these commands became ingrained, and as a result, used more frequently. Learning!)

rename directory

mv old_directory_name new_directory_name

look at CLI history

history

and run one of the cmds

!command_number


view file sizes

ls -sh

jump word navigate

esc + f, esc + b

ctrl + [, release, b or f

view network interfaces / IP addresses

ifconfig

view logs

tail [options] [filename]

tail --lines=1000 file.ext

tail -f thin.log

view Heroku logs

heroku logs --tail

test host reachability

ping hostname


multiple terminals


search package manager brew

brew search app_name_string

view/edit bash profile

open ~/.bash_profile

editing configs

nano ~/.gitconfig

nano ~/.bash_profile

reload after edit

source ~/.bash_profile

exec bash

show current pathway (print working directory)

pwd

jump to home directory

cd ~

create new file

touch file_name

rename file

mv old_name new_name

delete file

rm file

delete directory with files

rm -rf directory

copy / secure copy

cp /tmp/file_name .

scp file_name server.location:/tmp


iTerm2 Shortcuts

split screen vertically (two panes)

cmd + d


~$ CLI, PostgreSQL, Brew Adventure!

Tech: CLI, PostgreSQL, Brew, Rails, Ruby, Heroku

Challenge: while developing Embiid21.com and SPECTRUM, I wanted to pull the production database down from Heroku to use locally. This was a classic development case… get an error, fix an error… yeah! success… wait, there is a new error. Let the adventure begin!


Heroku provides great, easy-to-read tutorials, which is exactly what I experienced in this case. I googled my intent… “pull db from heroku to local”… and quickly found the exact tutorial and code.

heroku pg:backups:capture

heroku pg:backups:download


The download command creates a file in the root of the project directory, latest.dump. Now to restore my local db with this data:

pg_restore --verbose --clean --no-acl --no-owner -h localhost -d development latest.dump


Error! My CLI could not find the pg_restore command, even though I knew I was regularly using PostgreSQL. Based on my adventures here, I figured this was a case of the CLI path not knowing where to find PostgreSQL commands. Back to google, and I landed here: How to Modify the Shell Path in macOS. I temporarily added the path in my project directory:


and verified… echo $PATH


No luck. I knew I had PostgreSQL up and running, so the solution must be something else. This led me to brew…  I installed brew’s PostgreSQL package and refreshed my terminal:

brew search postgres

brew install postgresql

exec bash


Now, the pg_restore command works and looks successful. Let’s take a look at the Rails console to ensure the data came in properly. Error!

Reason: image not found


I landed here from my googling efforts: rails console not loading.

First, I tried linking brew’s readline, but still had the same issue:

brew link readline --force


Finally, adding the Ruby gem for readline did the trick. Rails console back up and running; data properly imported from Heroku PostgreSQL database.

gem 'rb-readline'


Takeaway: although this solution is working, I can’t help but feel it’s hacky. There must be a cleaner way to use brew to link the readline or have the CLI recognize the pg commands.

Cron Job: Runaway Ruby Script

Tech: Ruby, Cron, gems Whenever, Chronic, CLI

Project: a Ruby web-scraper script scheduled/cronned (sp? hah) with gem Whenever. The Ruby script runs through several loops to ensure accurate and valid data, with a one-time upload to a Rails project hosted by Heroku.
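The validate-then-upload-once switches can be sketched with a couple of boolean guards — hypothetical names here, not the actual scraper code:

```ruby
# Hypothetical sketch of the "validate, then upload once" guards the
# scraper loop needs. Entry shape and names are illustrative.
def new_uploadable_rows(entries, uploaded_ids)
  new_rows = []
  entries.each do |entry|
    next unless entry[:value]                  # data-validation switch
    next if uploaded_ids.include?(entry[:id])  # one-time-only upload switch
    new_rows << entry
    uploaded_ids << entry[:id]                 # remember what was sent
  end
  new_rows  # only rows never seen before get uploaded
end
```

Usage: call it on each scrape cycle with a persisted `uploaded_ids` list; invalid or already-sent rows are skipped.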

Challenge: during development, I noticed the Heroku PostgreSQL table Statistics kept growing with new entries, after I **supposedly** killed the Ruby script. Very confusing!



It took a few attempts & white-boarding to work out the boolean switches for data validation and one-time-only upload of new data, so there must have been a process stuck in a loop. My first thought was to kill the cron job using crontab -r. That didn’t solve the issue, which left me very confused. So, either the crontab removal didn’t take or I was missing the right process. I tried ps on the CLI and got:



Not seeing any cron job or Ruby script led to further confusion. Where is the runaway script!? Time to ask for help. A senior colleague almost immediately solved the issue. Killing a cron job doesn’t kill the forked script it kicked off. So, let’s look for running Ruby jobs!



Solution: Voila! Not just one runaway script/loop, but several. Time to… killall ruby !


Reference: Docker

Challenge: creating a “sticky” post, like timezones … a quick reference for useful, but not-as-frequently used Docker commands. 

View Docker logs:

docker logs running_container_name

Container size / memory usage:

docker system df

docker ps -s

CLI in running container:

docker exec -it running_container_name bash

View detached container log output:

docker attach --sig-proxy=false running_container_name

List specific # of running containers, newest on top:

docker ps --last #

Filter for unused images:

docker images --filter "dangling=true"

Remove all unused images:

docker rmi -f $(docker images -q --filter "dangling=true")

Remove all non-running containers:

docker rm $(docker ps -a -q) 

Port syntax (host/published port first, then the container port exposed in the Dockerfile):

-p host_port:container_port





Reference: Time Zones

Challenge: TimeZones and TimeStamps! When coding & configuring different systems, you risk mismatched timezones. The aim of this post is to provide an ongoing guide to discovering each environment’s timezone and updating it to the proper zone.


macOS:

get: sudo systemsetup -gettimezone

see options: sudo systemsetup -listtimezones

set: sudo systemsetup -settimezone "America/New_York"


Docker:

get container date/time: docker exec running_container_name date

set (add to Dockerfile): 

ENV TZ=America/New_York

RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
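Ruby itself honors the same TZ variable the Dockerfile sets, so a container-level ENV TZ also fixes timestamps inside a Ruby process — a quick sketch:

```ruby
# Ruby's stdlib Time respects the TZ environment variable, the same one
# the Dockerfile ENV TZ line sets for the container.
ENV['TZ'] = 'America/New_York'
eastern_offset = Time.now.utc_offset  # typically -18000 (EST) or -14400 (EDT)

ENV['TZ'] = 'UTC'
utc_offset = Time.now.utc_offset      # 0
```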

Rails / ActiveRecord:

set (in /config/application.rb):

config.time_zone = 'Eastern Time (US & Canada)'

config.active_record.default_timezone = :local

get (in rails console):

Time.zone.name



Heroku:

set: heroku config:add TZ="America/New_York"


Raspberry Pi:

set: sudo raspi-config → International Options → timezone… then reboot





Linking Rails, PostgreSQL Docker Containers

Tech: Docker, PostgreSQL, Rails, Ruby

Challenge: create, launch, and link Docker containers… Rails and PostgreSQL. This setup is an alternative to using docker-compose.


set up the Rails database.yml file


build postgres image => docker pull postgres

start postgres => docker run --name postgres -h localhost -p 5432:5432 -e POSTGRES_DB=db_name -e POSTGRES_USER=db_user -d postgres

start app => docker run --link postgres --name container_nickname -p 3000:3000 -v $(pwd):/opt/app_name -d app_name

rake cmds:

docker exec app_name rake db:create

docker exec app_name rake db:migrate

docker exec app_name rake db:seed


Note: this Rails repo was developed locally and pushed to a Docker server. I initially ran into the error that the /tmp/pids server file exists already… make sure to delete this file (tmp/pids/server.pid) before building the image to push to the server.