I’m 39++

I’m turning 39++ and in my 20 years in IT I have seen and done many things:

 

  • On my first day on the job I broke the SCM. Destroyed it completely
  • I sent 3 people to another country for a week to fix something caused by a boolean I forgot to reset
  • I forgot to invoice USD 50k after a fix to an invoicing system
  • Did UI skins in Java back in 2001
  • Worked with IBM VisualAge for Java!
  • I saw XML being born
  • Worked at IBM
  • I coded in Smalltalk, FrontPage, Java AWT, textile apps, and IoT
  • I coded SOA, REST and SOAP services, and microservices
  • I coded a POS, a back office and mobile apps
  • I have seen OS/2
  • We wrote SQL in the JSPs
  • I interviewed many people, few good
  • Visited India, the US, Chile, Peru and Uruguay for work.
  • Saw James Gosling in person!
  • Got fired from a startup during the 2000 dot-com bubble
  • Deployed Java apps on an AS/400! (I bet only a few have done that)
  • Worked remotely with people in Italy, India, Denmark, Argentina, Brazil, Chile, Uruguay, the United States, Spain, New Zealand, Dubai and England
  • My first programming book was a JavaScript book, back in 1996
  • I coded a call center app
  • Tried college 3 times
  • Had Gmail since 2004
  • Used ICQ, MSN Messenger and IE 4
  • Downloaded the 4 MB of Internet Explorer 4 over a whole night connected to a modem.
  • Learned with the IBM Redbooks
  • Played the first version of DOOM
  • Coded on a Commodore 64!
  • I got evacuated because of fires many times
  • I got into the office one morning and found it had been robbed during the night
  • I got fired and I got another job the same day

 

Happy coding !

Poor Man’s Monitoring & Recovery

When you are not willing to pay for monitoring services like AppDynamics (around 80k/year) or UptimeMonitor (monthly fee), you can still put together a cheap shell-script-based solution.

Shell scripts and Slack are your friends.

Scenario

You have a Tomcat web server running and you want to take action if it dies, and to notify a group of people of that action.

Tools

Crontab, Shell script and Slack

Step 1: Setup crontab

On your Linux server open the crontab with crontab -e and add the following entry so the check runs every minute.

*/1 * * * * /opt/scripts/check.sh

Now restart your cron service (it is called crond on Red Hat-based distributions and cron on Debian-based ones):
service crond restart
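
To double-check that the entry was picked up, you can list the installed crontab:

crontab -l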

Step 2: Create the check

Create a new shell script in /opt/scripts/check.sh with the following code:

#!/bin/sh
# Restart Tomcat when the init script reports it as "not running" or "stopped",
# then post a notification to Slack.
SERVICE=/etc/init.d/tomcat8

if $SERVICE status | grep -Eq "not running|stopped"; then
    $SERVICE start
    /opt/scripts/slackpost.sh "https://hooks.slack.com/services/your-hook" "#monitor" "Tomcat Automatic Restart"
fi

 

This will detect whether the service crashed or was never started, and will start it again. After starting the service it sends a notification to your Slack channel (you can strip out that part if you don't want Slack notifications).

Step 3: Notify to Slack

Create another script in /opt/scripts/slackpost.sh with the following content:

#!/bin/bash

# Usage: slackpost <webhook_url> <channel> <message>

webhook_url=$1
if [[ $webhook_url == "" ]]
then
        echo "No webhook_url specified"
        exit 1
fi

shift
channel=$1
if [[ $channel == "" ]]
then
        echo "No channel specified"
        exit 1
fi

shift

text=$*

if [[ $text == "" ]]
then
        echo "No text specified"
        exit 1
fi

# escape double quotes so the JSON payload stays valid
escapedText=$(echo "$text" | sed 's/"/\\"/g')
json="{\"channel\": \"$channel\", \"text\": \"$escapedText\"}"

curl -s -d "payload=$json" "$webhook_url"

 

And finally run chmod +x /opt/scripts/*.sh to make both scripts executable.
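
Before waiting for cron to fire, it is worth running both scripts by hand once to verify the wiring (the webhook URL and channel are the same placeholders used above):

/opt/scripts/slackpost.sh "https://hooks.slack.com/services/your-hook" "#monitor" "Test message from the monitoring box"
/opt/scripts/check.sh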

Happy coding!

How to Copy all the Jenkins Jobs programmatically

We are moving our projects from dev to QA and production. It is kind of painful to redo all the jobs in Jenkins or copy them one by one. So, using the scripting capabilities of Jenkins (Groovy), we can copy the jobs under a new name and, if you want, move them to another view. A sketch of the idea is below.
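
As a minimal sketch (the Jenkins URL, the admin API token and the "-qa" suffix below are placeholders, not values from a real setup), you can POST a small Groovy script to the Script Console endpoint (/scriptText) from the shell; Jenkins.instance.copy() does the actual duplication. Depending on your Jenkins version and security settings you may also need to send a CSRF crumb.

# Hypothetical values -- replace with your own Jenkins URL, user and API token.
JENKINS_URL="https://jenkins.example.com"
AUTH="admin:your-api-token"
SUFFIX="-qa"

# POST a Groovy script to the Script Console endpoint; it copies every job
# that does not already carry the suffix.
curl -s --user "$AUTH" "$JENKINS_URL/scriptText" --data-urlencode "script=
import jenkins.model.Jenkins

def suffix = '$SUFFIX'
Jenkins.instance.items.findAll { it.name.endsWith(suffix) == false }.each { job ->
  def copyName = job.name + suffix
  if (Jenkins.instance.getItem(copyName) == null) {
    println('copying ' + job.name + ' -> ' + copyName)
    Jenkins.instance.copy(job, copyName)
  }
}
"

If you also want the copies grouped together, the same script can add each copied item to an existing ListView with its add() method.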

How To Force SSL for Tomcat with AWS ELB in Front

The problem

You have an awesome Java app that is growing like crazy and you need to stay on top of it. You start spawning servers to scale horizontally and put a reliable load balancer in front. AWS ELB is a good one, but it will not solve all your needs out of the box; you need to tweak it a little bit.

Your app is secure and you have an SSL certificate installed, but the problem is: how do you redirect or force all HTTP traffic to HTTPS?

The approach

Put an NGINX in front of Tomcat on each instance. You will say... another web server? Yes, another one. Another point of failure, but a very reliable one: NGINX is rock solid and has the smallest footprint I have ever seen in a serious web server. (Node.js is not a serious one; that is why people put NGINX in front of it.)

NGINX Config

NGINX will answer every plain-HTTP request with a 301 redirect to the HTTPS URL, so the client comes back through the ELB on port 443.

server {
  listen 80;
  server_name myhost.com;
  # add ssl settings
  return 301 https://myhost.com$request_uri;
}
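
A quick way to verify the redirect is to hit one of the instances directly from the shell (the IP below is a placeholder) and check that NGINX answers with a 301 pointing at the HTTPS URL:

curl -sI -H "Host: myhost.com" http://10.0.0.12/some/path
# Expect: HTTP/1.1 301 Moved Permanently
# Expect: Location: https://myhost.com/some/path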

Tomcat config

Now you need to touch Tomcat's server.xml configuration (located at $TOMCAT/conf/server.xml). The scheme, secure and proxyPort attributes below tell Tomcat that TLS is terminated at the load balancer, so generated redirects and request.isSecure() behave as if the request had arrived over HTTPS.

<Connector scheme="https" secure="true" proxyPort="443"
  port="8080" protocol="HTTP/1.1"
  connectionTimeout="25000"
  URIEncoding="UTF-8"
  redirectPort="8443" />

Amazon Elastic Load Balancer

You are not done yet. You still have to configure the following listeners in the AWS ELB.

 HTTP 80 -> HTTP 80 (nginx)
 HTTPS 443 -> HTTP 8080 (tomcat)
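
If you prefer to script that part, a sketch with the classic ELB CLI would look something like this (the load balancer name and the certificate ARN are placeholders, not real values):

# Hypothetical ELB name and certificate ARN -- replace with your own.
aws elb create-load-balancer-listeners \
  --load-balancer-name my-elb \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80"

aws elb create-load-balancer-listeners \
  --load-balancer-name my-elb \
  --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=8080,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-cert"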

I hope it works for you. It did for me.

Track Functionality Usage With Splunk

I have been playing lately with Splunk and let me tell you… WOW! Awesome tool. You can have N servers forwarding logs to a central server where you can use SPL to query across all of those inputs.

Here I will demonstrate how to log simple usage stats for your website that you can later aggregate with Splunk to check how your users are using your stuff.

First you have to change the way you log. Splunk likes eating key=value pairs all over the place, so feed it some:


action=user_searching_stuff , age=Some , email=some@email.com , gender=M... etc

Now that you know how to log, let's imagine a scenario like this: you have a screen with a few filters and you want to know which filters are used the most. So your log lines will look something like this.

User searching by email. Only the email field was filled.


action=user_searching_stuff , name= , email=some@email.com , gender= , ... etc

User searching by gender. Only the gender field was filled.


action=user_searching_stuff , name= , email= , gender=M , ... etc

So after users have been using it for a while, you run the following Splunk search string.


index="test" "action=user_searching_stuff" | stats count(eval(name!="")) as name, count(eval(email!="")) as email, count(eval(gender!="")) as gender

This will give you a table showing, for the user_searching_stuff action, how many times each filter was filled in. Super useful info for keeping track of how your users use your product.