- Tags : event, coding

A few weeks ago I had the opportunity to speak at the Python Vigo meetup. It was just a lightning talk, 5 minutes. Why? I really enjoy the Python Vigo meetups: they are useful, fun, and I always learn something new. So, when I read an email asking for speakers, I proposed the only Python-related subject I know something about: Jython.

I have been using Jython for years to manage Weblogic servers. To be honest, I don’t like it: without any doubt I would prefer to use Groovy, and in fact that’s what we usually do. But I thought it might be interesting for someone in the Python community.

It was my first time giving such a short talk. I’m also more used to giving talks in English. It became clear to me how hard it is to do a 5-minute talk, even for an easy subject like this.

I just wanted to say three things, if possible, without slides:

  • Don’t use Jython if you are looking for a Python performance improvement.
  • Use Jython to explore Java classes or change them dynamically using Python syntax.
  • Use Jython in combination with Java programs to load configuration dynamically, something like a DSL (Domain-Specific Language) but using Python as the language.

Probably the talk wasn’t clear for most people… I was also 15 seconds short of finishing my example. Still, some people asked me about it afterwards… so I’m not completely unhappy with the result, and I learnt how hard it is to do a good lightning talk.

In case you are interested, in this GitHub repo you can find the things I did, along with a Readme with the explanation. The talk was recorded, so you can watch it here:

Notes for the future:

  • Be careful with short talks: they are harder than long ones.
  • Use introductory slides, at least one.
  • It’s better to say less and be clear than to say more and be hard to understand.

- Tags : hackathon, event, coding

This weekend I participated in Refugal, a hackathon for refugees. It isn’t the first time I’ve participated in a hackathon, but this one was special in some way. Do you know those videos of strangers who don’t know each other but start to play together in the middle of a station? Something like this one:

Well, that is exactly what I felt at times during the weekend.

Let me start at the beginning. I arrived early and started to talk with people. A nice introduction to the hackathon by David and Edo, a game to improve creativity, and we started with the ideas.

I know I’m more a doer than a thinker. But I like to collaborate, so I proposed two ideas: a bot to help refugees and volunteers communicate, and a network of interconnected home webcams to provide real-time information to refugees and also to people at home. My presentation of the ideas was quite bad; I was mainly talking about a bot… but not everyone was a developer, so some people told me later they didn’t know what a bot was. Big fail.

My “bot” idea was the third one with the most votes. There was another one I liked a lot more (the first one), but I stayed with the bot. Very soon the team was formed: four developers (2 web, 1 mobile and myself) and one designer. It was the most technical team, probably because of my bad explanation of the idea.

We moved to a room, discussed the idea for a while in front of a whiteboard, and split the bot into different sections: Facebook Connector, Telegram Connector, Bot Logic, Design (logo, web, slides, etc.) and DevOps (my part: server, Docker, Apache Kafka, etc.). In less than 30 minutes, everything was ready and we were working on the project.

It’s hard to explain what happened after that: four guys and a designer working like one person. Lines of code, commands, systems, drawings… everything happening so fast, 133 commits in less than 10 hours. I remember the designer stopping her drawing and asking: “Hey, guys, I know some development but I’m not understanding what you are talking about. Is what you are doing complex?”. The four developers in the room said “Yes!” at the same time. You can take a look at the GitHub repo: Docker, Docker Compose, AWS, Apache Kafka, four Node.js services and one SQLite database… everything integrated and working! And the designer, what great work she did. You may have the best backend in the world, but without a good designer, people aren’t going to appreciate it. The bot web is her work, nice job!

Working as a team

From time to time someone proposed something: it was always accepted, or postponed to the end if there was time for it. And back to work, not even one argument during the whole weekend. Sometimes we even forgot to go eat (but the organization was great and the impacthub Vigo is so cool!). We also had some funny moments, smiles and a lot of complicity.

I even had time to add an SMS connector using Nexmo, the best 15 euros I’ve spent in my life. It was the first time I developed something using Node.js. I don’t like it yet, but I have to say it was a good decision for the project: lots of libraries and SDKs, easy to deploy and fast to develop.

Of course, we ran the demo during the presentation; it was great to see people sending messages from different networks and talking with our bot. So much fun! I would pay good money for a video of that moment, but everybody was too busy sending messages.

RefuBot presentation

This morning, when I terminated the EC2 instance hosting the bot, I felt a bit sad. Still, it has been great to meet such a great team this weekend. And who knows? RefuBot works and it scales, I can assure you of that. Maybe someone will discover it in the future and RefuBot will become real. If it’s really useful, that would be fair.

Photos are from this Picasa album.

- Tags : java, vim, neovim, groovy, gradle

I’ve tried so many times to replace IntelliJ or Eclipse with vim… But when it comes to Java, it’s really hard to find a real alternative to those IDEs. And when we talk about Groovy, it’s even worse. Still, I use vim a lot: editing files, writing blog posts, etc. My Chrome and Thunderbird configurations also use vim shortcuts, so I keep myself more or less trained.

Some weeks ago I discovered the blog post Use Vim as a Java IDE and I want to give vim another chance. Let’s start.

Neovim in Fedora

This is straightforward:

sudo dnf -y copr enable dperson/neovim
sudo dnf -y install neovim
sudo dnf -y install python3-neovim python3-neovim-gui

For Fedora 25 it’s even easier:

sudo dnf -y install neovim
sudo dnf -y install python2-neovim python3-neovim

For other systems, just check the official Neovim documentation.

We’ll also need astyle, the formatter Neoformat will use for Java:

sudo dnf -y install astyle

Install vim-plug

Again, this is straightforward following the official instructions:

curl -fLo ~/.local/share/nvim/site/autoload/plug.vim --create-dirs \
    https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim

Install plugins

This is where things get messy. Start by editing ~/.config/nvim/init.vim to add the plugins:

"""""""""""""""""""""""""
""""    vim-plug     """"
"""""""""""""""""""""""""
call plug#begin('~/.local/share/nvim/plugged')

" Others

Plug 'scrooloose/nerdtree', { 'on':  'NERDTreeToggle' }
Plug 'majutsushi/tagbar'

" Java development

Plug 'sbdchd/neoformat'
Plug 'artur-shaik/vim-javacomplete2'
Plug 'Shougo/deoplete.nvim', { 'do': ':UpdateRemotePlugins' }
Plug 'neomake/neomake'

" Initialize plugin system
call plug#end()

"""""""""""""""""""""""""
""""    deoplete     """"
"""""""""""""""""""""""""
let g:deoplete#enable_at_startup = 1
let g:deoplete#omni_patterns = {}
let g:deoplete#omni_patterns.java = '[^. *\t]\.\w*'
let g:deoplete#sources = {}
let g:deoplete#sources._ = []
let g:deoplete#file#enable_buffer_path = 1


"""""""""""""""""""""""""
""""  Java Complete  """"
"""""""""""""""""""""""""
autocmd FileType java setlocal omnifunc=javacomplete#Complete

"""""""""""""""""""""""""
""""     neomake     """"
"""""""""""""""""""""""""
autocmd! BufWritePost,BufEnter * Neomake

"""""""""""""""""""""""""
""""     neoformat   """"
"""""""""""""""""""""""""
augroup astyle
  autocmd!
  autocmd BufWritePre * Neoformat
augroup END

Open nvim and type :PlugInstall.

Done!

Now, if you open a Java project, you should have auto-completion, auto-formatting and linting capabilities.

I will update this blog post with new things as soon as I have them.

TODO

  • [ ] Add Groovy support.
  • [ ] Add some screenshots or recording.

- Tags : humble, OptareSolutions, management

Last year, for the first time in my life, I wrote a post with my goals. It was something new, and I must say I’ve read it many times to see how things are going. It has been a good experience and I want to repeat it this year.

2016 has been, professionally, the best year of my life. Probably also personally, with a new kid. I’m not going to talk too much about last year, it was just great for me. I achieved some of the goals, failed at others, and there were new ones… That’s life.

The beginning of 2017 has been very stormy, even hard. Lots of problems, work and changes. That’s the reason I’ve waited so long to write this blog post. The good thing about difficult times is that you learn a lot. I have discovered things about my environment and myself which I had never thought about before.

The first thing I’ve discovered is the engineers’ “ego problem”. When you are fighting hard to make a project succeed, when you solve difficult problems, when you have to make your customers understand the consequences of their decisions, or when you are training younger engineers… slowly but relentlessly the ego problem grows on you. And let’s make this clear: I’m guilty. My motto has always been: “No project where I work can fail”. Now I understand how wrong I was. A project can be a huge success for the customer, but a big failure for your company or the team because of the lost trust or the casualties… And now I know I have had some failed projects in my life.

Another good sign of this is my list of goals from last year. All of them are about me. Again, the ego problem. The reason was my fear of being out of the IT job market. Being without a project or a salary really scares me. I think this is irrational: I have a plan B in case I lose my job and can’t find a new one, and I receive job offers every week, some of them really good ones (which doesn’t help with the ego problem either). It’s probably more related to the idea of spending days without doing something useful, and to the fact that I feel lucky to do what I do.

I deserve a punishment and 2017 will be the year to start with it.

First goal: repeat “Be humble” three times every morning in front of the mirror. If at some point of the day I fail at that, I will go to the person involved and ask for forgiveness. It doesn’t matter if it’s a small failure or if the person involved is a jerk, I will do it anyway.

Second goal: make the company where I work a better place to work. This means I will stop complaining and start actively working on solving the problems I complain about. Let’s make this clear: companies must earn money. But at the same time, employees must feel involved, happy and positive. Both things are related. I’m not sure how to do this, or what real impact my actions can have in a big company. I’m just one person, I’m not a superhero, and the company where I work is good but not perfect; there is always room for improvement. My role doesn’t matter, I will just try to do as much as possible. I also want to focus on the youngest engineers. In my opinion, senior engineers should be more involved in the professional growth of the younger ones, and I want to lead by example on that.

Third goal: give something back to my local community. I can’t believe this is a goal I have at 36 years old. Starting so late on this is probably the biggest failure of my career. It became even clearer to me after an event for students last week. Two of the speakers made great points about the importance of contributing to your local community. I admire those two guys. I don’t pretend to be like them, but at least I want to do something. Starting the VigoJUG is a beginning. I hope to find other ways to contribute during the year. If you are reading this and you have any ideas, please let me know.

The last goal isn’t really a goal but a fear. Yesterday a colleague said I’m a Project Manager. It was very painful for me because I realized he was partially right, and I don’t want that role anymore.

Dilbert Has Management Potential

Last year I did a lot of development, but 2017 has been crazy and I’ve been more focused on solving problems and getting things done… So my technical work has been reduced to pull request reviews and some development time on weekends. The rest of the time: emails, phone calls, writing documents and meetings. I don’t want to spend the rest of my life doing only those things. I know they are useful and they are the way to achieve the goals I’ve described above. But I’ve spent 20 years of my life dealing with computers, training myself to learn vim shortcuts, researching how to write clean code, using the command shell, etc., and I don’t want to lose that. It makes me feel happy and fulfilled. If I don’t do it, I will end up sad and frustrated, so I have to find a way to reconcile both things. Any advice is welcome.

I want to finish this post by asking for forgiveness. Especially during my time in Accenture I did a lot of things wrong. Nowhere near as much, but I also pushed too hard at times in Optare. If I did something which has troubled you, I’m very sorry. I promise: I will be humble, I will be humble, I will be humble.

- Tags : development, devops, gke, terraform

In my previous post, Deploy on Kubernetes GKE with Terraform, we saw how to start using Kubernetes, but in a very simple way. The next thing we would like to do is persist the configuration, so we don’t need to reconfigure our bot each time we start the cluster. This post explains how to do it, starting from the configuration created in the previous one.

Again we’ll use the Leanmanager bot, but everything applies to any other system which needs to store configuration or data in a database. In the case of Leanmanager we are using Boltdb, a pure Go key/value store. Boltdb is great for development, but it doesn’t support more than one process opening the same database file, so it may be problematic if we want to have more than one Docker instance running at the same time. Still, it’s enough for our purposes, and the process would be similar for Consul, which is already in the roadmap.
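
As a reminder of why the bot needs a disk at all: it simply opens a Boltdb file and reads and writes keys in it. Here is a minimal sketch in Go (the file path, bucket and key names are made up for illustration, this isn’t the actual Leanmanager code):

package main

import (
    "fmt"
    "log"

    "github.com/boltdb/bolt"
)

func main() {
    // Open (or create) the database file. Boltdb takes an exclusive lock
    // on it, which is why only one process can use the file at a time.
    db, err := bolt.Open("/mnt/leanmanager.db", 0600, nil)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Store a value inside a bucket.
    err = db.Update(func(tx *bolt.Tx) error {
        b, err := tx.CreateBucketIfNotExists([]byte("config"))
        if err != nil {
            return err
        }
        return b.Put([]byte("channel"), []byte("#standup"))
    })
    if err != nil {
        log.Fatal(err)
    }

    // Read it back: if /mnt lives on a persistent disk, the value
    // survives pod restarts.
    db.View(func(tx *bolt.Tx) error {
        fmt.Printf("channel = %s\n", tx.Bucket([]byte("config")).Get([]byte("channel")))
        return nil
    })
}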

Create your persistent disks

If we want to persist data, we are going to need a disk; that’s common sense. In GCE we can create one very easily:

gcloud compute disks create --size 1GB leanmanager-disk

But again, we want to do it in an automated way with Terraform. Use the following file, leanmanager-disk.tf:

variable "disk_name" {
  default = "leanmanager-disk"
}

resource "google_compute_disk" "default" {
name  = "${var.disk_name}"
  type  = "pd-ssd"
  zone = "${var.region}"
  size  = "1"
}

If you want to know more about it, visit the Terraform Google_Compute_Disk reference documentation.

Tell the container about the disk

In our previous post, we launched the bot using kubectl run. This is OK for a simple configuration, but if we need something more complex, it doesn’t scale. We can create a pod, a group of one or more containers, using a YAML file like this:

apiVersion: v1
kind: Pod
metadata:
  name: leanmanager
  labels:
    name: leanmanager
spec:
  containers:
    - image: antonmry/leanmanager:latest
      name: leanmanager
      env:
        - name: LEANMANAGER_TOKEN
          value: LEANMANAGER_TOKEN_TEMPLATE
        - name: LEANMANAGER_PATHDB
          value: /mnt
      volumeMounts:
          # This name must match the volumes.name below.
        - name: leanmanager-persistent-storage
          mountPath: /mnt
  volumes:
    - name: leanmanager-persistent-storage
      gcePersistentDisk:
        # This disk must already exist.
        pdName: leanmanager-disk
        fsType: ext4

The file is self-explanatory except for the value LEANMANAGER_TOKEN_TEMPLATE. I don’t want to hardcode the token here because the file will be uploaded to GitHub. Instead, I want to use my local environment variable LEANMANAGER_TOKEN, but this isn’t supported yet in K8s, see Kubernetes equivalent of env-file in Docker.

So I’ve created a YAML template, and in the Terraform file I changed the last local-exec to:

  provisioner "local-exec" {
    command = "cp leanmanager-pod-template.yaml leanmanager.tmp.yaml && sed -i -- 's/LEANMANAGER_TOKEN_TEMPLATE/${var.LEANMANAGER_TOKEN}/g' leanmanager.tmp.yaml"
  }

  provisioner "local-exec" {
    command = "kubectl create -f leanmanager.tmp.yaml"
  }

  provisioner "local-exec" {
    command = "rm -f leanmanager.tmp.yaml"
  }

Basically, I’m replacing strings with sed. Other more sophisticated approaches are possible, such as K8s secrets or Ansible, but this is simple and enough for the task at hand.

Create the pod and test

Time to create the cluster and the pod:

terraform plan
terraform apply -var LEANMANAGER_TOKEN=$LEANMANAGER_TOKEN

The bot should connect. Now we can make some changes in the configuration and delete the pod:

kubectl delete pod leanmanager

Create it again:

kubectl create -f leanmanager.yaml

And check the status with the following commands; once it’s in the Running state, see if everything has been persisted:

kubectl get pod leanmanager
kubectl logs leanmanager

Conclusion

Persisting data in Kubernetes is quite easy, even if you do it in an automated way.

If you want to check all the files, the full project and the associated PR are on GitHub.

Not already linked but useful resources

- Tags : development, microservices, go, go-kit

Microservices don’t fit all use cases

When I started my Leanmanager bot, I chose to use the same approach (well, it’s really a framework, but that word seems to be censored if you are a gopher, so I will use “approach”) as the Kubernetes project: if it’s good for k8s, it should be good for me, and it’s also a good way to learn a bit more about Kubernetes. So I implemented all the REST APIs using github.com/emicklei/go-restful.
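
To give an idea of what that looks like, here is a minimal go-restful sketch (a hello-world style example; the /members path and handler are invented for illustration, not taken from the Leanmanager code):

package main

import (
    "log"
    "net/http"

    restful "github.com/emicklei/go-restful"
)

// getMember reads a path parameter and writes a small JSON response.
func getMember(req *restful.Request, resp *restful.Response) {
    id := req.PathParameter("member-id")
    resp.WriteEntity(map[string]string{"id": id})
}

func main() {
    // A WebService groups routes that share a root path and media types.
    ws := new(restful.WebService)
    ws.Path("/members").
        Consumes(restful.MIME_JSON).
        Produces(restful.MIME_JSON)
    ws.Route(ws.GET("/{member-id}").To(getMember))

    // Register it on the default container and serve it.
    restful.Add(ws)
    log.Fatal(http.ListenAndServe(":8080", nil))
}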

Then I had the opportunity to work a bit with go-kit. A framework, oops, no, sorry, a toolkit to create microservices. My initial impression was that it was too complicated and had too much boilerplate. Still, I would like to give it another chance: there are some interesting things that are useful for bots (support for many transports, an RPC approach, instrumentation), and my main motivation with the bot is to test new technologies and ideas, so… why not?

If you visit the Go-kit website, you will very soon jump into the stringsvc tutorial. The tutorial is awesome, but it isn’t a five-minute read and it’s a bit too complicated as a starting point with Go-kit. I recommend a slightly different approach. First, watch Go + Microservices = Go Kit by Peter Bourgon. It’s an awesome talk explaining microservices and their use cases. Clearly Peter knows a lot about the subject: microservices aren’t for everyone.

If, after the video, you think microservices fit your case, go ahead and may the Force be with you.

Go-kit addsvc example

First step, download Go-kit:

go get github.com/go-kit/kit

Then, just copy the addsvc example and download the dependencies (this may take a while, depending on what you already have in your GOPATH).

cp -r ../../go-kit/kit/examples/addsvc/ .
go get ./...

The plan is simple: modify it to fit your use case while you become more familiar with go-kit. But first, let’s try the example. Launch the server:

cd cmd/addsvc/
go run main.go

And in a different shell, launch the client:

cd cmd/addcli/
go run main.go -http.addr=:8081 1 2

If everything goes well, you will get something like:

1 + 2 = 3

Or you can use curl directly:

curl -H "Content-Type: application/json" -X POST -d '{"A":"xyz","B":"abc"}' http://localhost:8081/concat

There are some things to note here. First, the example has two methods, one to add numbers and another one to concat strings. Second, it supports several transport protocols, not only HTTP, so you can also launch the client using gRPC:

go run main.go -grpc.addr=:8082 1 2

Show me the code!

It’s time to go deeper. Open service.go. This is the file where the service definition is described, and also implemented for this example.
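
To give you a feel for it without pasting the whole file, the pattern boils down to something like this (a simplified sketch, not the literal addsvc code):

package addsvc

import (
    "context"

    "github.com/go-kit/kit/endpoint"
)

// Service describes the business logic, free of any transport concerns.
type Service interface {
    Sum(ctx context.Context, a, b int) (int, error)
    Concat(ctx context.Context, a, b string) (string, error)
}

// basicService is the plain implementation of the interface.
type basicService struct{}

func (basicService) Sum(_ context.Context, a, b int) (int, error) {
    return a + b, nil
}

func (basicService) Concat(_ context.Context, a, b string) (string, error) {
    return a + b, nil
}

// Request and response structs define the payloads shared by all transports.
type sumRequest struct{ A, B int }
type sumResponse struct{ V int }

// makeSumEndpoint adapts one Service method to the generic go-kit
// endpoint.Endpoint type; the HTTP and gRPC transports are built on top of it.
func makeSumEndpoint(s Service) endpoint.Endpoint {
    return func(ctx context.Context, request interface{}) (interface{}, error) {
        req := request.(sumRequest)
        v, err := s.Sum(ctx, req.A, req.B)
        if err != nil {
            return nil, err
        }
        return sumResponse{V: v}, nil
    }
}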

Note: I may continue this in the future if I resume my work with go-kit.

- Tags : kubernetes, gke, terraform, devops

Writing a new post after six months, and at Christmas… New year, new promises, old projects. I’ve been quite busy during the second half of 2016, but also very happy and satisfied with some personal and professional projects. No more excuses, let’s focus on this post.

Introduction

I want to deploy my leanmanager Docker image so the bot is available all the time for the team, but you can choose any Docker image you want. I want to use the Google Container Engine Kubernetes implementation and do everything as automatically as possible using Terraform.

GCE installation

First, make sure you have previously created a project in the Google Cloud console. If you don’t have the Cloud SDK, you are going to need it. It’s quite easy to install following the Google instructions:

cd ~/Software
curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-138.0.0-linux-x86_64.tar.gz
tar -zxvf google-cloud-sdk-138.0.0-linux-x86_64.tar.gz
rm google-cloud-sdk-138.0.0-linux-x86_64.tar.gz
./google-cloud-sdk/install.sh

Note: be careful, the last command modifies your .bashrc and it may cause problems.

Now, it’s time to log in:

gcloud init

And now, install kubectl, the client to manage kubernetes:

gcloud components install kubectl

Launching the service

The first step is to create the cluster. It may take some time.

gcloud container clusters create leanmanager-cluster

Make sure kubectl can access the service:

gcloud auth application-default login

And now, it’s time to launch the leanmanager image:

kubectl run leanmanager-node --image=antonmry/leanmanager:latest --env="LEANMANAGER_TOKEN=$LEANMANAGER_TOKEN"

Note: I have an environment variable LEANMANAGER_TOKEN with the token to authenticate to Slack. The bot connects automatically using a WebSocket, but if you want to expose any service, add --port=8080 to allow access to it. You will also need to create a Load Balancer; the steps are explained here.
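
For context, inside the container the bot just reads that variable from its environment, which is why passing it with --env is enough. A minimal sketch in Go (not the actual Leanmanager code):

package main

import (
    "log"
    "os"
)

func main() {
    // kubectl run --env="LEANMANAGER_TOKEN=..." makes the variable
    // visible to the process running inside the container.
    token := os.Getenv("LEANMANAGER_TOKEN")
    if token == "" {
        log.Fatal("LEANMANAGER_TOKEN is not defined")
    }
    // ... connect to Slack using the token
    log.Println("token loaded, connecting to Slack...")
}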

Clean the service

To stop the service and delete the cluster:

gcloud container clusters delete leanmanager-cluster

Install Terraform

Our next step is to automate the whole process. To do that, we’ll use Terraform.

If you don’t have it, the first step is to download it from here and install it. For Linux:

curl -O https://releases.hashicorp.com/terraform/0.8.2/terraform_0.8.2_linux_amd64.zip
unzip terraform_0.8.2_linux_amd64.zip

Now move it to a folder which is in your PATH, in my case:

mv terraform ~/bin/
echo terraform >> ~/bin/.gitignore

The last command is there because I keep ~/bin in GitHub and I don’t want to upload a file as big as the terraform executable.

Now you should be able to use terraform on your system. If you’ve never used it before, it’s a good moment to read the Getting Started guide.

Download GKE credentials

Follow these instructions to download the credentials file:

  1. Log into the Google Developers Console and select a project.
  2. Click the menu button in the top left corner, and navigate to “IAM & Admin”, then “Service accounts”, and finally “Create service account”.
  3. Provide a name and ID in the corresponding fields, select “Furnish a new private key”, and select “JSON” as the key type.
  4. Clicking “Create” will download your credentials.
  5. Rename it to account.json. Make sure you don’t publish this file, for instance on GitHub (add it to .gitignore).

Create the cluster using Terraform

In the same folder where you have your account.json, create a Terraform file like leanmanager.tf:

variable "region" {
  default = "europe-west1-d"
}

provider "google" {
  credentials = "${file("account.json")}"
  project     = "wwwleanmanagereu"
  region      = "${var.region}"
}

resource "google_container_cluster" "primary" {
  name = "leanmanager-cluster"
  zone = "${var.region}"
  initial_node_count = 1

  master_auth {
    username = "mr.yoda"
    password = "testTest1"
  }

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring"
    ]
  }
}

Check what it’s going to create:

terraform plan

Review the output and, if it’s OK, launch it!

terraform apply

If everything goes well, you will see a message like this:

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

And you can check it with:

gcloud container clusters list

If you want to access the cluster with kubectl, you need to log in first:

gcloud container clusters get-credentials leanmanager-cluster --zone europe-west1-d
kubectl cluster-info

This step can be added to the leanmanager.tf inside the resource block:

provisioner "local-exec" {
    command = "gcloud container clusters get-credentials ${var.cluster_name} --zone ${google_container_cluster.primary.zone}"
}

Launch our Docker instance

Once you are logged in with kubectl, it’s exactly the same as before:

kubectl run leanmanager-node --image=antonmry/leanmanager:latest --env="LEANMANAGER_TOKEN=$LEANMANAGER_TOKEN"

But you can also do it with Terraform by adding this snippet at the beginning:

variable "LEANMANAGER_TOKEN" {
      default = "USE YOUR OWN TOKEN"
}

And after the previous local-exec:

provisioner "local-exec" {
    command = "kubectl run leanmanager-node --image=antonmry/leanmanager:latest --env=LEANMANAGER_TOKEN=${var.LEANMANAGER_TOKEN}"
}

And execute terraform passing the variable:

terraform apply -var LEANMANAGER_TOKEN=$LEANMANAGER_TOKEN

Another option would be to read the variable directly from the environment, but Terraform requires environment variables to be named with the TF_VAR_ prefix and I’m already using LEANMANAGER_TOKEN for other things. More info here.

Clean everything

With Terraform it’s really easy, just:

terraform destroy

Not already linked but useful resources


Older posts are available in the archive.