- Tags : career, Optare

Today is a very sad day for me: it’s time to say goodbye to all my colleagues at Optare. I announced it internally four weeks ago, so I should have had enough time to process it, but I haven’t yet :-(

I started working at Optare in 2007 as an intern. It was a great experience. I learnt so much, and I will always be thankful for that. After 4 years and 5 months, I left the company to work as a project manager at a big consultancy firm. I didn’t like it. That experience pushed my ethics and professionalism beyond their limits and, after that, I was completely lost.

And again, Optare came to the rescue, offering me the chance to be the tech lead of a new team: NetApps. Cool people call it intrapreneurship or something like that. It has been the major success of my professional career, and I don’t deserve all the credit. I resigned from the tech lead role on the first day to be just another software engineer on the team. It was a team effort from the very beginning and, because of that, it’s so special. It isn’t one person against the world but a team of highly skilled engineers collaborating and doing great things.

Once I discovered it was possible to have a happy team, a happy customer and earn money at the same time, I started to evangelize about it. And that’s how I ended up as technical director. I learnt a lot in this role too. It’s easier to build things from scratch than to change them once they are built but, sometimes, small pushes are enough to change things in the long term.

So, after 6 years and 1 month, I decided to look for new challenges again. 10 years and 6 months in total. Wow! I changed a lot. So did the company. And I’m very proud of both things. In the company’s case, it’s amazing how it has evolved. In mine, it’s amazing how my wife and my manager have been able to keep me on the right path for so long.

And no! I wasn’t unhappy at Optare. That isn’t the reason for leaving. In fact, I am extremely happy with this company. Honestly, I can’t think of any better place to work. Maybe there are some teams I wouldn’t recommend for a specific person, but most of them are great. It’s a friendly, modern environment, full of interesting technologies applied to real and challenging architectures.

Also, it has a brilliant future ahead. For a company that has grown so quickly, there will always be ups and downs. But Optare is a modern company: medium-sized, highly specialized and full of great talent. Any telecom operator in the world will pay good money once they know what Optare can offer.

But 11 years is a long time. I need a new challenge, and I found one where I think I can have a real impact on things I care about: the tech community and my area. It’s also an opportunity to work with some cool open-source technologies and keep learning.

So, it hurts a lot but it’s time to say goodbye. I’m really going to miss you.

- Tags : opensource, community, career, jug

As a software engineer and regular participant in different tech communities, it has always been a priority for me to get the company where I work involved in the tech community. I really believe it’s a win-win for the company and for the community, but I also know it’s quite complex.

Have you ever done something for the first time? If someone appears from nowhere and tells you that you suck at it, you will probably lose interest very soon. Even if that person approached you with good intentions, or was telling the truth, timing is crucial here. This happens all the time to companies opening up to the community: they stop the process, and it’s a pity.

As a co-organizer of local JUGs and conferences, I have been asked about this several times by different companies. It’s hard to answer; there are so many things involved. So I prepared a list of possible answers to help you on the path to collaborating with the community.

Be clear about your reasons

There are a lot of reasons for a company to start collaborating with the tech community. Just choose whatever makes sense for your company, but the reasons must be valuable. In this case, pro-bono work is fine for individuals but not for companies. There is nothing wrong with analysing it as an investment.

There are mainly three reasons for most companies:

  1. Recruiting. If you want to hire the best engineers you can find, go to the tech community. They aren’t the best only because of their technical skills but also because of their soft skills: they know how to speak in public, build a successful network of collaborators, organize events, etc. All those things are quite practical in any company.

  2. Growing your own engineers. Having your own people attend meetups, conferences, katas, etc. will expose them to different ideas, show them what’s working in the industry, and so on. If they participate actively (as speakers or organizers), they will also grow as professionals and stay busy and motivated, which helps with retention.

  3. Finding new business opportunities. I’ve lost count of how many of those I have found since I started to collaborate. This is a real thing. Just let your engineers speak with others and they will find solutions to their problems. That’s what engineers do.

You may have other reasons. It doesn’t matter. Just make sure they are clear to your company and that you act according to them.

How do we start?

Baby steps. Never start with a big program or a big budget. It’s very important to be cautious in the beginning. Most people don’t know your company, or they have a rather negative idea of it. That’s normal: you have never collaborated with the community before and you have probably rejected candidates, fired employees, etc., and nobody is sharing the good things. It’s time to change that, but it will take time. Be patient.

My advice here is to find a non-profit local community and ask the organizers. You will be surprised by how helpful they are, without any kind of hidden interest or expected payback.

It should be local: it’s better to start locally and grow from there than to jump directly into national or international events. Your size doesn’t matter. People are connected. When you go to an international conference, the organizers will have local contacts and they will ask around, so it’s better if you do your homework first. Everything will be easier.

Also, choose your first community carefully. It should be related to the technologies you are using, and it should be non-profit. Most tech communities are great and have no hidden agendas, but that isn’t always the case, especially when they aren’t related only to technology (entrepreneurship, blockchain, etc.). Those communities aren’t bad, but they aren’t a good start, because someone may take advantage and pursue their own interests. If the organizers aren’t helpful or just ask for money, jump to the next one.

You will have time in the future to come back and start a deeper collaboration once you know who is who in the community (and no, sadly, Twitter popularity doesn’t help with that).

Should we organize our own group/event?

Some companies like to start with their own group or conference. It usually doesn’t work well in the short term. My advice is to start contributing before creating something from scratch. Again, it isn’t about your size or expertise: it takes time to do things well, so start small and grow from there. Think about what’s best for your interests. Ego never helps.

It’s a lot better to name some of your engineers as ambassadors to the tech community. It’s a part-time role. Their mission is to establish the link between both worlds: helping the communities from the company side and promoting the community inside it. Make sure they can make decisions and that you listen when they give their opinion. If a possible sponsorship is evaluated only by HR, you are missing important information.

What can we do?

Once you have a connection with the organizers, they will start to propose different types of collaboration. If they are good organizers, they’ll do it not only for their own community but for others too. Don’t be surprised if the Java guys ask for sponsorship for a PHP conference ;-)

The easiest thing is event sponsorship, but most companies are doing it wrong. It isn’t about giving money in exchange for a list of services (a talk slot, your logo published, etc.). Those things are fine, but they aren’t enough.

If your company is going to sponsor a conference, there are things you should do:

  • Make sure some of your company’s engineers attend the conference and that the organizers know about them, so they can introduce everybody to them. It’s sad when a company pays money and there is nobody to talk to at the conference. There is no better ambassador than your own employees, and social networking is the most important thing at a conference.
  • Ask the organizers if your company can help in some way: finding speakers or other sponsors, promoting the conference or recording the talks. Once you start to collaborate in this way, organizers will start to contribute back and speak nicely about your company to others. That’s the type of publicity you want.
  • Promote the event and talks on your social media. It’s free, easy to do and has a great impact on your company’s image.

One of the best things at a conference is sending some of your engineers to give a talk and share their experiences. Everybody loves that; people will provide great, real feedback and will get interested. It’s a win-win and quite easy.

Social media and blogs

If you don’t show something to the world, it doesn’t exist. Social media is important. Most HR departments are quite familiar with LinkedIn, but most engineers get their news from Twitter. So start promoting your projects, events, communities, etc. and show a more human side of your company. But be careful: when in doubt, don’t publish. Nobody wants to deal with trolls.

If you don’t want to pollute your social media by mixing messages for engineers and customers, create a new account specifically for that. It helps a lot, but make sure you keep it alive!

Some companies are also launching engineering blogs. It’s a great idea which helps to promote your brand, but it also creates some kind of shared identity in your company and gives deserved credit to your engineers. The only problem: it takes time. If you do it, plan it carefully for the long term.

Open Source

Open source is another good way to collaborate with the community. Just open a GitHub organization and let your engineers know they can publish software there. It will need some review and validation process, but it’s a great approach, especially if they start to work in the open instead of just publishing the final outcome.

Not only will it help you collaborate with the community, it will also have a positive impact on the company, improving quality, performance and talent retention. That deserves another blog post.

Summary

There are a lot of ways and reasons to collaborate with the community. Ask other companies: it’s worth it. But it isn’t as easy as you may think. So start small and ask people with experience and good intentions. They will help.

- Tags : Kubernetes, Amazon, Docker

Introduction

This post explains how to deploy a Kubernetes cluster on Amazon. We want to automatically update Route 53 to use our own domain, and use an AWS ELB to load-balance traffic to our pods. We’ll also use AWS Certificate Manager (ACM), so our pods expose plain HTTP endpoints internally but HTTPS with a proper certificate externally.

Installation

Install awscli and kops.

export bucket_name=test-kops
export KOPS_CLUSTER_NAME=k8s.test.net
export KOPS_STATE_STORE=s3://${bucket_name}

aws s3api create-bucket --bucket ${bucket_name} --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1
aws s3api put-bucket-versioning --bucket ${bucket_name} --versioning-configuration Status=Enabled

kops create cluster \
--node-count=1 \
--node-size=t2.medium \
--zones=eu-west-1a \
--dns-zone test.net \
--cloud-labels="Department=TEST" \
--name=${KOPS_CLUSTER_NAME}

kops edit cluster --name ${KOPS_CLUSTER_NAME}

Add to the end:

  additionalPolicies:
     node: |
       [
           {
               "Effect": "Allow",
               "Action": "route53:ChangeResourceRecordSets",
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": "route53:ListHostedZones",
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": "route53:ListResourceRecordSets",
               "Resource": "*"
           }
       ]

and create the cluster by executing:

kops update cluster --name ${KOPS_CLUSTER_NAME} --yes
kops rolling-update cluster

It takes some time. Use kops validate cluster to check when it’s ready.
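If you want to wait for the cluster without re-running the command by hand, a minimal shell loop works (just a sketch; KOPS_CLUSTER_NAME is the variable exported earlier):

```shell
# Poll the cluster state until kops reports it as ready.
until kops validate cluster --name ${KOPS_CLUSTER_NAME}; do
  echo "Cluster not ready yet, retrying in 30 seconds..."
  sleep 30
done
```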

Deploy the dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy &
kops get secrets kube --type secret -oplaintext

Open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

Click on Token and paste the output of the following command:

kops get secrets admin --type secret -oplaintext

Configure DNS

Note: avoid route53-mapper, it’s deprecated. The kops documentation is outdated.

Obtain the zone ID of your hosted zone (if you don’t have one, you should create it first; the Route 53 documentation explains how):

aws route53 list-hosted-zones-by-name --output json --dns-name "test.net." | jq -r '.HostedZones[0].Id'

In our case, it returns /hostedzone/AAAAAA.
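Note that the API returns the full path, while later (for external-dns’s --txt-owner-id) we only use the bare ID. A plain shell parameter expansion is enough to strip the prefix:

```shell
# The Route 53 API returns the ID as "/hostedzone/AAAAAA"; keep only the
# part after the last slash to get the bare zone ID.
ZONE_PATH="/hostedzone/AAAAAA"
ZONE_ID="${ZONE_PATH##*/}"
echo "${ZONE_ID}"
```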

Create a new file external-dns.yml and update it with your data at the end:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions"] 
  resources: ["ingresses"] 
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.opensource.zalan.do/teapot/external-dns:latest
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=test.net 
        - --provider=aws
          #- --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
        - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
        - --registry=txt
        - --txt-owner-id=AAAAAA

and deploy it:

kubectl apply -f external-dns.yml

Test your configuration with an example:

Create an AWS certificate for the service:

aws acm request-certificate \
--domain-name nginx.test.net \
--validation-method DNS \
--idempotency-token 1234 

and save the CertificateArn. We’ll use it later.

You will need to validate it. The easiest way is from the AWS web console, as explained in the official documentation.
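If you prefer the CLI over the web console, describe-certificate can show you the DNS record ACM expects (a sketch; the ARN below is a placeholder, replace it with the CertificateArn you saved):

```shell
# Placeholder ARN: use the CertificateArn returned by request-certificate.
CERT_ARN="arn:aws:acm:eu-west-1:888888:certificate/AAAAAA-BBBBB-CCCCC-DDDDD"

# Print the CNAME record ACM expects to exist before it issues the certificate.
aws acm describe-certificate \
  --certificate-arn "${CERT_ARN}" \
  --query 'Certificate.DomainValidationOptions[0].ResourceRecord'
```

Create that CNAME in your hosted zone and, after a while, the certificate status will change to ISSUED.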

Create nginx-d.yml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
          name: http

and nginx-svc.yml, with the domain you would like to use and your ACM certificate ARN:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx.test.net.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-1:888888:certificate/AAAAAA-BBBBB-CCCCC-DDDDD
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
    targetPort: 80
  - name: https
    port: 443
    targetPort: http
  selector:
    app: nginx

and deploy them:

kubectl apply -f nginx-d.yml -f nginx-svc.yml

It takes some minutes. Once the pods are ready, you should be able to open http://nginx.test.net
and https://nginx.test.net in your browser and see the nginx welcome page.
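To see what was actually created, ask Kubernetes for the ELB hostname allocated to the service and check the DNS record external-dns wrote (standard tools, nothing specific to this setup):

```shell
# EXTERNAL-IP shows the hostname of the ELB created for the service.
kubectl get service nginx

# The record created by external-dns should resolve to that ELB.
dig +short nginx.test.net
```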

Clean everything

Delete the ACM certificate and execute:

kops delete cluster --name k8s.test.net --yes


- Tags : spock, testing

This article has been published on DZone with editorial revision, so I recommend you read it there.

Introduction and motivation

Recently I gave a talk at my local Java User Group about unit testing. Part of the talk was about some popular libraries you can use in your Java project. I reviewed JUnit4, JUnit5 and the Spock framework. Many of the attendees were quite surprised by the differences. In this post, I will summarize the most commented topics: asserts, parametrized tests and mocking.

I always like to demonstrate concepts with examples and live coding, so I chose a simple algorithm: a Fibonacci number calculator. If you don’t know it, it just generates numbers which are the sum of the two previous ones in the series: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377.

I used the typical (and very slow) implementation:

    private static int fibonacci(int n) {
        if (n <= 1) return n;
        return fibonacci(n - 1) + fibonacci(n - 2);
    }

JUnit4

I started by explaining JUnit4. It makes sense because it’s the most popular library and the base for many others. I started with assertTrue and then moved to more advanced usages, including assertEquals.

    @Test
    public void improvedFibonacciTestSimple() {
        FibonacciWithJUnit4 f = new FibonacciWithJUnit4();
        assertEquals(3, f.fibonacci(4));
    }

If the assertion fails, it gives you an error like:

java.lang.AssertionError:
Expected :3
Actual   :2

Not very spectacular but quite useful.

The next thing was to show how to write a parametrized test, a nice feature which is very useful for testing algorithms. In JUnit4 it’s quite tricky: you need to run the class with @RunWith(Parameterized.class) and create a Collection annotated with @Parameters.

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][]{
                {0, 0}, {1, 1}, {2, 1}, {3, 2}, {4, 3}, {5, 5}, {6, 8}
        });
    }

Then we create some fields and a constructor:

    private int fInput;
    private int fExpected;

    public ParametrizedFibonacciJUnit4(int input, int expected) {
        fInput = input;
        fExpected = expected;
    }

and finally, we can use the assertEquals:

    @Test
    public void test() {
        FibonacciWithJUnit4 f = new FibonacciWithJUnit4();
        assertEquals(fExpected, f.fibonacci(fInput));
    }

It’s quite verbose and, if the test fails, you get a message which doesn’t clearly indicate the iteration or the parameters used (but your IDE will probably help with that):

java.lang.AssertionError:
Expected :0
Actual   :1

I didn’t explain mocking here because an external library such as Mockito is usually required when you want to use mocking with JUnit4. Mockito is great, but I didn’t have enough time to cover it.

JUnit5

Since September 2017, JUnit5 is considered stable, and it should be your choice over JUnit4 for several reasons: it has better support for Java 8 and lambdas, it’s compatible with JUnit4 (you can have both, which is great for migrating progressively) and it provides new runners and better integrations.

I repeated the same process. First, show an assertEquals:

    @Test
    public void bestFibonacciTestSimple() {
        FibonacciWithJUnit5 f = new FibonacciWithJUnit5();
        Assertions.assertEquals(3, f.fibonacci(4));
    }

There are important advances in other asserts, for instance timeouts, but for assertEquals with integers it’s practically the same. The message is also similar when there is an error:

org.opentest4j.AssertionFailedError:
Expected :3
Actual   :2

Where we find important changes is in parametrized tests. First, we need to use the @ParameterizedTest annotation, and we can specify a display name using { and } to reference values such as the index or the arguments.

    @ParameterizedTest(name = "run #{index} with [{arguments}]")

Now we can define our test. We start by defining the entries for the test function with the @CsvSource annotation. Each item will be mapped to the parameters of the test function, in this case input and expected.

    @CsvSource({"1, 1", "4, 3"})
    public void test2(int input, int expected) {
        FibonacciWithJUnit5 f = new FibonacciWithJUnit5();
        Assertions.assertEquals(expected, f.fibonacci(input));
    }

This is a lot better than the JUnit4 implementation. Also, if a test fails, we get a better message indicating the difference, the index of the failing iteration and the parameters used.

Finally, for mocking it’s the same as with JUnit4: you normally use an external library, so I didn’t explain it.

Spock framework

The last one was the Spock framework. It’s based on Apache Groovy. If you don’t know Groovy, it’s a language for the JVM which interacts very well with Java. It allows you to write less code in a clearer way. It’s very powerful. Some years ago we started to use it for "non-critical" development: tests, dependency management, continuous integration, load testing and anywhere we need some configuration file while avoiding XML, JSON or similar formats. We continue to develop the core of our software in Java, and that isn’t a problem because both languages play very well together. If you know Java, you know Groovy…​ so we have the best of both worlds.

Writing a test in Spock is quite different; it looks like this:

    def "Simple test"() {
        setup:
        BadFibonacci f = new BadFibonacci()

        expect:
        f.fibonacci(1) == 1
        assert f.fibonacci(4) == 3
    }

Basically, we can use def where we don’t care about the type. The name of the function can be defined between quotation marks, which allows us to use better names for our tests. We have some special labels such as setup:, when:, expect:, and:, etc. to define our test in a more descriptive and structured way. And we have the power assert, which is part of the language itself, providing nice messages:

Condition not satisfied:

f.fibonacci(4) == 2
| |            |
| 3            false
BadFibonacci@23c30a20

Expected :2

Actual   :3

It provides all the information: the returned value (actual), the expected value, the function, the parameter, etc. assert in Groovy is really handy.

Now it’s the turn of the parametrized test. It looks like this:

    def "parametrized test"() {
        setup:
        BadFibonacci f = new BadFibonacci()

        expect:
        f.fibonacci(index) == fibonacciNumber

        where:
        index | fibonacciNumber
        1     | 1
        2     | 1
        3     | 2
    }

After showing this, I heard some 'oooh's in the audience. The magic of this code is that you don’t need to explain it! There is a table in the where: section and the values in expect: are automagically replaced in each iteration. If there is a failure, the message is crystal clear:

Condition not satisfied:

f.fibonacci(index) == fibonacciNumber
| |         |      |  |
| 2         3      |  4
|                  false
BadFibonacci@437da279

Expected :4

Actual   :2

Then I introduced mocks and stubs very briefly. A mock is an object you create in your test to avoid using a real object. For example, you don’t want to make real web requests or print a page in your tests, so you can use a mock of an interface or another object.

    Subscriber subscriber = Mock()

    def "mock example"() {
        when:
        publisher.send("hello")

        then:
        1 * subscriber.receive("hello")
    }
Basically, you create the subscriber as a Mock and then you can verify calls to its methods. The 1 * is another nice feature of Spock: it specifies how many times the message should be received. Cool, right?

On some occasions, you need to define what the methods of your mocks return. For that, you can create a stub.

    def "stub example"() {
        setup:
        subscriber.receive(_) >> "ok"

        when:
        publisher.send("message1")

        then:
        subscriber.receive("message1") == 'ok'
    }

In this case, with the >> notation we define that the receive method should return ok independently of the parameter (_ means any value). The test passes without any problem.

Conclusions

I don’t like to recommend one library over another: all of them have their use cases. It’s pretty clear we have great options in Java, and I just gave some examples. Now it’s your turn to decide which is better for you. The only thing I can say is: write tests and master your library of choice; it will make you a better developer!

If you want to take a deeper look at the examples, you will find them in this GitHub repository. Enjoy!

- Tags :

Third year in a row writing my proposals for the new year, yeah! Again, I’m not going to spend time reviewing the previous year; I leave that for “internal use”.

There is something I discovered in 2017 which wasn’t planned (or correctly estimated): my family requires more time from me now than ever. “Family first” is a motto I (and my employer) deeply respect, so I had to restrict a lot of things in my life to focus on that. Now I’m more aware of it, so I will continue with the “family first” approach, but I will try to do it in a more sustainable way: I have to focus more, be more realistic and get more help.

My first proposal for 2018 is to recover my healthy lifestyle. I have never in my life done so little sport as in the past year: from 4-5 days per week in 2016 to 1 (or even fewer) in 2017. As a result, I also have to lose weight. I’m even surprised I was able to maintain my positive attitude with such a low “stress burning” rate. This proposal isn’t very original, but it’s important.

Second goal: my work at Optare Solutions. I will continue for a while (but not forever) as Technical Director. It’s a challenging role which I accepted to make things happen even if, ironically, I see it as a step back in my career. I’m very happy with some “small victories” from the past year (which are a team effort, not mine): opening new offices in Ourense, sponsoring the VigoJUG and the GDGDevFest, providing the facilities for a k8s workshop organised by VigoLabs, improvements in the workflow of some teams, the Abbilia initiative and, most importantly, starting to think as a group (or even better, a big family) about how to improve the company by embracing continuous improvement instead of making individual efforts in silos or launching programs that are too big.

Yet big cultural changes take time. Optare is a great place to work, and the bar for making a real difference is really high because it’s a company with a long history of successes and well-executed projects, so changing the formula, even to improve it, is a complex task. I think we are very lucky to have a company like this in Galicia, and I will continue to help it as much as possible.

I will also continue contributing to NetApps, an Optare team where I have been working for years now and where I can share time and experiences with some of the best engineers I’ve ever met. My work as Technical Director is quite lonely, so this part-time role allows me to continue doing some “real world” work and recharging batteries when needed. I’m very thankful for that. This is going to be a great year for this team; I’m really excited and looking forward to the new challenges.

Last, but not least, I want to continue contributing to the local community:

Co-organizing the VigoJUG has been a great experience. I would like to continue with a clear goal: my vision (I’m not the only organizer) for the JUG is a group of friends who share time and knowledge. I personally would like to keep it that way. Making something bigger is a temptation, but I don’t believe it would work in the long term.

I would also like to help launch the CoruñaJUG. I will be more present in that city in 2018 but, don’t be afraid, I will keep cheering for the Celta soccer team.

Last year I spent some time helping to launch the VigoTech Alliance. It seems VigoTech as a project is finished; most of the proposals to improve it have been rejected or ignored, which is fine: it means it’s good as it is. I will continue supporting the Alliance when needed, but I don’t expect to spend too much time on it in 2018 (though I will be happy to be wrong if there are new initiatives).

I have also spoken at several local meetups such as ForoDeEmprego, VigoLabs, GDGDevFest, Librecon, PythonVigo and, of course, VigoJUG, and participated in hackathons (Refugal, GPUL). I’m very happy with this. It’s funny that I did the opposite of the usual path: I started at international conferences speaking in English and then moved to local ones speaking in my mother tongue. I met great people and did things I never thought I would do (such as a motivational talk). I will try to continue on that path as much as possible, but I will probably need to slow down a bit.

Finally, last year I tried to contribute to some big open source projects (Kafka, Elastic, Lucene). It makes sense for several reasons, but I failed. It requires constancy, and I could only dedicate time very occasionally. Outside the office, I spent most of the time playing with frameworks, testing new tools or reading books. I also did some MOOCs (Scala and Oracle Cloud). Learning for the simple pleasure of learning is my passion. I would like to make some serious PRs in 2018, but it’s going to be impossible :-(

Instead, I will try to update the blog more frequently with technical content. I’ve been reading a lot about JVM performance in the last months, I have some experience and it’s an interesting subject for the blog. I will also try to attend a couple of international events related to the JVM. And, only if I have time, I’ll try to make some open source contributions in that field.

In summary, my proposal for 2018 is to be more focused on some things and to let others pass. Happy 2018!

- Tags : event, coding

Some weeks ago I had the opportunity to speak at the Python Vigo meetup. It was just a lightning talk: 5 minutes. Why? I really enjoy the Python Vigo meetups; they are useful, fun and I always learn something new. So, when I read an email asking for speakers, I proposed the only Python-related subject I know something about: Jython.

I have been using Jython for years to manage WebLogic servers. Being honest, I don’t like it; without any doubt I would prefer to use Groovy, and in fact that’s what we usually do. But I thought it might be interesting for someone in the Python community.

It was my first time giving a talk this short. I’m also more used to giving talks in English. It became clear to me how hard it is to give a 5-minute talk, even on an easy subject like this.

I just wanted to say three things, if possible, without slides:

  • Don’t use Jython if you are looking for a Python performance improvement.
  • Use Jython to explore Java classes or change them dynamically using Python syntax.
  • Use Jython in combination with Java programs to load configuration dynamically, something like a DSL (Domain Specific Language) but using Python as the language.

Probably, the talk wasn’t clear for most people… Also I miss 15 seconds more to finish my example. Yet, some people asked me some things later related to it… so I’m not completely sad with the result and I learnt how hard it’s to do a good lightning talk.

In case you are interested, in this GitHub repo you can find the things I did, along with a Readme explaining them. The talk was recorded, so you can watch it here:

Notes for the future:

  • Be careful with short talks; they are harder than long ones.
  • Use introductory slides, at least one.
  • It’s better to say less and be clear than to say more and be hard to understand.

- Tags : hackathon, event, coding

This weekend I participated in Refugal, a hackathon for refugees. It isn’t the first time I’ve participated in a hackathon, but this one was special in some way. Do you know those videos of strangers who don’t know each other but start to play together in the middle of a station? Something like this one:

Well, that is exactly what I felt at times during the weekend.

Let me start at the beginning. I arrived early and started to talk with people. A nice introduction to the hackathon by David and Edo, a game to boost creativity, and we started with the ideas.

I know I’m more of a doer than a thinker. But I like to collaborate, so I proposed two ideas: a bot to help refugees and volunteers communicate, and a network of interconnected home webcams to provide real-time information to refugees and also to people at home. My presentation of the ideas was quite bad; I was mainly speaking about a bot… but not everyone was a developer, so some people told me afterwards that they didn’t know what a bot was. Big fail.

My “bot” idea got the third most votes. There was another one I liked a lot more (the winner), but I stayed with the bot. Very soon the team was formed: four developers (2 web, 1 mobile and myself) and one designer. It was the most technical team, probably because of my bad explanation of the idea.

We moved to a room, discussed the idea for a while in front of a whiteboard, and split the bot into different sections: Facebook Connector, Telegram Connector, Bot Logic, Design (logo, web, slides, etc.) and DevOps (my part: server, Docker, Apache Kafka, etc.). In less than 30 minutes, everything was ready and we were working on the project.

It’s hard to explain what happened after that: four guys and a designer working as one person. Lines of code, commands, systems, drawings… everything happening so fast, 133 commits in less than 10 hours. I remember the designer stopping her drawing and asking: “Hey, guys, I know some development but I’m not following what you are talking about. Is what you are doing complex?”. The four developers in the room said Yes! at the same time. You can take a look at the GitHub repo: Docker, Docker Compose, AWS, Apache Kafka, four Node.js services and one SQLite database… Everything integrated and working! And the designer, what great work she did. You may have the best backend in the world, but without a good designer, people aren’t going to appreciate it. The bot web is her work, nice job!
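The design described above is a classic fan-in: every network connector normalises incoming messages onto one shared Kafka topic, and a single bot-logic consumer handles them regardless of origin. Here is a minimal in-memory sketch of that pattern, with a plain queue standing in for Kafka and all names invented for the example (none are from the actual repo):

```python
# Fan-in sketch: many connectors -> one queue -> one bot-logic consumer.
import queue

incoming = queue.Queue()  # stands in for the shared Kafka topic

def connector(network, text):
    """A connector (Facebook, Telegram, SMS...) normalises a message."""
    incoming.put({"network": network, "text": text})

def bot_logic():
    """Consume and answer messages regardless of their source network."""
    replies = []
    while not incoming.empty():
        msg = incoming.get()
        replies.append(f"[{msg['network']}] echo: {msg['text']}")
    return replies

connector("telegram", "hola")
connector("sms", "help")
print(bot_logic())  # ['[telegram] echo: hola', '[sms] echo: help']
```

The nice property of this layout is that adding a new network only means adding a new connector; the bot logic never changes.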

Working as a team

From time to time someone proposed something: always accepted, or postponed to the end if there was time for it. And back to work; not even one argument the whole weekend. Sometimes we even forgot to go eat (but the organization was great and the impacthub Vigo is so cool!). We also had some funny moments, smiles and a lot of camaraderie.

I even had time to add an SMS connector using Nexmo, the best 15 euros I’ve ever spent. It was my first time developing something with Node.js. I still don’t like it, but I have to say it was a good choice for the project: lots of libraries and SDKs, easy to deploy and fast to develop with.

Of course, we ran the demo during the presentation; it was great to see people sending messages from different networks and talking with our bot. So fun! I would pay good money for a video of that moment, but everybody was too busy sending messages.

RefuBot presentation

This morning, when I terminated the EC2 instance hosting the bot, I felt a kind of sadness. Still, it was great to meet such a great team this weekend. And who knows? RefuBot works, and it scales. I can assure you of that. Maybe someone will discover it in the future and RefuBot will become real. If it’s really useful, that would be fair.

Photos are from this Picasa album.

- Tags : java, vim, neovim, groovy, gradle

I’ve tried so many times to replace IntelliJ or Eclipse with vim… But when it comes to Java, it’s really hard to find a real alternative to those IDEs. And when we speak about Groovy, it’s even worse. Yet I use vim a lot: editing files, writing blog posts, etc. My Chrome and Thunderbird configurations also use Vim shortcuts, so I keep myself more or less trained.

Some weeks ago I discovered the blog post Use Vim as a Java IDE, and I want to give it another chance. Let’s start.

Neovim in Fedora

This is straightforward:

sudo dnf -y copr enable dperson/neovim
sudo dnf -y install neovim
sudo dnf -y install python3-neovim python3-neovim-gui

For Fedora 25 it’s even easier:

sudo dnf -y install neovim
sudo dnf -y install python2-neovim python3-neovim

For other systems, just check the official Neovim documentation.

We’ll also need the astyle formatter (used later by Neoformat):

sudo dnf -y install astyle

Install vim-plug

Again, this is straightforward, following the official instructions:

curl -fLo ~/.local/share/nvim/site/autoload/plug.vim --create-dirs \
    https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim

Install plugins

This is where things get messy. Start by editing ~/.config/nvim/init.vim to add the plugins:

"""""""""""""""""""""""""
""""    vim-plug     """"
"""""""""""""""""""""""""
call plug#begin('~/.local/share/nvim/plugged')

" Others

Plug 'scrooloose/nerdtree', { 'on':  'NERDTreeToggle' }
Plug 'majutsushi/tagbar'

" Java development

Plug 'sbdchd/neoformat'
Plug 'artur-shaik/vim-javacomplete2'
Plug 'Shougo/deoplete.nvim', { 'do': ':UpdateRemotePlugins' }
Plug 'neomake/neomake'

" Initialize plugin system
call plug#end()

"""""""""""""""""""""""""
""""    deoplete     """"
"""""""""""""""""""""""""
let g:deoplete#enable_at_startup = 1
let g:deoplete#omni_patterns = {}
let g:deoplete#omni_patterns.java = '[^. *\t]\.\w*'
let g:deoplete#sources = {}
let g:deoplete#sources._ = []
let g:deoplete#file#enable_buffer_path = 1


"""""""""""""""""""""""""
""""  Java Complete  """"
"""""""""""""""""""""""""
autocmd FileType java setlocal omnifunc=javacomplete#Complete

"""""""""""""""""""""""""
""""     neomake     """"
"""""""""""""""""""""""""
autocmd! BufWritePost,BufEnter * Neomake

"""""""""""""""""""""""""
""""     neoformat   """"
"""""""""""""""""""""""""
augroup astyle
  autocmd!
  autocmd BufWritePre * Neoformat
augroup END

Open nvim and type :PlugInstall.

Done!

Now, if you open a Java project, you should have auto-completion, auto-formatting and linting capabilities.

I will update this blog post with new things as soon as I have them.

TODO

  • [ ] Add Groovy support.
  • [ ] Add some screenshots or recording.


Older posts are available in the archive.