Author Archives : erbrito

AWS Services / Amazon Simple Storage Service (S3) Object Tagging

S3 Object Tagging is a new feature introduced by AWS in December 2016. It is a really nice feature that lets you track costs on your invoice by your tags.

For example, you can add a tag by client name, and a tag by department: development, qa, or production.

I was migrating some objects from folders; however, uploading a folder (or a list of files) with tags was not available through aws-sdk-java version 1.11.87.

So, the solution is to add an ObjectTaggingProvider (similar to the ObjectMetadataProvider) and modify TransferManager to accept the ObjectTaggingProvider as a parameter in the uploadDirectory method.

Then, on the client, we can implement the taggingProvider:
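The original snippet is not reproduced here, but as a minimal sketch, the tag-selection logic such a provider could use can be isolated in a pure helper. The `tagsForKey` method and its key convention (client and department taken from the key prefix) are assumptions for illustration; inside the SDK's ObjectTaggingProvider you would turn these pairs into `Tag` objects and return them wrapped in an `ObjectTagging` from `provideObjectTags(UploadContext)`.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TaggingExample {

    // Hypothetical convention: keys look like "<client>/<department>/...",
    // e.g. "acme/production/report.csv" -> {client=acme, department=production}.
    public static Map<String, String> tagsForKey(String key) {
        Map<String, String> tags = new LinkedHashMap<>();
        String[] parts = key.split("/");
        tags.put("client", parts[0]);

        // Only keep the departments we bill on; default to development.
        String dept = parts.length > 1 ? parts[1] : "development";
        if (!dept.equals("development") && !dept.equals("qa") && !dept.equals("production")) {
            dept = "development";
        }
        tags.put("department", dept);
        return tags;
    }

    public static void main(String[] args) {
        System.out.println(tagsForKey("acme/production/report.csv"));
    }
}
```

In the real provider, each entry of this map would become a `new Tag(key, value)` added to the list passed to `new ObjectTagging(list)`.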


Boost quality and productivity using Grunt as a continuous development tool

Continuous development of the Front End Tier based on Grunt (node.js)

The development of the user interface (front-end tier) requires science and art to work together. To make it work, the process and tools are built around Grunt continuous integration.

The Benefits:

Using Grunt as a continuous development tool for the front-end tier, I improved the quality by:

  • using the best practices
  • gathering metrics with static analysis on CSS, HTML & JS
  • supporting BDD (Behavior-Driven Development) with unit tests and code coverage of the JavaScript code.

And boosted the productivity:

Development Process with Grunt


1) On any change to the code, when the files are saved they are compiled or processed:
  • Stylus -> CSS files
  • CoffeeScript -> JS files

2) The validation tools are launched:

  • JSHint
  • CSSLint
  • lint5

3) The unit tests are launched

4) The files are copied to the development web server

5) The browser reloads the changes.

The plugins are:

For the stylesheets:

  1. Stylus
  2. CSSLint (static code quality analysis)

For the JavaScript:

  1. CoffeeScript
  2. JSHint (static code quality analysis)
  3. Jasmine (unit tests)
  4. grunt-template-jasmine-istanbul (code coverage)

For the HTML:

  1. lint5 (static code quality analysis)

To handle the development process:

  1. Watch
  2. Connect (enables a webserver)
  3. Proxy-Connect (enables a proxy to invoke remote ajax calls)

To handle the delivery process:

  1. htmlmin
  2. Copy
  3. cssmin
  4. regex-replace
  5. uglify

Unit Testing?

Even better, I followed a Behavior-Driven Development process. The platform I used on top of Grunt is Jasmine. Why? It is simple to configure, runs on the PhantomJS browser engine, and makes it easy to add code coverage.

Coding JavaScript?

Even better, I used CoffeeScript. The benefits: better code quality and a productivity boost from using the good parts of JavaScript. The generated code is 100% JSLint-clean, the readability improves, and because the code must be compiled, it is guaranteed to be free of syntax errors and typos.

Lessons learned.

  1. Always start with BDD: testing with Jasmine yields a better design, ensures the testability of the code, and encourages best practices such as the MVC pattern on the front end.
  2. CSSLint provides great feedback to improve the quality of the CSS.

Next steps on my process:

  1. Integrate Grunt with a CI server (Jenkins).
  2. Integrate the metrics reports produced by Grunt into a historical dashboard: SonarQube.
  3. Implement applications with I18N (internationalization).


Using Grunt as a continuous development tool, with the right process and tools for developing the user interface, improves both productivity and quality. In less than two weeks, I implemented the process, learned two new languages (CoffeeScript & Stylus), and delivered a great project. Don’t hesitate to start using Grunt for continuous integration of the UI.





Business Model and the Enterprise Motivation Model

Business modeling tools can sometimes become very complex.


Can we create a good business model using any of the following meta-models? In this post, I will introduce four approaches, show some of the diagrams those approaches use, and finally conclude with a different approach that MDA tools provide us.


The Business Motivation Models approaches are:

IBM approach to the logical progression through the Business Motivation Model diagram.


Obviously, it is possible to create a good business model using any of those techniques; however, complexity can arise if we don’t plan a road map for accomplishing our goal. Following a sequence of steps can help us.

TOGAF is more centered on the IT architecture supporting the business services; on the other hand, the OMG specification BMM version 1.1 focuses more on the Means and the Ends. In the middle sits the IBM Rational library implementing OMG’s BMM specification, mapping the Ends to Use Cases so that, later on, each use case can be mapped to its implementation (this is the power of the MDA tools).
Those models take a good approach and can be very useful. However, I prefer the approach of the Enterprise Business Motivation Model. This complex model is strong enough that it even contains the popular Business Model Canvas.
Enterprise Business Model

Below I show some diagrams that represent different views of the Business Motivation Model. As you will see, the complexity is huge!

[Gallery: Business Motivation Model diagram views]


Don’t underestimate the power of MDA tools to simplify complex tasks such as creating a business model. And please, try to answer (or ask) the question: what on earth is the business motivation behind the use case, task, or activity we are performing?

How to backup uncommitted changes locally using Subversion: svn status, sed

Discover the power of Subversion and bash with sed, tar and xargs:

svn status provides the information about the modified files:

$ svn status

  • using sed, we can get the list of modified files.

$ mkdir backup1
$ svn status|sed -e s/"M "// -e s/^?.*// -e '/^$/ d'

$ svn status|sed -e s/"M "// -e s/^?.*// -e '/^$/ d'|xargs -n 1 -I {} echo .{} backup1/

  • The final command:

$ svn status|sed -e s/"M "// -e s/^?.*// -e '/^$/ d'|sed s/\///g|xargs -n 1 -I {} cp ./{} backup1/

  • or a cleaner one:

$ svn status|sed -n s/"M "//p |sed s/\///g |xargs -n 1 -I {} cp ./{} backup2/

  • instead of copying the files and losing the folder structure, we can use the tar program:

$ svn status|sed -n s/"M "//p |sed s/\///g |xargs -I {} tar -u -f backup2/archive.tar {}

  • If I want to see the time, or use it to name the file, I can use the date command:

$ echo $(date "+%Y-%m-%d@%T")
2014-01-22@16:50:47

$ svn status|sed -n s/"[M|A] "//p |sed s/\///g |xargs -I {} tar -u -f backup2/$(date "+%Y-%m-%d")-$(date +%s).tar {}

To revert the changes:

$ svn status|sed -e s/"M "// -e s/^?.*// -e '/^$/ d'|sed s/\///g|xargs -n 1 -I {} svn revert {}


  • Going further: a script that commits the changes; before committing, it backs up the changed files, then adds the diff, and finally adds the log to the backup file:

# We need the folder where the backup file will be saved: first param ($1)
# $ svn status|sed -n s/"[M|A] "//p |sed s/\///g |xargs -I {} tar -u -f backup2/$(date "+%Y-%m-%d")-$(date +%s).tar {}
# We need the message we will use on the commit: second param ($2)
# We need to know the revision number: svn info provides it.
# We need to know the difference:
# $ svn diff > diff_2014-01-24-1390590293.txt

fileNameWithoutExt=$(date "+%Y-%m-%d")-$(date +%s)
tarFileName=${fileNameWithoutExt}.tar
fullTarFileNameWithPath=$1/$tarFileName
svn status|sed -n s/"[M|A] "//p |sed s/\///g |xargs -I {} tar -u -f $fullTarFileNameWithPath {}
svn diff > diff_${fileNameWithoutExt}.txt
tar -u -f $fullTarFileNameWithPath ./diff_${fileNameWithoutExt}.txt
svn commit -m "$2"
revisionNumber=$(svn info|sed -n 's/Revision: //p')
svn update
svn log -r $revisionNumber > ${fileNameWithoutExt}.log
tar -u -f $fullTarFileNameWithPath ./${fileNameWithoutExt}.log



The cloud computing monitoring performance tools. Java approaches.

Cloud computing performance monitoring tools are relatively new, so how do we monitor cloud computing?

cloud computing monitoring performance tools

The end-user experience is crucial to keeping an application hosted in the cloud alive. That is the reason to keep an eye on the performance of our applications. Watch out: maybe your application is not completely hosted in the cloud; however, some modules or computing bottlenecks may be part of it. Monitoring the cloud is a challenge, as cloud computing is hosted by third parties most of the time.

Monitoring cloud computing using distributed performance agents is the approach some tools take.

Java, JEE or J2EE application servers are now more often hosted in the cloud, and we have multiple service providers with huge experience. As an example, consider the Red Hat service branded OpenShift, which runs JBoss application servers by default. I have been testing it, and it is possible to install monitoring tools inside the server.

Is the cloud hiding the sun? Some sunshine is appearing: performance tools for cloud computing are now part of the ITSM scope.

In daily IT operations, performance monitoring is moving from in-house to the cloud. Tool vendors are aware, and they are providing better cloud solutions.






The future of Bitcoin and virtual currencies.

Will Bitcoin transactions be taxed?


What is the future of virtual currency? Will Bitcoin transactions be taxed?


The future of Bitcoin is not defined yet; however, the trends show that its use is becoming more popular. People are talking more about Bitcoin, hardware to mine it is improving, and traders are exchanging it. Will governments start taxing Bitcoin?
Just take a look at this article, where Kelly Phillips examines the UK’s position on Bitcoin.


Bitcoin logo
