Category Archives: IT

Information technology


AWS Services / Amazon Simple Storage Service S3 object tagging

S3 Object Tagging is a new feature introduced by AWS in December 2016. It is a really nice feature that lets you track the costs on your invoice by your tags.

For example, you can add a tag for the client name and a tag for the environment: development, qa, or production.

I was migrating some objects from folders; however, the ability to upload a folder (or a list of files) with tags was not available through aws/aws-sdk-java version 1.11.87.

So, the solution is to add an ObjectTaggingProvider (similar to the ObjectMetadataProvider) and modify the TransferManager to accept the ObjectTaggingProvider as a parameter in the uploadDirectory method.

Then, on the client, we can implement the taggingProvider:
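
Here is a minimal sketch of what that provider can look like, assuming the uploadDirectory overload that accepts an ObjectTaggingProvider (the shape proposed in the pull request referenced below, which later SDK releases ship); the bucket name, key prefix, local folder and tag values are placeholders:

import java.io.File;
import java.util.Arrays;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectTagging;
import com.amazonaws.services.s3.model.Tag;
import com.amazonaws.services.s3.transfer.MultipleFileUpload;
import com.amazonaws.services.s3.transfer.ObjectTaggingProvider;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;

public class TaggedFolderUpload {

    public static void main(String[] args) throws InterruptedException {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        TransferManager transferManager = TransferManagerBuilder.standard()
                .withS3Client(s3)
                .build();

        // The provider is invoked once per file, so the tags could also be
        // derived from the file exposed by the upload context.
        ObjectTaggingProvider taggingProvider = uploadContext -> new ObjectTagging(
                Arrays.asList(
                        new Tag("client", "acme"),       // placeholder client tag
                        new Tag("environment", "qa")));  // placeholder environment tag

        // Upload the whole folder; every object is tagged by the provider.
        MultipleFileUpload upload = transferManager.uploadDirectory(
                "my-bucket",                 // placeholder bucket name
                "migrated",                  // placeholder key prefix
                new File("/tmp/to-migrate"), // placeholder local folder
                true,                        // include subdirectories
                null,                        // no ObjectMetadataProvider needed here
                taggingProvider);

        upload.waitForCompletion();
        transferManager.shutdownNow();
    }
}

Every object copied from the folder then lands in S3 already carrying its cost-allocation tags, so the invoice can be filtered by client and environment.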

References:

http://docs.aws.amazon.com/AmazonS3/latest/dev/object-tagging.html

https://github.com/aws/aws-sdk-java/issues/1005

https://github.com/aws/aws-sdk-java/pull/1011


Boost quality and productivity using Grunt as a continuous development tool

Continuous development of the front-end tier based on Grunt (Node.js)

Developing the user interface (the front-end tier) requires science and art to work together. To make it work, the process and tools I rely on are built around Grunt as a continuous integration tool.

The Benefits:

Using Grunt as a continuous development tool for the front-end tier, I've improved the quality by:

  • applying the best practices
  • collecting metrics from static analysis of the CSS, HTML and JS
  • supporting BDD (Behavior-Driven Development) with unit tests and code coverage of the JavaScript code.

And boosted the productivity:

Development Process with Grunt



1) On every change, when the files are saved, they are compiled or processed:
  • Stylus -> CSS files
  • CoffeeScript -> JS files

2) The validation tools are launched:

  • JSHint,
  • CSSLint,
  • lint5

3) The unit tests are launched

4) The files are copied to the development web server.

5) The browser reloads the changes.

The plugins are:

CSS

  1. Stylus
  2. CSSLint (static code quality analysis)

JavaScript

  1. CoffeeScript
  2. JSHint (static code quality analysis)
  3. Jasmine (unit tests)
  4. grunt-template-jasmine-istanbul (code coverage)

Html

  1. lint5 (static code quality analysis)

To handle the development process:

  1. Watch
  2. Connect (enables a web server)
  3. Proxy-Connect (enables a proxy to invoke remote Ajax calls)

To handle the delivery process:

  1. htmlmin
  2. Copy
  3. cssmin
  4. regex-replace
  5. uglify

Unit Testing?

Even better: I followed a Behavior-Driven Development process. The framework I used with Grunt is Jasmine. Why? It is simple to configure, it runs on top of the PhantomJS browser engine, and it makes it easy to add code coverage.

Coding JavaScript?

Even better: I used CoffeeScript. The benefits are better code quality and higher productivity, since it embraces the good parts of JavaScript. The generated code is 100% JSLint-clean, readability improves, and because the code must be compiled, syntax errors and typos are caught before the code ever runs.

Lessons learned.

  1. Always start with BDD: testing with Jasmine leads to a better design, ensures the testability of the code, and encourages best practices such as the MVC pattern on the front end.
  2. CSSLint provides great feedback to improve the quality of the CSS.

Next steps in my process:

  1. Integrate Grunt with a CI server (Jenkins).
  2. Integrate the metrics reports provided by Grunt into a historical dashboard: SonarQube.
  3. Implement applications with I18N (internationalization).

Conclusion:

Using Grunt as a continuous development tool, with the right process and tools for developing the user interface, improves both productivity and quality. In less than two weeks I implemented the process, learned two new languages (CoffeeScript and Stylus), and delivered a great project. Don't hesitate to start using Grunt for continuous integration of the UI.



How to back up uncommitted changes locally using Subversion: svn status, sed

Discover the power of Subversion and Bash with sed, tar and xargs.

svn status provides the information about the modified files:

$ svn status

  • using sed, we can get the list of modified files.

$ mkdir backup1

$ svn status|sed -e s/"M "// -e s/^?.*// -e '/^$/ d'

$ svn status|sed -e s/"M "// -e s/^?.*// -e '/^$/ d'|xargs -n 1 -I {} echo .{} backup1/

  • The final command:

$ svn status|sed -e s/"M "// -e s/^?.*// -e '/^$/ d'|sed s/\///g|xargs -n 1 -I {} cp ./{} backup1/

  • or a cleaner one:

$ svn status|sed -n s/"M "//p |sed s/\///g |xargs -n 1 -I {} cp ./{} backup2/

  • instead of copying the files and losing the folder structure, we can use the tar program:

$ svn status|sed -n s/"M "//p |sed s/\///g |xargs -I {} tar -u -f backup2/archive.tar {}

  • If I want to see the time, or use it to name the file, I can use the date command:

$ echo $(date "+%Y-%m-%d@%T")
2014-01-22@16:50:47

$ svn status|sed -n s/"[M|A] "//p |sed s/\///g |xargs -I {} tar -u -f backup2/$(date "+%Y-%m-%d")-$(date +%s).tar {}

To revert the changes:

$ svn status|sed -e s/"M "// -e s/^?.*// -e '/^$/ d'|sed s/\///g|xargs -n 1 -I {} svn revert {}

  • References:

http://www.cyberciti.biz/faq/linux-unix-bsd-xargs-construct-argument-lists-utility/

http://stackoverflow.com/questions/2193584/copy-folder-recursively-excluding-some-folders

http://www.grymoire.com/Unix/Sed.html

  • Going further: a script that commits the changes; before committing, it backs up the changes, then adds the diff, and finally adds the log to the backup file:

# We need the folder where the backup file will be saved: the first parameter
# $ svn status|sed -n s/"[M|A] "//p |sed s/\///g |xargs -I {} tar -u -f backup2/$(date "+%Y-%m-%d")-$(date +%s).tar {}
# We need the message we will use on the commit: the second parameter
# We need to know the revision number: svn info provides it.
# We need to know the difference:
# $ svn diff > diff_2014-01-24-1390590293.txt

fileNameWithoutExt=$(date "+%Y-%m-%d")-$(date +%s)
tarFileName=${fileNameWithoutExt}.tar
fullTarFileNameWithPath=$1/$tarFileName

# Back up the modified and added files into the tar archive
svn status|sed -n s/"[M|A] "//p |sed s/\///g |xargs -I {} tar -u -f $fullTarFileNameWithPath {}

# Add the diff of the working copy to the backup
svn diff > diff_${fileNameWithoutExt}.txt
tar -u -f $fullTarFileNameWithPath ./diff_${fileNameWithoutExt}.txt

# Commit, update, then capture the log of the committed revision and add it to the backup
svn commit -m "$2"
svn update
revisionNumber=$(svn info|sed -n 's/Revision: //p')
svn log -r $revisionNumber > ${fileNameWithoutExt}.log
tar -u -f $fullTarFileNameWithPath ./${fileNameWithoutExt}.log

 



Cloud computing performance monitoring tools: Java approaches

Cloud computing performance monitoring tools are relatively new, so how do we monitor the cloud?


The end-user experience is crucial to keeping an application hosted on the cloud alive, and that is the reason to keep an eye on the performance of our applications. Watch out: maybe your application is not completely hosted on the cloud; however, some modules, or a computing bottleneck, may be part of the cloud. Monitoring the cloud is a challenge, as cloud computing is hosted by third parties most of the time.

Monitoring cloud computing using distributed performance agents is the approach some tools are taking.

Java, JEE or J2EE application servers are now more often hosted on the cloud, and we have multiple service providers with huge experience. As an example, we can look at the Red Hat service branded OpenShift, which ships JBoss application servers by default. I've been testing it, and it is possible to install monitoring tools inside the server.

Is the cloud hiding the sun? Some sunshine is appearing: performance tools and cloud computing are now part of the scope of ITSM.

In daily IT operations, performance monitoring is moving from in-house to the cloud. Tool vendors are aware of this, and they are providing better cloud solutions.
