April 25th, 2010

by Ivan St. Ivanov

Considering data stores

This article is not from this week, but anyway, it is very interesting and profound, so I cannot help sharing it with you.

What is a data store? Well, this is a repository where you can store data. The author’s idea is to provide a quick overview and benchmark results for a wide range of data stores.

There are certainly several types of data stores – relational databases, object databases, document-oriented stores, etc. For each of these types there are different libraries and even technologies that help the developer work with the data storage. Each data store type and each concrete implementation has its strengths and weaknesses. The author starts by stressing that there is no silver bullet that solves all data storage problems. However, most people (99.99999% according to him) go for the relational database solution, and most of those (no percentage is mentioned) use JPA as the layer between JDBC and the application code. The author calls this no-thought solution SOD – same old data store.

I must admit that I am in the SOD camp – I don’t usually think much when I have to develop a simple application; I go directly to relational DB + JPA. However, on a bigger project I participated in, we had to choose between several persistence representations and technologies, so I had the opportunity to get acquainted with some of them (a colleague of mine even keeps insisting that JCR is the best solution, even though we chose JAXB and XML :-)).

Anyway, the bottom line is that it is good that people like Joe write such reviews, so that next time we’ll make a well-thought-out decision and not go directly to the SOD.

Groovy closure in pure Java

‘Closures in Java’ is a long discussed topic in the community. And it seems that we are going to have them in JDK 7.

But first, what is a closure? Well, this can be a very broad and complex topic, but the shortest answer is: a function pointer. Hmm, that does not seem quite clear, right? OK, imagine that you can define a block of code and pass it as a parameter to ordinary methods. For example, you can define a generic iterate() method over a collection, which receives the algorithm that handles the collection data.
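Until closures arrive in the language itself, this idea can be sketched in today’s Java with an interface and an anonymous class – all the names below (Block, iterate) are made up for illustration:

```java
import java.util.Arrays;
import java.util.List;

public class IterateExample {
    // The "function pointer": a block of code that handles one element
    interface Block<T> {
        void call(T element);
    }

    // A generic iterate() method that receives the algorithm as a parameter
    static <T> void iterate(List<T> items, Block<T> block) {
        for (T item : items) {
            block.call(item);
        }
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ann", "Bob");
        // The anonymous class plays the role of the closure
        iterate(names, new Block<String>() {
            public void call(String name) {
                System.out.println("Hello, " + name);
            }
        });
    }
}
```

The anonymous class is noticeably more verbose than a real closure would be, which is exactly why people keep asking for language support.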

Anyway, we should not wait for JDK 7 to start using closures. Most dynamically typed languages have them, and some of those languages are built on top of the JVM, which means they “compile” to Java byte code.

The article described here is an example of how you can implement a Groovy method that takes a closure parameter, and then create and pass that parameter from within pure Java code. Short but useful hint! :-)

The ABC of JDBC, part 1

JDBC was discussed in an earlier article on this blog. So if you liked it but are still not very familiar with the topic, then DZone’s series is just for you.

In the first, interview-like installment Daniel Rubio explains the very basic terms of JDBC: how it works, what a connection is and how you create one, what a statement is, how connections are pooled, what a DB driver is, etc.

5 things you didn’t know about… the Java Collections API, part 1

IBM developerWorks continues Ted Neward’s series “5 things you didn’t know about…”. Now it is time for the Java Collections API. As this is a vast topic, several parts will be devoted to it.

In this first part the author starts with the most obvious things – how you can use the API for common tasks: converting an array into a collection, iterating, the for-each loop and its usage with collections, handy collection algorithms, and extending a collection.

I personally thought that using a plain old array is better in terms of performance. Well, not exactly. Think about all the tedious code you have to write for a single operation on an array. For example, simply dumping it to the console requires at least several lines of code, not to mention extending it with one or more elements. The collections API handles all of this for you. Not only that, but the API designers have worked around the traps you can fall into if you decide to work with the array yourself – concurrency, for example.
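To make that concrete, here is a small sketch of the conveniences the article covers (the values are made up for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class CollectionsTour {
    public static void main(String[] args) {
        // Converting an array into a collection
        Integer[] numbers = {3, 1, 2};
        List<Integer> list = new ArrayList<Integer>(Arrays.asList(numbers));

        // "Dumping to the console" is a one-liner: List has a readable toString()
        System.out.println(list);

        // Extending with one or several elements - no manual array copying
        list.add(4);
        list.addAll(Arrays.asList(5, 6));

        // Handy algorithms from the Collections class
        Collections.sort(list);
        System.out.println(Collections.max(list)); // 6

        // The for-each loop works on any Iterable
        int sum = 0;
        for (int n : list) {
            sum += n;
        }
        System.out.println(sum); // 21
    }
}
```

Every line above would take a loop or a System.arraycopy() call if you insisted on working with the bare array.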

So, my advice is: use collections as much as possible. It’s easy, it’s fast and your code will be clean and easy to understand and maintain.

JavaScript in Java with Mozilla Rhino

Last but not least, Steven Haines, the editor-in-chief of the InformIT Java guide, created a quick series this week on running JavaScript code inside Java programs using the Rhino library from Mozilla.

The use case presented here is that a certain web application needed the same validation to run on the server as well as on the client side. The easiest solution would be to develop the validation logic twice – once in a language suitable for the browser (e.g. JavaScript) and once in Java, our server-side language of choice. After a while you realize that this is not a good idea: developing something twice is one of the worst practices, one we always try to avoid (though I must admit I’m still doing it :-().

So the solution is simple: write the validation code just once and call that same code in both places. It’s easy to run JavaScript on the client side – all modern browsers understand it. But how do you do it in Java? Well, Mozilla Rhino does it all and InformIT knows it all.

After reading the first part, don’t forget to go on with the next ones by clicking the Next links at the bottom right of the articles. There you will find out how you can combine the results of different functions, how you can build a JavaScript entity using JSON and convert it to a JavaBean with GSON, and finally how you can organize your validation code.

April 18th, 2010

by Ivan St. Ivanov

More on test coverage

Last week we discussed the topic of the missing compiler in dynamic languages (in particular in JavaScript). If you remember, according to the author the lack of a compiler can easily be made up for by having close to 100% test coverage.

This week I read two blog posts circling around the test coverage topic. The first one insists that 100% test coverage may sometimes give you a false idea about your code quality: if you think that with 100% coverage you are done and safe, you are wrong. You surely have to go to the next level – manual or automated integration tests, tests that exercise your code the way it will really be executed, and not the way you think it will be executed (which is what you express in the unit tests).

The next blog post is a kind of sequel (though written by a different person) to last week’s one on JavaScript, this time from the Groovy language perspective. It explains why preventing the syntax or type mismatch errors that are usually caught by the compiler in statically typed languages does not require writing additional tests: the mainstream tests that you would write anyway in a compiled language already do the job of the compiler.

So, do not fear, write tests, be cool (or groovy? :-)) and don’t worry about the rest.

Flow-managed persistence in Spring Web Flow

This is an article for those of you who know and use Spring Web Flow, a framework that makes it easy to build wizard-like web applications.

The author of the IBM developerWorks article focuses on the persistence mechanisms in a Web Flow application. In a typical application the user navigates through several pages and views, each of which presents different data that needs to be stored somehow during the flow. However, the user input is usually not committed to the database on each click of a Next button, but rather at the end, when the whole process is finished.

So the author shows different techniques for controlling the transactions in both atomic and non-atomic web flows (the latter commit a portion of the data on each transition). It was interesting for a person like me (I have general knowledge of Spring, but none of Spring Web Flow). Readers with interest and experience in the latter will surely appreciate the article and find it useful.

Understanding JSF 2.0 Flash scope

This is a very interesting introductory article explaining how to use the so-called flash scope in JSF 2.0. The scopes in the Java EE 6 world come from the Contexts and Dependency Injection technology; they basically determine how long an object is managed by the container. The most usual scope is the request scope. The problem there is that after the request is over (which usually happens on each redirect to another page), the data submitted with the previous request is lost. However, sometimes you need that data, and you don’t want to use the session for storing data that only has to survive between two requests. If you just need your data to be alive in the next request, you put it in the flash scope.

How to do that and how to get it from there? Go on and read the article and you’ll find out. :-)

Quick introduction to Hibernate validator

Most of the applications developed today (web and desktop alike) ask the user to enter some kind of data. This automatically implies that the user input has to be validated against defined rules (or constraints) like “not null”, “at least 10 characters long”, “at most 100 characters long”, etc. The different [web application] frameworks have provided different approaches to data validation.

Before Hibernate Validator was born, all the existing validation libraries were bound to a certain layer. Some of them did the validation in the presentation layer, others in the business layer, and some applications even relied on the validations done by the database engine.

With Hibernate Validator, developers are free to put the validation code wherever they like. The only thing they need to do before that is to declare the metadata (the validation constraints) on the domain objects that will eventually be validated – and leave the rest to the framework.
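Hibernate Validator itself needs the Bean Validation jars, but the constraints-as-metadata idea can be sketched in plain Java with a custom annotation and a bit of reflection. Everything below (the @NotNull annotation, the validate() method) is a toy illustration of the principle, not the real API:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class ToyValidator {
    // A toy constraint annotation, in the spirit of Bean Validation constraints
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface NotNull {}

    // The domain object only carries the metadata; the framework does the rest
    static class User {
        @NotNull String name;
        String nickname; // unconstrained
    }

    // Checks every annotated field and collects the violations
    static List<String> validate(Object bean) throws IllegalAccessException {
        List<String> violations = new ArrayList<String>();
        for (Field field : bean.getClass().getDeclaredFields()) {
            field.setAccessible(true);
            if (field.isAnnotationPresent(NotNull.class) && field.get(bean) == null) {
                violations.add(field.getName() + " must not be null");
            }
        }
        return violations;
    }

    public static void main(String[] args) throws Exception {
        User user = new User();
        System.out.println(validate(user)); // [name must not be null]
        user.name = "Ivan";
        System.out.println(validate(user)); // []
    }
}
```

The point of the design is that the same metadata can be read by any layer – presentation, business or persistence – which is exactly the freedom the article describes.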

Now, the article presented here gives you a quick start with the Hibernate Validator library, which, above all, happens to be the reference implementation of the bean validation specification.

April 11th, 2010

Five things you didn’t know about Java Serialization

Ted Neward has started a new series of articles on IBM developerWorks. The series is entitled “5 things you didn’t know about…” and the first installment is about Java object serialization.

Most Java developers must be familiar with the topic, as even the first tutorials teach how to serialize an object [graph] to disk or to the network. At least I knew that in order for my class to be serializable, it had to implement the Serializable marker interface and all of its fields had to be serializable as well. Then I needed to call a special method on an ObjectOutputStream to store the object, or on an ObjectInputStream to load it back. Later on, I learned why it is so important to have the static serialVersionUID field.
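The basics described above fit in a few lines; here is a minimal round-trip (serializing to a byte array instead of disk or network, purely for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {
    // Implements the Serializable marker interface; all fields are serializable
    static class Point implements Serializable {
        private static final long serialVersionUID = 1L; // the version ID field
        int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) throws Exception {
        // Store the object (here into a byte array)
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(new Point(3, 4));
        out.close();

        // Load it back
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        Point copy = (Point) in.readObject();
        System.out.println(copy.x + "," + copy.y); // 3,4
    }
}
```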

Well, this is nothing compared to what I found in Ted’s article. Do read it if you want to broaden your scope, and follow the series either on developerWorks or on this blog for the next 5 things that you did not know.

When you can’t throw an exception

Another article from IBM developerWorks, this time by Elliotte Rusty Harold. It deals with the case when you have to override or implement a method from a superclass or interface that does not declare a checked exception – and for some reason you hit an exceptional situation in your implementation.

You can’t throw the exception, as your code will not even compile – an overriding method is not allowed to throw checked exceptions that are not declared in the superclass. You should not swallow the exception either – that is a very bad practice, which may lead to unpredictable behavior. Nor is it a good idea to throw unchecked exceptions in such situations.

So what can you do? According to the author: either separate the code that throws the exception from the code that naturally belongs to the overriding method, or define your own interface.
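Both options can be sketched briefly. Runnable is a good example of an interface whose method declares no checked exceptions; the CheckedTask interface and loadConfig() method below are made-up names for illustration:

```java
import java.io.IOException;

public class NoThrowOverride {
    // Option 2: define your own interface that declares
    // the checked exception you actually need
    interface CheckedTask {
        void run() throws IOException;
    }

    // Option 1: keep the risky code in a separate method,
    // outside the overriding method that cannot declare it
    static String loadConfig() throws IOException {
        return "config"; // a real version would read a file and might throw
    }

    public static void main(String[] args) throws Exception {
        // java.lang.Runnable.run() declares no checked exceptions,
        // so this would not compile:
        //   Runnable r = new Runnable() {
        //       public void run() throws IOException { ... } // compile error
        //   };

        // With our own interface the exception is part of the contract
        CheckedTask task = new CheckedTask() {
            public void run() throws IOException {
                System.out.println(loadConfig());
            }
        };
        task.run();
    }
}
```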

This is a very interesting article, as it not only gives a tip on how to handle such situations, but also discusses the question of using checked exceptions at all. During my life as a Java developer I have been a fan of both schools. In the beginning I defined interfaces whose methods all threw a certain checked exception, whether it was needed or not. Then I started hating them and switched to throwing unchecked exceptions, using checked ones only when absolutely necessary (you don’t deal with input/output in 100% of your code, do you?). Of course, the truth, as always, is somewhere in between – use checked exceptions only when something has to be dealt with at development time.

OSGi and web applications

This week I recommend two articles on nearly the same topic: how you can use OSGi when you develop web applications. So far, at least, my idea of OSGi was that it can be used only by some low-level server-side components, and that web applications could only be built upon the OSGi infrastructure, not use it directly. Well, the good news is that the people from Glassfish, Equinox and [hopefully] the other OSGi vendors or adopters are not as narrow-minded as I used to be :-)

The first article discusses how a web service built and deployed on Glassfish can call a method defined and implemented by an OSGi bundle and registered as an OSGi service. As Glassfish itself is implemented using OSGi (it is based on Apache Felix), it is perfectly fine for your web module’s MANIFEST.MF file to contain the necessary entries so that it imports other bundles. Basically, this is all the magic your web service needs in order to call OSGi services defined in the same container.

The second article opened my eyes even wider. As I mentioned earlier, my impression was that the existing OSGi implementations only offer the basic functionality required by the OSGi specification. What I didn’t know was that Equinox, and most probably other OSGi implementations, ship bundles that start an embedded web server. So if your bundle has a Java Servlet inside and you import the right packages, you will be able to develop a sample web application directly inside Equinox. As this still sounds very awkward to me (sorry again for the narrow-mindedness :-)), please feel encouraged to comment on this topic.

JavaScript vs Java (again)

One of the people from whom I have learned a great deal in the area of object-oriented design in Java – Misko Hevery – explains why he has become a fan of JavaScript. He starts with the most popular myth from the camp of statically typed language fans: that the lack of a compiler makes the dynamically typed world unreliable. According to the author, the fact that you can write unit tests in JavaScript makes up for the missing compiler. If you have close-to-100% code coverage of your JavaScript code, which is run (successfully) on a more than regular basis, then you should not worry about syntax or other errors that might appear at runtime but could have been caught by a compiler.

After that, the author goes through the most common advantages of dynamically typed languages – you write less code to achieve the same obvious things, closures are part of the language, simplicity, duck typing. Finally he tries to bust another myth – that a program written in a dynamic language cannot scale to a big team of developers.