by Ivan St. Ivanov
Java EE 7
The first day of the JavaOne conference started with a tutorial talk in which Arun Gupta decomposed Java EE 7 from soup to nuts. For the most part he compared, via SWOT analysis, the different servers that implement Java EE 7 as well as the IDEs that support it. On the server side, besides the usual suspects GlassFish and WildFly, I was surprised to hear that there is a new kid on the block providing a developer preview version. It comes from a Korean company and its name is JEUS. Unfortunately, Arun could not show much of it for various reasons: it does not work on MacOS (so he used a VM with Fedora installed), it has no integration with any IDE, and he could not deploy and start even a simple web application. The support that the company provided on its forums amounted to a single person answering at random.
The other two servers are very well known to most of us, but still here are some remarks from the presentation:
- GlassFish is always on the bleeding edge: being the Java EE reference implementation, it adopts all the new features first
- It has a great command line as well as a REST interface for monitoring and management, and it comes bundled with NetBeans
- WildFly, on the other hand, is commercially backed by Red Hat, and 99.9% of its code is the same as in the commercial version, JBoss EAP, which makes migrating applications written for the free offering very easy
- It has a much more active community around it (GlassFish having almost none) and very well established contribution processes
- Clustering in GlassFish is not tested and most probably does not work as expected
- A threat for both servers is the Tomcat + Spring combination
In the IDE area we all know that IntelliJ IDEA rocks; unfortunately, however, its Java EE development modules are not free. If you are an open source project developer, teacher, or student, you can request a free license, though. Although widely used, Eclipse is the hardest of the three IDEs to develop Java EE in, which is why Red Hat has created the JBoss Tools bundle, which eases Eclipse development and also provides support for the various JBoss projects. NetBeans offers a very pleasant experience out of the box, great Maven support, and comes bundled with GlassFish and WildFly.
Besides that, Arun showed how easy it is to develop with JBoss Forge (tell me about it), how you can set up Arquillian (with a Maven archetype and with Forge), and how you can create an account and deploy an application on OpenShift (Red Hat's xPaaS offering).
An interesting use case for OpenShift is using it as a continuous integration server. You can install a Jenkins cartridge with one click, i.e. a virtual machine bundled with a web server running Jenkins, then push your changes to it, which triggers the integration tests; if they pass, it can automatically deploy everything to the production instance. Arun promised to provide videos about this process on his blog.
Functional thinking and streams
I went to two talks by Venkat Subramaniam in the afternoon. Their topic was functional programming styles in Java. You cannot easily describe Venkat's talks; you must go and see them!
His first session was on functional thinking. It touched on the topic of pure functions: those that do not modify external state and do not change because of changes in the external environment. They produce the same output for the same input and thus have no side effects. Among all the other benefits of pure functions, there is one that is a little more subtle. If we have two such functions that execute in sequence, the compiler might decide to reorder their execution, for example to reduce CPU cache misses. That would not be a safe operation if the functions had side effects. The declarative style of programming tells the computer what to do, not how to achieve it. Venkat also touched on things like function composition, memoization (another word for caching), the difference between lambdas and closures, and laziness in functional programming.
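Memoization in particular is easy to sketch in Java 8. Here is a minimal, hypothetical example (the `Memoizer` class and all names in it are mine, not from the talk) that caches the results of a pure function; the caching is safe only because the function has no side effects:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class Memoizer {

    // A pure function: the same input always yields the same output,
    // and nothing outside the function is read or modified.
    static long fib(long n) {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    // Memoization wraps any function with a cache; repeated calls with the
    // same argument return the cached result instead of recomputing it.
    static <T, R> Function<T, R> memoize(Function<T, R> f) {
        Map<T, R> cache = new ConcurrentHashMap<>();
        return t -> cache.computeIfAbsent(t, f);
    }

    public static void main(String[] args) {
        Function<Long, Long> memoFib = memoize(Memoizer::fib);
        System.out.println(memoFib.apply(30L)); // computed once
        System.out.println(memoFib.apply(30L)); // served from the cache
    }
}
```

If `fib` mutated shared state, the cached results could go stale; purity is exactly what makes this transformation sound.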
One of the biggest benefits that Java got from the introduction of lambdas is the Stream API. A stream is an abstraction over a collection: we just tell it what we need done, and it knows very well how to do it. It is not a data structure by itself; it is rather a view on the data as it is being transformed. The basic operations that we can perform on streams are:
- Filtering: from a given input of items, output only those that satisfy a certain condition
- Mapping: return the same number of items that came in, but apply a transformation to each of them
- Reduction: return a single result (a value or a list) from all the items in the stream. Examples of reductions are finding the sum of all items in the stream (provided they are numbers), returning the minimum or maximum item, etc.
Filtering, mapping, and other such operations are intermediate: the items stay in the stream. Reduction operations, on the other hand, are terminal: the items leave the abstraction and can be operated on again in the usual way. The coolest thing is that the intermediate operations are fused together, and the evaluation of the items in the stream happens only when a terminal operation is executed.
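The three basic operations, and the way intermediate ones stay lazy until a terminal one runs, can be shown in a small Java 8 sketch (the numbers and names here are invented for illustration):

```java
import java.util.Arrays;
import java.util.List;

public class StreamDemo {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6);

        // filter and map are intermediate: they are fused together and
        // nothing is evaluated until the terminal reduce is invoked.
        int sumOfDoubledEvens = numbers.stream()
                .filter(n -> n % 2 == 0)   // keep only the even items
                .map(n -> n * 2)           // transform each surviving item
                .reduce(0, Integer::sum);  // terminal: collapse to one value

        System.out.println(sumOfDoubledEvens); // (4 + 8 + 12) = 24
    }
}
```

Thanks to the fusion, each element flows through the whole filter-then-map pipeline once, rather than the stream materializing an intermediate collection after every step.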
Java performance
I went to this session knowing that I have to catch up a lot in the performance area. I was attracted by one of the speakers: Charlie Hunt, whose book I read recently. The talk started with some hints on where to look for performance issues. People tend to look at processor time, but waiting time is also important: maybe it is the I/O operations that need optimization. Cycles per Instruction (CPI) and Instructions per Cycle (IPC) are also good metrics to monitor: a high CPI usually means the processor is stalling, for example on cache misses caused by inefficient data structures, while a high IPC means the data structures and the algorithms that manipulate them are keeping the CPU busy.
We Java developers are very happy to have the VM as our companion, as it does a lot of optimizations for us (no surprise!) and it knows the underlying CPU architecture very well. The JIT compiler has hundreds of intrinsics: decisions on what assembly code to emit based even on the version of the CPU instruction set.
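As one concrete example of such an intrinsic (my own illustration, not one from the talk): `Integer.bitCount` is commonly cited as a HotSpot intrinsic, so a call like the one below is typically compiled down to a single `POPCNT` instruction on CPUs whose instruction set supports it, instead of the loop you would write by hand:

```java
public class IntrinsicDemo {
    public static void main(String[] args) {
        // HotSpot recognizes this library call and, on capable CPUs,
        // replaces it with the POPCNT hardware instruction rather than
        // JIT-compiling the Java implementation of the method.
        int bits = Integer.bitCount(0b1011_0110);
        System.out.println(bits); // 5 bits are set in 0b1011_0110
    }
}
```

The Java source stays portable; whether the fast instruction is emitted is a per-CPU decision the VM makes at JIT time.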
At the end, Charlie explained how he dug through profiling information to find inefficient uses of data structures: using TreeMap in applications where items are mostly pushed into the structure, although it is better suited for rare insertions, as well as using arrays as the keys and values of a map to minimize cache misses on read operations; array elements are laid out at consecutive addresses in memory, and memory is always read by the processor in chunks.
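The TreeMap point can be sketched as follows; this is my own minimal illustration of the trade-off, not Charlie's actual code. TreeMap pays O(log n) tree rebalancing on every put to keep the keys sorted, which is wasted work in an insert-heavy application that never iterates in key order:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class MapChoice {
    public static void main(String[] args) {
        // TreeMap: every put rebalances a red-black tree so that the
        // keys stay sorted; only worth it if sorted iteration is needed.
        Map<Integer, String> sorted = new TreeMap<>();

        // HashMap: O(1) amortized puts; the better fit when the workload
        // is mostly insertions and ordering does not matter.
        Map<Integer, String> fast = new HashMap<>();

        for (int i = 1_000; i > 0; i--) {
            sorted.put(i, "value-" + i);
            fast.put(i, "value-" + i);
        }
        System.out.println(sorted.size() + " " + fast.size()); // 1000 1000
    }
}
```

Both maps end up with the same contents; the difference a profiler would show is the per-insertion cost, which is exactly the mismatch the talk described.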
Hacking and BOFs
I went to the Hackergarten at noon and worked a little bit together with Roberto Cortez on the live coding samples that we want to show on Thursday. The Hackergarten is a great opportunity for everyone visiting a conference to hack on something and contribute to an open source project of their choice. You have most of the project leads there and plenty of books to consult. I wanted to do something like this last year at Java2Days, but it failed because nobody besides the project leads showed up. Today I will go again, this time for Arquillian.
I also visited a couple of Birds of a Feather (BOF) sessions in the evening. First, Roberto and Simon Maple from ZeroTurnaround and virtualJUG shared their developer horror stories. The coolest thing here was that half of the session was dedicated to horror stories told by people from the audience. At the end, the best one won a signed copy of the Java 8 in Action book.
Last but not least, I went to the Forge BOF, where I finally met in person George Gastaldi, a core Forge developer who has helped me so much in my contribution efforts over the last few years. I also got some ideas about our upcoming hands-on lab at Devoxx.