September 30th, 2014

by Ivan St. Ivanov

Java EE 7

The first day of the JavaOne conference started with a tutorial talk in which Arun Gupta covered Java EE 7 from soup to nuts. For the most part he compared, via SWOT analysis, the different servers that implement Java EE 7 as well as the IDEs that support it. On the server side, besides the usual suspects GlassFish and WildFly, I was surprised to hear that there is a new kid on the block providing a developer preview version. It comes from a Korean company and its name is JEUS. Unfortunately Arun could not show much of it for various reasons: it does not work on Mac OS (so Arun used a VM with Fedora installed), it does not integrate with any IDE, and he could not deploy and start even a simple web application. The support that the company provided on its forums came down to a single person answering at random.

The other two servers are very well known to most of us, but still here are some remarks from the presentation:

  • GlassFish is always on the bleeding edge, being the Java EE reference implementation, so it adopts all the new features first
  • It has a great command line as well as a REST interface for monitoring and management, and it comes bundled with NetBeans
  • WildFly, on the other hand, is commercially backed by Red Hat, and 99.9% of its code is shared with the commercial version, JBoss EAP, which makes it very easy to migrate applications written for the free offering
  • It has a much more active community around it (GlassFish having almost none) and very well established contribution processes
  • Clustering in GlassFish is not tested and most probably does not work as expected
  • A threat for both servers is the Tomcat + Spring combination

In the IDE area we all know that IntelliJ IDEA rocks; unfortunately, however, its Java EE development modules are not free. If you are an open source project developer, a teacher or a student, you can request a free license though. Although widely used, Eclipse is the hardest of the three IDEs to develop Java EE with, which is why Red Hat has created the JBoss Development Tools bundle, which eases Eclipse development and also provides support for the various JBoss projects. NetBeans offers a very pleasant experience out of the box, great Maven support, and comes bundled with GlassFish and WildFly.

Besides that, Arun showed how easy it is to develop with JBoss Forge (tell me about it), how to set up Arquillian (with a Maven archetype and with Forge), and how to create an account and deploy an application on OpenShift (Red Hat's xPaaS offering).
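
For readers who have not tried Arquillian yet, a minimal in-container test looks roughly like the sketch below. This is my own illustration, not Arun's demo: the Greeter bean is made up, and the Arquillian JUnit and ShrinkWrap dependencies (plus a container adapter such as the WildFly one) are assumed to be on the classpath.

```java
// Greeter.java - a trivial CDI bean used only for this illustration
public class Greeter {
    public String greet(String name) {
        return "Hello, " + name + "!";
    }
}
```

```java
// GreeterTest.java - runs inside the container managed by Arquillian
import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class GreeterTest {

    // ShrinkWrap builds the micro-deployment that gets pushed to the container
    @Deployment
    public static JavaArchive createDeployment() {
        return ShrinkWrap.create(JavaArchive.class)
                .addClass(Greeter.class)
                .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    // Because the test runs in the container, CDI injection just works
    @Inject
    private Greeter greeter;

    @Test
    public void shouldGreetByName() {
        Assert.assertEquals("Hello, Duke!", greeter.greet("Duke"));
    }
}
```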

An interesting use case for OpenShift is using it as a continuous integration server. You can install a Jenkins cartridge with one click, i.e. a virtual machine bundled with a web server running Jenkins, then push your changes to it, which triggers the integration tests; if they pass, it can automatically deploy everything to the production instance. Arun promised that he will provide videos about this process on his blog.

Functional thinking and streams

I went to two talks by Venkat Subramaniam in the afternoon. Their topic was functional programming styles in Java. You cannot easily describe Venkat's talks; you have to go and see them!

His first session was on functional thinking. It touched on the topic of pure functions: those that do not modify external state and do not change because of changes in the external environment. They produce the same output when given the same input and thus have no side effects. Among all the other benefits of pure functions, there is one that is a bit more subtle: if we have two such functions that execute in sequence, the compiler might decide to reorder their execution, for example to reduce CPU cache misses. That would not be a safe operation if those functions had side effects. The declarative style of programming tells the computer what to do, not how to achieve it. Venkat also touched on things like function composition, memoization (another word for caching), the difference between lambdas and closures, and laziness in functional programming.
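
As a rough illustration of why purity matters for memoization (my own toy example, not code from the talk): a pure function can be cached safely, because its result depends only on its input.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class Memoizer {

    // Wraps a pure function with a cache; this is safe only because the
    // function has no side effects and always returns the same output
    // for the same input
    public static <T, R> Function<T, R> memoize(Function<T, R> pure) {
        Map<T, R> cache = new ConcurrentHashMap<>();
        return input -> cache.computeIfAbsent(input, pure);
    }

    public static void main(String[] args) {
        Function<Integer, Integer> square = x -> x * x;   // pure: touches no external state
        Function<Integer, Integer> cachedSquare = memoize(square);

        System.out.println(cachedSquare.apply(7));  // computed
        System.out.println(cachedSquare.apply(7));  // served from the cache
    }
}
```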

One of the biggest benefits that Java got from the introduction of lambdas is the Stream API. A stream is an abstraction over a collection: we just tell it what we need done and it knows very well how to do it. It is not a data structure by itself; it is rather a view of the data as it is being transformed. The basic operations that we can do on streams are:

  • Filtering: with a given input of items, output only those of them that satisfy a certain condition
  • Mapping: return the same number of items that come in, but apply a transformation on each of them
  • Reduction: return a single result (a value or a collection) from all the items in the stream. Examples of reductions are finding the sum of all the items in the stream (provided they are numbers), returning the minimum or the maximum item, etc.

Filtering, mapping and other such operations are intermediate: the items stay inside the stream. Reduction operations are terminal: the items leave the abstraction and can be operated on again in the usual way. The coolest thing is that the intermediate operations are fused together, and the evaluation of the items in the stream happens only when a terminal operation is executed.
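
A tiny sketch of the three operation kinds from the list above, using the standard java.util.stream API (the numbers are arbitrary):

```java
import java.util.Arrays;
import java.util.List;

public class StreamBasics {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6);

        int sumOfDoubledEvens = numbers.stream()
                .filter(n -> n % 2 == 0)   // intermediate: keep only items matching the condition
                .map(n -> n * 2)           // intermediate: transform each item, same count out as in
                .reduce(0, Integer::sum);  // terminal: collapse the stream to a single value

        // Nothing is evaluated until reduce() runs; filter and map are fused into one pass
        System.out.println(sumOfDoubledEvens); // 24
    }
}
```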

Java performance

I went to this session knowing that I have a lot to catch up on in the performance area. I was attracted by one of the speakers, Charlie Hunt, whose book I read recently. The talk started with some hints on where to look for performance issues. People tend to look at processor time, but waiting time is also important: maybe the I/O operations need optimization. The Cycles per Instruction (CPI) and Instructions per Cycle (IPC) metrics are also good to monitor: a high CPI means the processor is stalling, which usually points at inefficient data structures and cache misses, while a low CPI means the CPU is kept busy, so if the program is still slow, the algorithms manipulating those structures are the likely culprit.

We Java developers are very happy to have the VM as our companion, as it does a lot of optimizations for us (no surprise!) and it knows the underlying CPU architecture very well. The compiler has hundreds of intrinsics: decisions about what assembly code to emit based even on the version of the CPU instruction set.
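
A well-known example of such an intrinsic is Integer.bitCount(): on CPUs whose instruction set offers a population-count instruction, HotSpot typically replaces the whole method call with that single instruction. A small sketch of code that benefits from it (my own example, not from the session):

```java
public class BitCountDemo {
    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 10_000_000; i++) {
            // On recent x86 CPUs the JIT usually compiles this call
            // down to a single POPCNT instruction instead of a bit-twiddling loop
            total += Integer.bitCount(i);
        }
        System.out.println(total);
    }
}
```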

At the end Charlie explained how he broke down profiling information to find inefficient use of data structures: using a TreeMap in applications where items are mostly pushed into the structure, although it is better suited to rare insertions, as well as using arrays as the keys and values of a map to minimize cache misses on read operations, since array elements are laid out at consecutive addresses in memory and memory is always read by the processor in chunks.
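
To make the first point concrete (my own toy illustration, not from Charlie's slides): if the workload is insertion-heavy and no sorted iteration is needed, a HashMap does far less work per put than a TreeMap, which rebalances a red-black tree on every insertion.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class MapInsertDemo {

    private static long timeInserts(Map<Integer, Integer> map, int count) {
        long start = System.nanoTime();
        for (int i = 0; i < count; i++) {
            map.put(i, i);  // insertion-heavy workload, no range queries
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        int count = 1_000_000;
        // A naive timing like this only hints at the difference; a proper
        // comparison would use a benchmark harness such as JMH
        System.out.println("HashMap: " + timeInserts(new HashMap<>(), count) + " ms");
        System.out.println("TreeMap: " + timeInserts(new TreeMap<>(), count) + " ms");
    }
}
```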

Hacking and BOFs

I went to the Hackergarten at noon and worked a little bit together with Roberto Cortez on the live coding samples that we want to show on Thursday. The Hackergarten is a great opportunity for everyone visiting a conference to hack on something and to contribute to an open source project of their choice. You have most of the project leads there and plenty of books to consult. I wanted to do something like this last year at Java2Days, but it failed, as nobody besides the project leads showed up. Today I will go again, this time for Arquillian.

In the evening I also visited a couple of Birds of a Feather (BOF) sessions. First, Roberto and Simon Maple from ZeroTurnaround and the virtualJUG shared their developer horror stories. The coolest thing here was that half of the session was dedicated to horror stories told by people from the audience. At the end the best one won a signed copy of the Java 8 in Action book.

Last but not least, I went to the Forge BOF, where I finally met in person George Gastaldi, a core Forge developer who has helped me so much in my contribution efforts over the last few years. I also got some ideas about our upcoming hands-on lab at Devoxx.

September 29th, 2014

by Ivan St. Ivanov

As always, the first (or should I say zeroth) day of JavaOne was dedicated to the community. All the talks were given by the Java User Groups (JUGs) about their activities, and also by the Java open source projects from Oracle: NetBeans and GlassFish. It was also the day of the keynotes.

I would divide the technical talks into two groups: those presenting what is new in Java 8 and those about organizational topics.

In the "what is new in Java 8" part we saw all the things that I believe most JUGs have already covered: lambda expressions and their use cases, default methods in interfaces, the Stream API (but mostly the naive part of it), method handles and all the rest, about which we had a series of meetings in the Bulgarian JUG at the beginning of this summer. The surface of Java 9 was scratched a little bit too. Some presenters talked about the new features there: the already announced modular source code (part of Project Jigsaw), the HTTP 2.0 client, process API updates, a lightweight JSON API and a monetary API, as well as some new stuff that is not in the Java 9 timeframe, like generics over primitive types and value types (known as Project Valhalla), collection API improvements (List.of(), Set.of()) and a units of measurement API.
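
As a quick reminder of the first two items in that list, here is a minimal sketch of a lambda expression implementing a functional interface and a default method carrying behaviour directly in the interface (the Greeting interface is made up for illustration):

```java
import java.util.Arrays;
import java.util.List;

public class Java8Recap {

    interface Greeting {
        String greet(String name);

        // default method: since Java 8 an interface can ship an implementation
        default String greetPolitely(String name) {
            return "Dear " + greet(name);
        }
    }

    public static void main(String[] args) {
        Greeting greeting = name -> "Hello, " + name;   // lambda implements the single abstract method
        System.out.println(greeting.greetPolitely("Duke"));

        List<String> jugs = Arrays.asList("Bulgarian JUG", "London Java Community");
        jugs.forEach(System.out::println);              // method reference passed to the new forEach method
    }
}
```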

The other part of the talks was about building communities. A substantial part of what a JUG or community does is contributing to existing Java developments: OpenJDK and the various JSRs. I saw a very inspiring talk by the London Java Community's Martijn Verburg. He addressed not only the JUGs but also the big companies that use Java, stating that if a company has thousands of Java developers and millions of lines of code, it should care about how Java is evolving, and one of the ways to do that is by participating in activities like Adopt-a-JSR.

As I am teaching Java and Java EE together with my SAP Labs Bulgaria colleagues at a couple of universities in Sofia, I was interested to see the talk about free tools for teaching Java, where distinguished university professors and book authors shared their experience, mainly with NetBeans. Some of them talked about the projects their universities participated in, like Neuroph Studio and the completely new UML plugin developed by students from the University of Belgrade, as well as the Quorum programming language, designed to help blind people program. All of the presenters agreed that the IDE should not stand in their way while teaching, and I cannot agree more with that. I still remember the trouble we had with our students while showing them how to set up JDBC, JPA or a web server in Eclipse. One of the professors said that he starts by introducing Maven for dependency management and SVN and Git for source control. He uses TomEE (as we do) as a server, since it has everything you may want to teach: Servlet, JSP, JSF, JPA, EJB, web services, JAX-RS.

The biggest controversy of the day was the JavaOne keynote. As always, it was divided into three big parts. In the beginning Oracle executives (i.e. vice presidents) presented the company's strategy for Java, not forgetting to mention how awesome the current year has been for us all. By the way, kudos to the company for joining the Devoxx4Kids initiative: we had several young boys and girls on stage talking about their experience programming Lego robots and Minecraft mods. During the strategy part Oracle stressed its focus on the Internet of Things (IoT) and on Java SE Embedded and Java ME (which most of the audience didn't care much about). The second part was the sponsor keynote, where the IBM guys showed us once again their ability to put tons of information on one slide. The audience didn't want to hear much about IBM's efforts in the cloud and on the mainframe. What we all wanted to see was the last part: the technical keynote. Unfortunately there was no time for the most interesting bit, the Java 9, 10 and 11 outlook. Brian Goetz had just started to talk about the already mentioned Project Valhalla when he was cut off by the organizers because the time was up. How sad that Mark Reinhold had to do the dirty job and not one of those VPs that bored the audience to death.

We all left the hall puzzled and in silence. Only the Twitter stream was not silent. Don't worry, Brian, you will tell us all about this at Devoxx!

December 9th, 2013

by Ivan St. Ivanov

This year I visited the fifth edition of Java2Days and guess what: it was also my fifth attendance. I still remember my Winnie the Pooh paraphrase from four years ago: "This is the best Java conference that I've been to. Actually, this is the only Java conference that I've been to." Well, after all these years, four Devoxxes and two JavaOnes, I can definitely say that I still very much enjoy going to our local (not-only-)Java geeks gathering! I will not blog here about my overall impressions though, but rather about one of the talks that I gave.

For a second consecutive year I was a speaker at Java2Days. Like last year, I co-presented with Koen Aers about JBoss Forge, a project that I contribute to from time to time. But this particular post is about my second session: Dissecting the HotSpot JVM.

Regular readers of this blog might remember that I had the idea of bringing OpenJDK to Bulgaria. I got it at last year's JavaOne, when I saw some talks by London Java Community (LJC) members Ben Evans and Martijn Verburg on adopting the Java reference implementation at various JUGs around the world. In the last couple of months I saw my dream come true, with tremendous help and energy from two other BG JUG members: Martin Toshev and Dmitriy Aleksandrov (better known as Mitia). And of course not to forget the amazing support from LJC's own Mani Sarkar, who also gave a talk at Java2Days and participated in all our activities throughout the conference.

So, let's get to what we showed in our Dissecting the HotSpot JVM talk. It was co-hosted by Martin Toshev and me, but for the most part it was prepared by my co-speaker (kudos for that!). In the beginning we introduced the topic by describing what a virtual machine is (no, not the kind of VM that you run in VirtualBox or VMware). Then we described in a few words what the HotSpot JVM gives to Java developers: a bytecode interpreter, a couple of compilers, a memory model, garbage collection, class loading, startup and shutdown…

We spent most of the time explaining the three major subsystems of the JVM, as defined in this diagram, which we borrowed from artima.com:

The class loading subsystem is responsible for loading, validating and initializing classes from the file system (or other media) into memory. We spent some time here explaining the class file format: the magic number (CAFEBABE), the class format version, the constant pool, the references to the class itself and its superclass as well as to the implemented interfaces, and then the fields, methods and attributes.
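
To see those first bytes for yourself, the header of any .class file can be read with a few lines of plain Java (this is a sketch of mine, not something we showed; the file path below is a placeholder):

```java
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ClassFileHeader {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(
                new FileInputStream("ClassFileHeader.class"))) {   // placeholder path
            int magic = in.readInt();           // 0xCAFEBABE for every valid class file
            int minor = in.readUnsignedShort();
            int major = in.readUnsignedShort(); // e.g. 52 for Java 8, 51 for Java 7
            System.out.printf("magic: %X, version: %d.%d%n", magic, major, minor);
        }
    }
}
```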

The biggest part of our talk was devoted to the runtime data subsystem, i.e. the way data is stored in memory while a Java program runs. HotSpot defines two types of memory: memory shared by all the threads and memory specific to every single thread. When a new thread is spawned, it gets its own memory area that is guaranteed to be used just by that thread. It contains the program counter, pointing to the next instruction to be executed, as well as the Java and the native stacks. The Java stack in particular consists of a number of frames: when a method is called, the JVM creates a fixed-size segment in memory (called a stack frame), which reserves space for the method return value, the local variables (including the method parameters and a reference to this in non-static methods), the operand stack used to store the operands of the various operations run inside the method, and a reference to the constant pool. The memory shared between all the threads contains the heap and things like JIT-compiled code, class definitions, interned strings, etc. We then went on to explain the overhead that a single object incurs when stored in memory. It is not only about storing the object's fields: we get two extra machine words on top (each 4 bytes on 32-bit machines and 8 bytes on 64-bit ones). The first one is the so-called mark word, which contains the hash code and information concerning garbage collection and locking. The other one, the so-called class word, contains a reference to the object class's metadata.
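
That per-object overhead is easy to inspect with OpenJDK's Java Object Layout (JOL) tool. The snippet below is my own addition, not part of the talk, and assumes the org.openjdk.jol:jol-core dependency is on the classpath:

```java
import org.openjdk.jol.info.ClassLayout;

public class HeaderDemo {

    static class Point {
        int x;
        int y;
    }

    public static void main(String[] args) {
        // Prints the mark word, the class word and the field layout,
        // including any padding added for alignment
        System.out.println(ClassLayout.parseClass(Point.class).toPrintable());
    }
}
```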

In the last part of our talk we dived into the execution engine. The naive way to look at it is to treat it as a simple interpreter: we have an array of bytecode opcodes and the JVM executes them one after another. However, when the virtual machine identifies that a certain chunk of code is small and executed very often, it may decide to compile it just in time (hence the name JIT) to assembly code. It makes some assumptions in order to do that, so if an event happens that invalidates those assumptions, the already compiled code may be de-optimized and fall back to its interpreted version. For example, if the compiler assumed that there is just one implementation of a certain interface and decided to call that implementation's methods directly instead of looking them up, but at some later time a class loader loads another implementation of the interface, then the assumption gets invalidated and the code is de-optimized.
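
A rough sketch of that scenario (the class names are mine, not from our slides): while only one implementation of the interface has ever been seen, the JIT can inline the call; once a second implementation shows up, that assumption no longer holds and the optimized code has to be discarded. Running with the -XX:+PrintCompilation flag can show the "made not entrant" messages when the de-optimization happens.

```java
public class DevirtualizationDemo {

    interface Shape {
        double area();
    }

    static class Square implements Shape {
        public double area() { return 4.0; }
    }

    static class Circle implements Shape {
        public double area() { return 3.14; }
    }

    static double sum(Shape shape, int iterations) {
        double total = 0;
        for (int i = 0; i < iterations; i++) {
            total += shape.area();   // monomorphic while only Square is ever passed in
        }
        return total;
    }

    public static void main(String[] args) {
        // Warm-up: the JIT sees a single receiver type and may inline Square.area()
        System.out.println(sum(new Square(), 1_000_000));

        // A second implementation appears; the earlier assumption is invalidated,
        // so the compiled code for sum() can be de-optimized and later recompiled
        System.out.println(sum(new Circle(), 1_000_000));
    }
}
```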

We had a lot of questions at the end, most of which we were able to answer. One attendee asked about cross-compiling OpenJDK, which means building an image for one operating system on another operating system. We could not answer that one, but Mani found some interesting resources a few days later and shared them on the mailing list.

On the whole I think it was a pretty successful talk. We managed to deliver a lot of useful information in a really structured way without going over our time limit. I hope this and all the other conference events that we organized will bring many more people to our JUG meetings next year. Good times…