December 9th, 2012

Written by Stoyan Rachev

Overview

Devoxx, the biggest vendor-independent Java conference in the world, took place in Antwerp, Belgium on 12 – 16 November. This year it was bigger than ever, reaching 3400 attendees from 40 different countries. Like last year, I and a small group of colleagues from SAP were there and enjoyed it a lot.

After the impressive dance of Nao robots and the opening keynotes, more than 200 conference sessions explored a variety of different technology areas, ranging from Java SE to methodology and robotics. One of the most interesting topics for me was the evolution of the Java language and platform in JDK 8. My interest was driven partly by the fact that I was already starting work on Wordcounter, and finishing work on another concurrent Java library named Evictor, about which I will be blogging in a future post.

In this blog series, I would like to share somewhat more detailed summaries of the sessions on this topic which I attended. These three sessions all took place on the same day, in the same room, one after the other, and together provided three different perspectives on lambdas, parallel collections, and parallelism in general in Java 8.

In this post, I will cover the first session, with the other two coming soon.

On the road to JDK 8: Lambda, parallel libraries, and more

In the first session, Joe Darcy, a lead engineer of several projects at Oracle, introduced the key changes to the language coming in JDK 8, such as lambda expressions and default methods, summarized the implementation approach, and examined the parallel libraries and their new programming model. The slides from this session are available here.

Evolving the Java platform

Joe started by talking a bit about the context and concerns related to evolving the language. The general evolution policy for OpenJDK is:

  1. Don’t break binary compatibility.
  2. Avoid introducing source incompatibilities.
  3. Manage behavioral compatibility changes.

The above list also extends to the language evolution. These rules mean that old classfiles will always be recognized, the cases when currently legal code stops compiling are limited, and changes in the generated code that introduce behavioral changes are also avoided. The goals of this policy are to keep existing binaries linking and running, and to keep existing sources compiling.

This has also influenced the sets of features chosen to be implemented in the language itself, as well as how they were implemented. Such concerns were also in effect when adding closures to Java. Interfaces, for example, are a double-edged sword. With the language features that we have today, they cannot evolve compatibly over time. However, in reality APIs age, as people’s expectations of how to use them evolve. Adding closures to the language results in a really different programming model, which implies it would be really helpful if interfaces could be evolved compatibly. This resulted in a change affecting both the language and the VM, known as default methods.

Project Lambda

Project Lambda introduces a coordinated language, library, and VM change. In the language, there are lambda expressions and default methods. In the libraries, there are bulk operations on collections and additional support for parallelism. In the VM, besides the default methods, there are also enhancements to the invokedynamic functionality. This is the biggest change to the language ever done, bigger than other significant changes such as generics.

What is a lambda expression?

A lambda expression is an anonymous method having an argument list, a return type, and a body, and able to refer to values from the enclosing scope:

(Object o) -> o.toString()
(Person p) -> p.getName().equals(name)

Besides lambda expressions, there is also the method reference syntax:

Object::toString

The main benefit of lambdas is that they allow the programmer to treat code as data, store it in variables, and pass it to methods.

Some history

When Java was first introduced in 1995, not many languages had closures, but they are present in pretty much every major language today, even C++. For Java, it has been a long and winding road to get support for closures, until Project Lambda finally started in December 2009. The current status is that JSR 335 is in early draft review, binary builds are available, and it is expected to become part of the mainline JDK 8 builds very soon.

Internal and external iteration

There are two ways to do iteration – internal and external. In external iteration you bring the data to the code, whereas in internal iteration you bring the code to the data. External iteration is what we have today, for example:

for (Shape s : shapes) { 
    if (s.getColor() == RED)
        s.setColor(BLUE); 
}

There are several limitations with this approach. One of them is that the above loop is inherently sequential, even though there is no fundamental reason it couldn’t be executed by multiple threads.

Re-written to use internal iteration with lambda, the above code would be:

shapes.forEach(s -> {
    if (s.getColor() == RED)
        s.setColor(BLUE);
});

This is not just a syntactic change, since now the library is in control of how the iteration happens. Written in this way, the code expresses much more of the what and much less of the how, with the how being left to the library. The library authors are free to use parallelism, out-of-order execution, laziness, and all kinds of other techniques. This allows the library to abstract over behavior, which is a fundamentally more powerful way of doing things.
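
For illustration, here is the parallel variant in the spelling that eventually shipped in Java 8 (the preview builds at the time spelled some of these methods differently); the change on the caller’s side is minimal, since the iteration strategy belongs to the library:

shapes.parallelStream().forEach(s -> {
    if (s.getColor() == RED)
        s.setColor(BLUE);
});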

Functional Interfaces

Project Lambda avoided adding new types, instead reusing existing coding practices. Java programmers are familiar with and have long used interfaces with one method, such as Runnable, Comparator, or ActionListener. Such interfaces are now called functional interfaces. There will also be new functional interfaces, such as Predicate<T> and Block<T>. A lambda expression evaluates to an instance of a functional interface, for example:

Predicate<String> isEmpty = s -> s.isEmpty();
Predicate<String> isEmpty = String::isEmpty;
Runnable r = () -> { System.out.println("Boo!"); };

So existing libraries are forward-compatible with lambdas, which results in an "automatic upgrade", maintaining the significant investment in those libraries.
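
A quick illustration: Collections.sort has accepted a Comparator since JDK 1.2, and because Comparator is a functional interface, it now accepts a lambda without any library change:

List<String> names = Arrays.asList("Charlie", "alice", "Bob");
// The lambda is converted to a Comparator<String> instance automatically
Collections.sort(names, (a, b) -> a.compareToIgnoreCase(b));
System.out.println(names); // [alice, Bob, Charlie]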

Default Methods

The above example used a new method on Collection, forEach. However, adding a method to an existing interface is a no-go in Java, as it would result in a runtime exception when a client calls the new method on an old class in which it is not implemented.

A default method is an interface method that has an implementation, which is woven in by the VM at link time. In a sense, this is multiple inheritance, but there’s no reason to panic, since this is multiple inheritance of behavior, not state. The syntax looks like this:

interface Collection<T> {
    ...
    default void forEach(Block<T> action) {
        for (T t : this)
            action.apply(t);
    }
}

There are certain inheritance rules to resolve conflicts between multiple supertypes:

  • Rule 1 – prefer superclass methods to interface methods ("Class wins")
  • Rule 2 – prefer more specific interfaces to less ("Subtype wins")
  • Rule 3 – otherwise, act as if the method is abstract. In the case of conflicting defaults, the concrete class must provide an implementation.

In summary, conflicts are resolved by looking for a unique, most specific default-providing interface. With these rules, "diamonds" are not a problem. In the worst case, when there isn’t a unique most specific implementation of the method, the subclass must provide one, or there will be a compiler error. If this implementation needs to call to one of the inherited implementations, the new syntax for this is A.super.m().
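
A minimal sketch of Rule 3 with invented interface names: two unrelated interfaces supply conflicting defaults, so the implementing class must resolve the conflict itself, optionally delegating with the new syntax:

interface A {
    default String m() { return "A"; }
}

interface B {
    default String m() { return "B"; }
}

class C implements A, B {
    // Without this override the code does not compile, since neither
    // A nor B is more specific than the other
    public String m() {
        return A.super.m(); // explicitly delegate to A's default
    }
}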

The primary goal of default methods is API evolution, but they are useful as an inheritance mechanism on their own as well. One other way to benefit from them is optional methods. For example, most implementations of Iterator don’t provide a useful remove(), so it can be declared "optional" as follows:

interface Iterator<T> {
    ...
    default void remove() {
        throw new UnsupportedOperationException();
    }
}

Bulk operations on collections

Bulk operations on collections also enable a map / reduce style of programming. For example, the above code could be further decomposed by getting a stream from the shapes collection, filtering the red elements, and then iterating only over the filtered elements:

shapes.stream().filter(s -> s.getColor() == RED).forEach(s -> { s.setColor(BLUE); });

The above code corresponds even more closely to the problem statement of what you actually want to get done. There are also other useful bulk operations such as map, into, or sum. The main advantages of this programming model are:

  • More composability
  • Clarity – each stage does one thing
  • The library can use parallelism, out-of-order, laziness for performance, etc.

The stream is the basic new abstraction being added to the platform. It encapsulates laziness as a better alternative to "lazy" collections such as LazyList. It is a facility that allows getting a sequence of elements out of it, its source being a collection, array, or a function. The basic programming model with streams is that of a pipeline, such as collection-filter-map-sum or array-map-sorted-forEach. Since streams are lazy, they only compute as elements are needed, which pays off big in cases like filter-map-findFirst.
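
As a sketch of that payoff (using the names that eventually shipped; the preview builds differed slightly), the following filter-map-findFirst pipeline evaluates elements only until the first match and never touches the rest:

List<String> words = Arrays.asList("a", "in", "lambda", "stream");
Optional<Integer> firstLength = words.stream()
    .filter(s -> s.length() > 4) // evaluated lazily, element by element
    .map(String::length)         // applied only to elements that pass
    .findFirst();                // short-circuits after "lambda" -> 6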

Another advantage of streams is that they make it possible to take advantage of fork/join parallelism, by having libraries use fork/join behind the scenes to ease programming and avoid boilerplate.
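
As a hedged sketch in the spelling that eventually shipped, the parallel reduction below is backed by fork/join internally, with no fork/join boilerplate visible to the caller:

// Sums the lengths of all words using the common fork/join machinery
int totalLength = words.parallelStream()
                       .mapToInt(String::length)
                       .sum();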

Implementation technique

In the last part of his talk, Joe described the advantages and disadvantages of the possible implementation techniques for lambda expressions. Different options such as inner classes and method handles were considered, but not accepted due to their shortcomings. The best solution would involve adding a level of indirection, by letting the compiler emit a declarative recipe, rather than imperative code, for creating a lambda, and then letting the runtime execute that recipe however it deems fit (and make sure it’s fast).

This sounded like a job for invokedynamic, a new invocation mode introduced with Java SE 7 for an entirely different reason – support for dynamic languages on the JVM. It turned out this feature is not just for dynamic languages any more, as it provides a suitable implementation mechanism for lambdas, and is also much better in terms of performance.

Conclusion

Project Lambda is a large, coordinated update across the Java language and platform. It enables a much more powerful programming model for collections and takes advantage of new features in the VM. You can evaluate these new features by downloading the JDK8 build with lambda support. IDE support is also already available in NetBeans builds with Lambda support and IntelliJ IDEA 12 EAP builds with Lambda support.

I have already gained first-hand experience with lambdas in Java while working on Wordcounter. As I already wrote, I am convinced that this style of programming will quickly become pervasive in Java, so if you don’t yet have experience with it, I do encourage you to try it out.

December 2nd, 2012

Written by Stoyan Rachev

I recently released Wordcounter, a Java library and command-line utility for counting words in text files and performing analysis on the word counts, which makes heavy use of functional programming constructs and parallel computing approaches. This is my fourth entry for the “Geeky Quickies” contest at SAP, after Feeder, Todor, and Hanoier.

The library uses JDK 8 lambdas, as well as new JDK 7 features such as Fork / Join and NIO.2. It is built and can only be used with the early access version of JDK 8 with lambda support.

With the introduction of lambdas and their supporting features in JDK 8, the way we build software in Java is going to change. If you would like to get an idea of what your Java code might look like in a few years, take a look at Wordcounter. Unlike most resources available at the moment, this is not a simple tutorial but a real working project.

The contest task was to implement an algorithm using Fork / Join and lambdas that analyzes all files in a directory and finds the ten most used words in the files, together with how many times they occur. Rather than simply sticking with Fork / Join, I tried to find the most suitable parallel approach for this particular task, which led me to choose Producer / Consumer for the core word counting logic.

You can explore the source code on GitHub. There is also a rather comprehensive README which provides more detailed documentation.

The latest binary, javadoc, and sources packages can be found in the GitHub downloads section. If there is enough interest, I will publish the Javadoc online and also make the library available in the central Maven repository.

Feedback, comments, and contributions are welcome!

Overview

Library Features

  • Count all words in a string, a single text file, or a directory tree containing text files.
  • Analyze the word counts to find the top N most used words, the bottom N least used words, or the total word count.
  • Specify whether a character is a word character via an external predicate.
  • Specify an optional operation to be performed on words, for example converting to lower case, via an external operator.
  • Choose between non-parallel and parallel implementations to compare their performance.
  • Specify the parallelism level to be a value different from the number of cores, if you need to.

Programming Highlights

  • Uses Producer / Consumer for reading files and counting the words in each file in parallel. The actual mechanism is encapsulated in a generic, reusable implementation.
  • Uses Fork / Join for performing analysis on the word counts. Here again the actual mechanism is encapsulated in a generic, reusable implementation.
  • Uses NIO.2 for traversing directory trees and reading files.
  • Makes heavy use of functional interfaces and lambda expressions in order to pass functions rather than data where appropriate.
  • There are comprehensive unit and performance tests for the two most important classes.
  • As usual, the code is clean, well-structured, and easy to read. Formatting, naming, and comments are uniform and consistent. A lot of attention has been paid to the appropriate use of both object-oriented and functional programming techniques.

Command Line Interface

To start the command line program, execute the following command:

java -jar wordcounter-1.0.4.jar <options>

All options have reasonable default values so none of them is mandatory. Using the defaults for all options results in finding the top 10 most used words in the current directory and its subdirectories. With non-default values you can choose different directories, analysis modes, word characters, number of words, and parallelism level, as well as ignore the case or use a serial rather than parallel computation, for example:

  • Find the top 10 most used words in the directory "words": -p words
  • Find the bottom 5 least used words in the directory "wordsx", considering numbers as word characters, ignoring case, with info logging: -p wordsx -m bottom -d 1234567890 -i -n 5 -l info

For more information about the command line interface options, see Command Line Interface in the README.

Design

The design of the library partitions the problem into generic parallel processing utilities, classes that encapsulate the data structures used to represent raw and sorted word counts, and finally classes that perform the counting and analysis using the capabilities of the previous two groups. Practically all of these classes make heavy use of instances of functional interfaces in order to allow specific customizations of their generic behavior. This results in code which is heavily infused with lambda expressions and method references. Welcome to the world of functional programming in Java!

Generic Parallel Processing Utilities

The ForkJoinComputer Class

The ForkJoinComputer<T> class is a generic Fork / Join computer. It divides the initial size by 2 until either reaching the specified parallelism level or falling below the specified threshold, computes each portion serially using the specified Computer<T>, and then joins the results of all computations using the specified Merger<T>. Here, Computer and Merger are functional interfaces that are defined as follows:

public interface Computer<T> {
    T compute(int lo, int hi);
}
    
public interface Merger<T> {
    T merge(T result1, T result2);
}

This class can be used by simply instantiating it with the appropriate lambdas and then calling its compute method:

// Calculate the sum of all integers from 1 to n, using 1000 as a threshold
new ForkJoinComputer<Integer>(n, 1000, 
    (lo, hi) -> { int sum = 0; for (int i = lo + 1; i <= hi; i++) sum += i; return sum; }, 
    (a, b) -> a + b).compute();

The ProducerConsumerExecutor Class

The ProducerConsumerExecutor<T1, T2> class is a generic Producer / Consumer executor. It starts a single Producer<T1> task and multiple Mediator<T1, T2> and Consumer<T2> tasks with their number equal to the specified parallelism level. The producer puts T1 instances in a BlockingQueue<T1>. The mediators take these instances from there, convert them to T2, and put them in another blocking queue of type BlockingQueue<T2>. Finally, the consumers take the T2 instances from the second blocking queue and process them.

Here, Producer, Consumer, and Mediator are functional interfaces that are defined as follows:

public interface Producer<T> {
    void produce(Block<T> block);
}
    
public interface Consumer<T> {
    void consume(T t);
}
    
public interface Mediator<T1, T2> {
    void mediate(T1 t, Block<T2> block);
}

In the above code, Block is a standard functional interface defined in java.util.functions. The block passed to the Producer and Mediator methods puts the produced data in the corresponding blocking queue.

Similarly to ForkJoinComputer, this class can be used by simply instantiating it with the appropriate lambdas and then calling its execute method.
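
For example, here is a minimal, hypothetical sketch (the data and lambdas are invented): produce strings, mediate them into integers, and consume them by accumulating a total, with parallelism level 2:

// Uses java.util.concurrent.atomic.AtomicInteger for the mutable total
List<String> lines = Arrays.asList("1", "2", "3");
AtomicInteger total = new AtomicInteger();

new ProducerConsumerExecutor<String, Integer>(
    block -> { for (String line : lines) block.apply(line); }, // producer
    (line, block) -> block.apply(Integer.parseInt(line)),      // mediator
    value -> total.addAndGet(value),                           // consumer
    2).execute();

System.out.println(total.get()); // 6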

Data Structure Classes

These classes encapsulate the data structures used to represent raw and sorted word counts.

  • The WordCounts class represents a list of words mapped to their usage counts.
  • The TopWordCounts class represents a sorted list of word usage counts mapped to all words that have such counts.

Word Counting and Analysis Classes

The WordCounter Class

The WordCounter class provides a method for counting words in a Path representing a file or a directory tree, either serially or in parallel. It can be used by simply instantiating it with the appropriate lambdas and then calling its count method:

// Count all words consisting of only alphabetic chars, ignoring case, using parallel processing
WordCounts wc = new WordCounter(path, (c) -> Character.isAlphabetic(c), (s) -> s.toLowerCase(), true).count();

The parallel implementation uses ProducerConsumerExecutor<Path, String>. The producer simply walks the directory tree and produces Path instances. The mediators read the files into text pieces, and the consumers count the words in each text piece and collect them in a single WordCounts instance. This is done with the following piece of code:

private WordCounts countPar() {
    final WordCounts wc = new WordCounts(parLevel);
    new ProducerConsumerExecutor<Path, String>(
        (block) -> collectPaths(block), 
        (file, block) -> readFileToBlock(file, block),
        (text) -> wc.add(countWords(text, pred, op)), parLevel).execute();
    return wc;
}

The WordCountAnalyzer Class

The WordCountAnalyzer class provides methods for performing analysis on the word counts produced by WordCounter, such as finding the top N most used words. It too can be used by simply instantiating it and then calling one of its methods, such as findTop or total:

// Find the top 10 most used words in wc
TopWordCounts twc = new WordCountAnalyzer(wc, true).findTop(10, (x, y) -> (y - x));

The different analysis types implement the internal Analysis<T> interface, which is defined as follows:

interface Analysis<T> {
    T compute(int lo, int hi);
    T merge(T r1, T r2);
}

Since the signatures of the above two methods mimic the Computer and Merger functional interfaces used by ForkJoinComputer, we can use fork / join for all analysis types in the following way:

public TopWordCounts findTop(int number, Comparator<Integer> comparator) {
    return analyse(new FindTopAnalysis(number, comparator));
}

private <T> T analyse(Analysis<T> a) {
    if (par) {
        return new ForkJoinComputer<T>(wc.getSize(), THRESHOLD, a::compute, a::merge, parLevel).compute();
    } else {
        return a.compute(0, wc.getSize());
    }
}

For more information about the library design, see Design in the README.

Performance

I found that the parallel Producer / Consumer word counting implementation adapts nicely to different numbers of cores and I/O speeds, and is significantly faster than the serial implementation. In contrast, the parallel Fork / Join analysis implementation is faster than the serial one only when testing with an unrealistically large number of unique words, and even then only mildly so. With a small number of unique words, it is actually slower than the serial one.

Conclusion

This was my first significant encounter with functional programming in Java. Although Java is still not Scala, the new functional programming constructs dramatically changed the way I designed and implemented Wordcounter, compared to my previous Java code. I find this new programming style powerful and expressive, and I believe that with the release of Java 8 it will quickly become dominant.

This was also the final “Geeky Quickies” task for me. The jury awarded my submissions generously enough, and I found myself the winner even before the contest was over.

If this post piqued your interest, feel free to download and explore Wordcounter, use it to learn the new Java functional programming constructs, and let me know if I can help you on this journey.

November 2nd, 2012

Written by Stoyan Rachev

Overview

The Java2Days conference is the major event in Eastern Europe presenting the latest trends in Java development. This year, it took place in Sofia, Bulgaria on 25 – 26 October. I was there and enjoyed the opportunity to taste some cutting-edge Java, cloud, and mobile content delivered right to my home city, together with a few of my colleagues from SAP.

I attended this event for the second year in a row. This year, it was bigger and included new things such as “Free Tech Trainings”. There is also a contest (open until 11 November) with some cool programming problems to solve.

In this blog post I would like to share short summaries of some of the conference sessions which I attended, based on my notes and some additional research I did afterwards.


JavaEE.Next(): Java EE 7, 8 and Beyond

In the conference opening session, Reza Rahman, a Java EE / GlassFish evangelist at Oracle, presented the key changes in the upcoming 7th edition of Java EE, which is to be released in March / April next year, as well as a glimpse into Java EE 8. We learned that cloud support is postponed to Java EE 8, and saw an overview of various changes such as an overhaul of JMS, WebSocket and HTML 5 support, a standard API for JSON processing, the next version of JAX-RS, the Caching API, and more.

JMS 2.0

The new version of Java Message Service (JMS), currently being developed as JSR 343, introduces some major improvements over the existing JMS 1.1, which was released almost 5 years ago. It introduces new messaging features such as batch delivery and delivery delay, as well as several changes to the JMS API to make it simpler and easier to use, such as new annotations and dependency injection. With the new API, sending a message involves essentially injecting a context, looking up a queue, and finally a single method call, with all needed objects created on the fly:

@Inject
private JMSContext context;

@Resource(mappedName = "jms/inboundQueue")
private Queue inboundQueue;

public void sendMessage(String payload) {
  context.createProducer().send(inboundQueue, payload);
}

See also JMS 2.0 Early Draft.

Java API for WebSocket

The Java API for WebSocket (JSR 356) is a new specification introduced for the first time with Java EE 7. It features both server and client-side API for WebSocket, in two different flavors – programmatic and declarative. For more information, see the summary of the dedicated session on this topic below.

Java API for JSON Processing

The Java API for JSON Processing (JSR 353) is also a new specification, which brings the ability to parse, generate, transform, and query JSON content. It features both object model and streaming APIs, similar to DOM and StAX in the XML world. In the future, there will also be binding of JSON to Java objects, similar to JAXB, but for the moment this is not part of the scope of this JSR. See also JSON-P: Java API for JSON Processing.
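
For a flavor of the object model API, here is a minimal sketch using the calls as they eventually shipped in JSR 353 (the API was still a draft at the time of the talk):

// Requires javax.json.Json, javax.json.JsonObject, java.io.StringReader
JsonObject person = Json.createReader(
        new StringReader("{\"name\":\"Duke\",\"age\":18}")).readObject();
String name = person.getString("name"); // "Duke"
int age = person.getInt("age");         // 18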

JAX-RS 2.0

The major additions in the next version of the Java API for RESTful Web Services (JAX-RS) are the standard client API, as well as new features such as message filters and handlers, entity interceptors, asynchronous processing, hypermedia support, and common configuration. See also JAX-RS 2.0 Early Draft Explained.
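
As a sketch of the standard client API, using the calls as they eventually shipped in JAX-RS 2.0 (the early draft differed in places, and the URL below is invented):

// Requires javax.ws.rs.client.Client and javax.ws.rs.client.ClientBuilder
Client client = ClientBuilder.newClient();
String order = client.target("http://example.com/api/orders/{id}")
                     .resolveTemplate("id", "42") // hypothetical service
                     .request("application/json")
                     .get(String.class);
client.close();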

Java Caching API

The Java Caching API (JSR 107) has been languishing for almost 10 years, and is finally going to be part of Java EE. It introduces a standard API and annotations for storing and retrieving objects from a cache. The API is pretty simple, for example putting a customer object into a cache would involve code similar to the following:

public class CustomerDao {
  @CachePut(cacheName = "customers")
  public void addCustomer(
    @CacheKeyParam int cid, 
    @CacheValue Customer customer) {
    ...
  }
}

Other Java EE 7 Features

  • JPA 2.1 adds built-in schema generation, including indexes, as well as support for stored procedures, and other enhancements.
  • JTA 1.2 introduces support for declarative transactions outside EJB, and the @TransactionScoped annotation.
  • JSF 2.2 introduces better HTML5 support, including islands of HTML5 in JSF pages and binding of HTML5 input types to JSF beans, as well as additional annotations such as @FlowScoped and @ViewScoped, CDI alignment, etc.
  • Batch Processing for Java EE is a new specification which introduces an API for batch processing based on jobs which consist of steps, each one involving reading, processing, and writing items. Instead of implementing interfaces, the methods for reading, processing, and writing items can be simply annotated as @ReadItem, @ProcessItem, and @WriteItem.
  • Bean Validation 1.1 introduces method constraints and injectable artifacts.

Java EE 8

The 8th edition of Java EE is expected to be released 2 years after Java EE 7, that is, in 2015. It will focus on the following areas:

  • Cloud, PaaS, multitenancy
  • Modularity based on Jigsaw
  • Even better HTML5 support
  • More CDI and EJB alignment
  • NoSQL (?)

Summary

This was indeed a lot of information to swallow in 45 minutes, but the speaker did a good job delivering it. It seems that Java EE 7 will be a solid addition to the series, and even more major features will be coming with Java EE 8.

The slides from this session are available here. You can try these new features out by downloading GlassFish 4. To keep up-to-date, you can follow The Aquarium blog.


Building HTML5/WebSocket Applications with GlassFish, Grizzly and JSR 356

This was in fact the last conference session, in which Reza Rahman presented one very interesting part of the upcoming Java EE 7, namely support for WebSocket.

WebSocket Primer

Two-way communication over the half-duplex HTTP protocol has always been a challenge. Flavors of server push such as polling, long-polling, and AJAX have been introduced, but they are complex, inefficient, and wasteful.

In contrast, WebSocket provides a bi-directional, full-duplex communication channel over a single TCP socket. Unlike pure TCP, however, all communications are done over the standard HTTP port (e.g. 80), which allows it to work in environments which block non-standard Internet connections using a firewall. It was originally proposed as part of HTML 5.

The W3C defines a very simple WebSocket JavaScript API, and the IETF defines the WebSocket Protocol. The Protocol defines “handshake” and “framing”: the handshake defines how a normal HTTP connection can be upgraded to a WebSocket connection, while the framing defines the wire format of the messages. The API defines methods for opening and closing a connection and for sending a message in various flavors. It is currently supported by all major browsers.

Java API for WebSocket

Java EE 7 introduces WebSocket support via the Java API for WebSocket (JSR 356). The reference implementation is called Tyrus and is part of GlassFish 4. The specification defines two different API flavors – programmatic based on interfaces and declarative based on annotations, both for the client and the server.

The declarative API is very simple and completely hides the WebSocket internals from the developer. In its simplest form, a WebSocket endpoint can be defined as follows:

@WebSocketEndpoint("/hello")
public class HelloBean {

  @WebSocketMessage
  public String sayHello(String name) {
    return "Hello " + name;
  }
}

The following annotations are available (see the combined sketch after the list):

  • @WebSocketEndpoint is a class-level annotation that declares a POJO to accept WebSocket messages. The path at which the messages are accepted is specified in this annotation. Optionally, it can also specify decoders, encoders, and subprotocols.
  • @WebSocketMessage is a method-level annotation marking the method that is invoked when the endpoint receives a message. The value returned from this method is sent back to the other side.
  • @WebSocketOpen and @WebSocketClose are method-level annotations for intercepting connection open and close events.
  • @WebSocketError is a method-level annotation for intercepting errors during the conversation.
  • @WebSocketParam is a parameter-level annotation for path segments that are passed as parameters.
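
Combining these in one place, here is a hedged sketch based purely on the draft annotation names above; the exact method signatures are my assumption, and the final JSR 356 later renamed these annotations (@ServerEndpoint, @OnOpen, @OnMessage, @OnError, @PathParam):

@WebSocketEndpoint("/chat/{room}")
public class ChatEndpoint {

    @WebSocketOpen
    public void opened() {
        System.out.println("Connection opened");
    }

    @WebSocketMessage
    public String echo(String message, @WebSocketParam("room") String room) {
        // The return value is sent back to the other side
        return "[" + room + "] " + message;
    }

    @WebSocketError
    public void failed(Throwable error) {
        error.printStackTrace();
    }
}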

Summary

WebSocket indeed seems to be a much better alternative to the currently used server push approaches, so it’s cool that support for it is coming to Java EE quite soon.

The slides from this session are available here. There is also a similar presentation already available on Slideshare, Building HTML5 WebSocket Apps in Java.


Domain Driven Design with Java EE 6

The third Java EE session by Reza Rahman focused on Domain Driven Design (DDD). This is an approach to developing software by connecting the implementation to an evolving model. Its premise is placing the project’s primary focus on the core domain and domain logic, and basing designs on a model of the domain. The term was first coined by Eric Evans in his book of the same title. DDD emphasizes a return to Object Oriented Analysis and Design, and is gradually gaining popularity as an alternative to traditional architectures originally popularized by J2EE Blue Prints. The goal of the session was to demonstrate how DDD can be implemented using Java EE 6 by mapping its core concepts to Java EE specifications, and by providing a comprehensive code example.

Core Concepts

  • Domain: A sphere of knowledge (ontology), influence, or activity.
  • Model: A system of abstractions that describes selected aspects of a domain and can be used to solve problems related to that domain.
  • Ubiquitous Language: A language structured around the domain model and used by all team members.
  • Context: The setting in which a word or statement appears that determines its meaning.

Building Blocks

The building blocks below form the “canonical image” of Domain Driven Design:

  • Entity: An object that is not defined by its attributes, but rather by a thread of continuity and its identity.
  • Value Object: An object that contains attributes but has no conceptual identity.
  • Aggregate: A collection of objects that are bound together by a root entity, otherwise known as an aggregate root.
  • Service: When an operation does not conceptually belong to any object.
  • Repository: Methods for retrieving domain objects should delegate to a specialized Repository object.
  • Factory: Methods for creating domain objects should delegate to a specialized Factory object.

Maintaining Model Integrity (Strategic DDD)

Layers

  • User Interface: Shows the user information and receives input.
  • Application: Thin layer to coordinate application activity. No business logic or business object state is in this layer.
  • Domain: All the business objects and their state reside here. Objects in the domain layer should be free of concern about displaying or persisting themselves.
  • Infrastructure: The objects dealing with housekeeping tasks like persistence.

Mapping DDD to Java EE

DDD Concept / Layer                      Java EE Concept
-----------------------------------------------------------------
Context                                  Packaging, Modularity
UI Layer                                 JSF, Servlet
Services, Application Layer              EJB, CDI
Entities, Value Objects, Repositories    JPA (Entities), CDI
Infrastructure Layer                     JPA (Entity Manager API), JMS
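
To make the mapping concrete, here is a hedged Java EE 6 sketch with invented names (not taken from the sample application): an entity with identity, an embedded value object, and a repository backed by the JPA entity manager:

// Entity: defined by its identity (the tracking id), not by its attributes
@Entity
public class Cargo {
    @Id
    private String trackingId;
    @Embedded
    private Route route; // Value Object: attributes only, no identity

    protected Cargo() { } // required by JPA
    public Cargo(String trackingId, Route route) {
        this.trackingId = trackingId;
        this.route = route;
    }
}

// Value Object, embedded into the owning entity
@Embeddable
class Route {
    private String origin;
    private String destination;
}

// Repository: retrieval of domain objects delegated to a specialized object
@Stateless
class CargoRepository {
    @PersistenceContext
    private EntityManager em;

    public Cargo find(String trackingId) {
        return em.find(Cargo.class, trackingId);
    }
}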

Sample Application

The sample application demoed during the session is an evolution of the DDD Sample already available at SourceForge. It is a shipping tracker application. The packaging structure of the application clearly reveals the different layers mentioned above. The renovated DDD sample application will be published soon as a java.net project. Until then, you can download and play with the existing DDD sample: it builds with Maven, runs in an embedded Jetty, and can of course be imported into Eclipse for a deeper look.

Summary

I found DDD quite interesting, even though it was initially a bit hard to grasp. Reza explained everything in detail, which however shortened the time available for demoing and showing the code of the sample application. My colleagues and I were intrigued by the fact that the business logic in the sample is captured in the Domain layer classes themselves, without any controller or facade. This initially seemed a somewhat unusual approach to me, but upon second thought it made perfect sense.

The slides from this session are available here. While researching DDD after the conference, I found the following interesting resources: