Angular-datatables with server side pagination using Spring Data

We are using angular-datatables in a project. So far, we just returned all entities from the server’s REST controller (using Spring Boot and Spring Data on the server side). I wanted to see how I could implement server-side pagination to avoid returning all records at once.

I was lucky to find spring-data-jpa-datatables which makes it very easy to do.

First, add the dependency in your pom.xml:

        <dependency>
            <groupId>com.github.darrachequesne</groupId>
            <artifactId>spring-data-jpa-datatables</artifactId>
            <version>2.1</version>
        </dependency>

In my UserController, I currently have this:

    @RequestMapping(method = RequestMethod.GET)
    @Secured(Roles.ADMIN)
    public List<UserDto> getUsers() {
        return StreamSupport.stream(userRepository.findAll().spliterator(), false)
                            .map(UserDto::fromUser)
                            .collect(Collectors.toList());
    }

I added a new method that will be used for the server-side pagination and searching (I could also have replaced the old method, as it will no longer be used):

    @RequestMapping(value = "/datatables-view", method = RequestMethod.POST)
    @Secured(Roles.ADMIN)
    public DataTablesOutput<UserDto> getUsersForDatatables(@Valid @RequestBody DataTablesInput input) {
        DataTablesOutput<User> usersTest = userRepository.findAll(input);

        if( usersTest.getError() != null ) {
            throw new IllegalArgumentException(usersTest.getError());
        }

        DataTablesOutput<UserDto> result = new DataTablesOutput<>();
        result.setData(usersTest.getData().stream().map( UserDto::fromUser ).collect(Collectors.toList()));
        result.setDraw(usersTest.getDraw());
        result.setError(usersTest.getError());
        result.setRecordsFiltered(usersTest.getRecordsFiltered());
        result.setRecordsTotal(usersTest.getRecordsTotal());
        return result;
    }

To support this controller, you need to update the UserRepository to extend from DataTablesRepository, so change this:

  public interface UserRepository extends CrudRepository<User, UserId>, UserRepositoryCustom {

to

  public interface UserRepository extends DataTablesRepository<User, UserId>, UserRepositoryCustom {

On the client side, I had this code:

$scope.dtOptions = DTOptionsBuilder.fromFnPromise(function() {
        return Users.query().$promise;
    })
    .withBootstrap()
    .withPaginationType('simple_numbers')
    .withDisplayLength(20)
    .withOption('createdRow', function (row) {
        // Recompiling so we can bind Angular directive to the DT
        $compile(angular.element(row).contents())($scope);
    })
    .withOption('stateSave', true)
    .withOption('order', [0, 'asc']);

// Datatables columns builder
$scope.dtColumns = [ .. ] // column definitions here

This code now changes to:

$scope.dtOptions = DTOptionsBuilder.newOptions()
    .withOption('ajax', {
        contentType: 'application/json',
        url: '/api/users/datatables-view',
        type: 'POST',
        beforeSend: function(xhr) {
            xhr.setRequestHeader("Authorization",
                "Bearer " + AuthenticationService.getAccessToken());
        },
        data: function(data, dtInstance) {
            // The returned object has 'email' as property, but the server entity has 'emailAddress'.
            // We need to override what we ask of the server here, otherwise search will not work
            data.columns[1].data = "emailAddress";

            // Any values you set on the data object will be passed along as parameters to the server
            //data.access_token = AuthenticationService.getAccessToken();
            return JSON.stringify(data);
        }
    })
    .withDataProp('data') // This is the name of the value in the returned recordset which contains the actual data
    .withOption('serverSide', true)
    .withBootstrap()
    .withPaginationType('simple_numbers')
    .withDisplayLength(20)
    .withOption('createdRow', function (row) {
        // Recompiling so we can bind Angular directive to the DT
        $compile(angular.element(row).contents())($scope);
    })
    .withOption('stateSave', true)
    .withOption('order', [0, 'asc']);

The most important things are:

  • Set the contentType so that we send JSON to the REST controller
  • Set the url that points to our new controller method
  • Set the type to POST since we accept a POST in the controller
  • Add a beforeSend function to set the Authorization header so we can access the controller method that is secured with Spring Security
  • Add a data function to return the data object as JSON

One additional thing I had to do is add this line:

data.columns[1].data = "emailAddress";

The reason for this is that I return UserDto objects from the controller, which have email as a property, and thus the columns are defined like that in JavaScript. However, the real User entity on the server uses emailAddress as its property name. With this line, the server-side code will use the correct property for searching and sorting.
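The UserDto itself is not shown in this post. As a minimal sketch (the field names and the User getters here are assumptions, not taken from the original code), it could look like this:

public class UserDto {

    private String name;
    private String email; // note: the User entity calls this 'emailAddress'

    public static UserDto fromUser(User user) {
        // Hypothetical mapping; only the email/emailAddress mismatch is
        // taken from the post, the other details are assumptions
        UserDto dto = new UserDto();
        dto.name = user.getName();
        dto.email = user.getEmailAddress();
        return dto;
    }

    // getters omitted
}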

After all this, you can inspect the requests and responses in the developer console: only the data that is actually needed will be returned from the server. The search box will work, and so will the sorting.

What is also very nice is that the pagination adapts perfectly. And when you start to search, it also shows this in the footer:

Showing 1 to 9 of 9 entries (filtered from 24 total entries)

And that is all you need to get pagination and sorting with server-side processing to handle large data sets using AngularJS, Datatables and Spring.

This know-how originated during the development of a PegusApps project.


AssertJ custom assertion for testing ConstraintValidator implementations

If you want to unit test a ConstraintValidator with AssertJ, then you can use this custom assertion to make the tests more readable:

import org.assertj.core.api.AbstractAssert;

import javax.validation.ConstraintViolation;
import java.util.Set;
import java.util.stream.Collectors;

public class ConstraintViolationSetAssert extends AbstractAssert<ConstraintViolationSetAssert, Set<? extends ConstraintViolation>> {
    public ConstraintViolationSetAssert(Set<? extends ConstraintViolation> actual) {
        super(actual, ConstraintViolationSetAssert.class);
    }

    public static ConstraintViolationSetAssert assertThat(Set<? extends ConstraintViolation> actual) {
        return new ConstraintViolationSetAssert(actual);
    }

    public ConstraintViolationSetAssert hasViolationOnPath(String path) {
        isNotNull();

        // check condition
        if (!containsViolationWithPath(actual, path)) {
            failWithMessage("There was no violation with path <%s>. Violation paths: <%s>", path,
                            actual.stream()
                                  .map(violation -> violation.getPropertyPath().toString())
                                  .collect(Collectors.toList()));
        }

        return this;
    }

    private boolean containsViolationWithPath(Set<? extends ConstraintViolation> violations, String path) {
        boolean result = false;
        for (ConstraintViolation violation : violations) {
            if (violation.getPropertyPath().toString().equals(path)) {
                result = true;
                break;
            }
        }
        return result;
    }
}

An example unit test that uses this:

@Test
public void givenInvalidUsername_violationConstraint() {
  CreateUserParameters parameters = new CreateUserParameters();
  parameters.setUsername("x");

  ValidatorFactory factory = Validation.buildDefaultValidatorFactory();
  Validator validator = factory.getValidator();
  Set<ConstraintViolation<CreateUserParameters>> violationSet = validator.validate(parameters);
  // static import for ConstraintViolationSetAssert
  assertThat(violationSet).hasViolationOnPath("username");
}
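The CreateUserParameters class itself is not shown here. A minimal sketch that would make this test meaningful could look like this (the @Size constraint is an assumption; any constraint that rejects a one-character username would do):

import javax.validation.constraints.Size;

public class CreateUserParameters {

  // Assumed constraint: the "x" in the test above violates the minimum
  // length, producing a violation on the path "username"
  @Size(min = 2, max = 50)
  private String username;

  public void setUsername(String username) {
    this.username = username;
  }
}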

See http://joel-costigliola.github.io/assertj/assertj-core-custom-assertions.html for the documentation if you want to create your own custom assertions using AssertJ. There is even a Maven plugin that can generate custom assertions automatically: http://joel-costigliola.github.io/assertj/assertj-assertions-generator-maven-plugin.html


Read-only EntryProcessors with Hazelcast

This post is a follow-up on my post EntryProcessors and EntryBackupProcessors with Hazelcast. In the comments, Peter Veentjer from Hazelcast gave me the idea to use an EntryProcessor to read part of the data. I will show you below how best to do this.

We start with a cache that has 10 User objects in it and we want to retrieve the user names of all users in the cache.

Without an entry processor

We can get the list of names without using an entry processor. For example:

public class ReadOnlyEntryProcessorTest0
{
	public static void main( String[] args ) throws InterruptedException
	{
		Config config = new Config();
		config.setProperty("hazelcast.initial.min.cluster.size","2");
		HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance( config );

		// Take a lock so only 1 of the 2 nodes will populate the map and print the names
		ILock doItLock = hazelcastInstance.getLock( "doItLock" );
		doItLock.lock();

		IMap<Long, User> testMap;
		try
		{
			testMap = hazelcastInstance.getMap( "testMap" );
			if( !testMap.containsKey( 1L ))
			{
				IntStream.rangeClosed( 1, 10 )
						.mapToObj( ReadOnlyEntryProcessorTest0::createUser )
						.forEach( user -> testMap.put( user.getId(), user ) );

				testMap.values().stream().map( User::getName ).forEach( name -> System.out.println( "name = " + name ) );
			}
		}
		finally
		{
			doItLock.unlock();
		}
	}

	private static User createUser( int i )
	{
		User user = new User();
		user.setId( i );
		user.setName( "Wim" + i );
		return user;
	}
}

So the example first adds 10 User objects to the cache and then retrieves them again, getting the “name” of each User object and printing it. This means Hazelcast has to fetch (and serialize) the full User object from the other nodes running in the cluster.

Since we are only interested in a small part of the object, we can use an EntryProcessor to retrieve just that.

Using an EntryProcessor

This is the same example, but this time using the AbstractEntryProcessor class:

public class ReadOnlyEntryProcessorTest1
{
	public static void main( String[] args ) throws InterruptedException
	{
		Config config = new Config();
		config.setProperty("hazelcast.initial.min.cluster.size","2");
		HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance( config );

		// Take a lock so only 1 of the 2 nodes will execute the entry processor
		ILock doItLock = hazelcastInstance.getLock( "doItLock" );
		doItLock.lock();

		IMap<Long, User> testMap;
		try
		{
			testMap = hazelcastInstance.getMap( "testMap" );
			if( !testMap.containsKey( 1L ))
			{
				IntStream.rangeClosed( 1, 10 )
						.mapToObj( ReadOnlyEntryProcessorTest1::createUser )
						.forEach( user -> testMap.put( user.getId(), user ) );

				System.out.println( "Calling the entry processor" );
				Map<Long, Object> userNames = testMap.executeOnEntries( new ReadUserNamesEntryProcessor() );
				userNames.values().forEach( name -> System.out.println( "name = " + name ) );
			}
		}
		finally
		{
			doItLock.unlock();
		}
	}

	private static User createUser( int i )
	{
		User user = new User();
		user.setId( i );
		user.setName( "Wim" + i );
		return user;
	}

	private static class ReadUserNamesEntryProcessor extends AbstractEntryProcessor<Long,User>
	{
		public ReadUserNamesEntryProcessor()
		{
		}

		@Override
		public Object process( Map.Entry<Long,User> entry )
		{
			String name = entry.getValue().getName();
			System.out.println("Returning name from primary entry: " + name );
			return name;
		}
	}
}

The printing of the names has now changed from this (not using an entry processor):

testMap.values().stream()
	.map( User::getName )
	.forEach( name -> System.out.println( "name = " + name ) );

to this (using an entry processor):

Map<Long, Object> userNames = testMap.executeOnEntries( new ReadUserNamesEntryProcessor() );
userNames.values().forEach( name -> System.out.println( "name = " + name ) );

The advantage here is that Hazelcast only has to serialize (and send over the network) the “name” String instead of the full User object.

If you run the example, you will notice that the text “Returning name from primary entry” is printed 20 times instead of the expected 10 times (there are 10 User objects in the cache). This is because AbstractEntryProcessor, by default, also runs the processor on the backup entries.
In a read-only use case, this has no use at all (for starters, an EntryBackupProcessor cannot return a value anyway). So for the best performance, we need to call the super constructor with false to avoid that a backup processor is used.

This is now the code for our optimal EntryProcessor:

private static class ReadUserNamesEntryProcessor extends AbstractEntryProcessor<Long,User>
{
	public ReadUserNamesEntryProcessor()
	{
		super( false );
	}

	@Override
	public Object process( Map.Entry<Long,User> entry )
	{
		String name = entry.getValue().getName();
		System.out.println("Returning name from primary entry: " + name );
		return name;
	}
}

An alternative would be to implement the EntryProcessor interface directly and return null for the backup processor yourself:

private static class ReadUserNamesEntryProcessor implements EntryProcessor<Long,User>
{
	public ReadUserNamesEntryProcessor()
	{
	}

	@Override
	public Object process( Map.Entry<Long,User> entry )
	{
		String name = entry.getValue().getName();
		System.out.println("Returning name from primary entry: " + name );
		return name;
	}

	@Override
	public EntryBackupProcessor<Long,User> getBackupProcessor()
	{
		return null;
	}
}

It would be nice if Hazelcast provided a ReadOnlyEntryProcessor abstract class. That would be more explicit than having to remember to call the super constructor with ‘false’. Maybe it could even throw an Exception if you tried to call ‘entry.setValue’ from such an EntryProcessor.
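Such a class does not exist in Hazelcast, but as a sketch (using the same Hazelcast 3.x interfaces as the rest of this post), it could be as simple as:

public abstract class ReadOnlyEntryProcessor<K, V> implements EntryProcessor<K, V>
{
	@Override
	public EntryBackupProcessor<K, V> getBackupProcessor()
	{
		// A read-only processor never modifies entries, so there is
		// nothing to apply to the backup entries
		return null;
	}
}

The ReadUserNamesEntryProcessor above could then extend this class and only implement process().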


EntryProcessors and EntryBackupProcessors with Hazelcast

Hazelcast has the concept of EntryProcessors (like Oracle Coherence). EntryProcessors allow you to update cache entries without having to pull the actual values over. You move the processing to where the value lives and update the cache there.

Furthermore, Hazelcast has the notion of EntryBackupProcessor (which Coherence does not have).

To explain the usage of this, we will use a simple User class:

class User implements Serializable
{
	private long id;
	private String name;
	private DateTime lastLoginTime;

	// getters and setters omitted
}

We will set the ‘lastLoginTime’ on the user by means of an EntryProcessor.

Default behaviour – using AbstractEntryProcessor

For the first test, we will use the default behaviour given by AbstractEntryProcessor. This is the code for our EntryProcessor:

private static class UpdateLastLoginTimEntryProcessor extends AbstractEntryProcessor<Long,User>
{
	private DateTime loginTime;

	public UpdateLastLoginTimEntryProcessor()
	{
	}

	public UpdateLastLoginTimEntryProcessor( DateTime loginTime )
	{
		this.loginTime = loginTime;
	}

	@Override
	public Object process( Map.Entry<Long,User> entry )
	{
		System.out.println("Processing entry: " + entry );
		User user = entry.getValue();
		user.setLastLoginTime( loginTime );
		entry.setValue( user );
		return null;
	}
}

The default behaviour is to apply the same processing to both the primary values and the backup values. To test this, we run this code:

public class HazelcastBackupEntryProcessorTest1
{
	public static void main( String[] args ) throws InterruptedException
	{
		HazelcastInstance hazelcastInstance = setupHazelcast();

		ILock doItLock = hazelcastInstance.getLock( "doItLock" );
		doItLock.lock();

		IMap<Long, User> testMap;
		try
		{
			testMap = hazelcastInstance.getMap( "testMap" );
			if( !testMap.containsKey( 1L ))
			{
				User user = new User();
				user.setId( 1L );
				user.setName( "Wim" );


				testMap.put( user.getId(), user );
				DateTime loginTime = DateTime.now();
				System.out.println( "Calling the entry processor with time: " + loginTime );
				testMap.executeOnEntries( new UpdateLastLoginTimEntryProcessor( loginTime ) );
			}
		}
		finally
		{
			doItLock.unlock();
		}

		printLastLoginTime( testMap );


		System.out.println( "Sleeping 10 sec... manually crash entries node now..");
		Thread.sleep( 10000 );
		System.out.println( "Done sleeping!");


		printLastLoginTime( testMap );
	}

	private static HazelcastInstance setupHazelcast()
	{
		Config config = new Config();
		config.setProperty("hazelcast.initial.min.cluster.size","2");
		return Hazelcast.newHazelcastInstance( config );
	}

	private static void printLastLoginTime( IMap<Long, User> testMap )
	{
		User updatedUser = testMap.get( 1L );
		System.out.println( "last login time: " + updatedUser.getLastLoginTime() );

		LocalMapStats stats = testMap.getLocalMapStats();
		System.out.println("hits: " + stats.getHits());
		System.out.println("entries: " + stats.getOwnedEntryCount());
		System.out.println("backup entries: " + stats.getBackupEntryCount());
	}

	private static class UpdateLastLoginTimEntryProcessor extends AbstractEntryProcessor<Long,User>
	{
		private DateTime loginTime;

		public UpdateLastLoginTimEntryProcessor()
		{
		}

		public UpdateLastLoginTimEntryProcessor( DateTime loginTime )
		{
			this.loginTime = loginTime;
		}

		@Override
		public Object process( Map.Entry<Long,User> entry )
		{
			System.out.println("Processing entry: " + entry );
			User user = entry.getValue();
			user.setLastLoginTime( loginTime );
			entry.setValue( user );
			return null;
		}
	}
}

If you run this code twice to simulate 2 nodes (I just run it from IntelliJ IDEA), you get the following output:

Node 1:

Calling the entry processor with time: 2015-09-28T11:08:32.499+02:00
Processing entry: 1=com.traficon.tmsng.server.User@27d067a4
last login time: 2015-09-28T11:08:32.499+02:00
hits: 0
entries: 0
backup entries: 1

Node 2:

Processing entry: 1=com.traficon.tmsng.server.User@77f309d3
last login time: 2015-09-28T11:08:32.499+02:00
hits: 4
entries: 1
backup entries: 0

So we see the entry processor is called twice, once on each node.

During the 10 second sleep, I stop the node that has the backup entries. When the sleep is done, this is printed on the other node:

last login time: 2015-09-28T11:08:32.499+02:00
hits: 5
entries: 1
backup entries: 0

We see the backup entries have become entries now.

Without an EntryBackupProcessor

Now, what would happen if we use this implementation for our entry processor:

private static class UpdateLastLoginTimEntryProcessor implements EntryProcessor<Long,User>
{
	private DateTime loginTime;

	public UpdateLastLoginTimEntryProcessor()
	{
	}

	public UpdateLastLoginTimEntryProcessor( DateTime loginTime )
	{
		this.loginTime = loginTime;
	}

	@Override
	public Object process( Map.Entry<Long,User> entry )
	{
		System.out.println("Processing entry: " + entry );
		User user = entry.getValue();
		user.setLastLoginTime( loginTime );
		entry.setValue( user );
		return null;
	}

	@Override
	public EntryBackupProcessor<Long,User> getBackupProcessor()
	{
		return null;
	}
}

In this implementation, we return null for our EntryBackupProcessor. This in effect means that we will NOT be updating the backup entries!

Node 1:

Calling the entry processor with time: 2015-09-28T11:19:26.237+02:00
last login time: 2015-09-28T11:19:26.237+02:00
hits: 0
entries: 0
backup entries: 1

Node 2:

Processing entry: 1=com.traficon.tmsng.server.User@15101e96
last login time: 2015-09-28T11:19:26.237+02:00
hits: 4
entries: 1
backup entries: 0

So now, we only see “Processing entry” on the node where the actual value lives; nothing happens on the node with the backup entries. If we now crash node 1 and print our cached User object again, we see this:

last login time: null
hits: 1
entries: 1
backup entries: 0

The backup entry has been promoted to primary, but the last login time is lost since we did not run the entry processor on the backup entries.

Updating the backup without double processing

Suppose you have quite complex processing going on in your entry processor. If you want to be on the safe side, you need to run an EntryBackupProcessor. However, doing the processing twice is expensive in terms of CPU. Is there an alternative?

It turns out, you can use this construct:

private static class LotsOfProcessingEntryProcessor implements EntryProcessor<Long,User>
{
	private transient User updatedUser;

	public LotsOfProcessingEntryProcessor()
	{
	}

	@Override
	public Object process( Map.Entry<Long,User> entry )
	{
		try
		{
			System.out.println("Processing entry: " + entry );
			User user = entry.getValue();
			Thread.sleep( 2000 ); // Simulate processing

			//suppose you update something on the user object here
			//user.updateFoo( foo );
			user.setLastLoginTime( DateTime.now() );

			updatedUser = user;
			System.out.println( "updatedUser = " + updatedUser );

			entry.setValue( user );
			return null;
		}
		catch (InterruptedException e)
		{
			e.printStackTrace();
			return null;
		}
	}

	@Override
	public EntryBackupProcessor<Long,User> getBackupProcessor()
	{
		return new CopyValueToBackupEntryBackupProcessor( updatedUser );
	}

	public static class CopyValueToBackupEntryBackupProcessor implements EntryBackupProcessor<Long, User>
	{
		private User user;

		public CopyValueToBackupEntryBackupProcessor( User user )
		{
			this.user = user;
		}

		@Override
		public void processBackup( Map.Entry<Long, User> entry )
		{
			System.out.println( "Updating user on backup entry: " + user );
			entry.setValue( user );
		}
	}
}

When testing this, we get the following output:

Node 1:

Processing entry: 1=com.traficon.tmsng.server.User@1994ad74
updatedUser = com.traficon.tmsng.server.User@1994ad74
last login time: 2015-09-29T08:28:40.756+02:00
hits: 4
entries: 1
backup entries: 0
Sleeping 10 sec... crash entries node now..

Node 2:

Calling the entry processor
Updating user on backup entry: com.traficon.tmsng.server.User@4caf4ac
last login time: 2015-09-29T08:28:40.756+02:00
hits: 0
entries: 0
backup entries: 1
Sleeping 10 sec... crash entries node now..
Done sleeping!
last login time: 2015-09-29T08:28:40.756+02:00
hits: 2
entries: 1
backup entries: 0

Notice how on Node 2 the backup entry becomes primary after the crash of Node 1 and how we did not have to do the expensive processing again in the EntryBackupProcessor.

The CopyValueToBackupEntryBackupProcessor is now specific to this example, but it can easily be made generic so you can re-use it:

public static class CopyValueToBackupEntryBackupProcessor<K, V> implements EntryBackupProcessor<K, V>
{
	private V value;

	public CopyValueToBackupEntryBackupProcessor( V value )
	{
		this.value = value;
	}

	@Override
	public void processBackup( Map.Entry<K, V> entry )
	{
		entry.setValue( value );
	}
}
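With the generic version, getBackupProcessor() in the entry processor above would become something like this (a sketch, reusing the updatedUser field that process() filled in):

@Override
public EntryBackupProcessor<Long, User> getBackupProcessor()
{
	// Hand the value computed in process() to the backup entries, so the
	// expensive processing does not have to run a second time
	return new CopyValueToBackupEntryBackupProcessor<>( updatedUser );
}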

Conclusion

I have shown you several ways to use an EntryBackupProcessor in Hazelcast. Which one is best for your application really depends on your use case, as always. As a general rule of thumb, the default behaviour in AbstractEntryProcessor is best when the processing is small. If there is a lot of processing going on, it could be interesting to look into using a CopyValueToBackupEntryBackupProcessor.


Introduction to using JavaFX with afterburner.fx

I wanted to try out afterburner.fx, a JavaFX framework which describes itself as:

a minimalistic (3 classes) JavaFX MVP framework based on Convention over Configuration and Dependency Injection.

For this purpose I created a simple note taking application which looks like this when finished:

[Screenshot: the finished note taking application]

First off, the domain class that represents a Note:

public class Note
{
	private long id;
	private String title;
	private String content;

	// constructor, getters and setters omitted
}

I also made a NoteService to retrieve the current notes and update an existing note:

public interface NoteService
{
	SortedSet<Note> getNotes();

	void updateNode( Note note );
}

I have made an in-memory implementation for testing purposes:

public class InMemoryNoteService implements NoteService
{
	private Map<Long,Note> notes = new HashMap<>();

	public InMemoryNoteService()
	{
		notes.put( 1L, new Note( 1, "note title 1", "some more info on the note" ) );
		notes.put( 2L, new Note( 2, "note title 2", "some more info on the other note" ) );
	}

	@Override
	public SortedSet<Note> getNotes()
	{
		TreeSet<Note> treeSet = new TreeSet<>( new NoteComparator() );
		treeSet.addAll( notes.values() );
		return treeSet;
	}

	@Override
	public void updateNode( Note note )
	{
		notes.put( note.getId(), note );
	}

}
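The NoteComparator used here (and again in the presenter below) is not shown in the post; a minimal sketch, assuming notes are simply sorted by title, could be:

public class NoteComparator implements Comparator<Note>
{
	@Override
	public int compare( Note o1, Note o2 )
	{
		// Assumption: order notes alphabetically by their title
		return o1.getTitle().compareTo( o2.getTitle() );
	}
}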

Now, off to the actual JavaFX stuff. We start by creating the FXML code that defines the components of our application:

<SplitPane dividerPositions="0.3" maxHeight="-Infinity" maxWidth="-Infinity" minHeight="-Infinity"
           minWidth="-Infinity" prefHeight="400.0" prefWidth="600.0" xmlns:fx="http://javafx.com/fxml/1"
           xmlns="http://javafx.com/javafx/8.0.40" fx:controller="org.deblauwe.afterburnernote.view.MainPresenter">
    <items>
        <BorderPane minHeight="0.0" minWidth="100.0" prefHeight="398.0" prefWidth="176.0"
                    styleClass="defaultBorderSpacing">
            <center>
                <ListView fx:id="listView"/>
            </center>
        </BorderPane>
        <GridPane minHeight="0.0" minWidth="0.0" prefHeight="160.0" prefWidth="100.0" styleClass="defaultBorderSpacing">
            <rowConstraints>
                <RowConstraints vgrow="NEVER" valignment="TOP"/>
                <RowConstraints vgrow="ALWAYS" valignment="TOP"/>
                <RowConstraints vgrow="NEVER"/>
            </rowConstraints>
            <columnConstraints>
                <ColumnConstraints hgrow="NEVER"/>
                <ColumnConstraints hgrow="ALWAYS"/>
            </columnConstraints>
            <Label text="Title" GridPane.rowIndex="0" GridPane.columnIndex="0"/>
            <TextField fx:id="titleField" prefWidth="308.0" GridPane.rowIndex="0" GridPane.columnIndex="1"/>
            <Label layoutX="14.0" text="Todo" GridPane.rowIndex="1" GridPane.columnIndex="0"/>
            <TextArea fx:id="contentField" prefWidth="308.0" GridPane.rowIndex="1" GridPane.columnIndex="1"/>
            <Button fx:id="saveButton" text="Save" GridPane.rowIndex="2" GridPane.columnIndex="0" GridPane.columnSpan="2" GridPane.halignment="RIGHT"/>
        </GridPane>
    </items>
</SplitPane>

What is important is the fx:controller attribute, which needs to point to a controller that defines the behaviour. I named my FXML file main.fxml and followed the convention of naming the controller <nameofview>Presenter.
Before I show the presenter, you also need a View, which I called MainView. It does not contain any actual code; it just extends from FXMLView (a class from the afterburner.fx framework):

public class MainView extends FXMLView
{
}

The MainPresenter contains the bulk of the code:

public class MainPresenter implements Initializable
{
// ------------------------------ FIELDS ------------------------------

	@FXML
	public TextArea contentField;
	@FXML
	public Button saveButton;
	@FXML
	private ListView<Note> listView;

	@FXML
	private TextField titleField;

	@Inject
	private NoteService noteService;

// ------------------------ INTERFACE METHODS ------------------------

// --------------------- Interface Initializable ---------------------

	@Override
	public void initialize( URL location, ResourceBundle resources )
	{
		listView.setCellFactory( param -> new NoteListCell() );
		listView.setItems( FXCollections.observableArrayList( noteService.getNotes() ) );
		listView.getSelectionModel().selectedItemProperty().addListener( new NoteListViewSelectionChangeListener() );

		selectFirstItemIfPossible();

		saveButton.setOnAction( event -> {
			// Save the updated note with the service
			Note selectedItem = listView.getSelectionModel().getSelectedItem();
			selectedItem.setTitle( titleField.getText() );
			selectedItem.setContent( contentField.getText() );
			noteService.updateNode( selectedItem );

			listView.getItems().set( listView.getSelectionModel().getSelectedIndex(), selectedItem );
			listView.getItems().sort( new NoteComparator() );
		} );
	}

// -------------------------- PRIVATE METHODS --------------------------

	private void selectFirstItemIfPossible()
	{
		if (listView.getItems().size() > 0)
		{
			listView.getSelectionModel().select( 0 );
		}
	}

// -------------------------- INNER CLASSES --------------------------

	private static class NoteListCell extends ListCell<Note>
	{
		@Override
		protected void updateItem( Note item, boolean empty )
		{
			super.updateItem( item, empty );
			if (item != null)
			{
				setText( item.getTitle() );
			}
		}
	}

	private class NoteListViewSelectionChangeListener implements ChangeListener<Note>
	{
		@Override
		public void changed( ObservableValue<? extends Note> observable, Note oldValue, Note newValue )
		{
			if( newValue != null )
			{
				titleField.setText( newValue.getTitle() );
				contentField.setText( newValue.getContent() );
			}
		}
	}
}

Let us break this down a bit. First, we can reference any component that is declared in the FXML file by using the @FXML annotation on a field.

For example:

@FXML
public Button saveButton;

Note that the name of the field should match the fx:id in the FXML file for this to work:

<Button fx:id="saveButton" text="Save" GridPane.rowIndex="2" GridPane.columnIndex="0" GridPane.columnSpan="2" GridPane.halignment="RIGHT"/>

@Inject allows you to inject arbitrary values or services. Here, I used it to get a reference to the NoteService:

@Inject
private NoteService noteService;

To make this work, you need to set up the injection in your main class. This is what I have:

public class Main extends Application
{

	@Override
	public void start( Stage primaryStage ) throws Exception
	{
		Map<Object, Object> context = new HashMap<>();
		context.put( "noteService", new InMemoryNoteService() );

		Injector.setConfigurationSource( context::get );

		MainView mainView = new MainView();
		Scene scene = new Scene( mainView.getView() );
		primaryStage.setTitle( "AfterburnerNoteFX" );
		primaryStage.setScene( scene );
		primaryStage.show();
	}
}
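Note that there is no main method: the Java 8 launcher can start a subclass of Application directly. If your IDE or build setup does need one, the standard JavaFX entry point applies:

public static void main( String[] args )
{
	launch( args );
}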

The Injector has a static method which needs a Function. So anything that returns an Object, given another Object, is ok. A Java 8 method reference to the get method of a Map is probably the easiest.
Notice that the key in the Map has to match the name of the @Inject annotated field in the controller.

To make it look good, we add a CSS file with the same name as the FXML file (so main.css in my example):

.defaultBorderSpacing {
    -fx-border-width: 10;
    -fx-border-color: transparent;
}

GridPane {
    -fx-hgap: 10;
    -fx-vgap: 10;
}

This is the full file tree for the application:

[Screenshot: file tree of the AfterburnerNote application]

This concludes my introduction. Please take a look at the website for some more info and links to other example projects. I really like what afterburner.fx provides. It would be even better if it could be combined with the Spring Framework for more feature-rich dependency injection, but I can understand that this would totally clash with the minimalistic goal of the framework.


Using Font Awesome in JavaFX with fontawesomefx

Icons are a great way to spice up any UI. You can easily use the Font Awesome icons in JavaFX, by using the fontawesomefx library.

I will show a small example of how to use the icons in JavaFX code and how to apply some styling.

First import the library. I am using Maven, so I just add this dependency:

<dependency>
  <groupId>de.jensd</groupId>
  <artifactId>fontawesomefx</artifactId>
  <version>8.2</version>
</dependency>

We will start with a simple app that uses a BorderPane to put some content at the center and have a kind of header at the top:

public class FontAwesomeTest extends Application
{
  @Override
  public void start( Stage stage ) throws Exception
  {
    StackPane root = new StackPane();

    BorderPane borderPane = new BorderPane();
    HBox headerBox = new HBox();
    headerBox.getStyleClass().add( "header-component" );
    borderPane.setTop( headerBox );
    Label centerComponent = new Label( "CENTER COMPONENT" );
    centerComponent.setPrefSize( Double.MAX_VALUE, Double.MAX_VALUE );
    centerComponent.getStyleClass().add( "center-component" );
    borderPane.setCenter( centerComponent );
    root.getChildren().add( borderPane );

    Scene scene = new Scene( root );
    scene.getStylesheets().add( "font-awesome-test.css" );

    stage.setScene( scene );
    stage.setWidth( 300 );
    stage.setHeight( 400 );
    stage.setTitle( "JavaFX 8 app" );
    stage.show();
  }
}

The CSS file used:

.center-component {
    -fx-background-color: coral;
    -fx-alignment: center;
}

The app looks like this initially:

[Screenshot: the initial app with the coral center component]

We will now add an icon in the header:

HBox headerBox = new HBox();
headerBox.getStyleClass().add( "header-component" );
headerBox.getChildren()
         .addAll( GlyphsDude.createIcon( FontAwesomeIcons.BARS,
                                         "40px" ) );

Notice how we use the static factory method createIcon to build a Node with the icon, using the constants from the FontAwesomeIcons enum. As a second argument, we can specify the size of the icon.

We get the following result:

[Screenshot: the app with the bars icon in the header]

We can add some CSS to add a border so the icon does not stick to the side:

.header-component {
    -fx-border-width: 7px;
    -fx-border-color: transparent;
}

NOTE: Do not forget to set the -fx-border-color style as well as the -fx-border-width. Only setting the width will not do anything!

[Screenshot: the icon with border spacing applied]

If we want to add some text next to the icon, we can use the static factory method createIconLabel:

HBox headerBox = new HBox();
headerBox.getStyleClass().add( "header-component" );
Label iconLabel = GlyphsDude.createIconLabel( FontAwesomeIcons.BARS,
                                              "Menu",
                                              "40px",
                                              "40px",
                                              ContentDisplay.LEFT );
iconLabel.getStyleClass().add( "header-label" );
headerBox.getChildren().addAll( iconLabel );

Which shows as:

[Screenshot: the icon with the text “Menu” next to it]

Finally, we can color the icon and the text by applying this CSS:

.header-label > .text {
    -fx-fill: #8A0808;
}

.header-label > .glyph-icon {
    -fx-fill: #8A0808;
}

Final result:

[Screenshot: the final result with the colored icon and text]


Spring Boot application with “exploded” directory structure

Spring Boot is really amazing for getting started quickly with a new Spring application. By default, your application is contained in a single jar when packaging it. This has some advantages, but what if you want a “classic” layout with a config folder (for your application.properties or logback.xml files) and a lib folder?

Getting Started

This blog post will show you a way of doing this using Maven and the Maven Assembly Plugin.

First, we create a simple project using the Spring Initializr. I opted for Java 8 and selected the “Web” dependency. The current Spring Boot version is 1.1.8.

This gave me a zip file with the following structure:

pom.xml
src/main/java/.../Application.java
src/main/resources/application.properties
src/test/java/...

For some fun, I added a simple REST controller:

package org.deblauwe.example.boot.exploded;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/test")
public class TestController
{
	@Value( "${hello.value:World}" )
	private String helloValue;

	@RequestMapping("/")
	public String sayHello()
	{
		return "Hello " + helloValue;
	}
}

Running this application and browsing to http://localhost:8080/test/ shows “Hello World” in the browser.

We can now inject a different value into ‘helloValue’ by adding the following line to application.properties:

hello.value=Maven

If you now refresh the browser, it shows: Hello Maven

Creating the assembly

We now want to build a zip file out of this simple application with the following layout:

start.sh
config/application.properties
lib/spring-boot-exploded-example-0.0.1-SNAPSHOT.jar

For this, we add the Maven Assembly Plugin to our pom.xml:

<build>
  <plugins>
    ...
    <plugin>
      <artifactId>maven-assembly-plugin</artifactId>
      <configuration>
        <descriptors>
          <descriptor>src/main/assembly/descriptor.xml</descriptor>
        </descriptors>
      </configuration>
      <executions>
        <execution>
          <id>make-assembly</id> <!-- this is used for inheritance merges -->
          <phase>package</phase> <!-- bind to the packaging phase -->
          <goals>
            <goal>single</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

We also create the descriptor.xml file in the src/main/assembly folder:

<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">
    <id>application</id>
    <formats>
        <format>zip</format>
    </formats>
    <fileSets>
        <fileSet>
            <directory>${project.basedir}/src/main/resources</directory>
            <outputDirectory>/config</outputDirectory>
            <includes>
                <include>application.properties</include>
            </includes>
        </fileSet>
        <fileSet>
            <directory>${project.basedir}/src/main/assembly</directory>
            <outputDirectory>/</outputDirectory>
            <filtered>true</filtered>
            <fileMode>0755</fileMode>
            <includes>
                <include>*.sh</include>
            </includes>
        </fileSet>
        <fileSet>
            <directory>${project.build.directory}</directory>
            <outputDirectory>/lib</outputDirectory>
            <includes>
                <include>*.jar</include>
            </includes>
        </fileSet>
    </fileSets>
</assembly>

The final piece is the start.sh file. Place this one also in the src/main/assembly folder:

#!/bin/sh

DIR=`dirname $0`
cd $DIR
java -jar lib/${project.artifactId}-${project.version}.jar --spring.profiles.active=prod $*

The assembly plugin will replace project.artifactId and project.version during the build. The --spring.profiles.active=prod is not needed for this sample application, but it shows how you can force a certain Spring profile in the startup script.

Now run: mvn package

This will create a zip file in the target folder with exactly the layout we wanted:
[Screenshot: contents of the generated zip file]

So now it becomes very easy to change something in application.properties if needed.

Assembly with all jar files separately

We can now take this one step further. Maybe you want to have all the jar files separately in the lib folder, just in case you need to patch one of your dependencies, or you just want to test something quickly. For this, we need to remove the spring-boot-maven-plugin from the pom.xml. After that, update the assembly descriptor:

<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">
    <id>application</id>
    <formats>
        <format>zip</format>
    </formats>
    <dependencySets>
        <dependencySet>
            <outputDirectory>lib</outputDirectory>
            <unpack>false</unpack>
        </dependencySet>
    </dependencySets>
    <fileSets>
        <fileSet>
            <directory>${project.basedir}/src/main/resources</directory>
            <outputDirectory>/config</outputDirectory>
            <includes>
                <include>application.properties</include>
            </includes>
        </fileSet>
        <fileSet>
            <directory>${project.basedir}/src/main/assembly</directory>
            <outputDirectory>/</outputDirectory>
            <filtered>true</filtered>
            <fileMode>0755</fileMode>
            <includes>
                <include>*.sh</include>
            </includes>
        </fileSet>
    </fileSets>
</assembly>

Notice the dependencySet that has been added; the fileSet for the jar has been removed.

You also need to edit the start.sh startup script to load all jar files from the lib directory. Note that ${start-class} is filtered by the assembly plugin just like the artifactId and version earlier; it assumes a start-class property pointing to your Application class is defined in the pom.xml:

#!/bin/sh

DIR=`dirname $0`
cd $DIR
# Quote the classpath so the shell does not expand lib/* itself; java expands the wildcard
java -cp ".:./config:./lib/*" ${start-class} --spring.profiles.active=prod $*

After running mvn clean package, you end up with a zip file with this structure:
[Screenshot: contents of the zip file with each dependency jar in lib]

Conclusion

I showed how you can easily use the Maven assembly plugin to package your project as a zip file so you can edit properties without having to unpack the jar file, as in the standard Spring Boot setup. I find this extremely useful for things like changing log level settings.
