Recently I’ve been working on a project that heavily uses JBoss Drools. I am not a Drools expert – and I am not entirely convinced by this framework, or perhaps only by its particular use case in this project
– and I found it quite difficult to write simple, maintainable unit tests for Drools-based business rules.
Modern frontend technologies like SASS, Less or CoffeeScript are not as easy to set up in a classic Java web application as they are in modern frameworks like Ruby on Rails or the Play Framework, but it is possible. After spending some time on a project using SASS I can’t really imagine going back to plain old CSS, so I decided to have a look at what is currently on offer for Java.
There are very powerful (and complex) solutions for handling SASS compilation, like Wro4j, but I wanted something simple that meets these requirements:
compiles SASS to CSS during the build
does not require me to add any additional libs to the application
developer-mode friendly – I want to be able to change SASS files on the fly during development and avoid server restarts
Spring Data MongoDB 1.2.0 silently introduced a new feature: support for basic auditing. Because you will not find much about it in the official reference, in this post I will show what benefits it brings, how to configure Spring for auditing, and how to annotate your documents to make them auditable.
Auditing lets you declaratively tell Spring to store:
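In practice this means timestamping (and, with an AuditorAware bean, attributing) documents as they are saved. The following self-contained sketch mimics what the auditing listener does, using local stand-ins for Spring Data’s @CreatedDate and @LastModifiedDate annotations – it illustrates the mechanism only and is not the actual Spring implementation:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.util.Date;

public class AuditingSketch {

    // Local stand-ins mirroring Spring Data's @CreatedDate / @LastModifiedDate
    // (org.springframework.data.annotation), so this sketch runs standalone.
    @Retention(RetentionPolicy.RUNTIME) @interface CreatedDate {}
    @Retention(RetentionPolicy.RUNTIME) @interface LastModifiedDate {}

    static class Invoice {
        @CreatedDate Date created;
        @LastModifiedDate Date lastModified;
    }

    // Roughly what the auditing listener does before persisting:
    // @CreatedDate is stamped only on first save, @LastModifiedDate on every save.
    static void stamp(Object entity, boolean firstSave) {
        Date now = new Date();
        try {
            for (Field f : entity.getClass().getDeclaredFields()) {
                f.setAccessible(true);
                if (firstSave && f.isAnnotationPresent(CreatedDate.class)) f.set(entity, now);
                if (f.isAnnotationPresent(LastModifiedDate.class)) f.set(entity, now);
            }
        } catch (IllegalAccessException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        Invoice invoice = new Invoice();
        stamp(invoice, true);   // both fields stamped on first save
        stamp(invoice, false);  // only lastModified re-stamped on update
        System.out.println(invoice.created != null && invoice.lastModified != null);
    }
}
```

In real code you keep only the annotated fields in your document class and let Spring’s listener do the stamping.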
If you did not attend and you don’t have a good excuse – you should regret it. I have never seen such professional lecturers as those two guys. Whatever the reason for your absence, in this post I will try to briefly summarize what this event was all about.
The beauty and, at the same time, the curse of the Java ecosystem is the variety of available frameworks and solutions. There are plenty of ready-to-use solutions for most of the problems we meet on a daily basis, and the tricky part is only choosing the right one. This also applies to testing frameworks. The times when the only tool you used was JUnit are hopefully gone – now there are plenty of very high quality frameworks that help you write any kind of automated test not only faster but, more importantly, with much better reliability and maintainability.
Today I would like to share with you three testing frameworks/tools that I find very useful and handy.
Behavior Driven Development is becoming more and more popular. In my opinion it is a great way not only of writing acceptance tests but of developing software in general. There are a couple of BDD frameworks for Java, and since March 2012 we have had a real Java port of one of the most important Ruby frameworks, Cucumber: Cucumber-JVM.
Cucumber supports the Gherkin language for defining steps, which makes creating specifications easy and natural. Another advantage of Gherkin is that there are Cucumber-like frameworks for many languages and platforms, which makes it great to use in an organization where team members need to learn only one way of defining steps.
A complete example of Cucumber-JVM usage can be found in the official 1.0.0 announcement. As you can see there, it does not look hard. The hard part begins when it comes to writing testable step definitions, but that is a topic for another post. This is how an example feature can look:
Feature: As a player I want to be able to send a mail invitation to my friends
Scenario: player sends successful invitation
Given player "John Doe"
When "John Doe" invites "email@example.com"
Then 1 mail is sent
And invitation from "John Doe" to "email@example.com" is saved
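Each of the steps above gets bound to a step-definition method through a regular expression, with the quoted values passed in as arguments. The toy sketch below reimplements just that matching mechanism in plain Java – the class and method names are made up, and this is not Cucumber’s real API:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// A toy step registry mimicking how Cucumber binds Gherkin steps
// to step-definition methods via regular expressions.
public class StepMatcherSketch {

    static final Map<Pattern, Consumer<List<String>>> steps = new LinkedHashMap<>();
    static final List<String> sentMails = new ArrayList<>();

    static void register(String regex, Consumer<List<String>> body) {
        steps.put(Pattern.compile(regex), body);
    }

    // Finds the first definition whose regex matches and passes captured groups to it.
    static void run(String step) {
        for (Map.Entry<Pattern, Consumer<List<String>>> e : steps.entrySet()) {
            Matcher m = e.getKey().matcher(step);
            if (m.matches()) {
                List<String> args = new ArrayList<>();
                for (int i = 1; i <= m.groupCount(); i++) args.add(m.group(i));
                e.getValue().accept(args);
                return;
            }
        }
        throw new IllegalStateException("Undefined step: " + step);
    }

    public static void main(String[] args) {
        register("\"([^\"]+)\" invites \"([^\"]+)\"",
                 a -> sentMails.add(a.get(0) + " -> " + a.get(1)));
        run("\"John Doe\" invites \"email@example.com\"");
        System.out.println(sentMails);
    }
}
```

In Cucumber-JVM the `register` call is replaced by annotations such as `@Given`/`@When`/`@Then` on methods, but the regex-to-arguments binding works the same way.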
REST, sessions… wait. There are no sessions in a REST application, right? Well, that’s true. If we can avoid sessions, we should. REST is stateless. The main concern with statelessness is authentication. In classic web applications we were used to storing user data in the session after authentication. How do we solve this if we don’t want to use sessions? We authenticate every request.
Thanks to that we can scale our application, adding and removing nodes, without worrying about session replication or about consumed Java heap memory.
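One common way to authenticate every request is a signed token that the client sends each time, so the server can verify identity without storing any state. Below is a simplified, self-contained illustration with assumed names; a real application would also embed an expiry timestamp and delegate the verification to a Spring Security filter:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Base64;

// Sketch of stateless, per-request authentication: a token is issued once and
// every subsequent request is verified from the token alone - no HttpSession.
public class StatelessAuthSketch {

    private static final byte[] SERVER_SECRET =
            "change-me-server-side-secret".getBytes(StandardCharsets.UTF_8);

    // token = username + "." + base64url(HMAC-SHA256(username))
    static String issueToken(String username) {
        return username + "." + sign(username);
    }

    // Called for every request: recompute the signature and compare.
    static boolean isValid(String token) {
        int dot = token.lastIndexOf('.');
        if (dot < 0) return false;
        String username = token.substring(0, dot);
        return token.substring(dot + 1).equals(sign(username));
    }

    private static String sign(String data) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(SERVER_SECRET, "HmacSHA256"));
            return Base64.getUrlEncoder().withoutPadding()
                    .encodeToString(mac.doFinal(data.getBytes(StandardCharsets.UTF_8)));
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        String token = issueToken("john");
        System.out.println(isValid(token));         // true
        System.out.println(isValid("john.forged")); // false
    }
}
```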
Recently I’ve been working on a high-load REST application. Actually, we didn’t expect to have high traffic there, but surprisingly it turned out to be much, much higher than what we had prepared for (a so-called “happy problem”). The application is based on the Spring Framework, secured with Spring Security, and deployed on Apache Tomcat 7. All resources are totally stateless – HttpSession is not touched by any piece of my code. Unfortunately, the used Java heap space kept increasing until:
java.lang.OutOfMemoryError: Java heap space
was thrown. To analyze the Java heap dump and the runtime heap usage I used VisualVM.
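As a small complement to VisualVM, heap usage can also be logged from inside the application via the standard MemoryMXBean – for example from a scheduled task – to correlate traffic spikes with heap growth. A minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Logging heap usage from inside the JVM, e.g. periodically, so heap growth
// can be correlated with application load in the logs.
public class HeapLogger {

    static String heapReport() {
        MemoryMXBean mx = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mx.getHeapMemoryUsage();
        long usedMb = heap.getUsed() / (1024 * 1024);
        long maxMb = heap.getMax() / (1024 * 1024);
        return "heap used: " + usedMb + " MB / max: " + maxMb + " MB";
    }

    public static void main(String[] args) {
        System.out.println(heapReport());
    }
}
```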
In this article I will show you how to generate code with the JAnnocessor framework created by Nikolche Mihajlovski. I first encountered JAnnocessor at the GeeCON 2012 conference during Nikolche’s talk “Innovative and Pragmatic Java Source Code Generation” (slides). Afterwards I used it successfully in one of my projects. There are almost no resources about this framework, so I hope my article will be useful for those who are interested in using it or who are just looking for a brand new toy for their project.
Every Java developer uses some sort of code generation tool on a daily basis. Setters, getters, trivial constructors, toString – all of these are just boilerplate code. Usually we generate them with our favorite IDE’s help. I can’t really imagine coding them manually, and because Java is a static language we will never be able to skip this process.
Those trivial examples of code generation provided by all modern IDEs are not the only situations where code generation is useful. Many modern frameworks generate code to help us write more reliable code, and do it faster. I think the best known examples are QueryDSL and the JPA2 Metamodel Generator, which create objects used to perform type-safe database queries.
There are also other situations – not so well supported by IDEs – where we could use code generation. It can usually be helpful for generating:
DTOs and mappers from domain objects to DTOs
These are only examples. In some projects there might be something project-specific for which we can’t use any existing code generation tool, and we have to write our own. How do we do that? With Java APT – the Annotation Processing Tool.
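A minimal annotation processor is just a subclass of javax.annotation.processing.AbstractProcessor, which the compiler invokes in rounds for each annotated element. The skeleton below is a sketch: @GenerateDto is a hypothetical annotation name, and a real generator would emit source files through processingEnv.getFiler() instead of only printing a note:

```java
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

// Bare skeleton of an APT processor: the compiler calls process() every round,
// handing over the elements annotated with the declared annotation types.
@SupportedAnnotationTypes("com.example.GenerateDto") // hypothetical annotation
@SupportedSourceVersion(SourceVersion.RELEASE_8)
public class DtoGeneratorProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations,
                           RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element e : roundEnv.getElementsAnnotatedWith(annotation)) {
                // A real generator would write a new source file here
                // via processingEnv.getFiler().createSourceFile(...).
                processingEnv.getMessager().printMessage(
                        Diagnostic.Kind.NOTE, "would generate DTO for " + e);
            }
        }
        return true; // claim the annotations so no other processor handles them
    }
}
```

The processor is registered with the compiler through a `META-INF/services/javax.annotation.processing.Processor` entry.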
In the middle of May one of the biggest Java conferences in Poland took place – GeeCON 2012. Although I think it is almost always worth attending technical conferences, even if the lectures don’t necessarily fit your needs, it was my first time at GeeCON and my first visit to a technical conference since DevCrowd in April 2011. I went to GeeCON with boundless optimism and very high expectations. I was interested in lots of lectures and hoped I would be able to use some of the knowledge shared there immediately after getting back to work (some people would call it conference-driven development ;)).
A few facts about GeeCON
3 days – one so-called University Day and 2 main days
5 tracks – at most 5 lectures at the same time
A variety of topics, from Java EE through testing and tools like Vaadin and OSGi to Agile methodologies
No talk about the Spring Framework or any related product
This is already the third post about tuning and enhancing Spring Data MongoDB’s capabilities. This time I found that I miss one JPA feature: the @OrderBy annotation. @OrderBy specifies the ordering of the elements of a collection-valued association at the point when the association is retrieved.
In this article I will show how to implement sorting with an @OrderBy annotation in Spring Data MongoDB.
Just a short example of what this is all about, for those who have not used JPA’s @OrderBy before. We’ve got two classes here and a one-to-many relation:
Backpack is the main class here and contains a list of embedded items. When a Backpack is loaded from the database, its items are loaded in an order close to the insertion order. What if we want to change that and order the items by one of their fields? We need to implement the sorting on our own, and once again we will extend AbstractMongoEventListener.
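To make the idea concrete before wiring it into a listener, here is a self-contained sketch: a local @OrderBy annotation on the embedded list, plus the reflective sorting that a listener callback could apply after a document is loaded. The class and field names are illustrative, and this is plain Java rather than the actual Spring Data machinery:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class OrderBySketch {

    // Local JPA-like @OrderBy: names the element field to sort the list by.
    @Retention(RetentionPolicy.RUNTIME)
    @interface OrderBy {
        String value();
    }

    static class Item {
        final String name;
        Item(String name) { this.name = name; }
    }

    static class Backpack {
        @OrderBy("name")
        List<Item> items = new ArrayList<>();
    }

    // What an AbstractMongoEventListener callback would do after load:
    // sort every @OrderBy-annotated list by the field named in the annotation.
    static void applyOrdering(Object document) {
        try {
            for (Field f : document.getClass().getDeclaredFields()) {
                OrderBy orderBy = f.getAnnotation(OrderBy.class);
                if (orderBy == null || !List.class.isAssignableFrom(f.getType())) continue;
                f.setAccessible(true);
                @SuppressWarnings("unchecked")
                List<Object> list = (List<Object>) f.get(document);
                list.sort((a, b) ->
                        fieldValue(a, orderBy.value()).compareTo(fieldValue(b, orderBy.value())));
            }
        } catch (IllegalAccessException e) {
            throw new IllegalStateException(e);
        }
    }

    @SuppressWarnings("unchecked")
    static Comparable<Object> fieldValue(Object element, String fieldName) {
        try {
            Field key = element.getClass().getDeclaredField(fieldName);
            key.setAccessible(true);
            return (Comparable<Object>) key.get(element);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        Backpack backpack = new Backpack();
        backpack.items.add(new Item("torch"));
        backpack.items.add(new Item("knife"));
        backpack.items.add(new Item("rope"));
        applyOrdering(backpack);
        for (Item item : backpack.items) System.out.println(item.name); // knife, rope, torch
    }
}
```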
Spring Data MongoDB by default does not support cascading operations on objects referenced with @DBRef annotations, as the reference documentation says:
The mapping framework does not handle cascading saves. If you change an Account object that is referenced by a Person object, you must save the Account object separately. Calling save on the Person object will not automatically save the Account objects in the property accounts.
That’s quite problematic, because in order to save child objects you need to override the save method in the parent’s repository or create additional service methods, as presented here.
In this article I will show you how this can be achieved for all documents using a generic implementation of AbstractMongoEventListener.
Because we can’t change the @DBRef annotation by adding a cascade property, let’s create a new annotation, @CascadeSave, that will be used to mark which fields should be saved when the parent object is saved.
How does it work? When the MongoTemplate#save method is called, before the object is actually saved it is converted into a DBObject from the MongoDB API. The CascadingMongoEventListener implemented below provides a hook that catches the object before it is converted and:
goes through all its fields to check if there are fields annotated with both @DBRef and @CascadeSave
when such a field is found, checks whether the @Id annotation is present
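That field scan can be shown in isolation. In the sketch below, @DBRef and @CascadeSave are local stand-in annotations (defined only so the example runs standalone), and childrenToSave collects the values the real listener would then save separately:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class CascadeScanSketch {

    // Stand-ins: @DBRef mimics Spring's annotation, @CascadeSave is the new marker.
    @Retention(RetentionPolicy.RUNTIME) @interface DBRef {}
    @Retention(RetentionPolicy.RUNTIME) @interface CascadeSave {}

    static class Address { String city = "Warsaw"; }

    static class User {
        String name = "John";
        @DBRef @CascadeSave Address address = new Address();
        @DBRef Address ignored = new Address(); // no @CascadeSave -> skipped
    }

    // The scan the listener performs before the parent is converted and saved:
    // collect every non-null field value annotated with BOTH @DBRef and @CascadeSave.
    static List<Object> childrenToSave(Object parent) {
        List<Object> children = new ArrayList<>();
        try {
            for (Field f : parent.getClass().getDeclaredFields()) {
                if (f.isAnnotationPresent(DBRef.class)
                        && f.isAnnotationPresent(CascadeSave.class)) {
                    f.setAccessible(true);
                    Object child = f.get(parent);
                    if (child != null) children.add(child);
                }
            }
        } catch (IllegalAccessException e) {
            throw new IllegalStateException(e);
        }
        return children;
    }

    public static void main(String[] args) {
        System.out.println(childrenToSave(new User()).size()); // prints 1
    }
}
```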
As you can see, in order to make things work you need to follow some rules:
the parent class’s child property has to be mapped with @DBRef and @CascadeSave
the child class needs to have a property annotated with @Id, and if that id is supposed to be autogenerated it should be of type ObjectId
To use cascade saving in your project, you just need to register the CascadingMongoEventListener in the Spring context:
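A minimal registration could look like this (the package name is illustrative):

```xml
<bean class="com.example.mongodb.CascadingMongoEventListener" />
```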
In the test, one user with an address is created and then the user is saved. The test covers only the positive scenario and is just meant to show that it actually works (applicationContext-tests.xml contains only the default Spring Data MongoDB beans and the registered CascadingMongoEventListener):
With this simple solution we can finally save child objects with one method call, without implementing anything special for each document class.
I believe we will eventually find this functionality, together with cascade delete, as part of a Spring Data MongoDB release. The solution presented here works, but:
it requires an additional annotation
it uses the reflection API to iterate through fields, which is not the fastest way to do it (but feel free to implement caching if needed)
If this were part of Spring Data MongoDB, instead of an additional annotation @DBRef could have an extra cascade property, and instead of reflection we could use MongoMappingContext together with MongoPersistentEntity. I have already started preparing a pull request with those changes. We will see if it gets accepted by the SpringSource team.