Friday, October 15, 2010

Starting Scala



I'm starting to learn Scala. I try to learn a new programming language every year (a tip from The Pragmatic Programmer) because it's a good way to keep your head young. Last year I learned JavaFX Script; too bad Oracle announced that they are not going forward with it. I really liked the whole notion of binding.

Anyway, Scala is a functional language, so that will be a challenge, because I've never written any code in a functional language before. I'm still learning, so I thought I'd take a problem from my day-to-day job.

This was the problem: we have a web service that we can only call with 100 ids at a time, so we need to split our list of ids into batches of 100. First I thought there must be some library that can do this. We did some googling, but we could not find one, so this is what we came up with:


static <E> List<List<E>> subDivide(List<E> list, int size) {
    List<List<E>> resultList = new ArrayList<List<E>>();

    int subdivideCount = list.size() / size;
    if (list.size() % size != 0) {
        subdivideCount++;
    }

    for (int i = 0; i < subdivideCount; i++) {
        int maxLength = Math.min(i * size + size, list.size());
        resultList.add(list.subList(i * size, maxLength));
    }

    return resultList;
}


A very standard way to solve something like this in Java. So I thought, let's solve this in Scala. Lists are very cool in Scala and you can do pattern matching on them. Let's try that:


def subDivide(list: List[Int], batchSize: Int): List[List[Int]] = list match {
  case Nil => Nil
  case l if l.length > batchSize =>
    List(list.splitAt(batchSize)._1) ::: subDivide(list.splitAt(batchSize)._2, batchSize)
  case l => List(list)
}


As you can see, this code is much nicer. If the list is longer than the batchSize, it splits off the first batch and prepends it to the result of subdividing the rest. If the list is no longer than the batchSize, we can return it without splitting. If anyone knows a better way of doing this in Scala, please leave me a comment.

This is a much nicer solution, because there is far less index arithmetic involved. The recursion also makes it a lot nicer, because there is no need for a for loop. So maybe this is what people are talking about: Scala does make you solve things differently.

But of course we could do this in Java as well. In Java we don't have nice pattern matching, so we'll have to make do with ifs.


static <E> void subDivide(List<E> list, int size, List<List<E>> result) {
    if (list.isEmpty()) {
        return;
    }
    if (list.size() > size) {
        result.add(list.subList(0, size));
        subDivide(list.subList(size, list.size()), size, result);
    } else {
        result.add(list);
    }
}


As you can see, a lot better. So at least Scala improved my Java code a bit today.

Tuesday, September 28, 2010

Swiss ICT Awards 2010

Ok, this is not really a Java article as we usually post here on the blog - just a small company advertisement where you might help out. CTP has been nominated for the Swiss ICT Awards 2010 in the category "Champion". That award category is decided by a jury. For the PUBLIC category, however, you can actually place a vote!

Please click on the link below and vote for Cambridge Technology Partners now, and we may become the winner of the PUBLIC category:

http://www.swissitmagazine.ch/index.cfm?pid=7786&cid=2103

Thanks!

Thursday, September 23, 2010

JavaOne 2010

Time is running quickly, and we have realized with a shock that this year's JavaOne is already over - high time to share some conference highlights with you! It's the first time that Oracle has run the conference after acquiring Sun, so we were curious whether the spirit we valued so much during the last years is still around. To give you the conclusion right away - it's a yes, but there are some drawbacks.

It's clear that JavaOne is just the "side conference" this year - it had to leave its home in the Moscone Center, which is mainly hosting Oracle OpenWorld. With the move, things got slightly smaller - the session rooms (probably the most annoying aspect), the chill-out zones, the exhibition space, ... . But of course it's not the size that matters; it's the content that compensates for it! The quality of the talks we've seen has been very good so far, and as usual there are a lot of great ideas, visions and news floating around. Below we give you a quick summary of the latest Java news and session highlights:

Keynotes

On Sunday evening Larry Ellison hosted the opening session of Oracle OpenWorld. Most of the JavaOne attendees were rather surprised that they were not allowed to join this keynote. Nevertheless, a quick update on it is provided here before moving on to the real Java stuff.

The first presentation was the Oracle Exalogic Elastic Cloud, very briefly summarized as "The Cloud in a Box". The Exalogic box provides both hardware and software. Oracle announced further customer benefits due to the homogeneous hardware and software, which is also reflected in Oracle's new tag line "Hardware and Software Engineered to Work Together". Exalogic combines up to 30 servers, including storage (hard disk drives and SSDs) and a high-speed internal network between the servers in a rack. The Exalogic solution is designed and optimized to run enterprise Java applications, i.e. the Oracle Fusion Middleware stack, similar to the Oracle Exadata machine which runs Oracle DB servers. Oracle Exalogic can be scaled from 1/4 up to 8 racks; according to Larry Ellison, two full racks would be enough to host all of Facebook.com.

Afterwards the Unbreakable Enterprise Kernel was presented, which will be 100% Red Hat compatible. It is Oracle's answer to the slowly moving Red Hat distribution; according to Larry Ellison, Red Hat is 4 years behind the mainline.

Finally, Oracle Fusion Applications was unveiled. It is a CRM/ERP/HRMS system which offers the features of products earlier acquired by Oracle, such as PeopleSoft and Siebel. The entire code base has been rewritten on top of Oracle Fusion Middleware, which had to be enhanced to support all the middleware functionality needed. It took Oracle five years to develop Oracle Fusion Applications, and it is one of the biggest projects Oracle has ever undertaken.

On Monday and Tuesday the JavaOne Opening Keynote and the General Technical Session were about the future of Java, which is the right thing to focus on after the Sun/Oracle merger last year. After finalizing the Java EE 6 specification in December 2009, the focus has now moved to the next version of the Java Standard Edition (Java SE). The last update to the Standard Edition was Java SE 6 in 2006. After a five-year break we will get two new versions within roughly 18 months: OpenJDK 7 in mid 2011 and OpenJDK 8 in late 2012.

Thomas Kurian, Executive Vice President, Product Development at Oracle at JavaOne 2010 Opening Keynote

Java SE version 7 contains a bunch of smaller language changes and enhancements. Project Coin contributes most of the new language features, such as the diamond operator for declaration and instantiation of generics, the try-with-resources block for proper closing of resources implementing the new AutoCloseable interface in the java.lang package, multi-catch and re-throw of exceptions, switch statements on Strings, and numeric literals using "_" as a separator (int a = 0b100_1010_0011;). Additionally, JSR 292 (invokedynamic) is implemented in this release, which mainly supports other languages built on top of the JVM, such as Scala and Groovy. The Fork/Join framework provides better support for modern multi-core processors. There are some other updates shipped with this release, such as JDBC 4.1 with its support for automatic resource management, and more.
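Most of these Coin features fit in a few lines. A minimal sketch (the class and method names are my own, and it needs a Java 7 or later compiler):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CoinDemo {

    // Switch on Strings - no more if/else chains
    static int quarter(String month) {
        switch (month) {
            case "Jan": case "Feb": case "Mar": return 1;
            case "Apr": case "May": case "Jun": return 2;
            case "Jul": case "Aug": case "Sep": return 3;
            default: return 4;
        }
    }

    public static void main(String[] args) {
        // Diamond operator: the type arguments on the right are inferred
        Map<String, List<Integer>> index = new HashMap<>();

        // Binary literal with "_" separators for readability (= 1187)
        int a = 0b100_1010_0011;

        // try-with-resources: the reader is closed automatically
        // because it implements AutoCloseable
        try (StringReader reader = new StringReader("JavaOne")) {
            System.out.println((char) reader.read());
        } catch (IOException e) {
            // multi-catch would list several types here,
            // e.g. catch (IOException | RuntimeException e)
            e.printStackTrace();
        }

        System.out.println(quarter("May") + " " + a + " " + index.size());
    }
}
```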

Java SE version 8 will bring a new modularization concept - finally no more hassle with JAR files and the classpath! Project Jigsaw is defining these new features. A module (*.jmod) contains all its class files and a declaration of the modules it depends on. This declaration is placed in the module-info.java file, which contains the names and versions of the dependencies. Modules need to be installed on the target system before they can be executed. A second set of major improvements will come out of Project Lambda. These improvements include closures, value classes - no need to write getters and setters - extension methods and eventually reification.
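The exact Jigsaw syntax is still in flux at the time of writing, but based on the description above a module declaration might look roughly like this (the module and dependency names are made up for illustration):

```java
// module-info.java - hypothetical Jigsaw sketch, not final syntax
module com.example.trading @ 1.0 {
    // name and version of each dependency, resolved at install time
    requires org.example.quotes @ 2.1;
}
```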

On the tooling side, two new versions of the NetBeans IDE will be released during 2011 (check the feature list). Project GlassFish also committed to publishing two new releases during calendar year 2011 (check the feature list).

JavaFX Script will not be developed further by Oracle and is replaced by a set of Java APIs, which is supposed to make the technology accessible to all Java developers without the need to learn a new language. Additionally, 3D support is being added to JavaFX! According to Thomas Kurian there will be direct 3D hardware rendering support using Microsoft DirectX or OpenGL, and an HTML5/JavaScript output for web-based clients.

Not surprisingly, hot topics like the lawsuit with Google or the future of the JCP were not discussed.

Session Highlights



In Hyperproductive JSF 2.0, JSF co-spec lead Ed Burns showed a couple of reasons why your project development might be running with the hand brake pulled. With several demos he showed probably underused new features in JSF 2, like Groovy support for all JSF artifacts that get hot deployed, or the new Facelets <ui:debug> tag that gives you information on the component tree or scope variables. He also emphasised the importance of standards and conventions for a team, which is indeed an often overlooked aspect when a new project needs to get out quickly. This was supplemented by another talk on the new JSF 2 composite components, which make it very easy to create reusable components, e.g. for your corporate layout, and build up a highly productive framework that way.

While Maven founder Jason van Zyl's talk on Maven 3 was not intended to show new Maven features, he gave an overview of how he envisions future enterprise development. This includes tight integration from the IDE over SCM to CI, up to a new tool called Proviso which is targeted at standardizing enterprise deployment processes. Another interesting aspect was how Sonatype standardized their component model around Maven and Hudson plugins as well as their repository manager to achieve better reuse, which is now all built around JSR-330.

As a company mainly dealing with enterprise development, we of course put our main focus on the Java EE 6 talks. Spec lead Roberto Chinnici gave one of them, explaining the Java EE programming model. Even though we have been working with the technology for a while now, it was a nice summary and showed again that you still get that extra little bit of information at JavaOne that makes the conference so useful when you hear things from the mind behind it. Roberto provided a quick overview of managed beans, CDI and several "must knows" when changing to the new programming model.

On the same track was Dan Allen, presenting in depth on CDI and Seam 3. The talk went through all the CDI concepts, from typesafe injection with qualifiers, over loose coupling with events, to the SPI in CDI that leads to Seam 3. Seam is now split into several independent modules from which your application can take what it needs. Spinning this thought further - why not break up all the EJB services and make them portable extensions? We might see something like this in Java EE 7. By the way, Dan is not only an excellent author (when are we going to see Seam in Action, 2nd edition?) but also a great speaker - make sure to visit one of his talks when you get a chance.

To close the sessions on Java EE, we were deeply impressed by the session on Arquillian. The framework evolved out of work on the RI for CDI (Weld) and takes the idea of the new EE component model to your unit tests. By eliminating the build step that creates archives for integration tests, the team around Arquillian has managed to decrease complexity and improve the speed of integration tests massively - something critical if you want your team to actually use them. The outcome is a generic component testing framework, which we're convinced will significantly improve the quality of EE applications. You can programmatically create an archive of the components you want to test, and run the test either in-container or as a client against a running application server. This is when you feel that enterprise engineering is still a young discipline - why has this not been around for longer?

Last but not least, if you have been following our people spotlights you probably know we're big fans of the Java Posse! As in the last 4 years, the Posse was present at this year's JavaOne too, holding a BoF in the Mason Street tent and celebrating the 5th anniversary of the Java Posse. While it was less interactive than the last years' BoFs, we got a great show with special guest "Loose" Bruce Kerr giving a live performance of the Posse song, funky costumes and interesting insights into the future of Java - well, kind of. Go and download the recordings - although this time I guess it's too bad this is not a video podcast!

As usual we enjoyed being here - CU again next year!

Tuesday, September 21, 2010

Installing Oracle on Ubuntu

Most of the applications we write here at CTP depend on an Oracle database.
After putting some effort into configuring our development environment for a new project and figuring out that the process wasn't as straightforward as I expected, I decided to share my findings in this walkthrough.
It describes the steps required to install Oracle on an Ubuntu server.

Prerequisites


The instructions apply to Oracle 11g (version 11.2.0.1.0) on an Ubuntu 10.04.1 Server Edition 64-bit.
Download Ubuntu's installation ISO from http://www.ubuntu.com/server/get-ubuntu/download and Oracle's ZIPs from http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html.
A Windows environment is used as the client; you will need PuTTY and the Cygwin-X X server, both of which are used below.


Server configuration



Installation


Install Ubuntu on your server. In this example a virtual machine with 32 GB of disk and 2 GB of RAM is used.
During the installation process, select the "SSH server" option. To install it afterwards, run:

sudo apt-get install openssh-server openssh-client

X Forwarding


You should configure PuTTY to forward graphical applications from the Ubuntu server (the 'server') to your Windows workstation (the 'client'),
as the Oracle 11g installer depends on it.
Open PuTTY and enter the IP address of your server, give a name to the session and save it. Then configure the following options:

  • In "Connection > SSH > X11", check "Enable X11 forwarding" and for "X display location" enter "localhost:0"

  • Go to the "Session" menu and save your session again, to persist the configuration changes.



Start the X server on your client computer before connecting to the server through PuTTY. In the programs menu, open "Cygwin-X" and run "XWin Server".
It may start a graphical terminal (XTerm), but you should close it to avoid confusion. All you need is to keep the server running, indicated by the fancy X icon in the system tray.
Now you can establish the SSH connection. To test whether X forwarding is properly configured, run xeyes on the server (if not found, run sudo apt-get install x11-apps) and you should see the eyes on your desktop.

Swap configuration


The amount of swap space required by the Oracle DB depends on the available RAM:

Available RAM        Required swap
Between 1 and 2 GB   1.5 x RAM
Between 2 and 16 GB  1 x RAM
More than 16 GB      16 GB

To check the currently configured swap space, run:

$ free
             total       used       free     shared    buffers     cached
Mem:       2057836     181680    1876156          0      12268      91092
-/+ buffers/cache:      78320    1979516
Swap:      1417208          0    1417208

In this walkthrough I'm going to add 2 GB to the existing swap space.

# Create a 2 GB swap file (may take a while to execute)
$ sudo dd if=/dev/zero of=/mnt/2Gb.swap bs=1M count=2048
# Activate the swap file
$ sudo mkswap /mnt/2Gb.swap
$ sudo swapon /mnt/2Gb.swap

To have the swap file mounted automatically after a reboot, add the following line to the /etc/fstab file:

/mnt/2Gb.swap none swap sw 0 0

Updating and installing dependencies



$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install build-essential libaio1 libaio-dev libmotif3 libtool expat alien ksh pdksh unixODBC unixODBC-dev sysstat elfutils libelf-dev binutils lesstif2 lsb-cxx rpm lsb-rpm gawk unzip x11-utils ia32-libs

Installing libstdc++ 5


Ubuntu comes with libstdc++ version 6, but Oracle requires version 5. To fix this:

$ wget http://mirrors.kernel.org/ubuntu/pool/universe/g/gcc-3.3/libstdc++5_3.3.6-17ubuntu1_amd64.deb
$ dpkg-deb -x libstdc++5_3.3.6-17ubuntu1_amd64.deb ia64-libs
$ sudo cp ia64-libs/usr/lib/libstdc++.so.5.0.7 /usr/lib64
$ cd /usr/lib64
$ sudo ln -s libstdc++.so.5.0.7 libstdc++.so.5
$ ls -al libstdc++.*
lrwxrwxrwx 1 root root 18 2010-09-17 14:19 libstdc++.so.5 -> libstdc++.so.5.0.7
-rw-r--r-- 1 root root 829424 2010-09-17 14:18 libstdc++.so.5.0.7
lrwxrwxrwx 1 root root 19 2010-09-14 12:05 libstdc++.so.6 -> libstdc++.so.6.0.13
-rw-r--r-- 1 root root 1044112 2010-03-27 01:16 libstdc++.so.6.0.13

Links update


Update the symbolic link /bin/sh from /bin/dash to /bin/bash.

$ ls -l /bin/sh
lrwxrwxrwx 1 root root 4 2010-09-14 12:05 sh -> dash
$ cd /bin
$ sudo ln -sf bash /bin/sh
$ ls -l sh
lrwxrwxrwx 1 root root 4 2010-09-14 14:19 sh -> bash

Some links need to be created as well:

sudo ln -s /usr/bin/awk /bin/awk
sudo ln -s /usr/bin/rpm /bin/rpm
sudo ln -s /usr/bin/basename /bin/basename

System users & groups


Change to root and create the following users and groups:

ctp@oracle11g:~$ sudo -s
root@oracle11g:~# addgroup oinstall
Adding group `oinstall' (GID 1001) ...
Done.
root@oracle11g:~# addgroup dba
Adding group `dba' (GID 1002) ...
Done.
root@oracle11g:~# addgroup nobody
Adding group `nobody' (GID 1003) ...
Done.
root@oracle11g:~# usermod -g nobody nobody
root@oracle11g:~# useradd -g oinstall -G dba -p password -d /home/oracle -s /bin/bash oracle
root@oracle11g:~# passwd -l oracle
passwd: password expiry information changed.
root@oracle11g:~# mkdir /home/oracle
root@oracle11g:~# chown -R oracle:dba /home/oracle

Creating ORACLE_HOME



root@oracle11g:~# mkdir -p /u01/app/oracle
root@oracle11g:~# chown -R oracle:dba /u01

System parameters


Some Linux kernel parameters need to be modified, as specified in Oracle's installation guide. To do that, add the parameters below to the file /etc/sysctl.conf:

fs.file-max = 65535
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 1024 65535
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 262144
net.core.wmem_max = 262144

The oracle user needs some shell limits increased; add the parameters below to the file /etc/security/limits.conf:

oracle soft nproc 2047
oracle hard nproc 16383
oracle soft nofile 1023
oracle hard nofile 65535

Add the parameters below to the file /etc/pam.d/login:

session required /lib/security/pam_limits.so
session required pam_limits.so

Reboot the system to reload the configuration.

Oracle installation


From the directory where you extracted the Oracle ZIP files, run database/runInstaller as the oracle user created above (see Unix tips if you need help). You can ignore the DISPLAY warning, if any. If you receive an error regarding the X forwarding permission, use the remote X forwarding trick below.

Installation options


The suggested installation options are:

  • Create and configure a database

  • Server class

  • Single instance database installation

  • Advanced install

  • Enterprise Edition

  • General purpose/Transaction processing

  • Memory, Character sets, Security and Sample Schemas - as default


Run the fixup script following the instructions, then ignore the package dependency warnings.

To check that everything is OK, open the following address in your browser:

https://<server>:1158/em


Startup script


First, edit the file /etc/oratab, or create it if it doesn't exist yet. Make sure that the last parameter is "Y", meaning you want the database to be started during boot.

oracle:/u01/app/oracle/product/11.2.0/dbhome_1:Y

Now create the startup script /etc/init.d/oracledb.

cd /etc/init.d
sudo touch oracledb
sudo chmod a+x oracledb

Add the following content to the startup script:

#!/bin/bash
#
# /etc/init.d/oracledb
#
# Run-level startup script for the Oracle Enterprise Manager

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export ORACLE_OWNR=oracle
export ORACLE_SID=oracle
export PATH=$PATH:$ORACLE_HOME/bin

if [ ! -f $ORACLE_HOME/bin/emctl -o ! -d $ORACLE_HOME ]
then
    echo "Oracle startup: cannot start"
    exit 1
fi

case "$1" in
    start)
        # Oracle listener and instance startup
        echo -n "Starting Oracle: "
        sudo -u $ORACLE_OWNR -E $ORACLE_HOME/bin/lsnrctl start
        sudo -u $ORACLE_OWNR -E $ORACLE_HOME/bin/dbstart $ORACLE_HOME
        sudo -u $ORACLE_OWNR -E touch /var/lock/oracle
        # Oracle Enterprise Manager startup
        sudo -u $ORACLE_OWNR -E $ORACLE_HOME/bin/emctl start dbconsole
        echo "OK"
        ;;
    stop)
        echo -n "Shutdown Oracle: "
        # Oracle Enterprise Manager shutdown
        sudo -u $ORACLE_OWNR -E $ORACLE_HOME/bin/emctl stop dbconsole
        # Oracle listener and instance shutdown
        sudo -u $ORACLE_OWNR -E $ORACLE_HOME/bin/lsnrctl stop
        sudo -u $ORACLE_OWNR -E $ORACLE_HOME/bin/dbshut $ORACLE_HOME
        sudo -u $ORACLE_OWNR -E rm -f /var/lock/oracle
        echo "OK"
        ;;
    reload|restart)
        $0 stop
        $0 start
        ;;
    *)
        echo "Usage: `basename $0` start|stop|restart|reload"
        exit 1
        ;;
esac

exit 0


To execute the script automatically during server startup:

sudo update-rc.d oracledb defaults 99


General tips n' tricks



Unix tips


Open a shell as another user:

sudo -u <username> -s


Remote X session config



$ xauth list
oracle/unix:10 MIT-MAGIC-COOKIE-1 a18621a7bf2c102fc2b27758007b56a0
# Copy the line returned above
sudo -u oracle -s
export HOME=/home/oracle
# Paste the copied line after xauth add
xauth add oracle/unix:10 MIT-MAGIC-COOKIE-1 a18621a7bf2c102fc2b27758007b56a0



Friday, August 13, 2010

Java People Spotlight: Bartek Majsak

I'm glad to write another people spotlight, this time about a "newbie" (from a time perspective) or, from a technical perspective, about a spicy addition to the Java geeks at our Zurich office: Bartosz Majsak... well, actually Bartek, but that's another story. He joined CTP in March 2010.
So let's see how geeky his answers are!

Java Competence Role:
Senior Developer [aka Mr. T due to his extreme passion for Testing]
My Master Kung-Fu Skills:
I can mock you out even if you are static ;)
I'd be excited to get my hands dirty on:
Scala and/on Android

Q&A
Q: Hi Bartek, what would your message look like if you had to tell us via Twitter what you are currently doing?
A: Thinking how to design my own #arquillian TestEnricher and how to complete "The Challenge of Hades" in GoW at the same time.
Q: ... and having some drops of Sudden Death on top of it? ;-)

Q: What was the greatest piece of code you have ever written so far?
A: Please come back to me with this question when I retire :) After a few days, the code which I thought was the most brilliant I've ever written... I refactor, so... I don't have a good answer to this question yet :D

Q: What is the best quote you have ever heard about programming?
A: "If debugging is the process of removing bugs, then programming must be the process of putting them in." (E.W. Dijkstra)

Q: What is the best quote you have heard from our managers?
A: "There is nothing as permanent as a temporary solution".

Q: What is the most cutting-edge technology or framework you actually used on projects?
A: CDI (JSR-299) and Arquillian.

Q: What is your favorite podcast?
A: I used to listen to podcasts while commuting, but nowadays I'm a little bit out of the loop - living too close to the office ;) Of course I like the Java Posse (who doesn't?!) and Software Engineering Radio. I'm also addicted to Parleys, DZone and InfoQ.

Q: Which Java book can you recommend and for what reason?
A: I really enjoyed reading "Implementation Patterns" by Kent Beck and "Working Effectively with Legacy Code" by Michael Feathers. The first one gives you interesting hints on how to express yourself through code by keeping it clean and easy to understand for others. The second one will help you stay sane while digging into the code of a person who definitely never read the first book. If you are serious about testing you should read "Growing Object-Oriented Software, Guided by Tests" by Steve Freeman & Nat Pryce as well as "xUnit Test Patterns" by Gerard Meszaros. Of course all the books already recommended by my colleagues are also just great, but I didn't want to repeat them here.
Q: DRY pattern... :-) .... Well thanks for your answers and enjoy your mock-ito this evening at the CTP Summer Event!!


You can further follow Bartek's web presence in these directions:
- LinkedIn: http://ch.linkedin.com/in/bartoszmajsak
- Last.fm: http://www.last.fm/user/majson
- Twitter: http://twitter.com/majson

Test drive with Arquillian and CDI (Part 2)

The first part of the Arquillian series focused mainly on working with an in-memory database, DI (dependency injection) and events from the CDI spec. Now we will take a closer look at how to test contextual components. For this purpose we will extend our sample project from the first part by adding a PortfolioController class, a conversation-scoped bean handling the user's portfolio management.

@ConversationScoped @Named("portfolioController")
public class PortfolioController implements Serializable {

    // ...

    Map<Share, Integer> sharesToBuy = new HashMap<Share, Integer>();

    @Inject @LoggedIn
    User user;

    @Inject
    private TradeService tradeService;

    @Inject
    private Conversation conversation;

    public void buy(Share share, Integer amount) {
        if (conversation.isTransient()) {
            conversation.begin();
        }
        Integer currentAmount = sharesToBuy.get(share);
        if (null == currentAmount) {
            currentAmount = Integer.valueOf(0);
        }

        sharesToBuy.put(share, currentAmount + amount);
    }

    public void confirm() {
        for (Map.Entry<Share, Integer> sharesAmount : sharesToBuy.entrySet()) {
            tradeService.buy(user, sharesAmount.getKey(), sharesAmount.getValue());
        }
        conversation.end();
    }

    public void cancel() {
        sharesToBuy.clear();
        conversation.end();
    }

    // ...

}

So, let's try out Arquillian! As we already know from the first part, we need to create a deployment package, which will then be deployed by Arquillian on the target container (in our case GlassFish 3.0.1 Embedded).


@Deployment
public static Archive<?> createDeploymentPackage() {
    return ShrinkWrap.create("test.jar", JavaArchive.class)
            .addPackages(false, Share.class.getPackage(),
                    ShareEvent.class.getPackage())
            .addClasses(TradeTransactionDao.class,
                    ShareDao.class,
                    PortfolioController.class)
            .addManifestResource(new ByteArrayAsset("<beans />".getBytes()), ArchivePaths.create("beans.xml"))
            .addManifestResource("inmemory-test-persistence.xml", ArchivePaths.create("persistence.xml"));
}

Next we can start developing a simple test scenario:


  • given the user chooses a CTP share,

  • when he confirms the order,

  • then his portfolio should be updated.

In JUnit this could be written as follows:


@RunWith(Arquillian.class)
public class PortfolioControllerTest {

    // deployment method

    @Inject
    ShareDao shareDao;

    @Inject
    PortfolioController portfolioController;

    @Test
    public void shouldAddCtpShareToUserPortfolio() {
        // given
        User user = portfolioController.getUser();
        Share ctpShare = shareDao.getByKey("CTP");

        // when
        portfolioController.buy(ctpShare, 1);
        portfolioController.confirm();

        // then
        assertThat(user.getSharesAmount(ctpShare)).isEqualTo(3);
    }

}

Looks really simple, doesn't it? Well, it's almost that simple, but there are some small details you need to be aware of.

Producers

CDI provides a feature similar to Seam factories or Guice providers. It's called a producer, and it allows you to create an injectable dependency. This is especially useful when creating an instance requires additional logic, e.g. when it needs to be obtained from an external source. A logged-in user in a web application is a good example. Thanks to the CDI @Produces construct we can still have very clean code which just works! All we need to do to inject the currently logged-in user into our bean is the following:

1. Create a @LoggedIn qualifier which will be used to mark injection points expecting this concrete User bean.


@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.FIELD, ElementType.PARAMETER, ElementType.TYPE})
public @interface LoggedIn {
}

2. Implement the producer method which will instantiate the logged-in user in the session scope just after they successfully access the application, providing an instance of the User class of the @LoggedIn "type".


@Produces @SessionScoped @LoggedIn User loggedInUser() {
// code for retrieving current user from session
}

3. Decorate all injection points in other beans where we need this instance.


@Inject @LoggedIn
User user;

However, this construct can be problematic when writing tests, and an attentive reader will probably already be concerned about it. But with Arquillian we run our test code in the CDI container, so there is no need to simulate the login procedure using mock HTTP sessions or other constructs. We can take full advantage of this fact and create a producer method which replaces the original one and provides the user directly from the entity manager, for example.


@Produces @LoggedIn User loggedInUser() {
return entityManager.find(User.class, 1L);
}

Note: I removed the @SessionScoped annotation from the loggedInUser() producer method intentionally. Otherwise you could run into trouble with Weld proxies and EclipseLink while trying to persist the entity class. For tests it actually does not make any difference.

Context handling

A small problem arose when I tried to test logic based on the conversation context. I had to figure out a way to programmatically create the appropriate context to be used by the SUT (or CUT, if you prefer that abbreviation), because I was getting org.jboss.weld.context.ContextNotActiveException. Unfortunately I wasn't able to find anything related to it on the Arquillian forum or wiki, so I desperately jumped to the Seam 3 examples - I had read somewhere that they also use this library to test their modules and sample projects. Bingo! I found what I was looking for. To make the test code more elegant I built my solution the same way as the database handling in the first part - using annotations and JUnit rules. Putting a @RequiredScope annotation on a test method instructs a JUnit rule to handle proper context initialization and cleanup after the test finishes. To make the code even cleaner we can implement this logic in a dedicated class and treat the enum as a factory:


public enum ScopeType {

    CONVERSATION {
        @Override
        public ScopeHandler getHandler() {
            return new ConversationScopeHandler();
        }
    };

    // ... other scopes

    public abstract ScopeHandler getHandler();

}

public class ConversationScopeHandler implements ScopeHandler {

    @Override
    public void initializeContext() {
        ConversationContext conversationContext = Container.instance().services().get(ContextLifecycle.class).getConversationContext();
        conversationContext.setBeanStore(new HashMapBeanStore());
        conversationContext.setActive(true);
    }

    @Override
    public void cleanupContext() {
        ConversationContext conversationContext = Container.instance().services().get(ContextLifecycle.class).getConversationContext();
        if (conversationContext.isActive()) {
            conversationContext.setActive(false);
            conversationContext.cleanup();
        }
    }
}

The JUnit rule simply extracts the annotation's value from the test method and delegates context handling to the proper implementation:


public class ScopeHandlingRule extends TestWatchman {

    private ScopeHandler handler;

    @Override
    public void starting(FrameworkMethod method) {
        RequiredScope rc = method.getAnnotation(RequiredScope.class);
        if (null == rc) {
            return;
        }
        ScopeType scopeType = rc.value();
        handler = scopeType.getHandler();
        handler.initializeContext();
    }

    @Override
    public void finished(FrameworkMethod method) {
        if (null != handler) {
            handler.cleanupContext();
        }
    }
}

Finally, here's the fully working test class with two additional test scenarios. For convenience I also used the DBUnit add-on from the first post.


@RunWith(Arquillian.class)
public class PortfolioControllerTest {

    @Rule
    public DataHandlingRule dataHandlingRule = new DataHandlingRule();

    @Rule
    public ScopeHandlingRule scopeHandlingRule = new ScopeHandlingRule();

    @Deployment
    public static Archive<?> createDeploymentPackage() {
        return ShrinkWrap.create("test.jar", JavaArchive.class)
                .addPackages(false, Share.class.getPackage(),
                        ShareEvent.class.getPackage())
                .addClasses(TradeTransactionDao.class,
                        ShareDao.class,
                        TradeService.class,
                        PortfolioController.class)
                .addManifestResource(new ByteArrayAsset("<beans />".getBytes()), ArchivePaths.create("beans.xml"))
                .addManifestResource("inmemory-test-persistence.xml", ArchivePaths.create("persistence.xml"));
    }

    @PersistenceContext
    EntityManager entityManager;

    @Inject
    ShareDao shareDao;

    @Inject
    TradeTransactionDao tradeTransactionDao;

    @Inject
    PortfolioController portfolioController;

    @Test
    @PrepareData("datasets/shares.xml")
    @RequiredScope(ScopeType.CONVERSATION)
    public void shouldAddCtpShareToUserPortfolio() {
        // given
        User user = portfolioController.getUser();
        Share ctpShare = shareDao.getByKey("CTP");

        // when
        portfolioController.buy(ctpShare, 1);
        portfolioController.confirm();

        // then
        assertThat(user.getSharesAmount(ctpShare)).isEqualTo(3);
    }

    @Test
    @PrepareData("datasets/shares.xml")
    @RequiredScope(ScopeType.CONVERSATION)
    public void shouldNotModifyUserPortfolioWhenCancelProcess() {
        // given
        User user = portfolioController.getUser();
        Share ctpShare = shareDao.getByKey("CTP");

        // when
        portfolioController.buy(ctpShare, 1);
        portfolioController.cancel();

        // then
        assertThat(user.getSharesAmount(ctpShare)).isEqualTo(2);
    }

    @Test
    @RequiredScope(ScopeType.CONVERSATION)
    @PrepareData("datasets/shares.xml")
    public void shouldRecordTransactionWhenUserBuysAShare() {
        // given
        User user = portfolioController.getUser();
        Share ctpShare = shareDao.getByKey("CTP");

        // when
        portfolioController.buy(ctpShare, 1);
        portfolioController.confirm();

        // then
        List<TradeTransaction> transactions = tradeTransactionDao.getTransactions(user);
        assertThat(transactions).hasSize(1);
    }

    @Produces @LoggedIn User loggedInUser() {
        return entityManager.find(User.class, 1L);
    }
}

For the full source code you can jump directly to our Google Code repository.

Conclusion

As you can see, playing with Arquillian is pure fun for me. The latest 1.0.0.Alpha3 release brought a lot of new goodies to the table. I hope that the examples in this blog post convinced you that working with different scopes is quite straightforward and requires just a little bit of additional code. It's still not the ideal solution, however, because it uses Weld's internal API to create and manage scopes. If you are using a different CDI container you will need to figure out how to achieve the same thing, but it's just a matter of adjusting the ScopeHandler implementation to your needs.

There is much more to write about Arquillian so keep an eye on our blog and share your thoughts and suggestions through comments.

Friday, July 30, 2010

Using Seam 2 with JPA 2

It’s a difficult time to architect new Java web applications. Seam 2 is a proven and well working application stack, but we will hardly see many new versions on the Seam 2 train. Java EE 6 is in general an excellent option, but if your customer’s choice of application server does not yet support it, it is not a realistic one. Also there is still some time left before Seam 3, which builds on top of Java EE 6, is ready for prime time.

Facing this kind of choice recently, I looked into possible migration paths between the two variants. One thing I have often seen in Seam 2 applications is that people really like the Hibernate criteria API and therefore use Hibernate directly. While Hibernate is an excellent ORM framework, it’s preferable to use the JPA API when moving to Java EE 6. So - why not use Seam 2 with JPA 2, which finally features an even better (typesafe) criteria API?

It turns out to be quite an easy setup (once you get classloading right), with some small extra modifications. I’ve been using Maven, Seam 2.2 and Hibernate 3.5.4 on JBoss 4.2.3. Let’s start with preparing the server. You need to remove the old Hibernate and JPA classes and add the new persistence provider with some dependencies (of course this will be different on other servers):

To be removed from server/lib:


Add to server/lib:


Next, let’s create a sample Seam project. I’m using the CTP Maven archetype for Seam. In the POM file, remove the references to Hibernate and the old JPA libraries and add the new JPA 2 libraries:

<!-- Remove all embedded dependencies
<dependency>
    <groupId>org.jboss.seam.embedded</groupId>
    <artifactId>...</artifactId>
</dependency -->

<dependency>
    <groupId>org.hibernate.javax.persistence</groupId>
    <artifactId>hibernate-jpa-2.0-api</artifactId>
    <version>1.0.0.Final</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>3.5.4-Final</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-jpamodelgen</artifactId>
    <version>1.0.0.Final</version>
    <scope>provided</scope>
</dependency>

Note: If you’ve been using embedded JBoss for your integration tests, this will probably not work without exchanging the JARs there too. I’ve been moving away from this approach, as it turned out to be a common cause of headaches in our nightly builds as well as when running tests in Eclipse. I’m very excited to see Arquillian evolving on this topic!

JPA 2 integrates with JSR 303 bean validation, which is the successor of Hibernate Validator. Unfortunately Seam 2 references Hibernate Validator 3, whereas JPA needs version 4. Adding the validator legacy JAR fixes this problem. As bean validation is now part of Java EE 6, we add it to the server classpath as shown above, as well as to our POM:

<dependency>
    <artifactId>validation-api</artifactId>
    <groupId>javax.validation</groupId>
    <version>1.0.0.GA</version>
    <scope>provided</scope>
</dependency>

Now it’s time to move to some code. You can jump straight into your persistence config files and bring the schema up to version 2:

<persistence xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
    version="2.0"> ...

<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_2_0.xsd"
    version="2.0"> ...

Seam proxies entity managers to implement features like EL replacements in JPQL queries. This proxy does not implement the methods new in JPA 2 and will therefore fail. You can write your own proxy very easily:

public class Jpa2EntityManagerProxy implements EntityManager {

    private EntityManager delegate;

    public Jpa2EntityManagerProxy(EntityManager entityManager) {
        this.delegate = entityManager;
    }

    @Override
    public Object getDelegate() {
        return PersistenceProvider.instance()
                .proxyDelegate(delegate.getDelegate());
    }

    @Override
    public void persist(Object entity) {
        delegate.persist(entity);
    }
    ...
}

Add the special Seam functionality as needed. In order to use the proxy with Seam, you’ll have to override the HibernatePersistenceProvider Seam component:

@Name("org.jboss.seam.persistence.persistenceProvider")
@Scope(ScopeType.STATELESS)
@BypassInterceptors
// The original component is precedence FRAMEWORK
@Install(precedence = Install.APPLICATION,
        classDependencies = {"org.hibernate.Session",
                "javax.persistence.EntityManager"})
public class HibernateJpa2PersistenceProvider extends HibernatePersistenceProvider {

    @Override
    public EntityManager proxyEntityManager(EntityManager entityManager) {
        return new Jpa2EntityManagerProxy(entityManager);
    }
}

If you use Hibernate Search, have a look at the superclass implementation - you might want to instantiate a FullTextEntityManager directly (as you have it on your classpath - but note that this has not been tested here).

Both implementations are on our Google Code repository, and you can integrate them directly over the following Maven dependency:

<dependency>
    <groupId>com.ctp.seam</groupId>
    <artifactId>seam-jpa2</artifactId>
    <version>1.0.0</version>
</dependency>


You’re now ready to code JPA 2 queries! We’ve already included the metamodel generator utility, so let’s activate it for the build:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>build-helper-maven-plugin</artifactId>
    <executions>
        <execution>
            <id>add-source</id>
            <phase>validate</phase>
            <goals>
                <goal>add-source</goal>
            </goals>
            <configuration>
                <sources>
                    <source>${basedir}/src/main/hot</source>
                    <source>${basedir}/target/metamodel</source>
                </sources>
            </configuration>
        </execution>
    </executions>
</plugin>
<plugin>
    <groupId>org.bsc.maven</groupId>
    <artifactId>maven-processor-plugin</artifactId>
    <version>1.3.1</version>
    <executions>
        <execution>
            <id>process</id>
            <goals>
                <goal>process</goal>
            </goals>
            <phase>generate-sources</phase>
            <configuration>
                <outputDirectory>${basedir}/target/metamodel</outputDirectory>
                <processors>
                    <processor>org.hibernate.jpamodelgen.JPAMetaModelEntityProcessor</processor>
                </processors>
            </configuration>
        </execution>
    </executions>
</plugin>


In order to use the processor plugin, you also need the following Maven repositories in your POM:

<pluginRepository>
    <id>annotation-processing-repository</id>
    <name>Annotation Processing Repository</name>
    <url>http://maven-annotation-plugin.googlecode.com/svn/trunk/mavenrepo</url>
</pluginRepository>
<pluginRepository>
    <id>jfrog-repository</id>
    <name>JFrog Releases Repository</name>
    <url>http://repo.jfrog.org/artifactory/plugins-releases</url>
</pluginRepository>

Run the build, update the project config to include the new source folder - and finally we’re ready for some sample code:

@Name("userDao")
@AutoCreate
public class UserDao {

    @In
    private EntityManager entityManager;

    private ParameterExpression<String> param;
    private CriteriaQuery<User> query;

    @Create
    public void init() {
        CriteriaBuilder cb = entityManager.getCriteriaBuilder();
        query = cb.createQuery(User.class);
        param = cb.parameter(String.class);
        Root<User> user = query.from(User.class);
        query.select(user)
             .where(cb.equal(user.get(User_.username), param));
    }

    public User lookupUser(String username) {
        return entityManager.createQuery(query)
                .setParameter(param, username)
                .getSingleResult();
    }
}

This is now quite close to Java EE 6 code - all we will have to do is exchange some annotations:

@Stateful
public class UserDao {

    @PersistenceContext
    private EntityManager entityManager;
    ...

    @Inject
    public void init() {
        ...
    }
}
Enjoy!

Tuesday, July 13, 2010

Test drive with Arquillian and CDI (Part 1)

Here at Cambridge Technology Partners we are as serious about testing as we are about cutting-edge technologies like CDI. Last time we wrote about testing EJBs on embedded Glassfish and now we are back with something even more powerful, so keep on reading!

Background

Recently I was involved in a project based on JBoss Seam where we used Unitils for testing business logic and JPA. I really like this library, mainly for the following reasons:

  • It provides easy configuration and seamless integration of JPA (and also a little bit of DI).
  • It greatly simplifies management of test data. All you need to do to seed your database with prepared data is provide an XML dataset (in the DBUnit flat XML format) and add the @DataSet annotation to the test class or method.

Unitils is definitely an interesting topic for another blog entry, but since we are going to dive into Java EE 6 testing, those of you who are not patient enough for the next entry can jump directly to the tutorial site. I'm sure you will like it.

The only thing in Unitils which I'm not really comfortable with is the fact that the library is not really designed for integration testing. A clear example of this is an observer for Seam events. In this particular case we might need to leave the unit testing world (mocks, spies and other test doubles) and develop real integration tests. The SeamTest module together with JBoss Embedded could help, but it's a really tough task to make it run with Maven. On the other hand, JBoss AS wasn't the target environment for us. Thankfully there is a new kid on the block from JBoss called Arquillian. In the next part of this post I will try to summarize my hands-on experience with this very promising integration testing library. But first things first, let's look briefly at CDI events.

CDI Events

We are going to run Java EE 6 workshops for our customers, and I was extremely happy when my colleagues asked me to play around with Arquillian and prepare some integration tests. I picked a piece of logic responsible for logging market transactions, based on CDI events. In brief, this is a design technique which provides component interaction without any compile-time dependencies. It's similar to the observer pattern, but with CDI events producers and observers are entirely decoupled from each other. If the following example doesn't give you a clear explanation of the concept, please refer to this well written blog post. Let's take a look at a quite simplified code example.

import javax.ejb.Stateless;
import javax.enterprise.event.Event;
import javax.inject.Inject;

@Stateless
public class TradeService {

    @Inject @Buy
    private Event<ShareEvent> buyEvent;

    public void buy(User user, Share share, Integer amount) {
        user.addShares(share, amount);
        ShareEvent shareEvent = new ShareEvent(share, user, amount);
        buyEvent.fire(shareEvent);
    }

    ...
}

import javax.enterprise.event.Observes;
import javax.inject.Inject;
import javax.inject.Singleton;

@Singleton
public class TradeTransactionObserver implements Serializable {

    ...

    @Inject
    TradeTransactionDao tradeTransactionDao;

    public void shareBought(@Observes @Buy ShareEvent event) {
        TradeTransaction tradeTransaction = new TradeTransaction(event.getUser(), event.getShare(), event.getAmount(), TransactionType.BUY);
        tradeTransactionDao.save(tradeTransaction);
    }

    ...
}

To keep the picture clear I'm not going to include the Share, TradeTransaction, User and ShareEvent classes. What is worth mentioning, however, is that an instance of ShareEvent contains the user, the share he bought and the amount. In the User entity we store a map of shares together with amounts, using the new @ElementCollection annotation introduced in JPA 2.0. It allows entity classes to be used as keys in the map.

@ElementCollection
@CollectionTable(name="USER_SHARES")
@Column(name="AMOUNT")
@MapKeyJoinColumn(name="SHARE_ID")
private Map<Share, Integer> shares = new HashMap<Share, Integer>();

Then in the TradeTransaction entity we simply store this information together with the date and the TransactionType. The complete code example can be downloaded from our Google Code page - see the Resources section at the bottom of the post.

The very first test

We will use the following scenario for our test example (written in BDD manner):

  • given a user chooses a CTP share,
  • when he buys it,
  • then a market transaction should be logged.

So the test method could look as follows:

@Test
public void shouldLogTradeTransactionAfterBuyingShare() {
    // given
    User user = em.find(User.class, 1L);
    Share share = shareDao.getByKey("CTP");
    int amount = 1;

    // when
    tradeService.buy(user, share, amount);

    // then
    List<TradeTransaction> transactions = tradeTransactionDao.getTransactions(user);
    assertThat(transactions).hasSize(1);
}

In an ideal world we could simply run this test in our favourite IDE or build tool without writing a lot of plumbing code or dirty hacks to set up an environment like Glassfish, JBoss AS or Tomcat. And here's where Arquillian comes into play. The main goal of this project is to provide a convenient way for developers to run tests either in embedded or remote containers. It's still in alpha, but the number of containers already supported is really impressive. It opens the door to a world of integration tests that are easy and pleasant to write. There are only two things required to make our tests "arquillian infected":

  1. Add the @RunWith(Arquillian.class) annotation to your test class (or extend the Arquillian base class if you are a TestNG guy).
  2. Prepare a deployment package using the ShrinkWrap API in a method marked with the @Deployment annotation.

@Deployment
public static Archive<?> createDeploymentPackage() {
    return ShrinkWrap.create("test.jar", JavaArchive.class)
            .addPackages(false, Share.class.getPackage(),
                    ShareEvent.class.getPackage(),
                    TradeTransactionDao.class.getPackage())
            .addClass(TradeService.class)
            .addManifestResource(new ByteArrayAsset("<beans/>".getBytes()), ArchivePaths.create("beans.xml"))
            .addManifestResource("inmemory-test-persistence.xml", ArchivePaths.create("persistence.xml"));
}

Odds and ends

Until now, I guess, everything was rather easy to grasp. Unfortunately while playing with the tests I encountered a few shortcomings, but I found solutions which I hope make this post valuable for readers. Otherwise you could simply jump to the user guide and code examples, couldn't you?

JAR hell

The most time consuming issue was dependency conflicts, better known as JAR hell. The target environment for the workshop application is Glassfish v3, so I used the embedded version for my integration tests. I decided to have my tests as an integral part of the project, and here the problems began.

The main problem is that you cannot use javaee-api, because you will get exceptions while bootstrapping the container, more or less similar to: java.lang.ClassFormatError: Absent Code attribute in method that is not native or abstract in class file javax/validation/constraints/Pattern$Flag (related thread on the JBoss forum). I also recommend not downloading separate JARs for each API you are using, because you will get even more exceptions :)

Important remark here: if you are using JPA 2.0 with the Criteria API and the hibernate-jpamodelgen module for generating metamodel classes, then you should also exclude the org.hibernate.javax.persistence:hibernate-jpa-2.0-api dependency to avoid yet another class conflict.

You have basically two options:

  1. Use the glassfish-embedded-all JAR, since it already contains all the needed APIs.
  2. Create a separate project for integration testing and forget about everything I mentioned in this section.
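If you go with the first option, the Maven dependency should look roughly like the snippet below (the coordinates are my assumption for Glassfish v3 - verify the exact version against the server you target):

```xml
<dependency>
    <groupId>org.glassfish.extras</groupId>
    <artifactId>glassfish-embedded-all</artifactId>
    <version>3.0</version>
    <scope>test</scope>
</dependency>
```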

Preparing a database for testing

Next step is to create a data source for the Glassfish instance. But first we need to tell Arquillian to not delete the Glassfish server folder after each deployment / test execution (which is the default behaviour). All you need to do is to create a arquillian.xml file and add following configuration:

<arquillian xmlns="http://jboss.com/arquillian"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:glassfish="urn:arq:org.jboss.arquillian.glassfish.embedded30">
    <glassfish:container>
        <glassfish:bindPort>9090</glassfish:bindPort>
        <glassfish:instanceRoot>src/test/glassfish-embedded30</glassfish:instanceRoot>
        <glassfish:autoDelete>false</glassfish:autoDelete>
    </glassfish:container>
</arquillian>

Then we need to take a domain.xml file from a normal Glassfish instance (i.e. from ${glassfish_home}/glassfish/domains/domain1/config), remove all <applications> and <system-applications> nodes, add the new data source, and copy it to the src/test/glassfish-embedded30/config folder. We will use HSQL version 1.8.0.7 in our tests (version 2.0 causes some problems with DBUnit).

<domain log-root="${com.sun.aas.instanceRoot}/logs" application-root="${com.sun.aas.instanceRoot}/applications" version="22">
    <system-applications />
    <applications />
    <resources>
        ...
        <jdbc-connection-pool res-type="java.sql.Driver" description="In memory HSQLDB instance" name="arquilliandemo" driver-classname="org.hsqldb.jdbcDriver">
            <property name="URL" value="jdbc:hsqldb:mem:arquilliandemomem" />
            <property name="user" value="sa" />
            <property name="password" value="" />
        </jdbc-connection-pool>
        <jdbc-resource pool-name="arquilliandemo" jndi-name="arquilliandemo-ds" />
    </resources>
    <servers>
        <server name="server" config-ref="server-config">
            ...
            <resource-ref ref="arquilliandemo-ds" />
        </server>
    </servers>
    <configs>
        ...
    </configs>
</domain>

The last file you need to take from the ${glassfish_home}/glassfish/domains/domain1/config folder is server.policy. And that's it! You have a running Glassfish with an HSQL database, ready for some serious testing.

Data preparation

As I mentioned in the introductory section, I really like Unitils and the way you can seed the database with test data. The only thing to do is to provide an XML file in the flat DBUnit format like this one:

<dataset>
    <user id="1" firstname="John" lastname="Smith" username="username" password="password" />
    <share id="1" key="CTP" price="18.00" />
</dataset>

and then put the @DataSet("test-data.xml") annotation either on the test method or a class.

I was really missing this feature, so I decided to implement it myself. A very cool way of adding such behaviour is by using a JUnit rule. This mechanism, similar to interceptors, has been available since the 4.7 release. I chose to extend the TestWatchman class, since it provides methods to hook around test invocations. You can see the rule's logic, which seeds the database from DBUnit flat XML data, in the example project. All you need to do is create a public field in your test class and decorate it with the @Rule annotation. Here's the complete test class.

@RunWith(Arquillian.class)
public class TradeServiceTest {

    @Deployment
    public static Archive<?> createDeploymentPackage() {
        return ShrinkWrap.create("test.jar", JavaArchive.class)
                .addPackages(false, Share.class.getPackage(),
                        ShareEvent.class.getPackage(),
                        TradeTransactionDao.class.getPackage())
                .addClass(TradeService.class)
                .addManifestResource(new ByteArrayAsset("<beans />".getBytes()), ArchivePaths.create("beans.xml"))
                .addManifestResource("inmemory-test-persistence.xml", ArchivePaths.create("persistence.xml"));
    }

    @Rule
    public DataHandlingRule dataHandlingRule = new DataHandlingRule();

    @PersistenceContext
    EntityManager em;

    @Inject
    ShareDao shareDao;

    @Inject
    TradeTransactionDao tradeTransactionDao;

    @Inject
    TradeService tradeService;

    @Test
    @PrepareData("datasets/shares.xml")
    public void shouldLogTradeTransactionAfterBuyingShare() {
        // given
        User user = em.find(User.class, 1L);
        Share share = shareDao.getByKey("CTP");
        int amount = 1;

        // when
        tradeService.buy(user, share, amount);

        // then
        List<TradeTransaction> transactions = tradeTransactionDao.getTransactions(user);
        assertThat(transactions).hasSize(1);
    }
}

I must admit that it's a JUnit-specific solution, but you can always implement your own @BeforeMethod and @AfterMethod methods to achieve the same result in TestNG.

DBUnit gotchas

Using DBUnit's CLEAN_INSERT strategy (or deleting table contents after test execution using DELETE_ALL) can raise constraint violation exceptions. HSQL provides a special SQL statement for this purpose, and the sample project invokes this statement just before DBUnit runs.
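The statement in question is HSQLDB's referential-integrity switch; here is a sketch of what the sample project presumably runs around the DBUnit operation:

```sql
-- Disable foreign key checks before DBUnit's CLEAN_INSERT / DELETE_ALL...
SET REFERENTIAL_INTEGRITY FALSE;

-- ...execute the DBUnit operation here...

-- ...and re-enable the checks afterwards.
SET REFERENTIAL_INTEGRITY TRUE;
```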

Final thoughts

All in all, Arquillian is a really great integration testing tool, full of potential. It's just great that the JBoss guys are aiming to provide support for almost all widely used application servers and web containers. As you can see from the examples above, it's not that hard to write tests for more sophisticated scenarios than you can find in the user guide. Keep your eyes on Arquillian - the roadmap is really promising.
In the upcoming second part I will dive into CDI contexts and demonstrate how to use Arquillian for testing contextual components.
If you are writing an application for the Java EE 6 stack, not using Arquillian is a serious mistake!

Resources

Thursday, June 24, 2010

JSF composite component

On my latest project we used RichFaces as the component library. Our client didn't like filling out times with the RichFaces calendar; they wanted two combo boxes instead. I don't think this is the most user friendly way of picking a time (I like this timepicker), but our customer is king.

I thought that there would be someone on the internet who had already created a component like that, but I could not find any, so I made my own. JSF 2 will introduce a new component creation method; it will be simpler to create components and you will not have to write Java code to make them work. But until we have JSF 2 available on all major application servers, we will have to program some Java for our components.

The plan I had for creating it was simple enough: just take two HtmlSelectOneMenu components and fill them with default options.

So I ended up with something like this:


private void createChildComponents(FacesContext context) {
    Application application = context.getApplication();
    hourInput = (HtmlSelectOneMenu) application.createComponent(HtmlSelectOneMenu.COMPONENT_TYPE);
    minuteInput = (HtmlSelectOneMenu) application.createComponent(HtmlSelectOneMenu.COMPONENT_TYPE);
    List<UIComponent> children = getChildren();
    children.add(hourInput);
    HtmlOutputText separator = (HtmlOutputText) application.createComponent(HtmlOutputText.COMPONENT_TYPE);
    separator.setValue(":");
    children.add(separator);
    children.add(minuteInput);

    UISelectItems hourItems = createSelectItems(application, 24, 1);
    hourInput.getChildren().add(hourItems);

    UISelectItems minuteItems = createSelectItems(application, 60, 5);
    minuteInput.getChildren().add(minuteItems);
    delegateProperties();
}

private UISelectItems createSelectItems(Application application, int number, int step) throws FacesException {
    UISelectItems selectItems = (UISelectItems) application.createComponent(UISelectItems.COMPONENT_TYPE);
    List<SelectItem> items = new ArrayList<SelectItem>();
    for (int i = 0; i < number; i += step) {
        items.add(new SelectItem(StringUtils.leftPad(String.valueOf(i), 2, "0")));
    }
    selectItems.setValue(items);
    return selectItems;
}


Rendering this was not hard, but extracting the selected values and storing the submitted value back into the value binding was not so easy. First I tried to override the validate method, but that didn't "sound" right; doing it in updateModel was much better.

The only weird thing I was still facing was that sometimes the values from the hour and minute components were converted to Integer and sometimes they were not. So I ended up with this construction to ensure that they were always treated as Integer:


@Override
public void updateModel(FacesContext context) {
    super.updateModel(context);

    hourInput.validate(context);
    minuteInput.validate(context);

    ValueExpression valueExpression = getValueExpression("value");
    Date value = (Date) valueExpression.getValue(context.getELContext());
    if (value != null && hourInput.getValue() != null
            && minuteInput.getValue() != null) {
        Calendar calendar = Calendar.getInstance();
        calendar.setTime(value);
        Integer hour = Integer.valueOf(hourInput.getValue().toString());
        calendar.set(Calendar.HOUR_OF_DAY, hour);
        Integer minute = Integer.valueOf(minuteInput.getValue().toString());
        calendar.set(Calendar.MINUTE, minute);

        valueExpression.setValue(context.getELContext(), calendar.getTime());
    }
}
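The Integer.valueOf(x.toString()) construction above can be illustrated in isolation - a tiny stdlib-only demo (the class and method names are mine, not from the component) of why it normalizes both cases:

```java
// Minimal demo of the normalization used in updateModel: the submitted value
// may arrive as a String ("07") or already as an Integer after conversion,
// and Integer.valueOf(x.toString()) maps both to the same Integer.
public class ValueNormalizationDemo {

    static Integer normalize(Object submitted) {
        return Integer.valueOf(submitted.toString());
    }

    public static void main(String[] args) {
        Object asString = "07";                 // plain request parameter
        Object asInteger = Integer.valueOf(7);  // value after JSF conversion
        System.out.println(normalize(asString));  // prints 7
        System.out.println(normalize(asInteger)); // prints 7
    }
}
```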


Here I'm using the Java Date API. That of course is extremely sadistic, so we could change it to use Joda-Time.
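As a sketch of that change (not from the original post - JodaTimeUpdate and withTime are names I made up, and it assumes Joda-Time on the classpath), the Calendar dance boils down to a single chain:

```java
import java.util.Date;

import org.joda.time.DateTime;

// Hypothetical Joda-Time variant of the Calendar code above: take the bound
// Date, replace its hour and minute, and hand back a java.util.Date.
public class JodaTimeUpdate {

    static Date withTime(Date value, int hour, int minute) {
        return new DateTime(value)
                .withHourOfDay(hour)
                .withMinuteOfHour(minute)
                .toDate();
    }
}
```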

Now to use it, we first register it as a component in components.taglib.xml:

<facelet-taglib>
    <namespace>http://ctp-consulting.com/components</namespace>
    <!--
        usage: <ctp:timepicker value="date"/>
    -->
    <tag>
        <tag-name>timepicker</tag-name>
        <component>
            <component-type>com.ctp.web.components.Timepicker</component-type>
        </component>
    </tag>
</facelet-taglib>


Then make sure you have components.taglib.xml registered in your web.xml:

<context-param>
    <param-name>facelets.LIBRARIES</param-name>
    <param-value>/WEB-INF/tags/components.taglib.xml;</param-value>
</context-param>

And here's an example of use. First add the namespace at the top of your page:

xmlns:ctp="http://ctp-consulting.com/components"

Then in your page you can do something like:

<ctp:timepicker value="#{catering.deliveryTime}"/>

Maybe this is not the best solution; if you have comments or tips, feel free to drop them directly on this post. I'm interested in what you have to say!

Saturday, June 5, 2010

Jazoon 2010

As a Java developer you need to stay on top of what is new and noteworthy, and a great way to do this is to go to a conference, because it is an interactive way to learn about new stuff. Together with Douglas, I attended Jazoon, an international conference held in Zurich; it is my first time here. Like Devoxx, it's held in a movie theater. Although Jazoon is also an international conference, there are of course a lot of Swiss attendees and it's a little smaller. But that just means I have a better chance to win an iPad :D (didn't happen, too bad).

After the keynote, the first presentation I attended was about REST. Stefan Tilkov was the presenter, and he did a nice job bashing web frameworks like JSF that do not do REST at all. He has a point: the web is about URIs (Uniform Resource Identifiers), and a document should be identified by one. That has always been the way of the web. And "modern" web application frameworks have a tendency not to change the URI for every resource and to use only one HTTP method (e.g. POST) instead of the full set. Stefan calls this abusing the web, or even desktop applications in disguise. I must agree with a lot of what he has to say. Search engines like Google work because of this, and even if we are writing applications that are used inside intranets, they are still web applications and they should work accordingly. I'll need to investigate what the people developing JSF think about this.

Next up was Blueprint by Costin Leau, who works for SpringSource. I personally have not used Spring since they were hesitant to adopt annotations, and have rather been using JBoss Seam ever since. But as we have customers interested in doing something with OSGi, it was interesting to see how they eased the way to develop for OSGi. When I heard about Spring the first time I was amazed and wondered why I didn't think of that. Now I have the same feeling with OSGi: it can be a lot simpler when you use IoC in combination with it. Blueprint is only a spec, and there are two implementations. I really need to take a look at it because it looks really helpful.


On Thursday we had a nice Java roundtable to discuss all the news.
Erik was all excited! :D


On day 2 the keynote was from Ken Schwaber, one of the founders of Scrum. In his presentation he made the point that a lot of people who think they are doing Scrum are actually not, because they don't finish their user stories. This is due to the fact that we don't include enough when we say something is done. When is a user story done? When user acceptance testing has taken place? Load testing, integration testing and refactoring? When things like these are kept to the very end of the project, a lot of work still remains when the project should have been finished. According to Ken this work will be exponentially more than when you do it after each sprint. All in all it was quite a mood breaker, because Ken tried to emphasize that a lot of software projects still fail and that we should be professional and fix that by changing what we call done.

Peter Lubbers (he must be Dutch :D ) had a talk about HTML5 WebSockets. I had heard about a couple of things that are going to be in the HTML5 spec, but I did not know about this, and it is very exciting. There are a lot of projects that have tried to solve the problem of pushing data to a web browser client, and none of the solutions are ideal. There are a couple of polling solutions, but the header overhead of each request is, on average, around 800 bytes. That does not look like much, but if you want to scale your application it adds up fast. WebSockets work by "upgrading" your standard HTTP connection to a socket over which you can then send and receive data in both directions. He showed a working example of this with Google Chrome, the only browser that supports it right now. At the end of the presentation he said he had talked to one of the technical guys from Microsoft and has a strong feeling it will not be in IE9. How nice is that (the IE6 story all over).
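The "upgrade" is literally just an HTTP header exchange. Simplified (the draft spec of the moment adds some security keys and checksums), the handshake looks roughly like this:

```
GET /chat HTTP/1.1
Host: example.com
Upgrade: WebSocket
Connection: Upgrade
Origin: http://example.com

HTTP/1.1 101 Switching Protocols
Upgrade: WebSocket
Connection: Upgrade
```

After the 101 response both sides keep the TCP connection open and exchange framed messages over it in either direction, so the per-request header overhead of polling disappears. From JavaScript it is as simple as `new WebSocket("ws://example.com/chat")` with `onopen`/`onmessage` callbacks.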

The two talks I went to after lunch were both about alternative languages on the JVM, namely Groovy and Scala, and both about distributed computing. I thought it was really exciting; all we need now is a customer with a large set of data and a need to compute something with it. That is the only problem with distributed computing, and with cloud computing even more so: it needs lots of data to work with, otherwise there is no need for it. This is even more true when you already build web applications. They scale very easily because they use the request-response model: every request can be handled by a separate thread, or by a separate instance in a cluster. So interesting things are happening here, and a lot of development is going into solutions based on distributed and cloud computing.
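The shape of such a job, split the data into independent chunks, compute them in parallel, then combine the partial results, can already be sketched on a single machine with an `ExecutorService` (my own toy example, not from the talks; a real distributed job adds the network and data locality on top):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ChunkedSum {

    // Split the work into independent chunks, compute them in parallel,
    // then combine -- the same shape a distributed job has, minus the network.
    static long sum(final long[] data, int workers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        int chunk = (data.length + workers - 1) / workers; // ceiling division
        List<Future<Long>> parts = new ArrayList<Future<Long>>();
        for (int start = 0; start < data.length; start += chunk) {
            final int from = start;
            final int to = Math.min(start + chunk, data.length);
            parts.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    long s = 0;
                    for (int i = from; i < to; i++) s += data[i];
                    return s;
                }
            }));
        }
        long total = 0;
        for (Future<Long> part : parts) total += part.get(); // combine step
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = 1;
        System.out.println(sum(data, 4)); // 100000
    }
}
```

With a small array like this the thread overhead eats any gain, which is exactly the point from the talks: the approach only pays off once the data is big enough.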


Douglas and Daniel trying to get more info about WebSockets...
Well...isn't that a game on that iPad?


Last Day

On the last day of the conference the keynote was about Gaia, the satellite that is going to map a huge number of stars in our Milky Way. It was interesting to see that they use Java to analyse the enormous amount of data, and that there are still debates about whether Java is fast enough. Come on people, it's 2010: Java is blazing fast!

The whole last day was more or less about testing. There were a lot of talks in this area, but only one I would like to mention here: a way too short talk about Arquillian. I don't think the presenters wanted such a small slot; I guess they were forced to use one. Arquillian is a cool way to test EJBs inside the container. The project is brand new and they still need to add support for a couple of containers, but it's looking really promising. I got a cool shirt at their presentation, so I'll definitely keep an eye on this project :-).

Thoughts

All in all Jazoon was great to see and to be at. But if I had to choose between Jazoon and Devoxx, I'd rather go to Devoxx. The impression I get is that Jazoon is younger than Devoxx and still has to prove itself as important and unique next to Devoxx. Of course Jazoon is better organized, everybody knows that people from Belgium are chaotic ( :D sorry guys), but that is not enough of course. At Jazoon I don't yet get the atmosphere you get at Devoxx, and the same holds true for the size and quality of the speaker lineup.

But I do have a lot of new things to look at! So all in all it was a positive experience. I hope Jazoon gains more popularity so it will attract more people from around the world.

Thursday, May 20, 2010

JavaOne 2010 and Oracle OpenWorld 2010

... or "Oracle acquired the city of San Francisco":
This year is the first year of JavaOne under the umbrella of Oracle and it takes place in the same city and the same week as Oracle OpenWorld: September 19th - 23rd in San Francisco.

You may ask how Oracle can host two conferences of that size at the same time in the same city: Moscone Center (for me the home of JavaOne) is now the home of Oracle OpenWorld, whereas JavaOne has been relocated to the so-called "Zone", consisting of three large hotels near Union Square (see the red markers in the following map):


View JavaOne 2010 in a larger map

Registration is open for both conferences, and both currently offer a discount if you register within the next few weeks.

If you go to JavaOne for the first time you might want to compare it with last year's conference where it was hosted by Sun Microsystems for the last time:

We (the CTP Java Competence Group) are very curious how both conferences will present themselves in this new Java era led by Oracle.