Wednesday, September 14, 2016

Hibernate JSON Types in Kotlin: A TypeCasting puzzler and generic collections

Trying to persist JSON types in PostgreSQL using Hibernate, I came across Vlad Mihalcea's excellent post How to map JSON objects using generic Hibernate Types.

I just wanted to have it as Kotlin code, as almost all of my project is written in Kotlin. So I used the Java-to-Kotlin conversion, which as usual left me with some puzzlers; one of them I'd like to mention:

class JsonBinarySqlTypeDescriptor : AbstractJsonSqlTypeDescriptor() {

    override fun <X : Any> getBinder(
            javaTypeDescriptor: JavaTypeDescriptor<X>): ValueBinder<X> {

        return object : BasicBinder<X>(javaTypeDescriptor, this) {
            override fun doBind(
                    st: PreparedStatement,
                    value: X,
                    index: Int,
                    options: WrapperOptions) {
                st.setObject(index,
                        javaTypeDescriptor.unwrap(value,
                                JsonNode::class.java as Class<X>, options),
                        getSqlType())
            }
        }
    }
}

The most puzzling research began when it came to passing the JsonNode type to

protected abstract void doBind(PreparedStatement st, J value, ...

in BasicBinder. I had to cast it to Class<X>; it is still an unchecked conversion, but at least it works.

One thing I wish I had a starting idea for sooner is how to persist generic collections. Let's start with passing in the TypeReference Jackson needs as a Hibernate type parameter (which can only be a String):

private lateinit var jsonObjectClass: Class<*>
private var jsonTypeReference: TypeReference<*>? = null
override fun setParameterValues(parameters: Properties) {
    jsonObjectClass = (parameters.get(DynamicParameterizedType.PARAMETER_TYPE)
            as DynamicParameterizedType.ParameterType).returnedClass
    val jsonTypeParam = parameters.get("json.typereference") as String?
    if (jsonTypeParam != null) {
        jsonTypeReference = Class.forName(jsonTypeParam)
                .getConstructor().newInstance() as TypeReference<*>
    }
}
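The Class.forName(...).getConstructor().newInstance() call above is plain reflection: load the class named in the type parameter and invoke its no-arg constructor. A minimal, standalone Java sketch of the same mechanism (Greeter is a made-up stand-in for the configured class):

```java
// Instantiate a class from its fully qualified name via its no-arg
// constructor, just like setParameterValues does for "json.typereference".
public class ReflectSketch {

    // stand-in for a TypeReference subclass configured by name
    public static class Greeter {
        @Override
        public String toString() {
            return "hello";
        }
    }

    public static void main(String[] args) throws Exception {
        // for a nested class the binary name uses '$'
        Object o = Class.forName("ReflectSketch$Greeter")
                .getConstructor()
                .newInstance();
        System.out.println(o);
    }
}
```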

Now we set this on a collection like this:

@Type(type = "jsonb",
 parameters = arrayOf(Parameter(name = "json.typereference",
 value = "de.eiswind.xino.datalayer.entities.PermissionTypeReference")))
var permissions: MutableList<Permission> = ArrayList<Permission>()

We can pass in any custom TypeReference instance to Jackson to deserialize generic collections:

class PermissionTypeReference :
        TypeReference<MutableList<Permission>>() {
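Why does an empty subclass like this carry the element type at runtime? It is the super-type-token trick: type arguments are erased from instances, but the generic superclass declaration of a subclass is kept in the class file. A framework-free Java sketch (TypeRef is a stand-in for Jackson's TypeReference):

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.List;

public class TypeTokenSketch {

    // stand-in for Jackson's TypeReference: remembers its type argument
    static abstract class TypeRef<T> {
        final Type type = ((ParameterizedType) getClass()
                .getGenericSuperclass()).getActualTypeArguments()[0];
    }

    public static void main(String[] args) {
        // the anonymous subclass pins List<String> into its class file,
        // so it survives erasure and can be read back via reflection
        TypeRef<List<String>> ref = new TypeRef<List<String>>() {};
        System.out.println(ref.type);
    }
}
```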

Finally we need to make the proper call to Jackson for the deserialization:

override fun fromString(string: String): Any {
    if (jsonTypeReference == null) {
        return JacksonUtil.fromString(string, jsonObjectClass)
    } else {
        return JacksonUtil.fromString(string,
                jsonTypeReference as TypeReference<*>)
    }
}

JacksonUtil has the corresponding overloaded methods:

fun <T> fromString(string: String, clazz: Class<T>): T {
    try {
        return OBJECT_MAPPER.readValue(string, clazz)
    } catch (e: IOException) { ... }
}

fun <T> fromString(string: String, reference: TypeReference<T>): T {
    try {
        return OBJECT_MAPPER.readValue(string, reference)
    } catch (e: IOException) { ... }
}

There's one thing that hit me hard in the early morning hours: we have to think about clone()!

fun <T : Any> clone(value: T): T {
    return when (value) {
        is ArrayList<*> -> {
            val newList = ArrayList<Any?>()
            for (elem in value) {
                newList.add(fromString(
                        toString(elem), elem.javaClass))
            }
            @Suppress("UNCHECKED_CAST")
            newList as T
        }
        else ->
            fromString(toString(value), value.javaClass)
    }
}

Friday, January 22, 2016

Consuming p2 Repositories from Maven - Update

I have always struggled with p2. Tycho made things somewhat easier, but I kept trying to find ways to use the Eclipse p2 repositories with Maven and bndtools.

Today I installed Package Drone to see if it solves my troubles, and believe it or not, it does. I uploaded the p2 repository zip from RAP and was able to consume the databinding bundles with plain Maven and bndtools. Strike!

Tuesday, July 14, 2015

In-Container Selenium Tests

Currently I'm evaluating Vaadin's TestBench Selenium framework, but the principles should work with other frameworks too, as long as you find a way to register your HTTP session while running the test. For the setup, I'm running Vaadin with Spring Boot and the vaadin-spring add-on. The Selenium tests run inside the complete Spring Boot container.

@SpringApplicationConfiguration(classes = VaadinApplication.class)

In the main UI class I register the UI instance in a simple list bean called UIRegistry:

if (uiRegistry != null) {
    uiRegistry.getRegisteredUIs().add(this);
    addDetachListener(event -> uiRegistry.getRegisteredUIs().remove(this));
}

The Registry is optional and gets injected only when running the selenium tests:

@Autowired(required = false)
private UIRegistry uiRegistry;
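For completeness, the UIRegistry itself need not be more than a thread-safe list bean. A hypothetical, framework-free sketch (Object stands in for Vaadin's UI type; the real bean would additionally carry a Spring annotation):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical sketch of the UIRegistry: a thread-safe list of live UIs.
public class UIRegistry {

    private final List<Object> registeredUIs = new CopyOnWriteArrayList<>();

    public List<Object> getRegisteredUIs() {
        return registeredUIs;
    }

    public static void main(String[] args) {
        UIRegistry registry = new UIRegistry();
        Object ui = new Object();
        registry.getRegisteredUIs().add(ui);    // done when the UI initializes
        registry.getRegisteredUIs().remove(ui); // done in the detach listener
        System.out.println(registry.getRegisteredUIs().size());
    }
}
```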

Now "inside" the selenium test we can access all of the Vaadin UI Beans in the UI Scope:

UI uiUnderTest = uiRegistry.getRegisteredUIs().get(0);
uiUnderTest.access(() -> bookView = applicationContext.getBean(BookView.class)).get();
productIdentifiers = bookView.getProduct().getProductIdentifiers();

So now we can enter a value with selenium and directly check if all the bindings are working properly by accessing the involved model instances.

It should be possible to do similar things with Arquillian and the HTTP session for "regular" webapps.

Saturday, May 16, 2015

A multi-tenant application architecture using Vaadin, Spring, jOOQ and PostgreSQL

In this tutorial we will put together a full application stack for a multi-tenant architecture.
We will be using

  • Vaadin and Vaadin-Spring for the UI
  • PostgreSQL for storage
  • jOOQ for the data layer
  • Spring Security for authentication
  • Some glue magic to handle multi-tenancy

You can find the complete source code for this example on GitHub. I will not go into the details of the different technologies used; instead the focus lies on the schema setup and the implementation of the tenancy logic.

We basically start off with the vaadin-spring-security example, the one you can find in the vaadin4spring repository.
This example gives us a basic setup for a Spring-based Vaadin application that is able to authenticate a user using Spring Security. For multi-tenancy I decided to opt for a schema-per-tenant solution like this:


The master schema contains the table definitions for all tables; tenant schemas inherit these definitions to ensure a consistent data structure across all tenants. All database objects are managed using Liquibase scripts (Liquibase is a database refactoring tool; if you have never heard of it, find out for yourself). The master.tenant table holds information about the existing tenants and their database connection properties. For each tenant schema, Liquibase creates a corresponding database user that has restricted privileges and can only access its own tenant schema.

The master.user table, for example, is created like this:

<changeSet author="thomas" id="master-create-table-user">
    <createTable tableName="user" schemaName="master" >
        <column name="id" type="BIGINT" autoIncrement="true"/>
        <column name="user_name" type="varchar(255)"/>
        <column name="password_hash" type="varchar(2048)"/>
        <column name="active" type="boolean" defaultValue="false"/>
    </createTable>
</changeSet>

The tenant_xx.user tables are defined as:

<changeSet author="thomas" id="${db.schema}-create-child-table-user" >
    <sql>
        create table ${db.schema}.user () INHERITS (master.user);
    </sql>
</changeSet>

Through this inheritance trick we ensure that the structure of the tables in all tenants stays in sync, and, as we'll see later, we are able to query across all tenants from the master schema.

The Liquibase scripts in the GitHub example create two tenants (tenant_1 and tenant_2) with one admin user each. For the sake of simplicity the admin user name is the same as the tenant name, and the password is simply "admin". So if you run the example you can log in as tenant_1/tenant_1/admin.

Now we have a basic database schema. The Liquibase scripts needed some tuning to run not only on Postgres but also on HSQLDB. We need this at least for unit testing, but we also want to be able to generate our jOOQ mapping and DSL without having to run a PostgreSQL server on our build server.

Running the maven build creates the master schema on a temporary hsqldb instance and runs the jooq code generator afterwards. The idea for this came from

This is all the basic stuff we need before we can get into the multi-tenant part. I wanted a clean separation of the tenants from a connection perspective, so each tenant has its own restricted database user and its own connection pool. This has some drawbacks, as we may end up with quite a bunch of connection pools, but the advantage is that whatever you try to do with the tenant connection you get from Spring, you will never be able to touch any tenant other than the one you are authenticated for. The MultiTenantDataSource is a simple proxy wrapper that routes you to the proper connection pool based on the current Spring authentication object. Here comes a snippet:

public class MultiTenantDataSource implements DataSource {

    private TenantAuthentication authentication;

    private TenantDao tenantDao;

    private TenantHelper tenantHelper;
    private Map<String, DataSource> dataSourceMap = new ConcurrentHashMap<>();
    public Connection getConnection() throws SQLException {
        DataSource ds = getDataSource();
        return ds.getConnection();
    }

    private DataSource getDataSource() {

        String tenantName = authentication.getTenant(); // will throw if not authenticated
        return  dataSourceMap.computeIfAbsent(tenantName, (key)->{
            Optional<ITenant> tenantOptional = tenantDao.findbyName(tenantName);
            return tenantOptional.map(tenant -> {
                HikariConfig config = tenantHelper.toHikariConfig(tenant);
                return new HikariDataSource(config);
            }).orElseThrow(() -> new IllegalStateException("This should never happen"));
        });
    }
}
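The heart of this class is the lazily populated pool-per-tenant map. Stripped of Spring, Hikari and the DAO, the caching pattern looks like this (Pool is a stand-in for HikariDataSource):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Framework-free sketch of MultiTenantDataSource's caching strategy:
// one expensive resource per tenant, created lazily on first access.
public class PerTenantCache {

    // stand-in for HikariDataSource
    static class Pool {
        final String tenant;
        Pool(String tenant) { this.tenant = tenant; }
    }

    private final Map<String, Pool> pools = new ConcurrentHashMap<>();

    Pool poolFor(String tenant) {
        // computeIfAbsent creates at most one Pool per tenant,
        // even when called concurrently
        return pools.computeIfAbsent(tenant, Pool::new);
    }

    public static void main(String[] args) {
        PerTenantCache cache = new PerTenantCache();
        Pool first = cache.poolFor("tenant_1");
        Pool second = cache.poolFor("tenant_1");
        System.out.println(first == second); // the pool is reused
        System.out.println(cache.pools.size());
    }
}
```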

The injected TenantAuthentication is a proxy bean around the Authentication object from Spring's SecurityContextHolder:

@Bean(name ="currentToken")
TenantAuthentication currentToken() {

    return ProxyFactory.getProxy(TenantAuthentication.class, new MethodInterceptor() {
        public Object invoke(MethodInvocation invocation) throws Throwable {
            SecurityContext securityContext = SecurityContextHolder.getContext();
            TenantAuthenticationToken authentication = (TenantAuthenticationToken)securityContext.getAuthentication();
            if (authentication == null) {
                throw new AuthenticationCredentialsNotFoundException("No auth..");
            }
            return invocation.getMethod().invoke(authentication, invocation.getArguments());
        }
    });
}
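The same proxy idea can be shown with nothing but the JDK: a single proxy instance is injected once, and every call is routed to whatever object the current context holds. In this sketch a ThreadLocal stands in for Spring's SecurityContextHolder:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// JDK-only sketch of the "currentToken" proxy bean: the proxy is a
// singleton, but each call resolves the real authentication afresh.
public class CurrentTokenSketch {

    interface TenantAuthentication {
        String getTenant();
    }

    // stand-in for Spring's SecurityContextHolder
    static final ThreadLocal<TenantAuthentication> CONTEXT = new ThreadLocal<>();

    static TenantAuthentication currentToken() {
        InvocationHandler handler = (proxy, method, args) -> {
            TenantAuthentication real = CONTEXT.get();
            if (real == null) {
                throw new IllegalStateException("No auth..");
            }
            // delegate to whatever authentication is bound right now
            return method.invoke(real, args);
        };
        return (TenantAuthentication) Proxy.newProxyInstance(
                TenantAuthentication.class.getClassLoader(),
                new Class<?>[] { TenantAuthentication.class },
                handler);
    }

    public static void main(String[] args) {
        TenantAuthentication token = currentToken(); // injected once
        CONTEXT.set(() -> "tenant_1");
        System.out.println(token.getTenant());
        CONTEXT.set(() -> "tenant_2");
        System.out.println(token.getTenant()); // same proxy, new tenant
    }
}
```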

Wiring jooq and spring with transaction management is a little effort, and I solved it as explained in

The last step is mapping the jOOQ schema to the schema of the currently authenticated tenant. I solved this again with a proxy bean for the jOOQ DSLContext:

public DSLContext dsl(){
    return ProxyFactory.getProxy(DSLContext.class, new MethodInterceptor() {
        Map<String, DSLContext> contextMap = new ConcurrentHashMap<>();
        public Object invoke(MethodInvocation invocation) throws Throwable {
            String tenant = authentication.getTenant(); // will throw if not authenticated
            DSLContext ctx = contextMap.computeIfAbsent(tenant, (key) ->{ 
                Settings settings = new Settings().withRenderMapping(new RenderMapping().withSchemata(new MappedSchema().withInput("master").withOutput(key)));
                DefaultConfiguration configuration = new DefaultConfiguration();
                configuration.setExecuteListenerProvider(new DefaultExecuteListenerProvider(new ExceptionTranslator()));
                configuration.setSettings(settings);
                return new DefaultDSLContext(configuration);
            });
            return invocation.getMethod().invoke(ctx, invocation.getArguments());
        }
    });
}

Now we can inject this DSL into any data access object, and we can be sure that we always access the schema of the current tenant, using a connection that is authorized to access only this schema:

public class UserDao {

    /**
     * The DSL context.
     * The context we inject here is actually a proxy bound to the current tenant schema,
     * backed by a datasource that has access to this tenant schema only.
     */
    DSLContext dsl;

    public List<IUser> findAll() {
        return dsl.selectFrom(USER).fetchInto(User.class);
    }
}

Finally we need a way of authenticating our users. This is done straightforwardly using a Spring AuthenticationProvider and a view that lets us read all users across the tenant schemas:

    create view "master"."v_user" as
    select n.nspname as tenant, u.user_name, u.password_hash 
    from master.user u left join pg_class p on u.TABLEOID = p.oid 
    left join pg_catalog.pg_namespace n on n.OID = p.relnamespace;

Check out the example from github if you'd like to dive into the nifty details!