What exactly is the application container Apache Karaf?


Apache Karaf is a small OSGi-based runtime environment that provides a lightweight container capable of hosting various components and applications. Karaf offers numerous features familiar to those who use application containers based on Java EE or Spring.


These include support for the Java Authentication and Authorization Service (JAAS), different dependency injection frameworks such as OSGi Blueprint and Spring, support for the Java Persistence API (JPA) and the Java Transaction API (JTA), and clustering, monitoring, and cloud integration utilities.

This article is the first of a series covering topics on the development and operation of OSGi-based applications with Karaf.

Overview of Core Features

As already mentioned in the introduction, Karaf offers a comprehensive set of core features. These are structured along the areas of provisioning and deployment, logging, dynamic configuration, administration and management, and OSGi framework support.

Provisioning and Deployment

Karaf provides different options for application provisioning and deployment, including the deployment of artifacts as feature sets. This enables the structuring of artifacts into larger deployment units, which is a great way to increase the reuse of existing functionality and thereby reduce the overall footprint of your application.

Even though feature deployment is a great way to structure OSGi-based applications, Karaf’s deployment options are not limited to this approach. The container also allows deploying so-called “Karaf archives” or, even more commonly, web archives.

Logging

Additionally, Karaf provides broad support for logging frameworks by integrating the Pax Logging project. Pax Logging integrates many of today’s popular logging frameworks and closes the gap left by the OSGi community’s decision to discontinue its logging service, since numerous competing logging frameworks were already available and commonly used.

Dynamic Configuration

While it has been a challenge for other application containers to realize dynamic configuration capabilities, Karaf offers the opportunity to interact with configuration changes at runtime. In order to utilize dynamic configuration, an application registers a reference with the configuration management service, which is then able to pass any configuration change back to the application bundle.

Although this is a great way to make configuration changes without restarting the application, it is the application’s responsibility to react to these changes, e.g. by destroying and restarting an existing thread.
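
As an illustration, the following sketch shows how a bundle might react to configuration changes via the standard OSGi ManagedService interface, which this mechanism builds upon. The PID and property name are made up for this example:

    import java.util.Dictionary;

    import org.osgi.service.cm.ConfigurationException;
    import org.osgi.service.cm.ManagedService;

    // Registered as a ManagedService under a hypothetical PID, e.g. "com.example.poller".
    public class PollerConfigListener implements ManagedService {

        private volatile long pollIntervalMs = 5000; // default until configured

        @Override
        public void updated(Dictionary<String, ?> properties) throws ConfigurationException {
            if (properties == null) {
                return; // no configuration available yet
            }
            Object interval = properties.get("poll.interval.ms"); // hypothetical property
            if (interval != null) {
                // It is the application's job to react, e.g. by restarting a worker thread.
                pollIntervalMs = Long.parseLong(interval.toString());
            }
        }
    }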

Administration and Management

Karaf offers several ways to administer and manage the container itself and all application artifacts executed within the container runtime. The most comprehensive administration interface is the extensible command-line interface, which is remotely accessible via an incorporated SSH server. In addition to the command-line interface, Karaf also offers a comprehensive web UI that exposes all basic management features to your browser.

OSGi Framework Support

Finally, the container supports different OSGi frameworks. By default, Apache Felix is pre-configured, but Karaf also supports Eclipse Equinox, and it is theoretically possible to run it on any OSGi environment.

History of Origins

The roots of Apache Karaf reach back to the kernel development of the Apache ServiceMix project. The aim of the ServiceMix subproject was to develop a simple-to-use command-line interface for the administration and management of OSGi artifacts.

In spring 2008, the development team publicly announced the completion of the third milestone of the kernel project. It took only until September 18th, 2008 to release the first version of the kernel as GA.

Even though ServiceMix was a great home for the kernel project during the early days, the project moved under the Apache Felix project umbrella in spring 2009 and was renamed “Karaf”. The developer community made that decision to increase attention for the project within the broader OSGi community and to grow the developer base.

Moving the project eventually paid off, and Karaf was promoted to a top-level Apache project. Since that time, the project has continued to improve the runtime environment and has added numerous features.

Summary

Given that Karaf has been around for almost eight years, has maintained a solid developer community with most founding members still on the team, and has grown a comprehensive feature set, it is fair to say that Karaf is a stable application container for hosting OSGi-based applications and more. The structure of the container supports a large variety of application areas, covering all phases of application development, and its adaptability serves as a foundation for many operational scenarios.

Converting Locales to Currencies with Java Using Spring

Successful commercial software applications have to deal with internationalization and localization so that the software can be distributed to other countries, no matter whether it is a desktop application or an online service used via the Internet. Besides translating the language, it is also important to incorporate regional differences such as number and date formatting, or the display of the regional currency.


This article provides an introduction to Java’s Locale and Currency, describes the Java-based currency retrieval, and provides further detail on how to embed more advanced currency conversions in your application logic.

Java Locale

Regional characteristics are encoded in the Locale class of java.util. According to the API documentation, a Locale object

… represents a specific geographical, political, or cultural region.

Locales are commonly used to tailor information to the end user, such as date and number format representations. To achieve localization, a Locale object consists of a language, a script (e.g. Latin or Cyrillic), a country and some additional fields like variant or extension that contain additional formatting information.
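
A minimal sketch of how Locale objects are typically constructed (the chosen values are just for illustration):

    import java.util.Locale;

    public class LocaleExamples {
        public static void main(String[] args) {
            Locale english = new Locale("en");       // language only
            Locale british = new Locale("en", "GB"); // language plus country
            Locale us = Locale.US;                   // predefined constant

            System.out.println(british.getDisplayCountry()); // "United Kingdom"
            System.out.println(us.getDisplayLanguage());     // "English"
        }
    }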

Java Currency

The Currency object of Java is a representation of the ISO 4217 currency code list. To retrieve a Currency object, you call one of the getInstance factory methods. One of those methods returns a Currency based on a Locale object.

Based on the method signature, it seems that any Locale object can be mapped to a Currency. However, it turns out that only Locale objects that fulfill a set of preconditions can actually be used to instantiate a Currency object.
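
For a Locale that carries country information, the retrieval is straightforward, as the following sketch shows:

    import java.util.Currency;
    import java.util.Locale;

    public class CurrencyExample {
        public static void main(String[] args) {
            Currency euro = Currency.getInstance(Locale.GERMANY);

            System.out.println(euro.getCurrencyCode());         // "EUR"
            System.out.println(euro.getSymbol(Locale.GERMANY)); // "€"
        }
    }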

Discovering the Relation between Locales and Currencies

First of all, a Currency and a Locale do not maintain a common reference that can be used to navigate between both types. A Currency only maintains attributes that describe a currency in more detail, such as a currencyCode or a symbol. Locale objects, on the other hand, hold only information related to language or country settings. So there is no direct relation between both classes, other than that both can be serialized.


However, since Currency offers such an instantiation method, Java provides some application logic to derive a Currency from a Locale object.

Converting Locales into Currencies in Java

Locale objects can be initialized with a subset of all attributes. It is therefore possible to instantiate a Locale by passing, e.g., only the language information to the constructor. However, when we want to instantiate a Currency, the country information becomes important, since Java cannot interpret language-only Locale parameters and throws an IllegalArgumentException.
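
The following sketch reproduces the failure:

    import java.util.Currency;
    import java.util.Locale;

    public class LanguageOnlyLocale {
        public static void main(String[] args) {
            Locale languageOnly = new Locale("en"); // no country information

            // Throws IllegalArgumentException: the locale carries no country,
            // so no currency can be derived from it.
            Currency currency = Currency.getInstance(languageOnly);
        }
    }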

Although the exception may be misleading, since we passed in a valid Locale object, the application behavior makes sense: currencies relate to a country rather than a language. English is a language spoken in many countries such as Great Britain, the United States, Canada, New Zealand, or Australia, yet each of these countries maintains its own currency. So even if the Locale object is totally valid, it does not contain enough information to derive the desired currency.

Advanced Conversions via Spring Converters

To avoid scenarios where your application does not provide detailed failure information, you might want to wrap the Java conversion logic in your own converter and provide a service for that. Since Spring 3.x, type conversion facilities have been part of spring-core. One option would therefore be to implement a Locale-to-Currency converter.
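
A minimal sketch of such a converter, based on Spring’s Converter interface (the class name and error message are my own):

    import java.util.Currency;
    import java.util.Locale;

    import org.springframework.core.convert.converter.Converter;

    public class LocaleToCurrencyConverter implements Converter<Locale, Currency> {

        @Override
        public Currency convert(Locale source) {
            if (source.getCountry().isEmpty()) {
                // Provide a more helpful message than the generic IllegalArgumentException.
                throw new IllegalArgumentException("Locale '" + source
                        + "' carries no country information, which is required to derive a currency.");
            }
            return Currency.getInstance(source);
        }
    }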

To expose the converter, you may want to implement a conversion service and add the converter to it.

Once the service is in place, you can simply add the conversion service to your application context and retrieve the Currency from your newly created service.
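
A sketch of both steps, using Spring’s DefaultConversionService (in a real application the service would be exposed as a bean in the application context; here it is created directly):

    import java.util.Currency;
    import java.util.Locale;

    import org.springframework.core.convert.support.DefaultConversionService;

    public class ConversionDemo {
        public static void main(String[] args) {
            DefaultConversionService conversionService = new DefaultConversionService();
            conversionService.addConverter(new LocaleToCurrencyConverter());

            Currency currency = conversionService.convert(Locale.GERMANY, Currency.class);
            System.out.println(currency.getCurrencyCode()); // "EUR"
        }
    }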

The Good, the Bad and the Ugly

Overall, I would like to conclude that Java’s functionality to derive currency information from regional information is quite powerful and easy to use. However, from my perspective, the approach has two limitations.

First of all, there is definitely room to improve the built-in exception handling. It would be desirable if a failed conversion provided detailed information on the cause instead of the generic IllegalArgumentException.

The second limitation is related to the support of multiple currencies per country. Currently, Java supports only a single currency per country, which covers the reality in most countries. However, some countries, such as Serbia-Montenegro, maintain multiple legal tenders at the same time or have a secondary currency next to the country’s official one. Those more exotic cases are not built into the core framework.

Finally, if you face one of the exotic use cases and need to extend the base functionality, you might want to utilize the conversion facilities of your application framework. In this article, I utilized the conversion mechanism of Spring, but other frameworks provide comparable support.

Java 8 Time API (JSR310), Hibernate and Spring-Data-JPA

Recently I spent some time porting one of my old applications to Java 8. Besides several nice enhancements to the overall language expressiveness (e.g., lambda expressions and the Stream API), Java 8 also offers a new date-time package released under JSR 310. Since I’ve been working with joda-time for quite some time, I was really interested in what Java 8 had to offer.


First of all, although JSR310 is not a direct port of joda-time, it is very intuitive, and migrating from joda-time to the new java.time.* package started without any unexpected impediments. I was able to port my resources and REST services without any major problems, but when I finally reached the point of porting my persistence layer, consisting of spring-data-jpa and Hibernate with an underlying MySQL engine, I discovered that the transition is not as smooth as expected.

I had implemented my entities back in the day using java.util.Date and never bothered to touch this layer and adjust it to use joda-time or any other date library. Therefore, I never ran into the issue of date and time serialization from Hibernate to the underlying persistence store. Since I had upgraded all my underlying dependencies to the latest versions, my expectation was that I could adjust my entities to use java.time.LocalDate and java.time.LocalDateTime without any further changes. Surprisingly – at least for me – this was not the case.

The following two sections give an overview of what I discovered and how I was able to overcome my serialization issues.

Serialization on DDL Generation

The first observation I made was during the DDL generation of Hibernate. Although some of you may point out that Hibernate’s DDL generation is by no means built to keep your production database schema up to date, and that there are better tools to manage your database schema changes, it is a great way to set up your test database. It is also a good indicator of Hibernate’s default type mapping, i.e. how the object-relational mapper (ORM) treats the LocalDate type.

Default Column Definition: LocalDate to TINYBLOB

The first change I made was adjusting the existing orderDate field, migrating it from java.util.Date to java.time.LocalDate. Since I wanted to reveal the default mapping behavior, I did not specify a column definition in my @Column JPA annotation. After the change, my @Entity object contained an id and a date field.
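
The entity after the change might have looked roughly like the following sketch (class, table, and column names are made up):

    import java.time.LocalDate;

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Table;

    @Entity
    @Table(name = "ORDERS")
    public class Order {

        @Id
        @GeneratedValue
        private Long id;

        // No columnDefinition given: Hibernate falls back to its default mapping.
        @Column(name = "ORDER_DATE")
        private LocalDate orderDate;

        public LocalDate getOrderDate() { return orderDate; }

        public void setOrderDate(LocalDate orderDate) { this.orderDate = orderDate; }
    }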

In order to generate the DDL on application startup, I enabled DDL generation on the HibernateJpaVendorAdapter. Surprisingly, I found out that Hibernate treats the LocalDate field as a binary object and translates it into a TINYBLOB. Having written quite a few SQL statements in my life, I can definitely confirm that date and time functions are quite common for extracting meaningful information from your data. Since the default mapping ends up as a binary object, it becomes necessary to translate the binary object into a date object with every statement execution that requires date and time functions. Additionally, any index over a date may be wasted, since the binary-to-date translation may not operate directly on the index and will therefore not achieve the desired performance optimization.

Custom Column Definition: LocalDate to DATETIME

Since the default mapping on DDL generation did not result in the desired DATETIME fields, it becomes necessary to add a column definition to the JPA entity. Specifying the annotation attribute columnDefinition = “DATETIME” will force Hibernate or any other ORM to use the specified database type.
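
Applied to the hypothetical Order entity from above, the adjusted field would look like this:

    // Forces the DDL generator to emit a DATETIME column instead of a TINYBLOB.
    @Column(name = "ORDER_DATE", columnDefinition = "DATETIME")
    private LocalDate orderDate;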

The explicit definition of the column results in the right DATETIME field in the database. However, it is worth noting that specifying column definitions binds the application code closer to the underlying RDBMS and may therefore introduce a bigger effort when migrating between different database systems. This is especially true when using RDBMS-specific data types in your columnDefinition section.

Data Serialization on Query Execution

Although we are now able to map the LocalDate object to the correct database type on DDL generation, this does not imply that the ORM system is already capable of serializing objects with the correct data type. To evaluate statement execution, I created a simple test case that attempts to insert an object into the database. The following snippet sketches the test together with the shape of the Hibernate-generated SQL statement and the relevant fields.
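
A reconstruction of the test, with the generated statement shown as a comment (the repository interface is assumed, the runner configuration is omitted, and the original statement may have differed in detail):

    import java.time.LocalDate;

    import org.junit.Test;
    import org.springframework.beans.factory.annotation.Autowired;

    public class OrderRepositoryTest {

        @Autowired
        private OrderRepository orderRepository; // hypothetical Spring Data JPA repository

        @Test
        public void insertsOrderWithLocalDate() {
            Order order = new Order();
            order.setOrderDate(LocalDate.now());

            orderRepository.save(order);
            // Hibernate generates an insert along the lines of:
            // insert into ORDERS (ORDER_DATE, ID) values (?, ?)
        }
    }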

When executing this statement, I ran into a DataIntegrityViolationException that finally pointed to an attempt to insert an incorrect DateTime object into the ORDER_DATE column. The behavior was somewhat expected, since the columnDefinition has no direct impact on query or statement execution and therefore does not facilitate any mapping from LocalDate to Date.

In order to support JSR310 when using JPA and an underlying ORM, it is still necessary to convert from LocalDate to Date objects. If you require full control over your date conversion, you might want to write your own Spring @Converter. Since I had less ambitious goals, I found a nice spring-data-jpa class called Jsr310JpaConverters that contains mapping logic meeting my needs. To configure the converter, I simply added the conversion package to my entity manager’s package scan. The entity manager will pick up the converter classes and execute the conversion back to java.util.Date, so that any JSR310 date-time object can be used directly in your @Entity.
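
The registration might look like the following sketch; apart from the spring-data package, the names are illustrative, and the data source setup is omitted for brevity:

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
    import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;

    @Configuration
    public class JpaConfig {

        @Bean
        public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
            HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
            vendorAdapter.setGenerateDdl(true); // DDL generation, as used above

            LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
            emf.setJpaVendorAdapter(vendorAdapter);
            // Scan your own entities plus the package containing Jsr310JpaConverters.
            emf.setPackagesToScan(
                    "com.example.orders",                              // illustrative entity package
                    "org.springframework.data.jpa.convert.threeten");  // Jsr310JpaConverters
            return emf;
        }
    }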

Conclusion

The migration to Java 8 offers a lot more functionality, and an upgrade is worth considering, not least because of the official support and maintenance cycles of each version. However, when you consider upgrading and adopting new features, make sure that the underlying dependencies already support them. The spring-data-jpa and Hibernate example shows that certain components have a faster adoption rate, whereas others may need more time to implement newly provided features.

If you consider upgrading to Java 8, this article demonstrates some pitfalls you may encounter when applying the new date-time features to your JPA entities. I also hope that the article provides sufficient detail to decide whether the new date-time functionality actually adds a lot of value to your entities, or whether the conversion should be executed at a different application level. Personally, in my scenario, it was a good decision to migrate my entities, since I was able to apply the conversion class provided by spring-data.

Release of Camel-Extra 2.14.0 and LGPL License Support

Only a few people may have recognized that June 21st, 2015 was a big day for the Camel-Extra community. Besides the fact that this day in June marks the first release of Camel-Extra that supports ASF Camel 2.14.x, it is the release where we opened up the license support to the most permissive OSS license applicable for each underlying component.

Free and Open Source Software

Finding the appropriate software license is always difficult for an open source project. This statement is even more valid if you start building on existing components, each of which has its own license model. Since Camel-Extra extends the enterprise integration project ASF Camel with components that cannot be hosted within the Apache infrastructure, dealing with third-party libraries is built into its DNA. The reason why Camel-Extra components cannot be hosted within the Apache infrastructure is license compatibility issues between the Apache License and the third-party OSS licenses.

Courageous Camel riders have therefore built an environment within Apache-Extras where development and community support for these components is possible. In order to simplify the build structure and reduce the amount of headache with new components, the initial approach was to base Camel-Extra upon the GNU General Public License. However, this approach led to several discussions within the community, since the GPL is not very friendly toward commercial adoption. As a consequence, we decided on a multi-license strategy, which allows us to open up the license for components that do not depend on GPL libraries. Components that still depend on GPL libraries remain under their respective parent license.

The following list will help you understand the current license assignments. Please note that these assignments may change over time, in case the underlying libraries adjust their license models.

GPL License: camel-db4o, camel-esper, camel-spring-neo4j, camel-vtdxml

LGPL License: camel-couchbase, camel-exist, camel-hibernate, camel-jboss, camel-jboss6, camel-jcifs, camel-rcode, camel-virtualbox, camel-zeromq

I hope you all enjoy the new license approach and that many of you will be able to adopt components within your projects. Please remember that we love contributions. Anything you share with the community (e.g. filing bugs, contributing code, helping with documentation) will help to maintain the project better!

Esper Component Configuration and Config File Support in Camel-Extra

While I was going through the list of enhancement requests for Camel-Extra, a community project related to Apache Camel, I came across an old request asking to support the default Esper configuration in order to ease the development of event patterns and queries. Camel-Extra is a sister project of ASF Camel that hosts components which are not compatible with the Apache license. Within that space, Esper is an LGPL-licensed library that supports complex event processing (CEP) and analytics on event series. The Camel component was generously contributed to Camel-Extra by James Strachan in November 2007.

Although Esper possesses only a small number of configuration parameters, it is sometimes quite useful to simplify event patterns and event processing language (EPL) statements by providing a few of them. Additionally, it might be useful to provide some tuning parameters to meet specific requirements. However, I will not attempt a detailed description of how to configure the Esper engine for your specific requirements, since you will find a comprehensive guide in the Esper documentation. I will rather write about how to use the current camel-esper component.

Using Camel-Esper to Query Event Streams

Before diving into the configuration example, I would like to provide an overview of how the camel-esper component can be configured in your route configuration in order to execute queries upon event streams; note that you will always find the most recent documentation in the component description of Apache Camel. The component adheres to the overall Camel concept of defining a processing chain via an integration DSL that calls subsystems via endpoint URI configurations. Esper can be considered one of those subsystems, which means you need to configure an endpoint to be able to interact with the Esper library and framework.

Conceptual Overview: Calling Esper from Camel

Since Esper is embedded within Camel as a component, addressing it works like addressing any other subsystem of Camel. Fig. 1 provides an overview of the route configuration used as an example. Route 1 has a direct endpoint, which serves as an interface where any event producer can send messages in a synchronous invocation style. All messages consumed from this endpoint are passed to a configured Esper endpoint, identified via a name that represents the internal ID within the Camel context and serves as the addressable endpoint. The second part of the Esper configuration is a query or pattern, which queries the event streams coming through the specific Esper communication channel. Finally, after the evaluation has been executed, the message is passed within route 2 to a consuming direct endpoint.

Fig.1: Conceptual route configuration

This simple example already shows that the logic to evaluate the event streams is encoded within the Esper endpoint. Esper basically offers two different options for writing evaluation statements over event streams: a pattern language and an event query language. Both options have been integrated into the Esper component provided by camel-extra and will be introduced in the next two sections.

Esper Event Query Language Configuration

The first configuration example demonstrates how the camel-extra Esper component endpoint can be configured to query an event stream via the event processing language. The query language is a SQL-like language designed specifically to query event streams instead of database tables; the concept of a stream replaces the commonly known concept of a table. Nevertheless, since events are nothing but data, the existing SQL concepts of joins, filtering, and aggregation via grouping can be applied effectively to streams as well.

In order to run your event queries based upon the event processing language, it is necessary to specify the eql option followed by the actual expression. In our case, we are looking for all events of type StockTick with the symbol AAPL, which results in the select statement of the endpoint configuration below.
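
A sketch of the corresponding routes in Camel’s Java DSL; the endpoint name cep, the direct endpoint names, and the StockTick event type are illustrative:

    import org.apache.camel.builder.RouteBuilder;

    public class EsperEqlRoute extends RouteBuilder {

        @Override
        public void configure() {
            // Route 1: feed incoming events into the Esper engine.
            from("direct:stock-ticks")
                .to("esper://cep");

            // Route 2: consume all StockTick events carrying the symbol AAPL.
            from("esper://cep?eql=select * from StockTick where symbol = 'AAPL'")
                .to("direct:apple-ticks");
        }
    }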

Esper Event Pattern Language Configuration

The second example shows a configuration option for using the event pattern language. The pattern language is based upon university research, originally conducted within the “Rapide” project at Stanford University. The Esper implementation is based upon dynamic state trees and can be considered a so-called delta network, where only changes to data are communicated across object boundaries. Additionally, changes are only propagated if the information is needed somewhere else. To optimise performance, Esper operates upon indices for data retrieval operations. The entire grammar of the pattern language is built on top of ANTLR, based on the Extended Backus-Naur Form (EBNF).

To enable the Esper pattern language, it is required to define the pattern option in your endpoint configuration, followed by the pattern expression you want to execute. In the example below, we are looking for every stock tick that contains the symbol ‘AAPL’, since we want to retrieve all information related to Apple.
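
A corresponding sketch using the pattern option; note the fully qualified event type (the package name is hypothetical):

    import org.apache.camel.builder.RouteBuilder;

    public class EsperPatternRoute extends RouteBuilder {

        @Override
        public void configure() {
            // "every" fires for each matching event in the stream.
            from("esper://cep?pattern=every event=com.example.events.StockTick(symbol='AAPL')")
                .to("direct:apple-ticks");
        }
    }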

One may have noticed that addressing an event object requires the full package name in order to acquaint Esper with the respective event type. Since typing the entire package name including the name of the actual Java object can be cumbersome, the following section introduces a different way to address event types.

Enabling the File-based Configuration for Esper

Esper offers an option to provide configuration via an external XML-based file. The purpose of the configuration file is, on the one hand, to simplify queries written in the EPL and pattern language and, on the other hand, to tune the engine behaviour to meet your individual requirements. Camel-extra’s Esper component supports configuration via the default configuration file.

To enable the configuration via XML in camel-esper, it is required to set the configured option to true; this flag is set to false per default. Specifying this parameter ensures that camel-esper looks up the esper.cfg.xml file in the root of your class path (e.g. <project_home>/src/main/resources/esper.cfg.xml).
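
Enabling the lookup in a route might then look as follows (a sketch; the short alias StockTick is assumed to be defined in esper.cfg.xml):

    import org.apache.camel.builder.RouteBuilder;

    public class EsperConfiguredRoute extends RouteBuilder {

        @Override
        public void configure() {
            // configured=true makes the component load esper.cfg.xml from the
            // class path root, so the short alias StockTick can be used here.
            from("esper://cep?configured=true&eql=select * from StockTick where symbol = 'AAPL'")
                .to("direct:apple-ticks");
        }
    }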

Having enabled the XML-based configuration, it is now possible to add a name for an event type together with its corresponding fully qualified class name (i.e. including the package name). This way, Esper knows that the name refers to a specific event class, so the name can be used within the EPL or pattern language. This example shows only a very limited set of the configuration options provided by Esper. For a full reference, please refer to the configuration section of the Esper documentation.

Summary

This article briefly introduces camel-esper, a component hosted within the camel-extra project. It outlines the integration and use of Esper within a Camel route configuration at a conceptual level and demonstrates how to adopt the EPL and pattern language in order to select events from an event stream. Finally, one of the recently added features, the use of the default Esper configuration via an XML file, concludes the article and gives some insight into how to optimise your Camel application when using Esper. Since the article introduces only the general usage concept, some further reading can be recommended.

  1. Apache Camel Component Concept
  2. Camel-Extra Esper Component
  3. Camel-Extra Project
  4. Esper Event Processing Language
  5. Esper Pattern Language
  6. Esper Configuration