Is GraphQL the Resurgence of RPC-style APIs?

GraphQL’s development started in 2012 at Facebook to mitigate the issues the company faced integrating its mobile application with the verbose APIs the website used, before it was released to the public in 2015. Since then, there has been a public debate about whether GraphQL should be classified as an RPC-style API standard or whether it is something entirely different. While there has always been a debate about which API style might be best, the truth is that each style has its advantages and shortcomings and is tailored to solve a specific use case. So let’s try to understand the differences and commonalities between RPC, REST and GraphQL before coming to the conclusion that GraphQL might be the latest resurgence of RPC-style API integration.

RPC – Remote Procedure Calls

Remote Procedure Calls were developed in the 1970s and gained adoption throughout the 1980s; Bruce Jay Nelson is credited with coining the term for the technique of calling a function or subroutine over the network on a different computational node and expecting a response. From an API point of view, the unit of interest is the function that defines the service. With the advent of REST, RPC-style communication initially lost some traction but regained interest, resulting in frameworks such as gRPC and Apache Thrift.

Advantages

  • Simple and easy to understand, as each function semantically describes its intention
  • Payloads are lightweight and tailored to the problem the function solves
  • Performance is generally higher, as protocol layers can be optimized for RPC execution

Disadvantages

  • Tight coupling between a service provider and consumer based on functions and their related data sets
  • No standardization; implementations vary and services cannot be discovered through standardized lookups
  • It is easy to add additional functions, which often results in the so-called “function explosion”
  • Maintenance of different RPC services becomes more and more difficult with the growing number of services and dependencies
  • No standardized abstraction at an API layer that is decoupled from the underlying system

REST – Representational State Transfer

REST defines a set of constraints to expose resources via URLs. The term was introduced in 2000 by Roy Fielding and describes the principles that were discussed during the 1990s as the “HTTP object model”, resulting in the Uniform Resource Identifiers standard. With its focus on resources and the adoption of the HTTP protocol, which defines the operations that can be applied to resources, REST achieves the opposite of tight coupling. Later developments refining the REST API interaction model resulted in the JSON Hypertext Application Language (HAL), JSON-API as a specification for building APIs in JSON, and the Ion Hypermedia Type as an intuitive, JSON-based hypermedia type.

Advantages

  • Client and server are semantically decoupled and only focused on operations around resource access
  • The API can evolve over time as long as basic constraints for backward compatibility are respected
  • Use of the HTTP protocol helps to decrease resource consumption and network bandwidth, resulting in better API performance
  • No limitations regarding the data format used between client and server

Disadvantages

  • No single specification defines how the interaction between service provider and consumer should be structured
  • Potential for large payloads that reduce the overall application performance for both server and client

GraphQL – Query language for your API

GraphQL, a query language for your API, was developed at Facebook starting in 2012 as a response to their transition to mobile and was released to the open-source community in 2015. In 2018 the project moved to the newly founded GraphQL Foundation, hosted by the Linux Foundation. GraphQL offers clients the ability to define the structure of the data they expect. To achieve that, the central unit the work is structured around is the query. The latest development includes a schema definition as a description of all queries that can be executed against a service. The queries provide the client with the flexibility to define what they expect.
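
To make the idea of client-defined response shapes concrete, here is a minimal sketch of posting a GraphQL query over HTTP from Java; the endpoint URL and the user/name/email fields are purely illustrative:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class GraphQLQueryExample {
        public static void main(String[] args) throws Exception {
            // The client asks only for the fields it needs; the server answers in exactly that shape.
            String body = "{ \"query\": \"{ user(id: \\\"42\\\") { name email } }\" }";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/graphql")) // hypothetical endpoint
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // A typical response mirrors the query: {"data":{"user":{"name":"...","email":"..."}}}
            System.out.println(response.body());
        }
    }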

Advantages

  • Because queries specify exactly what a service consumer needs, the network overhead is low
  • The schema definition is typed, and both service provider and consumer can use the definition as a contract
  • Larger object graphs are well suited to be exposed via a GraphQL service, since clients typically want to consume only portions of the entirety

Disadvantages

  • Exposing queries instead of the simple HTTP protocol operations increases the complexity of adopting the framework
  • Caching architectures that fit the HTTP resource model become obsolete, since not all data is exposed in a consistent, default way
  • Versioning of queries and attributes is not clearly defined by any set of best practices

Conclusion

When looking at the core concepts of each API style, it becomes obvious that there is a clear distinction between the resource-centric view of REST and the function- and query-based perspective of the RPC and GraphQL API styles. Trying to distinguish between functions and queries is much harder, and I agree with Phil Sturgeon’s statement:

“GraphQL is essentially RPC, with a lot of good ideas from the REST/HTTP community tacked in”

The biggest advantage that I see over REST is that the specification has led to a clear schema enabling the navigation of your object tree, whereas RESTful services often have the notion of an RPC-style interaction based on HTTP operations without considering the navigation across resources. I think it is fair to say that there are commonalities between GraphQL and RPC, but SQL-like queries still provide more flexibility than exposing a function that accomplishes something. As a consequence, I would not consider GraphQL to be the resurgence of the RPC API style but rather a valuable addition to the existing set of API styles.

Each of these API styles has its own use case, and there is not a single API style that can be considered superior to another. Each style has its advantages, and I recommend considering the adoption of more than one when building out APIs to offer integration points to your applications. There are cases where a query-based integration style might be superior, e.g. when trying to integrate flexibly with a data or a mobile API, but there are other API scenarios, such as a management or command API, that have entirely different requirements on how to interact with the data and the functions they expose.

References

  1. GraphQL – A query language for your API
  2. Phil Sturgeon – Understanding RPC, REST and GraphQL
  3. Tom Smith – APIs: RPC versus REST versus GraphQL
  4. Renato Athaydes – The return of RPC
  5. gRPC – A high performance, open-source universal RPC framework
  6. Apache Foundation – Apache Thrift
  7. M. Kelly – JSON Hypertext Application Language
  8. JSON-API – A specification for building APIs in JSON
  9. Ion Working Group – The Ion Hypermedia Type

What exactly is the application container Apache Karaf?


Apache Karaf is a small OSGi-based runtime environment that provides a lightweight container capable of hosting various components and applications. Karaf offers numerous features familiar to those who use application containers based on Java EE or Spring.

Those encompass support for the Java Authentication and Authorization Service (JAAS), different dependency injection frameworks such as OSGi Blueprint and Spring, support for the Java Persistence API (JPA) and Java Transaction API (JTA), and clustering, monitoring, and cloud integration utilities.

This article is the first of a series covering topics on the development and operation of OSGi based applications with Karaf.

Overview of Core Features

As already mentioned in the introduction, Karaf offers a comprehensive set of core features. These core features are structured along the areas of provisioning and deployment, logging, dynamic configuration, administration and management, and, last but not least, OSGi framework support.

Provisioning and Deployment

Karaf provides different options for application provisioning and deployment, which allow deploying artifacts as feature sets. This enables the structuring of artifacts into larger deployment units, which is a great way to increase the re-use of existing functionality and therefore reduce the overall footprint of your application.

Even if feature deployment is a great way to structure OSGi-based applications, Karaf’s deployment options are not limited to this approach. The container also allows deploying so-called “Karaf archives” or, even more commonly used, web archives.

Logging

Additionally, Karaf provides broad support for logging frameworks by integrating the Pax Logging project. Pax Logging integrates many of today’s popular logging frameworks and closes the gap that emerged from the OSGi community’s decision to discontinue their logging service, given that numerous competing logging frameworks were already available and commonly used.

Dynamic Configuration

While it has been a challenge for other application containers to realize dynamic configuration capabilities, Karaf offers the opportunity to react to configuration changes at runtime. In order to utilize dynamic configurations, an application needs to register a reference with the configuration management service. The management service is then able to pass any configuration change back to the application bundle.

Although this is a great way to make configuration changes without restarting the application, it is the application’s responsibility to react to these changes, e.g. by destroying and restarting an existing thread.
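
As a minimal sketch of what such a registration can look like (the PID, the property name and the reaction to changes are illustrative; the standard OSGi Configuration Admin / ManagedService contract is assumed to be available in the container):

    import java.util.Dictionary;
    import java.util.Hashtable;

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.Constants;
    import org.osgi.service.cm.ConfigurationException;
    import org.osgi.service.cm.ManagedService;

    public class Activator implements BundleActivator {

        @Override
        public void start(BundleContext context) {
            Dictionary<String, Object> props = new Hashtable<>();
            props.put(Constants.SERVICE_PID, "com.example.poller"); // hypothetical PID

            // Config Admin calls updated() with the initial configuration and on every change.
            context.registerService(ManagedService.class.getName(), new ManagedService() {
                @Override
                public void updated(Dictionary<String, ?> config) throws ConfigurationException {
                    if (config == null) {
                        return; // no configuration available yet
                    }
                    // It is up to the application to react, e.g. by restarting a worker thread.
                    System.out.println("New poll.interval: " + config.get("poll.interval"));
                }
            }, props);
        }

        @Override
        public void stop(BundleContext context) {
            // services registered through the bundle context are unregistered automatically on stop
        }
    }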

Administration and Management

Karaf offers several ways to administer and manage the container itself and all application artifacts executed within the container runtime. The most comprehensive administration interface is the extensible command-line interface that is remotely accessible via an incorporated SSH server. In addition to the command-line interface, Karaf also offers a comprehensive web UI that exposes all basic management features to your browser.

OSGi Framework Support

Finally, the container supports different OSGi frameworks. By default, Apache Felix is pre-configured, but Karaf also supports Eclipse Equinox, and it is theoretically possible to run it on any OSGi environment.

History of Origins

The roots of Apache Karaf reach back to the kernel development of the Apache ServiceMix project. The aim of the ServiceMix subproject was to develop a simple-to-use command-line interface for the administration and management of OSGi artifacts.

The development team publicly announced in spring 2008 that they had completed the third milestone of the kernel project. It took only until September 18, 2008, to release the first GA version of the kernel.

Even if ServiceMix was a great home for the kernel project during the early days, the project moved in spring 2009 under the Apache Felix project umbrella and was renamed “Karaf”. The developer community made that decision to increase attention towards the project within the broader OSGi community and grow the developer base.

Moving the project eventually paid off, and Karaf was promoted to a top-level Apache project. Since that time, the project has continued to improve the runtime environment and has added numerous features.

Summary

Given that Karaf has been around for almost eight years, has maintained a solid developer community with most founding members still on the team and has grown a comprehensive feature set, it is fair to say that Karaf is a stable application container to host OSGi-based applications and more. The container supports a large variety of application areas, covering all phases of application development, and its adaptability serves as a foundation for many operational scenarios.

Converting Locales to Currencies with Java Using Spring

Successful commercial software applications have to deal with internationalization and localization so that the software can be distributed to other countries in the world, whether it is a desktop application or an online service used via the Internet. Besides the translation of the language, it is also important to incorporate regional differences such as number and date formatting, or the display of the regional currency.

This article provides an introduction to Java Locale and Currency and the Java-based Currency retrieval, and provides further detail on how to embed more advanced currency conversions in your application logic.

Java Locale

Regional characteristics are coded into the Locale object of java.util. According to the API documentation, a Locale object

… represents a specific geographical, political, or cultural region.

Locales are commonly used to tailor information to the end user, such as date and number format representations. To achieve localization, a Locale object consists of a language, a script (e.g. Latin or Cyrillic), a country and some additional fields like variant or extension that contain additional formatting information.
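
A short sketch of how these pieces are assembled (the concrete language, script and region values are just examples):

    import java.util.Locale;

    public class LocaleExamples {
        public static void main(String[] args) {
            // language + country via the classic constructor
            Locale us = new Locale("en", "US");

            // language, script and region via the builder API
            Locale serbianLatin = new Locale.Builder()
                    .setLanguage("sr")
                    .setScript("Latn")
                    .setRegion("RS")
                    .build();

            System.out.println(us.getDisplayCountry());        // United States
            System.out.println(serbianLatin.getDisplayName()); // roughly "Serbian (Latin, Serbia)"
        }
    }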

Java Currency

The Currency object of Java is a representation of the ISO 4217 currency code list. To retrieve a Currency object, you are supposed to call one of the getInstance methods. One of those methods returns a Currency based on a Locale object.

Based on the method signature, it seems that any Locale object can be mapped to a Currency. However, it turns out that only Locale objects that fulfill a set of preconditions can actually be used to instantiate a Currency object.

Discovering the Relation between Locales and Currencies

First of all, a Currency and a Locale do not maintain a common reference that can be used to navigate between both types. A Currency only maintains attributes that describe a currency in more detail, such as a currencyCode or a symbol. On the other hand, Locale objects hold only information related to language or country settings. So there is no direct relation between both classes, other than that both can be serialized.

However, since a Currency has an instantiation method, Java provides some application logic to derive a Currency from a Locale object.

Converting Locales into Currencies in Java

Locale objects can be initialized with a subset of all attributes. It is therefore possible to instantiate a Locale by passing, for example, only the language information to the constructor. However, when we want to instantiate a Currency, the country information becomes important, since Java cannot interpret language-only Locale parameters and throws an IllegalArgumentException.

Although the exception handling may be misleading, since we passed in a valid Locale object, the application behavior makes sense, since currencies relate to a country rather than a language. English is a language spoken in many countries such as Great Britain, the United States, Canada, New Zealand or Australia, yet each of these countries maintains its own currency. So even if the Locale object is totally valid, it does not contain enough information to derive the desired currency.
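
A small sketch illustrating both cases:

    import java.util.Currency;
    import java.util.Locale;

    public class CurrencyLookup {
        public static void main(String[] args) {
            // a Locale that carries country information maps cleanly to a Currency
            Currency usd = Currency.getInstance(Locale.US);
            System.out.println(usd.getCurrencyCode()); // USD

            // a language-only Locale does not carry enough information
            try {
                Currency.getInstance(new Locale("en"));
            } catch (IllegalArgumentException e) {
                System.out.println("A language-only locale cannot be mapped to a currency");
            }
        }
    }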

Advanced Conversions via Spring Converters

To avoid scenarios where your application does not provide detailed failure information, you might want to wrap the Java conversion logic in your own converter and provide a service for that. Since Spring 3.x, type conversion facilities have been part of spring-core. One option to wrap your conversion logic is therefore to implement a Locale-to-Currency converter.
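
A minimal sketch of such a converter (the class name and error message are illustrative):

    import java.util.Currency;
    import java.util.Locale;

    import org.springframework.core.convert.converter.Converter;

    public class LocaleToCurrencyConverter implements Converter<Locale, Currency> {

        @Override
        public Currency convert(Locale source) {
            if (source.getCountry() == null || source.getCountry().isEmpty()) {
                // fail with a more descriptive message than the generic IllegalArgumentException
                throw new IllegalArgumentException(
                        "Locale '" + source + "' does not contain the country information "
                                + "required to derive a currency");
            }
            return Currency.getInstance(source);
        }
    }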

To expose the converter, you may want to implement a conversion service and add the converter to the service.

Once the service is in place, you can simply add the conversion service to your application context and retrieve the Currency from your newly created service.
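
Sketched with Spring’s DefaultConversionService and the converter from above (bean and class names are illustrative):

    import java.util.Currency;
    import java.util.Locale;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.core.convert.ConversionService;
    import org.springframework.core.convert.support.DefaultConversionService;

    @Configuration
    public class ConversionConfig {

        @Bean
        public ConversionService conversionService() {
            DefaultConversionService service = new DefaultConversionService();
            service.addConverter(new LocaleToCurrencyConverter());
            return service;
        }
    }

    // Usage, e.g. in a component with the ConversionService injected:
    // Currency currency = conversionService.convert(Locale.US, Currency.class);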

The Good, the Bad and the Ugly

Overall, I would like to conclude that Java’s functionality to derive currency information from regional information is quite powerful and easy to use. However, the approach has two limitations from my perspective.

First of all, there is definitely room to improve the built-in exception handling. It would be desirable if a failed conversion provided detailed information on the cause instead of the generic IllegalArgumentException.

The second limitation is related to the support of multiple currencies per country. Currently, Java supports only a single currency per country, which covers the reality in most countries. However, some countries, such as Serbia-Montenegro, maintain multiple legal tenders at the same time or have a secondary currency next to the country’s official tender. Those more exotic cases are not built into the core framework.

Finally, if you face one of these exotic use cases and need to extend the base functionality, you might want to utilize the conversion facilities of your application framework. In this article, I utilized the conversion mechanism of Spring, but other frameworks provide equal support.

Java 8 Time API (JSR310), Hibernate and Spring-Data-JPA

Recently I spent some time porting one of my old applications to Java 8. Besides several nice enhancements to the overall language expression (e.g., lambda expressions and the Stream API), Java 8 also offers a new Date-Time package released under JSR 310. Since I’ve been working for quite some time with joda-time, I was really interested in what Java 8 had to offer.

First of all, although JSR 310 is not a direct port of joda-time, it is very intuitive, and migrating from joda-time to the new java.time.* package started without any unexpected impediments. I was able to port my resources and REST services without any major problem, but when I finally reached the point of porting my persistence layer, consisting of spring-data-jpa and Hibernate with an underlying MySQL engine, I discovered that the transition is not as smooth as expected. I had implemented my entities back in the day using java.util.Date and never bothered touching this layer to adjust it to joda-time or any other date library. Therefore I never ran into the issue of date and time serialization from Hibernate to the underlying persistence store. Since I upgraded all my underlying dependencies to the latest versions, my expectation was that I could adjust my entities to use java.time.LocalDate and java.time.LocalDateTime without any further adjustments. Surprisingly – at least for me – this was not the case.

The following two sections give an overview of what I discovered and how I was able to overcome my serialization issues.

Serialization on DDL Generation

The first observation I made was during the DDL generation of Hibernate. Although some of you may point out that Hibernate’s DDL generation is by no means built to keep your production database schema up to date, and there are better tools to manage your database schema changes, it is a great way to set up your test database. It is also a good indicator of Hibernate’s default type mapping, i.e. how the object-relational mapper (ORM) treats the LocalDate type.

Default Column Definition: LocalDate to TINYBLOB

The first change I made was adjusting the existing orderDate field and migrating it from java.util.Date to java.time.LocalDate. Since I wanted to reveal the mapping behavior, I did not specify a column definition in my @Column JPA annotation. After the change, my @Entity object contained an id and a date field, roughly as shown below.
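
A sketch of such an entity (the entity, table and repository names used here and in the following snippets are illustrative):

    import java.time.LocalDate;

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    @Entity
    public class PurchaseOrder {

        @Id
        @GeneratedValue
        private Long id;

        // deliberately no columnDefinition, to reveal Hibernate's default mapping
        @Column(name = "ORDER_DATE")
        private LocalDate orderDate;

        public LocalDate getOrderDate() {
            return orderDate;
        }

        public void setOrderDate(LocalDate orderDate) {
            this.orderDate = orderDate;
        }
    }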

In order to generate the DDL on application startup, I enabled the HibernateJpaVendorAdapter to generate the DDL. Surprisingly, I found out that Hibernate treats the LocalDate field as a binary object and translates it into a TINYBLOB. Having written quite a few SQL statements in my life, I can definitely confirm that date and time functions are quite common for extracting meaningful information from your data. Since the default mapping ends up as a binary object, it becomes necessary to translate the binary object into a date object with every statement execution that requires date and time functions. Additionally, any index over a date may be wasted, since the binary-to-date translation may not operate directly on the index and will therefore not achieve the desired performance optimization.
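
For reference, enabling DDL generation through the vendor adapter can look roughly like this (the data source and package name are illustrative):

    import javax.sql.DataSource;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
    import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;

    @Configuration
    public class JpaConfig {

        @Bean
        public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
            HibernateJpaVendorAdapter adapter = new HibernateJpaVendorAdapter();
            adapter.setGenerateDdl(true); // let Hibernate generate the (test) schema on startup

            LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
            emf.setDataSource(dataSource);
            emf.setJpaVendorAdapter(adapter);
            emf.setPackagesToScan("com.example.orders"); // hypothetical entity package
            return emf;
        }
    }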

Custom Column Definition: LocalDate to DATETIME

Since the default mapping on DDL generation did not result in the desired DATETIME fields, it becomes necessary to add a column definition to the JPA entity. Specifying the annotation attribute columnDefinition = "DATETIME" will force Hibernate or any other ORM to use the specified database type.
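
Applied to the orderDate field of the entity sketched earlier, that is a one-line change:

    // force the MySQL DATETIME type instead of the default TINYBLOB mapping
    @Column(name = "ORDER_DATE", columnDefinition = "DATETIME")
    private LocalDate orderDate;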

The explicit definition of the column results in the right DATETIME field in the database. However, it is worth noting that specifying columnDefinitions binds the application code closer to the underlying RDBMS and may therefore introduce a bigger effort when migrating between different database systems. This statement is especially valid when using RDBMS-specific data types in your columnDefinition section.

Data Serialization on Query Execution

Although we are now able to map the LocalDate object to the correct database type on DDL generation, this does not imply that the ORM is already capable of serializing objects with the correct data type. To evaluate the statement execution, I created a simple test case that attempts to insert an object into the database. The following snippet shows the Hibernate-generated SQL statement with the relevant fields.
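
As a sketch, the test case and the statement Hibernate generates look roughly like this (repository, test and table names are illustrative, and the Spring test context configuration is omitted for brevity):

    import java.time.LocalDate;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

    @RunWith(SpringJUnit4ClassRunner.class) // @ContextConfiguration omitted for brevity
    public class PurchaseOrderRepositoryTest {

        @Autowired
        private PurchaseOrderRepository repository; // hypothetical spring-data-jpa repository

        @Test
        public void insertsOrderWithLocalDate() {
            PurchaseOrder order = new PurchaseOrder();
            order.setOrderDate(LocalDate.now());

            // with SQL logging enabled, Hibernate emits roughly:
            //   insert into purchase_order (ORDER_DATE, id) values (?, ?)
            repository.save(order);
        }
    }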

When executing this statement, I ran into a DataIntegrityViolationException that finally pointed to an attempt to insert an incorrect DateTime object into the ORDER_DATE column. The behavior was somewhat expected, since the columnDefinition has no direct impact on the query or statement execution and therefore does not facilitate any mapping from LocalDate to Date.

In order to support JSR 310 when using JPA and an underlying ORM, it is still necessary to convert from LocalDate to Date objects. If you require full control over your date conversion, you might want to consider writing your Spring @Converter yourself. Since I had less ambitious goals, I found a nice spring-data-jpa class called Jsr310JpaConverters that contained the mapping logic meeting my needs. To configure the converter, I simply added the conversion package to my entity manager’s package scan. The entity manager will pick up the converter classes and execute the conversion back to java.util.Date so that any JSR 310 DateTime object can be directly used in your @Entity.
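
In the factory configuration sketched earlier, that boils down to adding the package of Jsr310JpaConverters (assumed here to come from spring-data-commons) to the package scan; the entity package name is again illustrative:

    import org.springframework.data.convert.Jsr310JpaConverters;

    // inside the entityManagerFactory() bean definition from above:
    emf.setPackagesToScan(
            "com.example.orders",                              // hypothetical entity package
            Jsr310JpaConverters.class.getPackage().getName()); // package of the JSR-310 converters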

Conclusion

The migration to Java 8 offers a lot more functionality and is worth considering, not to mention the official support and maintenance cycles of each version. However, when you consider upgrading and adopting new features, make sure that the underlying dependencies already support those features. The spring-data-jpa and Hibernate example shows that certain components have a faster adoption rate, whereas others may need more time to implement newly provided features.

If you consider upgrading to Java 8, this article demonstrates some pitfalls in case you want to apply the new DateTime features to your JPA entities. I also hope that the article provides sufficient detail to decide whether the new DateTime functionality actually adds a lot of value to your entities, or whether the conversion should be executed at a different application level. Personally, in my scenario, it was a good decision to migrate my entities, since I was able to apply the conversion class provided by spring-data.

Release of Camel-Extra 2.14.0 and LGPL License Support

Only a few people may have recognized that June 21, 2015 was a big day for the Camel-Extra community. Besides the fact that this day in June marks the first release of Camel-Extra that supports ASF Camel 2.14.x, it is the release where we opened up the license support to the most open OSS license that is applicable for the underlying component.

Free and Open Source Software

Finding the appropriate software license is always difficult for an open source project. This statement is even more valid if you start building on existing components, each with its own license model. Since Camel-Extra extends the enterprise integration project ASF Camel with components that cannot be hosted within the Apache infrastructure, dealing with third-party libraries is built into its DNA. The reason why Camel-Extra components cannot be hosted within the Apache infrastructure is related to license compatibility issues between the Apache License and the third-party OSS licenses.

Courageous Camel riders have therefore built an environment within Apache-Extras where the development and community support for these components is possible. In order to simplify the build structure and reduce the amount of headache with new components, the initial attempt was to base Camel-Extra upon the GNU General Public License. However, this approach led to several discussions within the community, since the GPL is not very friendly for commercial adoption. As a consequence, we decided on a multi-license support strategy, which allows us to open the software license for components that do not depend upon GPL libraries. On the other hand, those components that still have dependencies on GPL libraries remain under their respective parent license.

The following list will help to understand the current license assignments. Please note that these assignments may change over time, in case the underlying libraries adjust their license model.

  • GPL License: camel-db4o, camel-esper, camel-spring-neo4j, camel-vtdxml
  • LGPL License: camel-couchbase, camel-exist, camel-hibernate, camel-jboss, camel-jboss6, camel-jcifs, camel-rcode, camel-virtualbox, camel-zeromq

I hope you all enjoy the new license approach and that many of you will be able to adopt these components within your projects. Please remember that we love contributions. Anything you share with the community (e.g. filing bugs, contributing code, helping with documentation) will help to maintain the project better!

Esper Component Configuration and Config File Support in Camel-Extra

While I was going through the list of enhancement requests for Camel-Extra, which is a community project related to Apache Camel, I came across an old request asking to support the default Esper configuration in order to ease the development of event patterns and queries. Camel-Extra is a sister project of ASF Camel that hosts components which are not compatible with the Apache license. Within that space, Esper is an LGPL-licensed library that supports complex event processing (CEP) and analytics on event series. The Camel component was generously contributed to Camel-Extra by James Strachan in November 2007.

Although Esper possesses only a small number of configuration parameters, it is sometimes quite useful to simplify event patterns and event processing language (EPL) statements by providing a small set of configuration parameters. Additionally, it might be useful to provide some tuning parameters to meet specific requirements. However, I do not intend to offer a detailed description of how to configure the Esper engine to meet your specific requirements, since you will find a comprehensive guide in the Esper documentation. I will rather write about how to use the current camel-esper component.

Using Camel-Esper to Query Event Streams

Before diving into the configuration example, I would like to provide an overview of how the camel-esper component can be configured in your route configuration in order to execute queries upon event streams, without neglecting the fact that you will always find the most recent documentation within the component description of Apache Camel. The component adheres to the overall Camel concept that defines a processing chain via an integration DSL, calling subsystems via endpoint URI configurations. Esper can be considered one of those subsystems within this context, which means that you need to configure an endpoint to be able to interact with the Esper library and framework.

Conceptual Overview: Calling Esper from Camel

Since Esper is embedded within Camel as a component, addressing it works like addressing any other subsystem of Camel. Fig. 1 provides an overview of the route configuration used as an example. Route 1 has a direct endpoint, which serves as an interface where any event producer can send messages in a synchronous invocation style. All messages consumed from this endpoint are passed to a configured Esper endpoint, identified via a name that represents the internal ID within the Camel context and serves as the addressable endpoint. The second part of the Esper configuration is a query or pattern, which queries the event streams coming through the specific Esper communication channel. Finally, after the evaluation has been executed, the message is passed within route 2 to a consuming direct endpoint.

Fig. 1: Conceptual route configuration

This simple example already shows that the logic to evaluate the event streams is encoded within the Esper endpoint. Esper basically offers two different options to write evaluation statements for event streams: a pattern language and an event query language. Both options have been integrated into the Esper component provided by camel-extra and will be introduced in the next two sections.

Esper Event Query Language Configuration

The first configuration example demonstrates how to configure the camel-extra Esper component endpoint to query an event stream via the event processing language. The query language is a SQL-like language, specifically designed to query event streams rather than database tables. The concept of a stream therefore replaces the commonly known concept of a table. Nevertheless, since events are nothing else than data, the existing SQL concepts of joins, filtering and aggregation via grouping can be effectively applied to streams as well.

In order to run your event queries based upon the event processing language, it is necessary to specify the eql option followed by the actual expression. In our case, we are looking for all events of type StockTick with the symbol AAPL, which results in the select statement of the endpoint configuration sketched below.
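
A sketch of both routes using the Camel Java DSL (the endpoint name cep and the event class are illustrative; please check the camel-extra component documentation for the exact URI syntax):

    import org.apache.camel.builder.RouteBuilder;

    public class EsperEqlRoute extends RouteBuilder {

        @Override
        public void configure() {
            // route 1: events sent to direct:events are forwarded into the Esper engine
            from("direct:events")
                .to("esper:cep");

            // route 2: the eql option selects matching events from the stream
            from("esper:cep?eql=select * from com.example.events.StockTick where symbol = 'AAPL'")
                .to("direct:result");
        }
    }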

Esper Event Pattern Language Configuration

The second example shows a configuration option for using the event pattern language. The pattern language is based upon university research, originally conducted within the “Rapid” project at Stanford University. The Esper implementation is based upon dynamic state trees and can be considered a so-called delta network, where only changes to the data are communicated across object boundaries. Additionally, changes are only propagated if the information is needed somewhere else. To optimise performance, Esper operates upon indices for data retrieval operations. The entire grammar of the pattern language is built on top of ANTLR, based on the Extended Backus-Naur Form (EBNF).

To enable the Esper pattern language, it is required to define the pattern option in your endpoint configuration, followed by the pattern expression you want to execute. In the example below, we are looking for every StockTick event that contains the symbol ‘AAPL’, since we want to retrieve all information related to Apple.
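
Sketched as a variation of route 2 from the previous example (again, the endpoint name and event class are illustrative):

    // route 2 using the pattern option instead of eql
    from("esper:cep?pattern=every tick=com.example.events.StockTick(symbol = 'AAPL')")
        .to("direct:result");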

One may have noticed that addressing an event object requires the full package name in order to acquaint Esper with the respective event type. Since typing the entire package name including the name of the actual Java object can be cumbersome, the following section introduces a different way to address event types.

Enabling the File-based Configuration for Esper

Esper contains an option to provide configuration via an external XML-based file. The purpose of the configuration file is, on the one hand, to simplify queries written in the EPL and pattern language and, on the other hand, to tune the engine behaviour to meet your individual requirements. Camel-extra’s Esper component supports configuration via the default configuration file.

To enable the configuration via XML in camel-esper, it is required to set the configured option to true, a flag that defaults to false. Specifying this parameter ensures that camel-esper looks up the esper.cfg.xml file in the root of your classpath (e.g. <project_home>/src/main/resources/esper.cfg.xml).

Having enabled the XML-based configuration, it is now possible to add a name for an event type and its corresponding class with a fully qualified name (i.e. including the package name). This way, Esper knows that the name refers to a specific class, so that the short name can be used within the EPL or pattern language, as sketched below. This example shows only a very limited set of configuration options provided by Esper. For a full reference, please refer to the configuration section of the Esper documentation.
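
With such an event type alias defined in esper.cfg.xml, the consuming endpoint from the earlier sketch can then use the short name (the alias StockTick and the endpoint name remain illustrative):

    // configured=true makes camel-esper load esper.cfg.xml from the classpath root,
    // so the query can use the alias StockTick instead of the fully qualified class name
    from("esper:cep?configured=true&eql=select * from StockTick where symbol = 'AAPL'")
        .to("direct:result");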

Summary

This article briefly introduces camel-esper, a component hosted within the camel-extra project. It outlines the integration and use of Esper within a Camel route configuration at a conceptual level and demonstrates how to adopt the EPL and pattern language in order to select events from an event stream. Finally, one of the recently added features, using the default Esper configuration via an XML file, concludes the article and gives some insight into how to optimise your Camel application when using Esper. Since the article introduces only the general usage concept, some further reading can be recommended.

  1. Apache Camel Component Concept
  2. Camel-Extra Esper Component
  3. Camel-Extra Project
  4. Esper Event Processing Language
  5. Esper Pattern Language
  6. Esper Configuration