Java Application Scalability

Application scalability has become a lot more complex than buying a bigger Windows server or z/OS mainframe. It is not just a system's ability to handle user requests well as their number grows; it is mostly about managing system load, handling priorities, and remaining responsive under high load while scaling as linearly as possible. A key problem of large-scale systems is data integrity: as the number of transactions with write operations grows, the danger of data locks slowing down the system increases. Often the maximum number of concurrent users is taken as a measure of scalability, but that is a dangerous shortcut. The number and kind of transactions that must complete within a certain time frame is the only true measure of scalability: response times for a given transaction mix should remain below a limit at the expected peak transaction load.

For Java applications it is difficult and expensive to achieve predictable, scalable performance. Simply switching from a single-server environment to multiple servers can cause overall application throughput to collapse. Adding more data can cause the database to need progressively more resources. Java applications that respond instantly in test can be twenty times slower once loaded with concurrent users.

The only (very complex) way to truly scale Java applications is to use three server tiers: the first is an HTTP-driven GUI, the second a Java server that de/serializes the tables of the third, the database tier, into Java objects to perform the application logic. These three tiers are often referred to as clustering, which is incorrect. Clustering is not about fragmenting the application vertically but about parallelizing each server tier horizontally. Clustering can therefore only be implemented independently for each of the tiers and requires very different functionality in each. In the case of the Web tier, no server-to-server communication is necessary, whereas at the other extreme, the database tier must manage transactional consistency across its cluster. This means that the cost of scalability is dramatically different for each of those tiers, and management and tuning are completely different for each as well.

The HTTP tier performs SSL encryption and decryption duties and handles unique client connections by means of HTTP headers carrying 'sticky load balancing' information. HTTP requests are load-balanced across stateless servers and routed to a designated J2EE server for processing. The application has to support sticky load balancing from the Web tier to be able to cluster both the HTTP and Java tiers. Such Java applications exhibit intra-tier communication complexity that directly impairs scalable performance. Clustering the database server has to be a function of the database product used, but it is impossible to scale the Java server without implementing some form of local data caching. Common forms are coherent clustered caching, fully replicated caching, and partitioned caching.
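
As a minimal sketch of what sticky routing means in code (the class and method names are my own illustration, not from any real load-balancer product): the balancer hashes the session identifier so that every request of one client lands on the same Java server.

```java
import java.util.List;

// Sketch of 'sticky' routing: hash the session cookie so all requests
// of one client reach the same backend. Illustrative names only.
public class StickyBalancer {

    private final List<String> backends; // e.g. ["app1:8080", "app2:8080"]

    public StickyBalancer(List<String> backends) {
        this.backends = backends;
    }

    // Route by session id (e.g. the JSESSIONID cookie value); clients
    // without a session yet could be assigned round-robin instead.
    public String route(String sessionId) {
        int index = Math.floorMod(sessionId.hashCode(), backends.size());
        return backends.get(index);
    }
}
```

Note that this simple modulo scheme breaks stickiness when the backend list changes; that is exactly why clustered Java tiers need the session replication or caching mechanisms discussed below.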

Clustered caching maintains data in the Java application tier so that data access requests can be satisfied from the cache without loading the database. The cache must not grow too large and must provide data that is as current as necessary; not all data needs to be up to date. The cache expires data by LRU (least recently used) and elapsed-time algorithms. In clustered systems, servers notify each other about data modifications and either delete or refresh the outdated data page. Coherent clustered caching needs the same synchronization as the application logic to keep cache pages thread-safe, and a kind of two-phase commit on page changes is required in a distributed cache to maintain transactional data consistency.
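
A minimal sketch of the two expiry policies mentioned above, combining an LRU capacity bound with an elapsed-time (TTL) check on read. The class and its parameters are illustrative assumptions; a real coherent cache would additionally need the cluster-wide invalidation and two-phase commit described above.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Single-node sketch: LRU eviction via LinkedHashMap's access order,
// plus elapsed-time expiry checked on every read.
public class LruTtlCache<K, V> {

    private static final class Entry<V> {
        final V value;
        final long storedAt = System.currentTimeMillis();
        Entry(V value) { this.value = value; }
    }

    private final long ttlMillis;
    private final Map<K, Entry<V>> map;

    public LruTtlCache(final int maxEntries, long ttlMillis) {
        this.ttlMillis = ttlMillis;
        // accessOrder = true turns the LinkedHashMap into an LRU list;
        // removeEldestEntry evicts once the capacity is exceeded.
        this.map = new LinkedHashMap<K, Entry<V>>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Entry<V>> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public synchronized void put(K key, V value) {
        map.put(key, new Entry<>(value));
    }

    public synchronized V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() - e.storedAt > ttlMillis) {
            map.remove(key); // elapsed-time expiry
            return null;
        }
        return e.value;
    }
}
```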

A fully replicated cache does not have these problems of clustered caching; instead it shares modifications with the other members of the cluster in a "push" model. While data access has no measurable latency, a transaction should only be closed when the data from the cache has been serialized to the database tier. This is however mostly done with lazy writes and is thus risky. The other problem is data locking: two servers can in principle write to the same data at the same time and cause conflicts at transaction close.
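
To make that conflict at transaction close concrete, here is a minimal sketch (all names are my own, not from a specific cache product) of optimistic versioning: a commit is rejected when another server has modified the entry since it was read.

```java
import java.util.concurrent.atomic.AtomicReference;

// With a fully replicated cache, two servers can update their local
// copy of the same entry concurrently; a commit must compare versions
// and reject the loser. Real products would also push the winning
// value to the other cluster members.
public class VersionedEntry<V> {

    public static final class Versioned<T> {
        public final long version;
        public final T value;
        Versioned(long version, T value) { this.version = version; this.value = value; }
    }

    private final AtomicReference<Versioned<V>> current =
            new AtomicReference<>(new Versioned<>(0L, null));

    public Versioned<V> read() {
        return current.get();
    }

    // Returns false if another server committed first: the caller must
    // re-read, re-apply its change, and retry (or abort the transaction).
    public boolean commit(long readVersion, V newValue) {
        Versioned<V> seen = current.get();
        if (seen.version != readVersion) {
            return false; // concurrent write detected at transaction close
        }
        return current.compareAndSet(seen, new Versioned<>(readVersion + 1, newValue));
    }
}
```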

The solution is the partitioned cache, where each cluster server owns one unique fragment of the complete data set. All data can be cached on the other servers, but writes have to be synchronized with the owning server. Because communication is any-to-any, partitioned caches scale linearly with the number of requests and the data volume as you add servers. Replicating data from the owner and de-serializing tables into objects adds substantial latency, so the application servers typically 'object-cache the database-cache' to avoid the additional de-serialization overhead for repeated object requests. The cache can load many objects from a single cache page in one I/O request rather than reading multiple table entries.
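
A minimal sketch of the ownership rule (illustrative names; real products use consistent hashing so that adding a server relocates only a fraction of the data): every node computes the same owner for a key, serves reads from its local replica, and forwards writes to the owner in one network hop.

```java
// Partition ownership: each key maps deterministically to exactly one
// owning server. Reads can be local; writes go through the owner.
public class PartitionedCache {

    private final String[] servers; // cluster members, e.g. {"node1", "node2"}
    private final String self;      // name of the local node

    public PartitionedCache(String[] servers, String self) {
        this.servers = servers;
        this.self = self;
    }

    // Every node computes the same owner for a given key.
    public String ownerOf(Object key) {
        return servers[Math.floorMod(key.hashCode(), servers.length)];
    }

    public void write(Object key, Object value) {
        if (ownerOf(key).equals(self)) {
            storeLocally(key, value);
        } else {
            forwardToOwner(ownerOf(key), key, value); // one network hop
        }
    }

    private void storeLocally(Object key, Object value) { /* local store */ }
    private void forwardToOwner(String owner, Object key, Object value) { /* RPC */ }
}
```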

However, a substantially growing number of applications no longer read their data from a database. Particularly process-oriented applications (and that should be all of them) require data synchronization with application silos via backend interfaces such as SOA or MQSeries. More often than not such interfaces require stateful conversations. Compared to database access, Web-services-based communication is at least an order of magnitude worse in performance, response time, and throughput. Much of that has to do with the problems of XML-defined data structures: the weak, ambiguous, and non-canonical XSD-based data definitions and the resultant parsing and validation. To reduce the number of transfers, SOA requests often carry a lot of incidental data, because it might be needed, to avoid multiple data accesses. Obviously that is similar to creating a kind of SOA cache, but without any of the above-mentioned mechanisms for data concurrency. Data provided by Web services is in principle always out of date. Web-service transactions are so slow that they cannot be used for applications that involve user interaction, but only for straight-through processes without user intervention. Implementing clustered caching for SOA Web services creates substantial problems, mostly because there is no defined mechanism for push updates and lazy database writes. All of it has to be implemented manually by the Java programmers and is mostly hard-coded and application-specific.
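
A minimal sketch of such a hand-rolled 'SOA cache' (the request key, TTL, and fetch method are illustrative assumptions): responses are memoized for a fixed time window because the backend offers no push invalidation, so staleness within the window is accepted by design.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Memoize Web-service responses for a fixed TTL. There is no defined
// invalidation mechanism, so cached data may be out of date by design.
public class SoaResponseCache {

    private static final class Cached {
        final String payload;
        final long fetchedAt = System.currentTimeMillis();
        Cached(String payload) { this.payload = payload; }
    }

    private final long ttlMillis;
    private final Map<String, Cached> byRequest = new ConcurrentHashMap<>();

    public SoaResponseCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public String call(String requestKey) {
        Cached hit = byRequest.get(requestKey);
        if (hit != null && System.currentTimeMillis() - hit.fetchedAt < ttlMillis) {
            return hit.payload; // possibly stale -- no push invalidation exists
        }
        String fresh = invokeWebService(requestKey); // slow: XML parse + validate
        byRequest.put(requestKey, new Cached(fresh));
        return fresh;
    }

    private String invokeWebService(String requestKey) {
        // placeholder for the real SOAP/REST call
        return "<response for='" + requestKey + "'/>";
    }
}
```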

Measuring and predicting hardware and software performance therefore becomes a very complex task. During my years at IBM, one of the key elements of designing a well-working mainframe environment was the balanced-system approach: the right mix of CPU power, RAM, and disk I/O was needed to get the most bang for the buck. Today, the network bandwidth needed for the various intricacies of each of the tiers becomes the key element, while HW is regarded as a cheap commodity. The immense overhead involved in juggling Web/HTTP clusters, Java-tier clusters, and expensive high-availability clusters for the database tier requires huge amounts of commodity HW that is mostly not balanced to achieve scalable applications; most systems simply provide enough of everything to work. The idea of Cloud computing is about providing virtualized resources to such tiered systems, adding yet more overhead for the virtualization. Hmm, that does not sound very professional or well thought out to me …

CONCLUSION: Could the substantial overhead of sticky load balancing, transaction-safe Java caching, and database clustering, plus the substantial overhead of parsing and validating the XML data for SOA, not be avoided? All of the above are reasons why I chose a different route for the Papyrus Platform. Rather than accept the immense complexity of multi-layered caches, with multiple conversions from tables to cache pages to objects and back, I decided to collapse the horizontal structures and work with a purely vertical object model from definition to storage. That enables the use of the meta-data repository for application life-cycle management and substantially simplifies systems management and tuning.

The proxy replication mechanism of the object-relational database of the Papyrus Platform uses a partitioned caching concept: there is always a unique data owner, and each server node uses a replicated copy that is either pull- or push-updated as per the proxy definition. The objects do not have to be de/serialized because they are cached, replicated, and stored in binary as-is. The same mechanism works transparently for all objects that are populated via Web services or other backend interfaces. The drawback is that during testing or at low load it may not seem as fast as a Java application, but in contrast it does scale linearly, without additional programming or software, as you add more commodity HW. Papyrus uses the same concepts across all servers because no tiers are necessary, and it also does not need virtualization.
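
As a generic illustration of the pull/push proxy idea (my own sketch, not actual Papyrus code): a proxy holds a replicated copy that is either refreshed on access ("pull") or updated by the owner whenever the master copy changes ("push").

```java
import java.util.function.Consumer;

// Generic sketch of a replicated proxy with a configurable update mode.
// All names are illustrative; a real implementation would refresh a
// pull proxy only when its copy is stale rather than on every access.
public class ObjectProxy<V> {

    public enum UpdateMode { PULL, PUSH }

    public interface Owner<T> {
        T fetchCurrent();                  // used in PULL mode
        void subscribe(Consumer<T> listener); // used in PUSH mode
    }

    private final UpdateMode mode;
    private final Owner<V> owner; // node that holds the unique master copy
    private volatile V copy;

    public ObjectProxy(UpdateMode mode, Owner<V> owner) {
        this.mode = mode;
        this.owner = owner;
        if (mode == UpdateMode.PUSH) {
            owner.subscribe(updated -> copy = updated); // owner pushes changes
        }
    }

    public V get() {
        if (mode == UpdateMode.PULL || copy == null) {
            copy = owner.fetchCurrent(); // refresh the replica on access
        }
        return copy; // binary object used as-is, no de/serialization step
    }
}
```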
