Java Monitoring Tools For Performance


To gain insight into the JVM itself, Java monitoring tools are required. A number of tools come with the JDK itself. Here we will discuss the freely available Java monitoring tools for monitoring JVM health.
In this article we will cover the following broad areas using the Java monitoring tools for a health checkup of the JVM.

1. Basic VM information
2. Thread information
3. Class information
4. Live GC analysis
5. Heap dump postprocessing
6. Profiling a JVM

The following Java monitoring tools are available in the default JDK provided by Oracle.

jcmd

Prints basic class, thread, and VM information for a Java process. This is suitable for use in scripts; it is executed like this:

% jcmd process_id command optional_arguments

Supplying the command help will list all possible commands, and supplying help <command> will give the syntax for a particular command.
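
For example, assuming an illustrative process id of 9999, the following commands list the available jcmd commands and then show the syntax of one of them:

% jcmd 9999 help
% jcmd 9999 help GC.class_histogram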

jconsole

Provides a graphical view of JVM activities, including thread usage, class usage, and GC activities.

jhat

Reads and helps analyze memory heap dumps. This is a postprocessing utility.

jmap

Provides heap dumps and other information about JVM memory usage. Suitable for scripting, though the heap dumps must be used in a postprocessing tool.

jinfo

Provides visibility into the system properties of the JVM, and allows some system properties to be set dynamically. Suitable for scripting.

jstack

Dumps the stacks of a Java process. Suitable for scripting.

jstat

Provides information about GC and class-loading activities. Suitable for scripting.

jvisualvm

A GUI tool to monitor a JVM, profile a running application, and analyze JVM heap dumps (which is a postprocessing activity, though jvisualvm can also take the heap dump from a live program).

Now that we know brief details about the available tools and their capabilities, let us go through the individual areas of Java monitoring.

Basic VM Information

JVM tools can provide basic information about a running JVM process: how long it has been up, what JVM flags are in use, JVM system properties, and so on.

Uptime
The length of time the JVM has been up can be found via this command:
% jcmd process_id VM.uptime
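
The output is the process id followed by the uptime in seconds; the values below are only illustrative:

% jcmd 9999 VM.uptime
9999:
12747.204 s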

System properties

The set of items in System.getProperties() can be displayed with either of these commands:

% jcmd process_id VM.system_properties

or

% jinfo -sysprops process_id

This includes all properties set on the command line with a -D option, any properties dynamically added by the application, and the set of default properties for the JVM.
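
For example, if an application were started with a hypothetical -D property (the name app.environment below is purely illustrative), it would appear in that output:

% java -Dapp.environment=staging -jar myapp.jar
% jcmd 9999 VM.system_properties | grep app.environment
app.environment=staging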

 JVM version

The version of the JVM is obtained like this:

% jcmd process_id VM.version

JVM command line
The command line can be displayed in the VM summary tab of jconsole, or via jcmd:

% jcmd process_id VM.command_line

JVM tuning flags
The tuning flags in effect for an application can be obtained like this:

% jcmd process_id VM.flags [-all]

 

A useful way to determine what the flags are set to on a particular platform is to execute this command:

% java other_options -XX:+PrintFlagsFinal -version
...
uintx InitialHeapSize   := 4169431040   {product}
 intx InlineSmallCode   = 2000          {pd product}
...

You should include all other options on the command line because some options affect others, particularly when setting GC-related flags. This will print out the entire list of JVM flags and their values (the same as is printed via the VM.flags -all option to jcmd for a live JVM).

Here is how to retrieve the values of all the flags in the process:

% jinfo -flags process_id

With the -flags option, jinfo will provide information about all flags; otherwise it prints only those specified on the command line. The output from either of these commands isn’t as easy to read as that from the -XX:+PrintFlagsFinal option, but jinfo has other features to keep in mind.
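
One such feature is that jinfo can change the value of certain flags in a running JVM. As a sketch (this works only for flags the JVM marks as manageable; PrintGCDetails is used here purely as an illustration):

% jinfo -flag PrintGCDetails 9999
-XX:-PrintGCDetails
% jinfo -flag +PrintGCDetails 9999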

Thread Information
jconsole and jvisualvm display information (in real time) about the number of threads running in an application. It can be very useful to look at the stack of running threads to determine if they are blocked.

The stacks can be obtained via jstack:

% jstack process_id

Stack information can also be obtained from jcmd:

% jcmd process_id Thread.print
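
Both commands print one entry per thread, including its state; a quick way to count blocked threads from a script, for example, is to filter that output:

% jstack 9999 | grep -c "java.lang.Thread.State: BLOCKED"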

Class Information

Information about the number of classes in use by an application can be obtained from jconsole or jstat.

jstat can also provide information about class compilation.

You can check the jstat command options on the Oracle website.
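
As a quick sketch, the relevant jstat options look like this (the process id and the 1000 ms sampling interval are illustrative):

% jstat -class 9999 1000
% jstat -compiler 9999

The -class option reports the number of classes loaded and unloaded and the space they consume, sampled here every second; the -compiler option reports JIT compilation statistics.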

Live GC Analysis

Virtually every monitoring tool reports something about GC activity. jconsole displays live graphs of the heap usage; jcmd allows GC operations to be performed; jmap can print heap summaries or information on the permanent generation or create a heap dump; and jstat produces a lot of different views of what the garbage collector is doing.
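
For example, jstat's -gcutil option prints a live view of the percentage utilization of each heap area along with GC counts and times (the process id, the 1000 ms interval, and the 10 samples below are illustrative):

% jstat -gcutil 9999 1000 10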

The JVM provides four different algorithms for performing GC.

1. The serial garbage collector
The serial garbage collector is the simplest of the four. This is the default collector if the application is running on a client-class machine (32-bit JVMs on Windows or single-processor machines). The serial collector uses a single thread to process the heap. It will stop all application threads as the heap is processed (for either a minor or full GC). During a full GC, it will fully compact the old generation. The serial collector is enabled by using the -XX:+UseSerialGC flag (though usually it is the default in those cases where it might be used).

2. The throughput collector

This is the default collector for server-class machines (multi-CPU Unix machines, and any 64-bit JVM). The throughput collector uses multiple threads to collect the young generation, which makes minor GCs much faster than when the serial collector is used.

3. The CMS collector

The CMS collector is designed to eliminate the long pauses associated with the full GC cycles of the throughput and serial collectors. CMS stops all application threads during a minor GC, which it also performs with multiple threads. Notably, though, CMS uses a different algorithm to collect the young generation (-XX:+UseParNewGC) than the
throughput collector uses (-XX:+UseParallelGC).
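
As a sketch, on those JVM versions the CMS collector (together with its young-generation algorithm) is enabled like this:

% java -XX:+UseConcMarkSweepGC -XX:+UseParNewGC other_options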

4. The G1 collector

The G1 (or Garbage First) collector is designed to process large heaps (greater than about 4 GB) with minimal pauses. It divides the heap into a number of regions, but it
is still a generational collector. Some number of those regions comprise the young generation, and the young generation is still collected by stopping all application threads
and moving all objects that are alive into the old generation or the survivor spaces. As in the other algorithms, this occurs using multiple threads.
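
G1 is enabled with the -XX:+UseG1GC flag, and a pause-time goal can optionally be supplied as a hint (the 200 ms value below is only an illustration):

% java -XX:+UseG1GC -XX:MaxGCPauseMillis=200 other_options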

You can check more in the Java garbage collectors article.

Below are few articles for GC analysis.

Steps To Monitor garbage collection using jstat

Full GC monitoring

GC collectors comparison

Heap Dump Postprocessing

Heap dumps can be captured from the jvisualvm GUI, or from the command line using jcmd or jmap. The heap dump is a snapshot of the heap that can be analyzed with various tools, including jvisualvm and jhat. Heap dump processing is one area where third-party tools have traditionally been a step ahead of what comes with the JDK.

GC logs and the tools discussed are great at understanding the impact GC has on an application, but for additional visibility, we must look into the heap itself. The tools discussed in this section provide insight into the objects that the application is currently using.
Most of the time, these tools operate only on live objects in the heap—objects that will be reclaimed during the next full GC cycle are not included in the tools’ output. In some cases, tools accomplish that by forcing a full GC, so the application behavior can be affected after the tool is used. In other cases, the tools walk through the heap and report live data without freeing objects along the way. In either case, though, the tools require some amount of time and machine resources; they are generally not useful during measurement of a program’s execution.

Heap Analysis must be performed to know which kinds of objects are consuming large amounts of memory. The easiest way to do that is via a heap histogram. Histograms are a quick way to look at the number of objects within an application without doing a full heap dump (since heap dumps can take a while to analyze, and they consume a large amount of disk space). If a few particular object types are responsible for creating memory pressure in an application, a heap histogram is a quick way to find that.
Heap histograms can be obtained by using jcmd (here with process ID 9999):

% jcmd 9999 GC.class_histogram
9999:
num #instances #bytes class name
---------------------------------------------
1: 789087 31563480 java.math.BigDecimal
2: 237997 22617192 [C
3: 137371 20696640 <constMethodKlass>
4: 137371 18695208 <methodKlass>
5: 13456 15654944 <constantPoolKlass>
6: 13456 10331560 <instanceKlassKlass>
7: 37059 9238848 [B
8: 10621 8363392 <constantPoolCacheKlass>

In a heap histogram, Klass-related objects are often near the top; those are the metadata objects from loading the classes. It is also quite common to see character arrays ([C) and String objects near the top, as these are the most commonly created Java objects.

Byte arrays ([B) and object arrays ([Ljava.lang.Object;) are also quite common, since classloaders store their data in those structures. (If you’re unfamiliar with the syntax here, it comes from the way the Java Native Interface (JNI) identifies object types; see the JNI reference documentation for more details.)

GC.class_histogram includes only live objects, though the command does not force a full GC.

Similar output is available by running this command:

% jmap -histo process_id

The output from jmap includes objects that are eligible to be collected (dead objects).

To force a full GC prior to seeing the histogram, run this command instead:

% jmap -histo:live process_id

Histograms are quite small, so gathering one for every test in an automated system can be quite helpful. Still, because they take a few seconds to obtain, they should not be taken during a performance measurement steady state.
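
As a minimal sketch of such automation, assuming a hypothetical test application whose process can be located by name, each test run could save a histogram snapshot to a timestamped file:

% jcmd $(pgrep -f MyTestApp) GC.class_histogram > histograms/run_$(date +%s).txt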

Heap Dumps

Histograms are great at identifying issues caused by allocating too many instances of one or two particular classes, but for deeper analysis, a heap dump is required. There are many tools that can look at heap dumps, and most of them can connect to a live program to generate the dump. It is often easier to generate the dump from the command line, which can be done with either of the following commands:

% jcmd process_id GC.heap_dump /path/to/heap_dump.hprof

or

% jmap -dump:live,file=/path/to/heap_dump.hprof process_id

Including the live option in jmap will force a full GC to occur before the heap is dumped. That is the default for jcmd, though if for some reason you want those other (dead) objects included, you can specify -all at the end of the jcmd command line. Either command creates a file named heap_dump.hprof in the given directory; a number of tools can then be used to open that file. Three of the most common are:

jhat

This is the original heap analyzer tool; it reads the heap dump and runs a small HTTP server that lets you look at the dump through a series of web page links.
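
For example, pointing jhat at the dump produced above parses the file and serves the analysis pages over HTTP (7000 is jhat's default port, shown explicitly here):

% jhat -port 7000 /path/to/heap_dump.hprof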

jvisualvm

The monitor tab of jvisualvm can take a heap dump from a running program or open a previously produced heap dump. From there you can browse through the heap, examining the largest retained objects and executing arbitrary queries against the heap.

You can find more details about jvisualvm in its documentation.

mat

The open source Eclipse Memory Analyzer Tool (mat) can load one or more heap dumps and perform analysis on them. It can produce reports that suggest where problems are likely to be found, and it too can be used to browse through the heap and execute SQL-like queries against the heap.

Below is a good article on analysing the heap dump using MAT.

http://eclipsesource.com/blogs/2013/01/21/10-tips-for-using-the-eclipse-memory-analyzer/

Profiling a JVM

Profilers are the most important tool in a performance analyst’s toolbox. There are many profilers available for Java, each with its own advantages and disadvantages. Profiling is one area where it often makes sense to use different tools—particularly if they are sampling profilers. One sampling profiler may find different problems than another one, even on the same application.

Almost all Java profiling tools are themselves written in Java and work by “attaching” themselves to the application to be profiled—meaning that the profiler opens a socket (or other communication channel) to the target application. The target application and the profiling tool then exchange information about the behavior of the target application.
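
That attachment is typically done through a JVM agent. As a hedged sketch, a native profiling agent would be loaded with the -agentpath option; the library path and its port parameter below are purely illustrative and depend entirely on the profiler in use:

% java -agentpath:/path/to/profiler_agent.so=port=5140 other_options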

Java Mission Control

The commercial releases of Java 7 (starting with 7u40) and Java 8 include a new monitoring and control feature called Java Mission Control. This feature will be familiar to users of JDK 6–based JRockit JVMs (where the technology originated), since it is part of Oracle’s merging of technologies for Java 7. Java Mission Control is not part of the open-source development of Java and is available only with a commercial license (i.e., the same procedure as for competitive monitoring and profiling tools from other companies).

The Java Mission Control program (jmc) starts a window that displays the JVM processes on the machine and lets you select one or more processes to monitor.

More information about the Java Mission Control program (jmc) is available in the Oracle documentation.

 

Java Flight Recorder

The key feature of Java Mission Control is the Java Flight Recorder (JFR). As its name suggests, JFR data is a history of events in the JVM that can be used to diagnose the past performance and operations of the JVM.
The basic operation of JFR is that some set of events are enabled (for example, one event is that a thread is blocked waiting for a lock). Each time a selected event occurs, data about that event is saved (either in memory or to a file). The data stream is held in a circular buffer, so only the most recent events are available. Java Mission Control can then display those events, either taken from a live JVM or read from a saved file, and you can perform analysis on those events to diagnose performance issues.

All of that—the kind of events, the size of the circular buffer, where it is stored, and so on—is controlled via various arguments to the JVM, via the Java Mission Control GUI, and by jcmd commands as the program runs. By default, JFR is set up so that it has very low overhead: an impact below 1% of the program’s performance. That overhead will change as more events are enabled, or as the threshold at which events are reported is changed, and so on.
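
As a sketch on those commercial Java 7u40+/Java 8 builds, JFR must first be unlocked on the application's command line, and a recording can then be started with jcmd (the duration and file path below are illustrative):

% java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder other_options
% jcmd 9999 JFR.start duration=60s filename=/path/to/recording.jfr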
