Diagnosing Resource Hang Detected For Pool Message in Logs | WebLogic

Oracle WebLogic Server

During monthly maintenance of our WebLogic production servers, we reviewed the log files for unusual messages and found the following connection pool Info entries.

####<Apr 29, 2013 2:00:45 PM EDT> <Info> <JDBC> <seed_datatech> <seed2> <ACTIVE ExecuteThread: '31' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <1367258445862> <BEA-001128>
<Connection for pool "seed_datasource" closed.>
####<Apr 29, 2013 2:00:45 PM EDT> <Info> <Common> <seed_datatech> <seed2> <ACTIVE ExecuteThread: '29' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <1367258445863> <BEA-000634>
<Resource hang detected for pool "seed_datasource", group "DEFAULT_GROUP_ID". Waited 20,008 milliseconds where a typical test has been taking 58>
####<Apr 29, 2013 2:00:45 PM EDT> <Info> <JDBC> <seed_datatech> <seed2> <ACTIVE ExecuteThread: '14' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <1367258445863> <BEA-001128>
<Connection for pool "seed_datasource" closed.>
####<Apr 29, 2013 2:00:45 PM EDT> <Info> <JDBC> <seed_datatech> <seed2> <ACTIVE ExecuteThread: '28' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <1367258445864> <BEA-001128>
<Connection for pool "seed_datasource" closed.>

The Oracle documentation describes this message as follows:
BEA-000634: Resource hang detected for pool "{0}", group "{1}". Waited {2} milliseconds where a typical test has been taking {3}

Cause: Resource tests have been taking longer than expected.
Action: Correct condition that causes the resource tests to block
for extended periods of time.

Level: 1

Type: NOTIFICATION

Impact: Common

To get to the root cause we checked the following, but found nothing specific to the messages appearing in the logs.
1. AWR reports from the database
2. JDBC debug messages such as JDBCConn, JDBCInternal, and JDBCSQL (enabling them is sketched just below)
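
For reference, those debug categories can be switched on with JVM system properties. The snippet below is only a sketch for a Unix domain: the flag names follow the standard WebLogic debug scopes matching the categories listed above, but verify them against your WebLogic version's documentation, and the setDomainEnv.sh location is an assumption about your install.

# Assumed location: $DOMAIN_HOME/bin/setDomainEnv.sh (adjust for your installation)
# Append the JDBC debug scopes to the server start options, then restart the managed server
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.debug.DebugJDBCConn=true"
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.debug.DebugJDBCSQL=true"
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.debug.DebugJDBCInternal=true"
export JAVA_OPTIONS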

Finally, after many iterations, we arrived at the following explanation and the workaround to address it.

- The behaviour is an understandable, and sometimes crucial, protective measure that WebLogic employs to avoid serious hangs in JDBC calls when the network fails.
- In such cases one or more threads may hang uninterruptibly for minutes, so WebLogic keeps track of how long the connection test is taking.
- Normally the connection test takes only a few milliseconds; here at least one test has suddenly been hanging for over 20 seconds.
- WebLogic leaves any in-use connections untouched for roughly another minute, and only if an in-use connection shows no activity after that will it be destroyed out from under the hung application.
- So the best bet is to find out what is happening at the DBMS such that a pool connection test, as configured, could ever wait more than 10 seconds for an answer (a quick manual check is sketched after this list).
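
One rough way to observe that wait from the WebLogic host itself is to time a trivial query against the same database. The sketch below assumes SQL*Plus is installed on the server and uses placeholder credentials and a placeholder connect alias; the data source's actual test query may differ (for example SELECT 1 FROM DUAL or a dedicated test table).

# Placeholder user/password/alias; run from the WebLogic host during the problem window
# If this occasionally takes many seconds, the delay is on the network or database side
time echo "SELECT 1 FROM DUAL;" | sqlplus -S app_user/app_password@SEEDDB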

If you can answer that wait question, well and good. If you cannot determine the cause and the application is otherwise running fine despite these Info messages, there is a workaround to stop them from appearing in the logs.

You can configure a longer maximum test hang wait than the default of 10 seconds, set it to any larger value, or disable the check entirely.

These messages can be safely ignored if there is no impact on the performance, functionality, or operation of the deployed applications; they are Info-level messages and purely informational.

You can add one of the JVM arguments below: either increase the test wait to 30 seconds, or completely disable the pool's hang detection by setting the value to zero.

For a thirty second max wait: -Dweblogic.resourcepool.max_test_wait_secs=30

To turn off the pool’s hang detection functionality: -Dweblogic.resourcepool.max_test_wait_secs=0
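
As an illustration, on a Unix installation where the managed servers are started through the domain scripts, the argument can be appended in setDomainEnv.sh. The path below is an assumption; adjust according to how your managed servers are started (for example, via the Server Start arguments in the Admin Console instead).

# Assumed location: $DOMAIN_HOME/bin/setDomainEnv.sh
# Raise the maximum test hang wait to 30 seconds (or use 0 to disable the check)
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.resourcepool.max_test_wait_secs=30"
export JAVA_OPTIONS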

Note: Setting either of the above JVM arguments requires a restart of the managed server to take effect.

 
