Bash Script for WebLogic log monitoring
As an administrator, I would suggest using the WebLogic Diagnostics module to filter logs into *-diagnostics.log; it is reliable and highly customizable. But sometimes, due to environmental issues, the diagnostics module stops filtering the logs properly. In that case I run the script below from crontab as a scheduled process to do the makeshift filtering until the diagnostics module is fixed and out of error.
The script below can be run on multiple files at the same time, but the final Diagnostics.log will be a single file even though the entries come from different log files. So it is suggested not to pass multiple log files as inputs to the script. I made it that way so the development team could recognize whether an issue came from AdminServer.log or from app-specific logs, but for general use do not pass multiple log file names for filtration (e.g. ./LogFilter.sh AdminServer.log MymanagedServer.log App466yu.log).
To run it per log file:
./LogFilter.sh AdminServer.log
Not recommended (but will work):
./LogFilter.sh AdminServer.log ManagedServer.log ManagedServer1.log
It will generate a diagnostics log containing any stuck threads, deadlocks, or unchecked exceptions it finds, with 4 extra lines below each matching error for easier debugging. This is customizable: replace every occurrence of grep -A4 in the script with -AN (N being any number) to increase the number of lines cut from the main file and pushed to the diagnostics log file.
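For example, to preview the effect of a larger context window before editing the script (the log file name here is just an illustration), you can run one of the ID greps by hand:

grep -E -A10 "WL-000337|BEA-000337" AdminServer.log

That prints each stuck-thread message with 10 trailing lines instead of 4.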
Issues with this type of filtration:
1. No automatic management of the log file size.
2. Generating the diagnostics log could take a long time, depending on the size of the input logs.
3. The output file needs manual rotation, or it could eat up all the hard disk space over time (a logrotate sketch follows this list).
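One way to handle the rotation, assuming logrotate is available and Diagnostics.log sits in /u01/logs (an illustrative path), is a small logrotate rule such as:

/u01/logs/Diagnostics.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}

copytruncate is used here so the script can keep appending to the same file name while rotation happens underneath it.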
Suggestion:
Use the above script when you have an issue with the WebLogic diagnostics module, or on production systems where installing the diagnostics module is prohibited. Be cautious while using it on production systems.
Please find the simple bash script below:
#!/bin/bash
# Filter logs for STUCK threads, deadlocks and unchecked exceptions.
# Matches are appended to Diagnostics.log together with the 4 lines that
# follow each match (change every -A4 below to adjust the context).
#
# Notes: the original post piped each grep through a first keyword search
# whose pattern was lost in the page formatting; filtering directly on the
# WebLogic message IDs below achieves the same result. LOGFILE also replaces
# the original LOGNAME variable, which clashes with the standard login-name
# environment variable.

Stuck() {
    grep -E -A4 "WL-000337|BEA-000337|WL-101020|WL-101017|WL-000802|BEA-101017" "$LOGFILE"
}

DeadLock() {
    grep -E -A4 "WL-000394|BEA-000394" "$LOGFILE"
}

UncheckedException() {
    grep -E -A4 "WL-000337|BEA-000337" "$LOGFILE"
}

Main() {
    Stuck
    DeadLock
    UncheckedException
}

if [ $# -eq 0 ]; then
    echo "Please provide the logfile name to search."
    exit 1
fi

for LOGFILE in "$@"; do
    Main >> Diagnostics.log
done

# Drop adjacent duplicate lines. Piping back into the same file
# (cat Diagnostics.log | uniq > Diagnostics.log) truncates the file
# before it is read, so go through a temporary file instead.
TMP=$(mktemp)
uniq Diagnostics.log > "$TMP" && mv "$TMP" Diagnostics.log
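Since the script is meant to run from crontab as a scheduled process, a sample crontab entry (all paths are illustrative, adjust to your domain layout) that runs the filter every hour could look like:

0 * * * * /u01/scripts/LogFilter.sh /u01/domains/mydomain/servers/AdminServer/logs/AdminServer.log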
Any improvements to the script are always welcome; please post them in the comments and I will update it.
WLS and OSB logs should be clean of exceptions and stack traces. Any exception or stack trace should be explained by the development team. To facilitate this, all stack traces should be extracted from the logs, put into a QC, and assigned to the appropriate team.
The first part of this is to create a script that can be run daily to extract all the stack traces and put them into a file for later review. A new file should be created on each run.
Please can you provide me with the script?
Hi Sarath,
You can use the script above to do that with little or no modification.
And if you want each and every exception to be parsed and logged, then you can use grep like below:
grep "WARN\|ERROR\|FATAL\|Exception\|at.*\.java\:.*" logFile.log
And to run it daily, create a cron job that runs the script once a day; a minimal sketch follows.
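Here is one way to wrap that grep for daily use, assuming the wrapper is saved as ExtractTraces.sh (the name and paths are illustrative); it writes the extracted traces to a new date-stamped file on each run, as requested:

#!/bin/bash
# Extract WARN/ERROR/FATAL lines and Java stack-trace frames from the
# given log into a fresh, date-stamped output file on every run.
LOGFILE="${1:?Usage: $0 <logfile>}"
OUTFILE="stacktraces_$(date +%Y-%m-%d).log"
grep "WARN\|ERROR\|FATAL\|Exception\|at.*\.java\:.*" "$LOGFILE" > "$OUTFILE"

# Example cron entry to run it daily at 06:00:
# 0 6 * * * /u01/scripts/ExtractTraces.sh /u01/domains/mydomain/servers/AdminServer/logs/AdminServer.log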