OEM Ops Center - Operational Plans - Tricks and Tips #1
This is the first of a series of blogs pointing out some tricks and tips in the use of Operational Plans in Oracle Enterprise Manager Ops Center.
Ops Center Operational Plans, as you may or may not know, allow you to run scripts and prompt the user for values for variables at run time.
With a single prompted value you may think this is a trivial thing, but unless your script writes the variable to standard out or standard error, so that it gets captured in the Ops Center job log, you have no historical record of what the user entered. You should log the user input, whether it is just for the record, as an audit trail, or as a debugging aid. You could always echo each variable you request out to the log (a minimal sketch of that approach is below), but what is the chance that you will forget one? The easiest way to capture every variable is to include the "env" command in your script.
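For comparison, the echo-each-variable approach might look like the lines below; MY_VAR1 and MY_VAR2 are hypothetical names I am using for this walkthrough, not anything Ops Center defines for you. It works, but every new variable needs another line, which is exactly where things get forgotten.

# Log each user-entered variable individually (MY_VAR1/MY_VAR2 are example names only)
echo "INFO: MY_VAR1=${MY_VAR1}"
echo "INFO: MY_VAR2=${MY_VAR2}"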
Using "env" has the added advantage that you log not only the user-entered variables, but also the shell environment of the target server you are running on, because I am sure that on the 1000 servers you administer, no one has ever changed the PATH variable without you knowing it.
While adding "env " is enough, I have included a slightly nicer formatted module:
#!/bin/ksh

Log_Env () {
    # Log the shell environment for the record
    # Usage: Log_Env
    # This is really useful if you need to check the values entered at run time into the operational plan.
    echo "INFO: Dumping the shell environment for the record..... [****Start****]"
    env
    echo "INFO: Dumping the shell environment for the record..... [**** End ****]"
}

Command () {
    echo "INFO: Doing nothing as this is a test"
}

##############
##   Main   ##
##############

Log_Env
Command

# Do the rest of your script here
I log the shell environment to standard out because that works for me, but it has been pointed out to me that you could keep the script's own output on standard out and send diagnostic information, such as the "env" output, to standard error instead. It is a simple change to the above script, and I am still of two minds as to which is best, so choose whichever option (STDERR/STDOUT) suits you best.
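If you prefer the standard error route, a minimal sketch of the change is below; only the redirections are new, the rest of the function is the same as above.

Log_Env () {
    # Send the diagnostic dump to standard error, leaving standard out for the script's own output
    echo "INFO: Dumping the shell environment for the record..... [****Start****]" >&2
    env >&2
    echo "INFO: Dumping the shell environment for the record..... [**** End ****]" >&2
}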
So to see the script working:
First, load the Op Plan into your Enterprise Controller (EC). Give it a name and select a target type (Operating Systems in this case).
Load or cut and paste the script in.
Define some variables (These are examples, just for testing).
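As an example (these names are made up for this walkthrough, not anything Ops Center requires), you might define:

MY_VAR1 - first test value
MY_VAR2 - second test value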
Review the summary page and click finish in the wizard.
So we have our Operational Plan - Sample_Env_Script.
Then, let's run it on a couple of hosts:
I have chosen a group called "ProxyControllers" which has two hosts in it.
Fill in some values in the Additional Environment Variables fields.
Schedule the job.
Review the summary and apply.
Looking at the job output:
We see it has been successful on both hosts.
Clicking on a host, we can drill into the job detail for that host. And there, for all to see, recorded as part of the job log, are the values that the user entered at run time and the shell environment the script ran in. Both are great things to have if you ever have to debug one of your Op Plans.
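To give a feel for what ends up in the job log, the output looks something along these lines (the environment listing is abridged, MY_VAR1/MY_VAR2 are the hypothetical variables from this walkthrough, and the exact contents will vary from host to host):

INFO: Dumping the shell environment for the record..... [****Start****]
MY_VAR1=hello
MY_VAR2=world
PATH=/usr/bin:/usr/sbin
... (rest of the environment) ...
INFO: Dumping the shell environment for the record..... [**** End ****]
INFO: Doing nothing as this is a test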
If you click the export button and select full job log, you can see the job output for all the hosts that were part of the job. This is sometimes easier than clicking into each host.
Of course the full log can be saved to do with what you will.
I hope that this little tip helps you get more use out of Ops Center Operational Plans.
Regards,
Rodney