Change Control

Automating the tedious and error prone DBA work involved with moving developers' changes to test and production.  Change Controls need to be done in a controlled and accurate process, but they do not need to make the developers wait, or occupy the DBA's time with mundane tasks.

The apply_pending.sh script and its supporting scripts completely eliminate the DBA from the majority of this work while keeping you informed by email.  If there is a problem you will know about it right away from the subject line of the email, and you can simply scroll down to see the details.

  • Automate the tedious and error prone DBA work of moving developers' changes to Production.
  • It's UNIX shell scripts!  Easily customized to your particular needs.
  • Developers can optionally apply their own scripts to Test and Quality databases.
  • Scripts are applied to production with the same process so there are no unexpected errors.
  • Email reports are sent to the developers and DBAs keeping them informed of changes.
  • Any errors are noted and brought to the attention of the developers and DBAs.
  • Backups are automatically run just prior to applying changes.
  • After scripts are applied a checkup of the database reports problems such as invalid objects.
  • Detailed logs are kept for each script as well as the historical activity.
  • Scripts are automatically reapplied after a database is cloned from production.
  • Developers can modify the list of scripts that will be applied after the next clone.
  • Production changes can be scheduled or staged and triggered on demand by any administrator.
  • Works at the schema level for databases that contain multiple schemas for multiple applications.

The normal corporate environment gives the developers full control to create, drop and update objects in the development database.  They normally keep track of their changes and then collect these into one or more SQL scripts. 

In some cases a quality control database exists, and so the developers will also have full control over the test database, where they can run their SQL scripts from development and merge code ready to move on to production with the code from other developers.  Once they test their code in the cleaner database and work out any issues, they email their SQL scripts to the DBA.

The DBA applies the scripts to the quality database, or to the test database if a quality database is not part of the cycle.  Only the DBA is allowed to run scripts in the final database prior to production, whether it be quality or test.  Developers are restricted from connecting to this database with the schema owners or with any account that has the ability to change database objects.  This ensures that no object change will be made to quality/test that is not done from a script that will later be applied to production.

The DBA is also responsible for applying the script to the production database and this task is often done after hours and may need to be coordinated with the activities of the business unit and application server administrators.

Warning

Automating the DBA's Change Control process relies greatly on uniform standards in database configuration, naming standards, backups, and clones.  It will require work to adapt the scripts described here, but it is possible with average skill.  These scripts are the result of years of organic growth.  They continue to evolve to meet the requirements of new versions of Oracle as well as changes in corporate controls.  What makes them complex is also what makes them so valuable, as they can be adapted as needed.  You will not find yourself spending $40,000 on a software package and $30,000 on an IBM consultant to install it, so that it can slowly dissolve into disuse only to be abandoned.

Download

Download and unzip: apply_pending.zip
 

How It Works - The Basics

The scripts and databases all run under AIX Unix, one set of scripts per host.  A single shared storage volume is NFS mounted on each host and also mapped to Windows via Samba.  Scripts normally arrive from developers attached to an email addressed to the DBA.  The DBA reviews the script, modifies it as needed, and saves it to a directory on the shared volume.  Since the shared volume is mounted on every Oracle database host, the script is now available to the appropriate databases wherever they reside.  There is no further need to copy the script.

The directory name is in the format of the host box where the database resides, the name of the database, and the name of the schema if the database hosts multiple schemas for distinct applications.  For example the directory RSTEST3_TEST_CASH is the CASH schema in the TEST database on the rstest3 host.  When the script is saved, a prefix is added which identifies the production database and the date that the script was received and applied.  All spaces are also removed and the extension is changed to .sql if needed.  For example an attached script named "update taxes.txt" would be renamed to "PROD_CASH_Feb12_update_taxes.sql" and saved to the RSTEST3_TEST_CASH directory.
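
Just to illustrate the convention, the save-and-rename step could look something like the following shell sketch (the directory, date format, and file names are only examples following the rules above, not the actual procedure):

# Sketch only: save an emailed script using the naming convention described above.
DIR=/oracle_share/docs/RSTEST3_TEST_CASH          # host_database_schema directory
PREFIX="PROD_CASH_$(date +%b%d)"                  # target production database and today's date
ORIG="update taxes.txt"                           # name of the attached script
NEW="${PREFIX}_$(echo "$ORIG" | tr ' ' '_' | sed 's/\.[^.]*$/.sql/')"
mv "$ORIG" "$DIR/$NEW"                            # e.g. PROD_CASH_Feb12_update_taxes.sql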

The script will never be copied from this directory.  When it is eventually applied to PROD, it will be called from the RSTEST3_TEST_CASH directory.  This ensures that the same script is applied to both TEST and PROD.  Only the DBAs have access to the shared storage, so we also ensure that the script is not modified by the developers.

The shared storage directory also contains a file named PENDING.txt.  This file is a list of the scripts that have been applied to the specific development/test/quality database but not yet applied to the production database.  While saving the script, it is convenient to copy its name to the clipboard, then open the PENDING.txt file and add the script's name to the bottom of the list.  Any script already in the list that has been applied to the current database will have a comment beside its name listing the last time it was applied and whether there were any ORA- errors in the log.  For example the PENDING.txt list in the RSTEST3_TEST_CASH directory may look like this:

# PENDING
PROD_CASH_Oct18_CASH_UTILITY_PKG.sql --Last Applied Oct18 05:07PM
PROD_CASH_Feb12_update_taxes.sql

It is important not to remove the "#" from any comment line, or else the comment will be considered a script name.  It is also important not to remove the "--Last Applied" comment, or else that SQL script will be considered "new" and applied again to the database.  Applying all of the scripts in the PENDING.txt list is appropriate after clones, but not normally done when one or more new scripts are being applied.

Scripts are applied in the order they are listed in the PENDING.txt file.  You can change the order and then reapply all of the scripts if necessary. This would commonly be done in conjunction with a clone so the results are repeatable when applied to production.

With the PENDING.txt list updated, the next step is to run the apply_pending.sh script on the appropriate host.  For example, to apply the new script only to the CASH schema of the TEST database on rstest3, we would log in to rstest3 as the oracle user and run:  apply_pending.sh TEST CASH -new

The "-new" parameter instructs apply_pending.sh to only apply scripts in the Pending list that are not followed by a "--Last Applied" comment.  If a backup is completely unnecessary we could run: apply_pending.sh TEST CASH -new -sb
The "-sb" instructs apply_pending.sh to skip the backup.  If applying script to a database that does not support multiple applications with multiple schemas then the command might be:  apply_pending.sh QTIPS -new -sb

Running apply_pending.sh On-Demand

Optionally we can avoid having to log in to the host box by utilizing the apply_pending_on_demand.sh script.  This script runs from crontab every minute and simply checks for the existence of a flag file on the host.  For example the flag file might be named "apply_pending_TEST_CASH.flag".  The flag file is created by a simple Windows VB script named "FlagApplyPending_TEST_CASH.vbs", which places the flag file on the host using ftp.
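
A minimal sketch of that idea is shown below; the crontab entry, script path, and parsing are illustrative, not the exact contents of apply_pending_on_demand.sh:

# crontab entry (every minute); the script location here is only an assumption:
# * * * * * /home/oracle/bin/apply_pending_on_demand.sh >/dev/null 2>&1

# Sketch only: look for a flag file, derive arguments from its name, and run apply_pending.sh.
FLAG_DIR=/u01/flag_ftp
for FLAG in $FLAG_DIR/apply_pending_*.flag; do
  [ -f "$FLAG" ] || continue
  ARGS=$(basename "$FLAG" .flag | sed 's/^apply_pending_//' | tr '_' ' ')   # e.g. "TEST CASH"
  EXTRA=""
  grep -q -- '-new' "$FLAG" && EXTRA="-new"      # optional parameter written by the VB script
  rm -f "$FLAG"
  apply_pending.sh $ARGS $EXTRA
done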

When FlagApplyPending_TEST_CASH.vbs is run, it parses its own name for the database name (TEST in this example) and the schema name (CASH) if a schema name is required.  It then finds the host box for the database.  Company standards name all production host boxes with a substring of "prod".  The script uses this to determine if the target database is production.  For a production database the script will simply populate a flag file with the OS username of the person that ran the script and ftp it to the appropriate host.

If the host is not production then the script will prompt you with the question "Only apply scripts that have not yet been applied?"  If you answer No, then all of the scripts in the Pending list will be applied.  Most likely you want to answer Yes, which will only apply the new scripts, and a parameter is appended to the flag file that is sent to the host box via ftp.  The information in the flag file will be parsed by apply_pending_on_demand.sh and passed as arguments to apply_pending.sh.  The apply_pending.sh script will also read the flag file in order to get the OS username of the person who ran the Windows VB script so it can send that person an email report with the results.

It's convenient to place a copy of FlagApplyPending_TEST_CASH.vbs in each of the shared storage directories.  To configure it you only need to change its name.  For example to flag the CASH schema on the TEST database the name of the VB script is FlagApplyPending_TEST_CASH.vbs, but to flag the TIPS database the name would be FlagApplyPending_TIPS.vbs.  The VB script reads and parses its name in order to derive its parameters.  It is necessary to have the Oracle client installed on the client from which you call the VB script, because it uses the tnsping utility to find the host where the database is running.  An account with ftp access is also required on each of the hosts, and this account and password are configured in the VB script.  The account only needs access to create the flag file in a directory such as /u01/flag_ftp, where the oracle account which runs apply_pending_on_demand.sh has full rights.  The oracle user on the host will take care of reading the flag file, deleting it, and then calling the apply_pending.sh script with the proper arguments.  To avoid using ftp, an alternative approach would be to have the VB script write the flag file to the Samba attached shared storage volume and alter the FLAG_DIR variable in apply_pending_on_demand.sh to use the appropriate Unix path.

Coordinating Change Control Activity with Windows Administrators and Developers

The Windows VB script and apply_pending_on_demand.sh script can be used to coordinate production SQL script runs with Windows administrators and developers making application changes on production servers.  Some change control tickets will specify that Pending SQL scripts should be run immediately before or after work done by the Windows administrators, requiring coordination of activities between the DBA and a Windows administrator who is often drawn from a pool, making it necessary for the DBA to wait until a specific administrator has been assigned.  With the apply on demand process, the DBA can configure a Windows VB script in a directory accessible by the Windows administrator and then update the change control ticket with instructions for the Windows administrator to run the script at their convenience.  When the Windows VB script is run it reads the OS username, which is the Windows administrator, and embeds that information in the flag file sent to the database host.  The apply_pending.sh script will test for the existence of a flag file, and if one is found it will read the OS username from the flag file and include that user in the list of users that will receive an email report with the details of the database changes upon completion.  In this way, the Windows administrator is notified when the database portion of the change is completed.

The same process can be used with developers who in some special cases control their own modifications to the application.  Some applications can also modify database structures but still require additional changes made by SQL scripts.  The DBA simply stages the appropriate scripts in the pending list and provides the developer with an appropriately named Windows VB script to trigger the execution on demand.
 

Allowing Developers to Submit SQL Scripts to Test/Quality

Some developers submit SQL scripts two or three times a day, frequently remove scripts from the pending list, and run clones.  For those developers it may be best for them and the DBA if the developers are able to submit and apply scripts on their own.

The process still prevents developers from directly modifying the database, it secures the submitted scripts to ensure they are not modified, and it tests the system to make sure there are no invalid objects, disabled constraints, or unusable indexes, that new objects are in their proper tablespaces, and so on.  If an ORA- error occurs it is reported to the developers and the DBAs for review.

Normally developers attach SQL scripts to an email and send it to the DBA; but to submit their own scripts the developer simply copies the SQL script(s) into a designated directory for a specific database.  For example, to apply a script to the quality TIPS database, or QTIPS, the developer copies the SQL script to a directory named QTIPS_script_drop where an Oracle domain user has full access.

From the developer's drop box directory a scheduled batch file moves the scripts to a script_in_box directory just below the database directory on the shared storage for the specific database.  For example, for the QTIPS database the directory would be RSTEST8_QTIPS/script_in_box.  The batch file runs on a client under an account that has access to both the developers' directory and the shared storage directory via Samba.  This isolates the developer from the shared storage directories.

On the database host, a crontab scheduled script named apply_pending_check_in_box.sh moves the scripts from the script_in_box to the parent directory and renames them to include a prefix of the target production database and the date.  The production target database is found by taking advantage of the company naming standard, which means that only the first letter needs to be removed.  For example the production database for QTIPS is TIPS.  For different standards or special cases it would not be difficult to utilize a case statement in the script, or even access a database, in order to identify a production database and its supporting databases.  For databases that support multiple schemas for multiple applications, a lookup list in the script is already used.

Any spaces in the original SQL scripts' names are also replaced by underscores.  Finally the new SQL script names are appended to the PENDING.txt file and the apply_pending.sh script is called with the "-new" parameter so only new scripts are applied.
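
Condensed to its essentials, apply_pending_check_in_box.sh does something like the following (a sketch only, under the naming standard described above; the paths and date format are examples):

# Sketch only: promote developer-dropped scripts and apply the new ones.
SID=QTIPS
DB_DIR=/oracle_share/docs/RSTEST8_QTIPS
PROD=$(echo $SID | cut -c2-)                       # naming standard: QTIPS -> TIPS
cd $DB_DIR/script_in_box || exit 0
FOUND=NO
for F in *; do
  [ -f "$F" ] || continue
  NEW="${PROD}_$(date +%b%d)_$(echo "$F" | tr ' ' '_' | sed 's/\.txt$/.sql/')"
  mv "$F" "$DB_DIR/$NEW"
  echo "$NEW" >> $DB_DIR/PENDING.txt               # add to the pending list
  FOUND=YES
done
[ "$FOUND" = "YES" ] && apply_pending.sh $SID -new # apply only the new scripts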

Optionally the developers can modify the PENDING.txt file by simply dropping a replacement for it into the same directory where they would drop SQL scripts.  Most of the time they would do this in order to remove one or more scripts from the PENDING.txt list.  This does nothing to remove the scripts from the database, but that can be done by a database restore or a new clone from the production database.  Both of these methods can be triggered by the developers, again making the process as responsive to the developers' needs as possible.
 

What apply_pending.sh Does

The apply_pending.sh script will first verify that it can find the scripts listed in the PENDING.txt list.  These scripts do not have to be in the current database's shared directory as all of the directories related to the production database will be searched.  However in most situations and especially when the developer has submitted the scripts, they will be in the current database directory, which is either the test or quality database.

Once the SQL scripts' existence is verified, a database backup will be made.  The apply_pending.sh script calls an external script named backup.sh which is the company's standard backup script, but the apply_pending.sh script can be modified to use any backup process.  For example, the default call is: "backup.sh hot QTIPS -snap=3 -v".  This creates a hot NetApp snapshot backup of the database volume that will be retained for 3 days.  The "-v" (verbose) parameter will generate an email to the DBAs and the primary contacts for the specific instance.  The email recipients are configured in text files on the host which are specified in the configuration section of the backup.sh file.
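
Wired into apply_pending.sh, the backup step might look roughly like this (a sketch only; the failure handling and the DBA_EMAIL variable are assumptions, not taken from the script):

# Sketch only: hot snapshot backup before any scripts are applied.
if [ "$SKIP_BACKUP" != "YES" ]; then
  backup.sh hot $SID -snap=3 -v
  if [ $? -ne 0 ]; then
    echo "Backup of $SID failed -- no scripts were applied." |
      mailx -s "apply_pending.sh $SID: BACKUP FAILED" $DBA_EMAIL
    exit 1
  fi
fi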

Each new SQL script is applied in the order it is listed.  A new script is identified as a script in the PENDING.txt file that does not have a comment of "--Last Applied".  As each SQL script is run, a log file for it is created in the shared storage directory for the current database, even if the SQL script is being read from a different directory.  Start and stop timestamps as well as the log for each script are concatenated into a single file that will be the body of an email report.  When every new SQL script has been run, the PENDING.txt list is updated to include the "--Last Applied" comment beside each of the new scripts.  This comment includes the timestamp that the script was applied, and "with errors" is added if the log file contains any ORA- error messages.  A file named historical_pending.log is also appended with the updated PENDING.txt file in order to maintain a history of the activity.
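
To make the flow concrete, here is a simplified sketch of the apply loop (the report and log file names and variable names are illustrative, and the real script does considerably more, including the schema-owner handling described later):

# Sketch only: apply each new script in PENDING.txt, log it, and record the result.
cp $DB_DIR/PENDING.txt $DB_DIR/PENDING.run
while read LINE; do
  case "$LINE" in "#"*|"") continue ;; esac                  # skip comments and blank lines
  echo "$LINE" | grep -q -- "--Last Applied" && continue     # already applied; skip when -new
  SCRIPT=$(echo "$LINE" | awk '{print $1}')
  LOG="$DB_DIR/${SCRIPT%.sql}.log"
  echo "=== $SCRIPT started $(date)" >> $DB_DIR/report.txt
  sqlplus -s "/ as sysdba" @"$DB_DIR/$SCRIPT" < /dev/null > "$LOG" 2>&1
  cat "$LOG" >> $DB_DIR/report.txt
  echo "=== $SCRIPT finished $(date)" >> $DB_DIR/report.txt
  STAMP="--Last Applied $(date '+%b%d %I:%M%p')"
  grep -q "ORA-" "$LOG" && STAMP="$STAMP with errors"
  sed "s|^$SCRIPT\$|$SCRIPT $STAMP|" $DB_DIR/PENDING.txt > $DB_DIR/PENDING.tmp &&
    mv $DB_DIR/PENDING.tmp $DB_DIR/PENDING.txt               # note the result beside the script
done < $DB_DIR/PENDING.run
rm -f $DB_DIR/PENDING.run
cat $DB_DIR/PENDING.txt >> $DB_DIR/historical_pending.log    # keep the running history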

Next the checkup.sh script is called, which runs the checkup.sql script against the database.  The checkup script makes one pass to compile any invalid objects and reports any objects that remain invalid.  It also reports phantom synonyms, objects in the wrong tablespace, disabled triggers, disabled constraints, schema owners with DBA access or the create database link privilege, tablespaces with low autoextend growth ability, excess undo and temp tablespace use, shared pool and buffer memory requirements, and many other parameters.  Like the backup.sh script, the checkup.sh script is written specifically for the company database standards, so it will not work properly on just any database.  However it is just a SQL script and can easily be adapted.  If the checkup script finds any problems it will, like the backup.sh script, send an email using the list configured on the host box.  The recipients include the DBA and most likely the primary contacts for the database.
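
As a small illustration of the kind of check involved, a checkup step could recompile and then list invalid objects like this (a sketch only, not the company's checkup.sql; the CASH schema and DBA_EMAIL variable are just examples):

# Sketch only: recompile invalid objects in a schema, then report any that remain invalid.
sqlplus -s "/ as sysdba" <<EOF > $DB_DIR/checkup.log
set pagesize 0 feedback off
exec dbms_utility.compile_schema('CASH', FALSE);
select owner || '.' || object_name || ' (' || object_type || ') is INVALID'
  from dba_objects where status = 'INVALID';
exit
EOF
grep -q INVALID $DB_DIR/checkup.log &&
  mailx -s "checkup: invalid objects found" $DBA_EMAIL < $DB_DIR/checkup.log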

When the SQL scripts are done, the complete report is emailed to the primary contacts for the affected database.  It acquires the email addresses from the same files used by the backup.sh and checkup.sh scripts, as well as addresses acquired from a database of contacts, and finally the user who may have triggered the run using the Windows VB script.

If the target database was a production database, then the apply_pending.sh script updates the PENDING.txt file by commenting out the applied scripts and adding a comment noting the time they were applied to production.  For example:

# PENDING
# PWP_Jan20_ACL_grant.sql --Applied to Production Jan20 05:11PM

Then apply_pending.sh finds the PENDING.txt file in the supporting test or quality database's directory and makes the same change to the SQL scripts that were applied to production.  In this way the SQL script is effectively removed from the test/quality pending list.
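
In shell terms, that bookkeeping might be done roughly like this (a sketch; the PROD_DIR and TEST_DIR variables are illustrative, not from the script):

# Sketch only: after a production run, comment out each applied script and note the time,
# in the production directory's PENDING.txt and in the supporting test/quality one.
STAMP="--Applied to Production $(date '+%b%d %I:%M%p')"
for DIR in $PROD_DIR $TEST_DIR; do
  sed "s|^.*$SCRIPT.*\$|# $SCRIPT $STAMP|" $DIR/PENDING.txt > $DIR/PENDING.tmp &&
    mv $DIR/PENDING.tmp $DIR/PENDING.txt
done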


Configuring apply_pending.sh

The apply_pending.sh script is tightly integrated with the company standards for database naming and configuration, as well as with the checkup.sh and backup.sh scripts that it calls.  However, it is just shell scripting, and adapting it is only a matter of adjusting the script to fit the environment.

The configuration section of the apply_pending.sh script is found just below the "# MAIN PROGRAM" break.  Below is a snip from that section and a description of each line follows.

      #Configuration
      EMAIL_LIST="$SCRIPT_DIR/sid_email.txt"
      FLAG_DIR="/u01/flag_ftp"
      REQUIRE_SCHEMA="DVLP:TEST:PROD DEV10:TST10:PRD10 DEV11:TST11:PRD11"
      ORATAB=/etc/oratab
      DOCS=/oracle_share/docs
      CAPP_CONNECTION=capp/topthis@prod
      RESTRICT_HOST_COMMAND=YES

EMAIL_LIST is a text file in the same directory as the apply_pending.sh script.  There is one email address per line, with a prefix that identifies that email with a specific database or "ALL" if it is to be identified with all databases.  Example contents of sid_email.txt:

      # You can comment out lines.
      # ALL|sid:email_address
      ALL:djackson@company.com
      DTIPS:ray.naser@company.com
      DTIPS:david.ducken@company.com
      DPTM:david.ducken@company.com
      DBQES:david.ducken@company.com
      DBQES:stan.thienhardt@company.com
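
For example, the recipients for a given database could be pulled from sid_email.txt with something along these lines (a sketch; the real parsing may differ, and the REPORT_FILE variable is an assumption):

# Sketch only: collect addresses tagged ALL: or with the target SID, skipping comment lines.
RECIPIENTS=$(grep -v '^#' $EMAIL_LIST | egrep "^(ALL|$SID):" | cut -d: -f2 | sort -u)
mailx -s "apply_pending.sh report for $SID" $RECIPIENTS < $REPORT_FILE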

FLAG_DIR is the host directory where the Windows VB script can write a flag file to trigger apply_pending.sh to run.  It is actually apply_pending_on_demand.sh, which runs from a crontab schedule, that reads the flag file, parses the information and calls the apply_pending.sh script.

REQUIRE_SCHEMA - there is a set of databases that are not dedicated to a specific application.  Instead they are composed of multiple schemas and each schema supports a different application.  For this set of databases it is necessary to identify the schema that is being updated because the directory on the shared storage volume uses the schema name as part of the directory name. For example, the directory for the CASH schema in the PROD database on the rsprod11 host is: RSPROD11_PROD_CASH.  The setting also identifies the related development, test and production databases. For example: "DVLP:TEST:PROD" are all related. They are the same version of Oracle and the CASH schema will be found in each.  When apply_pending.sh runs against PROD, it will use this list to lookup the name of the test database where it should search for the SQL scripts.
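
A sketch of how that lookup could work is shown below; the variable names are illustrative only:

# Sketch only: when the target SID is the production member of a group, find its test database.
TEST_SID=""
for GROUP in $REQUIRE_SCHEMA; do                   # e.g. DVLP:TEST:PROD
  if echo "$GROUP" | grep -q ":${SID}\$"; then     # SID matches the production (last) member
    TEST_SID=$(echo "$GROUP" | cut -d: -f2)        # the related test database, e.g. TEST
  fi
done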

ORATAB points to the oratab file on the host which is used to verify the database name parameter passed to the apply_pending.sh script.

DOCS is the path to the database directories on the shared storage. 

CAPP_CONNECTION - The sid_email.txt file on the host only contains email addresses related to the database; however, when the target is a schema in one of the databases which host multiple schemas for multiple applications, then we need to know who to email regarding changes to that schema.  This information happens to already exist in the Capacity Planning database, so we simply tap the capp.capp_contacts_vw view in that database for the information.

RESTRICT_HOST_COMMAND - Setting this to YES restricts (disables) the SqlPlus "host" command, which opens an OS shell from SqlPlus.  See more about this setting below.
 

Special Cases

There are three special cases that sometimes arise with changes to the database.

  1. You need to run host OS commands or programs like export/import, SqlLoader, etc.
  2. You need to change the host directory from the default, which will be the database's shared storage directory, to a directory that contains a large set of vendor supplied scripts because it would take too long to figure out which ones are actually used and list all of those in the Pending list.
  3. The SQL script runs something that must run as the schema owner and not the SYS user which is normally used.  This might be creating a private database link or it might be a script that makes use of the user_tables view instead of the dba_tables view.

Both #1 and #2 above can be solved by running OS commands as the oracle user that is running the apply_pending.sh script.

Running an OS Command in a SQL Script

The apply_pending.sh script runs as the oracle user on the host, and it changes the environment to the specified database before it runs any scripts.  To run OS commands you might normally use the SqlPlus "host" command which will shell out to the OS and run the command.  For example:  "host exp file=scott.dmp owner=scott userid=scott/tiger".  You can still do that, but you will first need to change the default configuration setting for RESTRICT_HOST_COMMAND in the apply_pending.sh script from "YES" to "NO".  The host command is restricted just in case some dynamic SQL in a script some day comes up with a command like: "! rm -fr *"   So instead of your host system getting a bunch of files deleted you will get the following message:
SP2-0738: Restricted command "! (HOST)" not available

A safer way is to add the OS commands as a header to a SQL script that is applied with apply_pending.sh.  To specify a command in the SQL script just prefix it with: "-- HOST COMMAND:"   You don't even need to have any SQL statement in the file.  The host commands will be parsed from the file and executed as a shell script before the same file is run as a SQL script.  Since the "-- HOST COMMAND:" lines are actually comments in SqlPlus they will be ignored during the SqlPlus run.   For example to export a table the top of the SQL script would be something like this:

-- HOST COMMAND: exp file=scott.dmp tables=scott.employee userid=scott/tiger
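
Under the hood, the host command lines can be pulled out and executed before the same file is handed to SqlPlus, roughly like this (a sketch only, not the exact apply_pending.sh code):

# Sketch only: execute any "-- HOST COMMAND:" lines first, then run the file in SqlPlus.
grep '^-- HOST COMMAND:' "$DB_DIR/$SCRIPT" | sed 's/^-- HOST COMMAND://' > /tmp/host_cmds.$$
if [ -s /tmp/host_cmds.$$ ]; then
  . /tmp/host_cmds.$$ >> "$LOG" 2>&1       # sourcing means a "cd" also applies to the SqlPlus run
fi
rm -f /tmp/host_cmds.$$
sqlplus -s "/ as sysdba" @"$DB_DIR/$SCRIPT" >> "$LOG" 2>&1   # HOST COMMAND lines are just comments here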

If you need to run a host command or two, perform some SQL commands and then run another host command; you will need to run two script files from the Pending list.  The first one can run the first host command/s and the SQL, and the second will run the follow-up host commands after the SQL has been completed.

The output from the host command will be included as part of the apply_pending.sh email report.  Keep that in mind when you use passwords as arguments to the host commands.

The default directory for output from the SQL scripts is normally the shared storage directory for the database.  For example: "RSTEST8_QTIPS".  If you have a large set of vendor supplied scripts to run, you can place them in a subdirectory under the database directory and then use a "cd" command to change the default directory.  For example you could place the vendor supplied scripts in "/oracle_share/docs/RSTEST8_QTIPS/Feb15_upgrade/".  You then need to find the main SQL script amongst the vendor scripts that calls all of the other scripts and copy it to the parent directory.  Then add a host command to the top of that script to change to the Feb15_upgrade directory.  For example, if the main script is named upgrade.sql, you would copy it to RSTEST8_QTIPS, rename it to TIPS_Feb15_upgrade.sql and add the following line to the top of the script:
-- HOST COMMAND:cd /oracle_share/docs/RSTEST8_QTIPS/Feb15_upgrade
Then add TIPS_Feb15_upgrade.sql to the Pending list, and run "apply_pending.sh QTIPS -new".   Note that you do not need to make any modification to the TIPS_Feb15_upgrade.sql script when it is applied to production because it will simply use the SQL scripts that are in the Feb15_upgrade subdirectory under the RSTEST8_QTIPS directory.  The log file however will be in the production directory as usual.

Running SQL Scripts as the Schema Owner

Developers often write their scripts while connected as the schema owner, so their code often does not qualify statements with the schema owner's name.  The easy fix is to train the developers to set the current_schema at the top of each SQL script, or just do it for them.  For example a SQL script that should run as the LEAD schema will need the following command added to the top of the script: "alter session set current_schema=LEAD;"

There are however occasions where setting the current_schema will not work, such as dynamic SQL that uses one of the "user" views such as "user_tables", or commands that must be run as the schema owner such as creating a private database link.  In those cases the apply_pending.sh script can connect as the schema owner.  To connect as the schema owner when using Oracle 10 or lower versions, you only need to specify the schema owner's name followed by a colon as a prefix to the name of the script in the Pending list.  For example, to run the PROD_CASH_Feb12_update_taxes.sql script as the CASH user, the Pending list would be:

# PENDING
CASH:PROD_CASH_Feb12_update_taxes.sql

The apply_pending.sh script first writes a script that will reset the CASH user's password using its encrypted value read from the dba_users table.  It then grants the schema owner the DBA role, sets the schema owner's password to "temp", connects as the schema owner, and then runs the previously written script to set the password back to whatever it was.  The DBA role will be revoked after the SQL script is run.
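
Conceptually, the pre-11g sequence looks like this (a sketch only, using the CASH example from above; the temporary spool file name is an assumption):

# Sketch only (Oracle 10g and earlier): save the password hash, run as the schema owner, restore it.
sqlplus -s "/ as sysdba" <<EOF
set pagesize 0 feedback off
spool /tmp/reset_cash_pw.sql
select 'alter user CASH identified by values ''' || password || ''';'
  from dba_users where username = 'CASH';
spool off
grant dba to CASH;
alter user CASH identified by temp;
exit
EOF
sqlplus -s CASH/temp @"$DB_DIR/$SCRIPT" > "$LOG" 2>&1   # run the pending script as the schema owner
sqlplus -s "/ as sysdba" <<EOF
@/tmp/reset_cash_pw.sql
revoke dba from CASH;
exit
EOF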

On Oracle 11 databases the encrypted password is no longer available.  In that case the schema owner and the password are added to the Pending list in the format:  schema_owner/password:SQL_script.  For example if the CASH schema owner's password was "treasury" then the example above would look like this:

# PENDING
CASH/treasury:PROD_CASH_Feb12_update_taxes.sql

Sample Reporting

Below is a sample email generated by apply_pending.sh


The historical_pending.log is simply a chronological list of the Pending list after each run of apply_pending.sh.  It is the resource to use when you want to see when a script was first applied or when it was applied to production.