BOX ship - rationale and FAQ - database cleaning

Beginning of Expedition Database Cleaning

The intent of these processes is to

  • Honor moratorium (i.e. remove previous expedition's data and content that new participants have no business using).

  • Establish a variety of setups required for a new expedition (project) and its personnel to do their data management jobs.

Not all of the activities discussed here are required for every expedition. The intent is also to provide history and background for exceptional cases or variations that do come up. Sometimes the process is more creative than routine. We want you to have a foundation for those times.

Q: Where should I run these processes?
A: It is recommended to conduct BOX database cleaning processes on a host other than your laptop.

  • If you think your counterpart may need to check on the process when they come on shift, consider using a shared profile like SHIP\daq on the DEV host or the BUILD (code and dependency repositories) host.

  • If you have no counterpart, consider using your Active Directory profile on the DEV host or the BUILD host.

The rationale is that some of these processes take a long time, and you may not want to tie up your laptop while waiting. Historically the processes which deleted result records and defragmented storage would run for 9-14 hours in the worst cases. On current ODAs and versions of Oracle the longest processes we have take no more than 90 minutes to complete.

Q: What tool should I use to run BOX database cleaning and most other data management processes?
A: SQL Developer.

All the instructions and explicit details in the procedural notes expect that a copy of SQL Developer is configured for appropriate access via your DBA credentials--whether that be on your laptop (not recommended), or a copy of the tool installed on the DEV or BUILD hosts. It is your responsibility to be able to re-establish that connectivity if it is broken. We frequently update tools like SQL Developer to keep pace with Oracle technology changes and security fixes.

Formerly these instructions were written to generate log files that you and your colleagues could inspect. We have not done that in this process since Feb 2023 (Exp 398).

All TRANSFER procedures have been revised to record activity to TRANSFER.LOG. This provides some bread-crumbs for reviewing past activity. Several LIMS procedures and functions have also been revised to record activity to this same log.
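If you want to review that bread-crumb trail, a minimal query is enough. The column layout of TRANSFER.LOG is not documented here, so describe the table first and add an ORDER BY on its timestamp column as appropriate; a sketch:

describe transfer.log
select * from transfer.log;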

Q: A database process is taking too long and I want another way to check what's happening.
A:

  • Open an independent Oracle session via PuTTY or SQL Developer.

  • Run commands against the tables being operated on.

 This is not always effective due to database security and Oracle's model of atomicity, consistency, isolation, and durability. The amount of parallelism in use also changes how Oracle manages this information.

This command is useful for reviewing active SQL for a list of users. Modify the username list to see active SQL for other accounts. Given the SID and SERIAL# from the SQL below, OEM (Oracle Enterprise Manager) and other tools can be used to monitor and manage long-running SQL, or even SQL blocked by row-locking mechanisms.

select u.sid, u.serial#, s.rows_processed, s.disk_reads, s.buffer_gets,
       s.last_active_time, s.physical_read_bytes, s.physical_write_bytes,
       s.sql_id, u.username, substr(s.sql_text, 1, 50)
from v$sql s, v$session u
where s.sql_id = u.sql_id
  and u.username in ('MY_DBA', 'GUEST', 'JRS_xxx');
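If the hang looks like row-lock blocking rather than merely slow SQL, this companion query (a sketch using standard V$SESSION columns) lists each blocked session along with the SID of the session holding it up:

select sid, serial#, username, blocking_session, event, seconds_in_wait
from v$session
where blocking_session is not null;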

 

Q: My command-line login is failing. What are some less usual causes?
A:

For command-line tools--like sql, sqlplus, rman: if your password has spaces or symbol characters, you must quote the password, e.g.

> sql x@yz
Password: "@ s3cr3t fr45e"

 

Discussion

Database Schemas

The database consists of distinct schemas. Each schema serves a different function:

  • DESCINFO2. Records configuration of parameters, templates, value lists, and users for the descriptive information eco-system.

  • LIMS. The sample catalog and repository of experimental results against those samples. Includes a catalog of files (ASMAN) associated with those samples.

  • OPS. Repository of drilling operations information. Bathymetry and navigation content was removed from here as of Subic Bay Tie-up 353P Oct 23, 2014. The content is time and activity based, not sample based. Other workflows manage the bulk of this data outside Oracle. What is recorded here is a small subset of the total operational content we keep.

  • TRANSFER. Contains scripts and tables for managing data transfer and cleanup processes. Some run here for BOX.

  • GEODCAT. Carries taxonomy, templates, observable definitions globally available to GEODESC operators.

  • GEODP###. Carries taxonomy, templates, observable definitions specific to the expedition/project.
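To confirm which of these schemas exist on the instance you are connected to (the GEODP### name varies per project), a quick dictionary query along these lines works from a DBA session:

select username, account_status
from dba_users
where username in ('DESCINFO2', 'LIMS', 'OPS', 'TRANSFER', 'GEODCAT')
   or username like 'GEODP%'
order by username;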

Cleaning Process Architecture

End-of-expedition processing leaves data in LIMS and TRANSFER that may be removed. Beginning-of-expedition processing removes that content. The procedures that conduct the removal are owned by the TRANSFER schema.

An Annoyance: Monitoring for Archive Log Space Filling

Oracle treats the changes resulting from end-of-expedition processing like any other transactions. Due to the size of these transactions it is likely that the 100GiB reserved for archive logging will be consumed. When this occurs, the database blocks all activity until archive log space is freed. All attempts at new transactions will receive the message ORA-00257: archiver stuck.

This method of releasing the archiver assumes there is actually some spare space on the system. Via a DBA account, increase the amount of archive log space available:
alter system set db_recovery_file_dest_size=1000G scope=memory;

The ODA systems carry plenty of disk. The above will get Oracle back up and running with enough breathing room for you to connect and do the following. Because the scope is "memory" only, the setting will revert when the database is restarted. It is good practice to set it back to 100G when done, since we run a long time before restarting the database.
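A sketch for checking how much of the recovery area is currently in use, and for putting the limit back to 100G when the crunch is over (both from the same DBA session; V$RECOVERY_FILE_DEST is the standard view for this):

select name, space_limit/1024/1024/1024 as limit_gib, space_used/1024/1024/1024 as used_gib
from v$recovery_file_dest;

alter system set db_recovery_file_dest_size=100G scope=memory;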

Disk space for archive logging is freed on a weekly basis (Sun) by backing up Oracle transactions to tape, then removing them from disk. If this isn't soon enough, contact MCS and your DBA.

Monitoring Archive Log Generation

The HTTP-based Oracle Enterprise Manager (OEM) provides a page for detailed monitoring and management of archive logs. Prerequisites for use:

  • Request a starter set of credentials and permissions for OEM from the DBAs. Specify which environment you are in.

  • OEM credentials are distinct from RDBMS credentials. You must have DBA level privileges and credentials to the database being managed.

  • OEM and RDBMS credentials are distinct from operating system credentials. You must have access to the host oracle account for some operations.


(1)
Connect to OEM at https://oemjr.ship.iodp.tamu.edu:7802/em
Login with your OEM credential.

(2)
Select the menu Target > Databases. Click on the link corresponding to the database you wish to manage.
Login with your DBA privileged RDBMS credential.

(3)
To simply monitor archive log usage, visit this page.
On the secondary menu select  Administration > Storage > Archive Logs
Refresh the page as-needed to update the statistics.

To manage the archive logs, visit this page.
On the secondary menu select Availability > Backup & Recovery > Manage Current Backups
Operating system credential is required to send RMAN (recovery manager) commands.

TODO Provide more detail here as we get more practice.

  • You must be logged into this page and monitoring the database before the archive logger blocks. If the archive logger is already blocked, current experience indicates that OEM is not effective; direct host login to RMAN becomes necessary.

  • OEM times out your login in about 10-15 minutes. If running out of archive log space is still a real concern (even with a terabyte of disk), refresh the page frequently.

  • Until MCS and developers become more comfortable with the backup process, it is preferable to increase archive log space via the alter system command shown above.

  • The OEM provides email alerts that trigger on thresholds (e.g. percentage of archive log space filled). Notification settings are determined by the OEM manager.
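If your OEM session has timed out (or OEM is unavailable), the same space picture can be pulled from SQL. A sketch using the standard V$RECOVERY_AREA_USAGE view; archive logs appear under the ARCHIVED LOG file type:

select file_type, percent_space_used, percent_space_reclaimable, number_of_files
from v$recovery_area_usage
order by percent_space_used desc;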

More info about archive logging

Run this command as the DBA to verify the archive logging status (it is a SQL*Plus/SQLcl command rather than plain SQL).

sql>
archive log list


Expect output like this

Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 27612
Current log sequence 27614


or this

Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 30532
Current log sequence 30760
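If your client does not support the archive log list command, roughly the same information can be gathered from the dictionary views; a sketch:

select log_mode from v$database;
select sequence# from v$log where status = 'CURRENT';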

How to Invoke PL/SQL Scripts

All scripts are run from procedures owned by the TRANSFER account. The typical data cleanup and preening session is shown above and repeated here:

call transfer.util_cleanout_transfer_new_tables();
call transfer.box_asman_catalog_delete('EXPtest');
call transfer.box_lims_expedition_delete('test');


Most scripts take a single string parameter for the expedition. The ASMAN scripts take a single string parameter indicating the catalog to be deleted--usually formed as 'EXP' + the expedition number, e.g. 'EXP339'. The DESCINFO and DROPS catalogs are kept per user request.

Run the scripts from SQL, SQL*Plus, or from within a SQL Developer worksheet. SQL and SQL*Plus have the advantage of being light-weight and provide facilities to spool all transactions that transpire for post-cleanup review and comparison. SQL Developer has the advantage of readily reviewing and verifying the scripts being invoked.

The scripts are designed to provide feedback; however, the Oracle mechanisms are not real-time. To turn on useful feedback, apply these commands from the SQL*Plus command line (mileage will vary if tried in the context of SQL Developer):

set timing on
set serveroutput on


To capture the content of a SQL*Plus session, do something like this:

spool box_processing-340T.log
set timing on
set serveroutput on
[other stuff is run here...]
spool off


The duration of the session is very dependent on the quantity of data collected. Be patient: it takes time to delete 14-20 million rows. Currently that takes about 30 minutes; not too long ago it took between 9 and 20 hours.
Special behavior for scripts is noted below, e.g. order of execution is important in some cases; some processes are fast, some are slow. Double-check.

Data cleanup is routine, but will have variances. The additional scenarios provided below attempt to clarify the variances we encounter. Apply the scenario or combination of scenarios that best fits current expedition requirements.

You will see a number of procedures in TRANSFER that are not documented here. These processes have been in disuse for so long that additional thought, testing, and documentation should be applied to them when they come up again. Specific questions? Ask around. Read the code.

A Cookbook and Scenarios.

The routine processes are described first. Then less common scenarios are described.

Clean the TRANSFER schema

Irrevocably and quickly drop all content in the TRANSFER schema tables with "NEW" in their names. No parameter required.

call transfer.util_cleanout_transfer_new_tables();
commit;
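To see exactly which tables are in scope (and, after the call, to spot-check that they are empty), a dictionary listing like this sketch is handy:

select table_name
from dba_tables
where owner = 'TRANSFER'
  and table_name like '%NEW%'
order by table_name;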

Clean out the ASMAN file catalogs (will also remove Autosaves from Descinfo2)

This script removes the database records only. It is a separate step to clean out the file system for the catalog. You are responsible for the database cleanup. The MCS take care of the file system. Talk with each other.

call transfer.box_asman_catalog_delete('EXP123');
commit;


Repeat box_asman_catalog_delete for as many catalogs as you desire to remove. Confirm removal by re-running the survey script. Commit when done. The DESCINFO and DROPS catalogs are to be preserved--specific request by OPS and DESC technicians.

select count(*), catalog from lims.files
group by catalog;

Clean out LIMS 

This is a longer process, but it takes no more than 30 minutes with current database size and hardware; the 6-20 hour runs are thankfully a historical artifact. Progress of the script can still be checked by counting samples, tests, and particularly results.

call transfer.box_lims_expedition_delete('123');
commit;


Repeat box_lims_expedition_delete for as many expeditions as required. The smaller the quantity of data, the faster it will go.
See the detailed process for setting expedition-specific variables.

We recommend the various select count(*) variants below as routine content checks before and after the removal step.

select count(*), x_expedition from lims.sample
group by x_expedition;
select count(*), x_project from lims.sample
group by x_project;
select count(*), x_project from lims.test
group by x_project;
select count(*), analysis from lims.result
group by analysis;


This procedure is slow. Data we want to keep is intermixed with data to be deleted, so we remove it rows at a time. The database is a 24/7 tool--there is always some activity against it.

These statistics are obsolete and in need of an update; deletion of the Exp 398 result table content took about 30 minutes.

  • Allow 3 hours per 10 million rows of results.

  • Allow 3 hours for defragmenting and condensing space allocated to tables and indexes.

This procedure should NOT be run during routine laboratory data collection: for selected tables, it turns off triggers and reduces archive logging. The procedure MAY be run during periods where only data reporting is being done.
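If you want to confirm that nothing was left disabled once the procedure completes (the procedure is expected to re-enable the triggers itself), a check along these lines can be run; a sketch against the standard DBA_TRIGGERS view:

select trigger_name, status
from dba_triggers
where owner = 'LIMS'
  and status = 'DISABLED';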

Scenario: Brand new expedition, previous content being removed.

This is the most typical expedition cleanup scenario:

  • One expedition of OPS data to delete.

  • One expedition of ASMAN content to remove.

  • One expedition of LIMS content to remove.

Scenario: Current expedition is continuation of previous.

Previous expedition content is being preserved on ship due to continuation with a new science party. Previously curated and collected samples are for reference only. The "legacy" content should be locked down so that it is not unintentionally modified. There is no special handling for ASMAN content. You have to remember that this expedition is now legacy and remove it at the appropriate time.

Scenario: Remove request codes used by the previous expedition.

The curator manages this using the request code manager application. No developer involvement is required.

Load information being transferred from shore.

Transfer of content has been on an ad hoc basis. In general the need should be flagged before the expedition and managed as needed. Common scenarios are noted here.

Scenario: New containers to be loaded for MAD analyses.

See the MAD container load - process, and additional related documentation on the shore wiki and in Subversion. The names and masses of the containers are delivered in a spreadsheet (or text file).

Scenario: Load pre-existing LIMS data

Some expeditions are continuations or extensions of previous work. For convenience they may wish to have a local copy of previous samples and analytical content. In these cases, pre-expedition preparation is preferred. Various methods are available:

  • Copy content to a TRANSFER schema and export it for transport. On delivery, re-import it and copy it into the ship production schema.

  • Use SQL Developer to export CSV files of the relevant data. Transport them. Reload those records into the ship production schema.

Similar scenarios apply for carrying test data.
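For the CSV-export route, the first step is simply selecting the rows to carry over. A minimal sketch (the expedition number '339' is only an illustrative value; the test query reuses the sample_number linkage shown in the delete scripts later on this page):

select * from lims.sample where x_expedition = '339';

select t.*
from lims.test t
join lims.sample s on s.sample_number = t.sample_number
where s.x_expedition = '339';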

Scenario: Load pre-existing ODP, DSDP data

Some expeditions are continuations or extensions of previous work. For convenience, the science party may require a local copy of previous samples and analytical content. If small amounts of legacy material are brought out for re-sampling or re-analysis, it is easiest to just re-catalog the material in LIMS with Catwalk Sample and SampleMaster.

For re-measurement of ODP and DSDP material with current equipment, plan on bringing out the sample catalog (site, hole, core, section, section half, whole rounds) that was migrated into LIMS from Janus some time ago.

Cleaning test data

For new participants and staff to experience and practice using integrated shipboard systems, it is helpful to have some data to play with. The same is required for development that continues between expeditions.

Records are accumulated against an expedition/project called 999. After a while it is helpful to clean out tests and results. Be judicious and selective. Ask around: is anyone using this data? For example, sometimes end-user testing or development is in progress of which you may not have been aware. Once you have the go-ahead, it is often sufficient to clean out tests and results. Unless we are specifically testing sample cataloging, we prefer to avoid the effort of recreating a complete and representative set of samples. These clean-out scripts should only be used in the scenario where samples are being preserved (along with curatorial tests and results), but science data (non-curatorial tests and results) is being removed.

These are encoded in the TRANSFER script UTIL_LIMS_TEST_RESULT_DELETE.

delete from lims.result
where sample_number in (
    select sample_number from lims.sample
    where x_expedition = '999')
and analysis not in ('BHA', 'BITS', 'DRILLING', 'LATLONG', 'OBSLENGTH', 'ORIGDEPTH');

delete from lims.test
where sample_number in (
    select sample_number from lims.sample
    where x_expedition = '999')
and analysis not in ('BHA', 'BITS', 'DRILLING', 'LATLONG', 'OBSLENGTH', 'ORIGDEPTH');
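As with the expedition deletes, a before-and-after count against the 999 data shows what the scripts will touch and confirms that the curatorial analyses were preserved. A sketch, reusing the same sample_number join:

select count(*), r.analysis
from lims.result r
join lims.sample s on s.sample_number = r.sample_number
where s.x_expedition = '999'
group by r.analysis
order by r.analysis;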