Elaboration and discussion of EOX processes and variations: background notes, explanations, considerations, and troubleshooting for end-of-expedition data processing.
The actual scripts to run are here.

Negotiate a time window

The end-of-expedition process may be run once data collection is complete.

On current hardware you will need at least 2 hours to run the scripts, and closer to 4 for more productive expeditions.
  • Speak with the MCS to gauge their timing for backups.
  • Speak with the Staff Scientist.
  • Speak with the technical staff.

When will you be done adding new data to the database?

Pick a time, then stick to it: any new data applied after this point will not return to shore this cycle. To capture it would require an additional end-of-expedition data-processing cycle.

Bring home a copy

The MCS places database snapshots and files on tape. You may be asked to carry a tape home.

If you are carrying a travel drive, please bring a copy of both database snapshots home. The additional redundancy has value. Do not attempt to bring back the ASMAN file directory.

Procedural notes

Accounts. Three accounts are used to execute the end-of-expedition data processing. The oracle user (via the ODA operating system) runs the database exports. The Oracle database accounts your-surname_dba and transfer are used within the export utility. Credentials for oracle and transfer are stored in \\novarupta\vol1\tas\AD\pwsafe\ship.psafe3.

Executing database export (and import). This process requires the expdp and impdp commands. These are command-line data management tools, not SQLPLUS commands. They should be run from the account oracle@k1.ship.iodp.tamu.edu, i.e. at the database host console. The utilities are specific to the database version.
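
For orientation, a Data Pump export invocation generally takes this shape. The exact parameters, dump-file names, and schema list used at EOX are defined in the scripts; transfer_398.dmpdp and the log file name below are placeholders:

pico {oracle} > expdp transfer directory=dmpdir dumpfile=transfer_398.dmpdp logfile=transfer_398.log schemas=transfer

The utility prompts for the transfer password when it is not supplied on the command line.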

Database cleanout. Step (2) is fast (since expedition 324). It guarantees that all the TRANSFER schema tables are empty for subsequent steps.

Executing a stored procedure. Stored procedures must be run from within an Oracle SQLPLUS environment: SQLDeveloper provides one, as does the stand-alone command-line SQLPLUS installed with a full Oracle client. The command-line environment is recommended as it requires the least overhead. A minimal session sketch follows the notes below.

  • The order of invocation is not critical. Each call to a stored procedure manages an atomic, consistent, isolated, durable subset of the data content.
  • Repetition of a procedure is OK. The scripts have been revised to prevent generation of duplicate copies of the data.
  • Repetition of a procedure is required to capture multiple subsets of the data.
  • Each procedure should return a message to the effect of "Script completed successfully." The set serveroutput on command enables this feedback.
  • For typical expedition data collection, no script is expected to run more than 20 minutes. The procedures lims_expedition_to_new(), descinfo_to_new(), and ops_expedition_to_new() are the slowest, as they move the greatest volumes of data and numbers of rows.
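
A minimal SQLPLUS session sketch, with expedition 398 as a placeholder (the full list of procedures to run is in the actual scripts):

set serveroutput on
execute lims_expedition_to_new('398');
execute ops_expedition_to_new('398');
execute descinfo_to_new;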

Monitoring execution progress. SQLDeveloper provides a "Monitor session..." facility that lets you know the process is still working. A positive feedback check is to use SQL to count the rows in the TRANSFER tables being updated. If the task was started within SQLDeveloper, menu item View > Task Progress offers some visual feedback on progress.
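
A quick positive check, counting rows in one of the target tables (NEW_SAMPLE here); repeat the query to see the count grow:

select count(*) from transfer.NEW_SAMPLE;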

Environmental concerns. To avoid glitches, interruptions, and downtime in the EOX process, it is best to run these processes at the ODA console. Doing so insulates the activity from the multitude of other activities that occur at the end of an expedition or within a port call.

Procedure invocation variant. A procedure may be invoked with either of these keywords: execute or call. The latter requires fewer keystrokes. Both ultimately run the specified procedure.
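
For example, these two invocations run the same procedure (expedition 398 is a placeholder):

execute ops_expedition_to_new('398');
call ops_expedition_to_new('398');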

Specifying parameters. The procedures are all written to take single-valued parameters. All are strings and must be single-quoted. If data is to be exported for multiple expeditions, the procedures must be invoked once per expedition. The descinfo_to_new() procedure does not require a parameter; its data model is not expedition-specific.

Parameter prefixes. The EXP prefix is required only for the parameter of asman_catalog_to_new(). All lims...() and ops...() procedures take the bare expedition identifier. The descinfo_to_new() procedure takes no parameter.
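
For example, with expedition 398 as a placeholder; note that the EXP prefix appears only on the first call:

execute asman_catalog_to_new('EXP398');
execute lims_expedition_to_new('398');
execute descinfo_to_new;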

Transfer tables. The tables owned by the TRANSFER schema.

  • They have the same columns as their production namesakes.
  • The tables are intended to be empty at the beginning of EOX.
  • These tables do not have any indexes, constraints, or foreign key dependencies.

Why multiple exports?

  • Redundancy.
  • Ensures that content gets home that is not part of routine EOX/BOX handling, including:
    • Location-specific configuration [LIMS analysis, components, constants, instrument, and user definitions.]
    • DESCINFO raw workspaces that were precursors to DescLogik uploads into the LIMS schema.
    • Log content (PITALOG).
  • Useful to provide real content for development, demo, test, and training databases.

Troubleshooting Notes

Archive logging. This is typically not an issue for EOX, but it is still a very real issue for BOX.

Oracle treats the changes resulting from end-of-expedition processing just like any other transactions. Given the size of these transactions, it is possible (though unlikely) that EOX activities will exceed the 20 GiB allocated for archive logging. When this occurs, the database blocks all activity until archive log space is freed. All attempts at new transactions will receive the message ORA-00257: archiver stuck.

TO BE REVISED. Check disk space and the current settings before any modification to DB_RECOVERY_FILE_DEST_SIZE. This is a managed space that affects many systems on the ODA. Coordinate with the DBA.
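
A sketch of that check, run as sysdba; the v$recovery_file_dest view reports the recovery-area limit and usage in bytes:

select name,
       round(space_limit/1024/1024/1024, 1) as limit_gib,
       round(space_used/1024/1024/1024, 1) as used_gib
from v$recovery_file_dest;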

As an Oracle sysdba user, apply this command to temporarily increase the amount of archive log space available:

alter system set db_recovery_file_dest_size=100G scope=memory;

Set it back to 20 GiB when done. The command above gets Oracle back up and running with enough breathing room for you to connect and do the following.

# Apply this script to see how much archive log space is used
pico {oracle} > cd /oracle/scripts
pico {oracle} > chk_archive_log.ksh

Connect to RMAN like this:

pico {oracle} > rman
RMAN> connect target

No additional credential should be required. You authenticated when you logged in as the server Oracle user.

Use these RMAN (Oracle's Backup and Recovery Manager) commands to clear the archive log backlog. THIS IS TO BE AVOIDED ON THE PRODUCTION SYSTEM. IT BREAKS OUR ABILITY TO RECOVER TO THAT POINT IN TIME.

list archivelog all;
crosscheck archivelog all;
delete archivelog all;

Monitor the generation of archive logs using the Enterprise Manager Console or the RMAN tool.

Export fails with permission errors

The export [in step (5) of the short form] requires that the system and transfer schemas have permission from Oracle and the ODA operating system to write files to the directory /u02/exports.

This SQL lists the database directory objects. Inspect the output to verify that dmpdir exists and points to /u02/exports.

/* as system@limssodv */
select * from dba_directories;

This SQL creates the directory entry, if missing:

/* as system@limssodv */
create directory dmpdir as '/u02/exports';

This SQL grants read and write access on the specified directory:

/* as system@limssodv */
grant read, write on directory dmpdir to transfer, system;

The export fails with an ORA-39095
This indicates that disk space on the ODA export volume is exhausted. The volume is shared by several Oracle facilities and instances: trace logging, plus archive logging for the production, test, and other database instances. To clear the error, find files we do not need on that volume and remove them.

When this occurred at EOX 374, the DBA removed trace logs related to the OEMREPJR database instance to make sufficient space available.

These activities are driven by the MCS and DBA. Coordinate with both.

This script isn't working, why?

The underlying SQL for each procedure is viewable (and modifiable) under SQLDeveloper:

  • Open SQLDeveloper.
  • Login as the TRANSFER user.
  • Expand the procedures tree.
  • Click on the procedure you would like to review.

Most likely source of error: database model changes during the expedition. Model changes must be replicated to the various transfer tables and the EOX scripts.
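
One way to spot such drift is to diff the column lists of a production table against its transfer mirror. SAMPLE and NEW_SAMPLE below are just one pairing; any rows returned are columns missing from the mirror:

select column_name
from all_tab_columns
where owner = 'LIMS' and table_name = 'SAMPLE'
minus
select column_name
from all_tab_columns
where owner = 'TRANSFER' and table_name = 'NEW_SAMPLE';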

Something went wrong, I want to start over

Not a problem. The function of these scripts is to make a copy of data for backup. There is no irreversible damage being done by any of these activities. It is sufficient to begin again at step (2) of the short form.

The source data schemas are not modified in any way by this process. The various stored procedures only copy from a source (LIMS, DESCINFO2, OPS, etc.) to the target TRANSFER schema.

Prior to 324, the cleanout method relied on Oracle delete. Oracle's guarantee of atomic, consistent, isolated, durable transactions translates to considerable overhead when millions of records are involved. The truncate command bypasses this overhead, but does not guarantee atomicity, consistency, isolation, or durability.

Data cleanout

CLEANOUT_NEW_TABLES. Conducts a fast, non-recoverable truncation of table content. It is applied only to tables in the transfer schema. DO NOT APPLY THE SAME TECHNIQUE to production tables: the command does not process the indexes, constraints, table statistics, and other table-related data objects that are affected by data removal.

The truncation command has been applied since expedition 324. Though overflow of the available archive log space is still possible, use of the truncation command alleviates that issue for EOX processing.
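
Illustratively, the cleanout amounts to statements of this form, one per transfer table. NEW_TEST and NEW_RESULT are assumed names following the convention; never run these against production schemas:

truncate table transfer.NEW_SAMPLE;
truncate table transfer.NEW_TEST;
truncate table transfer.NEW_RESULT;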

Data copy architecture

LIMS_EXPEDITION_TO_NEW. Copies data by expedition [not by project!] for sample, test, result, and several other tables. The style is a series of insert into ... select * from ... statements; see the sketch after the list below.

  • The script does not check that the tables are empty.
  • The script may be run multiple times.
  • Records that match will only be copied once (changed as of expedition 324).
  • When new records are identified, they are accumulated in the transfer tables.
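
A sketch of the idiom with a duplicate guard. The column names expedition and sample_number are assumptions for illustration only, not necessarily the actual LIMS columns:

insert into transfer.NEW_SAMPLE
select s.*
from lims.SAMPLE s
where s.expedition = '398'
and not exists (select 1
                from transfer.NEW_SAMPLE n
                where n.sample_number = s.sample_number);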



Useful Utilities

How do I recreate a missing TRANSFER table?
Before we give you the commands, some background information:

  • The TRANSFER schema tables are column-by-column mirrors of their production counterparts.
  • The naming convention indicates the intended use of the table:
    • NEW_tablename indicates data to be moved from ship and added to the shore warehouse.
    • LEGACY_tablename indicates data copied from the shore warehouse for repeat presentation in the shipboard database.
    • CONT_tablename indicates container data to be applied in the shipboard environment for the moisture and density measurement methods.

To create a new table within the TRANSFER schema:

  • Login as the TRANSFER user.
  • Apply the appropriate prefix and tablename, as in this example:

create table transfer.NEW_SAMPLE as select * from lims.SAMPLE where 1=0;


The idiom where 1=0 ensures that only the table structure is created; no records are copied.

The current list of TRANSFER tables may be displayed via:

connect transfer@limssodv
set pagesize 50000
select table_name from tabs order by table_name;

The source table for any given entry may be found by dropping the prefix (NEW_, LEGACY_, CONT_). Source tables are in the schemas LIMS, OPS, DESCINFO2.
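
A query sketch that pairs each transfer table with its source schema and table by stripping the prefix (run as the TRANSFER user):

select t.table_name as transfer_table,
       a.owner as source_schema,
       a.table_name as source_table
from user_tables t
join all_tables a
  on a.table_name = regexp_replace(t.table_name, '^(NEW_|LEGACY_|CONT_)', '')
where a.owner in ('LIMS', 'OPS', 'DESCINFO2');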

*nix to Windows Copy examples


Various methods exist for transferring files if you aren't initially comfortable on a *nix box. Via PuTTY, from the database host to the local Windows box:

pscp oracle@db.ship.iodp.tamu.edu:/u02/exports/transfer*335* c:/Volumes/ozy/2hq/data_snapshots/




Tools such as IPSWITCH_FTP, FileZilla, and WinSCP provide GUIs to perform the same task over both the secure copy (scp) and secure file transfer (sftp) protocols. On the Unix/Linux side, scp is the equivalent tool. Type man scp for command-line help.

Are the copies you made any good? Are you sure?

Checksum tools read every byte of a file to generate a signature for the file. If the signatures differ between copies of a file, then the copies are not identical. The signatures should match regardless of the platform used. These tools are particularly useful to verify transfers conducted over our (slow, less-reliable-than-I'd-like) satellite connection.

Redhat$ md5sum *.dmpdp.bz2

Solaris> digest -a md5 -v transfer_741.dmpdp
md5 (transfer_741.dmpdp) = 9a043b6b899b6eddebc70f30e7df450c

DOS> md5sum transfer_741.dmpdp
9a043b6b899b6eddebc70f30e7df450c *transfer_741.dmpdp

MacOSX$ md5 transfer_741.dmpdp
MD5 (transfer_741.dmpdp) = 9a043b6b899b6eddebc70f30e7df450c


TECHDOC Cumulus Content

TECHDOC is not brought home. No copy is needed. If it is required, the MCS will manage it.

Labstation / Instrument Host Cleanup

Work with the technical staff and researchers to get any needed data off of the labstations, and to clean up files there as required. Procedures for this will evolve as we gain more experience.

Not our job. Current technical staff and the LO/ALOs expect lab specialists to learn and manage this to accommodate moratorium protections.

The work can be scripted very effectively, but such scripts are equally dangerous, as their intent is data removal. Triggering them in the middle of an expedition would be a "bad thing".

DOCUMENT THOSE PROCEDURES HERE AS THEY ARE IDENTIFIED.
