EOX shore - detail

BOX - beginning of expedition
EOX - end of expedition

The DBA is responsible for shore EOX processing.

The expedition has ended. Within two weeks, science participants expect to have moratorium access to LIMS content. DBA, Systems/MCS, and sometimes Programmers need to coordinate to make this happen. The Database Librarian also needs to know when these changes occur.

Make sure we are getting a complete copy of the shipboard data to be warehoused on shore.

0. Notify that the data load is in progress

Lots of folks want to know: Systems, Data Librarian, Programmers, EPMs, Managers, and selected TAS staff.

It is helpful to chat with Curation. If a sampling party or post-cruise meeting is in progress, it is preferable to postpone the Copy the content step to a less busy time. Activities up to that point can be completed without any production impact.

1. Retrieve data content from tape

Subsets of the EOX tapes are distributed to Publications, Data Librarian, DBA, Operations, and public access storage locations. - Systems

A copy of the shipboard database export is staged for DBA use at OEM:/backup/eox/

  • The database dump is backed up to tape via the host OEMJR:/backup/eox/.  TBD--confirm location and match in DEV and MCS SOPs.

  • When restored, the file is copied to OEM:/backup/eox/.

There should be one data dump file and an accompanying log of the export activity.

  • When the file contains a full shipboard database export the naming pattern is like limsjr_full-398.dmpdp

  • When the file contains selected schema exports it is named like limsjr_schemas-398.dmpdp

For the shore EOX work, the filename is typically shortened to the expedition designation, e.g. 398.dmpdp.
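
For example, a typical staging rename (paths and naming per the locations above; the expedition number is illustrative):

cd /backup/eox
mv limsjr_full-398.dmpdp 398.dmpdp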

2. Restore and review

Restore routinely selected database content from the full backup to the shore production transfer schema. - DBA

Nota bene. The staged data is expected to contain the new expedition, QAQC, and 999 (the operational test expedition).

In some cases, data is included from a prior expedition.

  • If new content was added, those additions should be preserved. Run this process as-is with the extra expedition. Shipboard changes to legacy content will overwrite the older content.

  • If no new content was added, that reference material should be expunged (already present in the aggregated shore warehouse). A modification of TRANSFER.BOX_LIMS_EXPEDITION_DELETE() is in development for this particular use-case.

This is presently a common edge case with poor automation. Tread lightly and check your work.
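
One quick way to see whether a prior expedition carries new content is to compare staged row counts against shore LIMS. A minimal sketch, assuming the prior expedition is 397 (tables and columns as used in the review queries further below):

select 'TRANSFER' as src, count(*) as cnt from transfer.new_sample where x_expedition = '397'
union all
select 'LIMS' as src, count(*) as cnt from lims.sample where x_expedition = '397';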

Once staged, extract, transfer, and load scripts may be run.

  • DBA scripting for the EOX (and other) processes is hosted at OEM:/backup/scripts/rci_work/exp/
    Please create a subdirectory for expedition-specific outputs.

  • This scripting is designed to be run from an ODA command-prompt.

  • Privilege to sudo to the ODA oracle user is required.

Other prerequisites:

  • What database instance are you connecting to? You must be at a command-line for that ODA instance.

  • As the OS oracle user, specify which database instance you are using with this syntax:

    . oraenv
    ORACLE_SID = [oracle] ? LIMSHQ        (respond LIMSHQ, or RTLIMS, or LIMS19)

     

From that command-line, the following shell and SQL scripts (bold headers) are run, in the order shown. A typical session to reach that command-line is sketched below.
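
A hedged sketch of reaching the prompt and launching the first script (the host name is illustrative; paths per the locations above):

ssh oda-host
sudo su - oracle
. oraenv                                  # respond LIMSHQ (or RTLIMS, LIMS19)
cd /backup/scripts/rci_work/exp
./import_transfer.ksh 398.dmpdp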

import_transfer.ksh

When executed, this script

  • Prompts for the local name (and extension) of the database export file. Upon <Enter>

  • Shipboard content is imported into the TRANSFER schema. Indexes are created for the largest rowcount tables: new_sample, new_test, new_result.

  • The script now prompts for the expedition of interest, e.g. 398, 397T, etc. Upon <Enter>

  • The script imports reference copies of GEODCAT (renamed to include the expedition) and GEODP### schemas.

On completion of the schema imports above, a variety of SQL statements are run to highlight what data was brought back and how many rows.

  • Over multiple tests, this script imported expedition 398 content in 30 to 40 minutes.
    Additional data review scripts take another 10-15 minutes to run.

  • The script requires no TRANSFER schema preparation--all the tables required are present, and the import methodology truncates the tables before importing new content.

  • The content is imported directly from SCHEMAS exported on ship (LIMS, DESCINFO2, OPS, GEODCAT, GEODP###, etc.).
    There is no duplication of keys incurred by the ETL cycle now in use.

 

import_transfer.ksh
EOX_EXP_DIR=`sqlplus -s / as sysdba << EOT
set pages 0 feedback off
SELECT directory_path FROM dba_directories WHERE directory_name = 'EOX_EXP_DIR';
exit
EOT`
# DIRECTORY_PATH
# --------------------------------------------------------------------------------
# /backup/eox

if [ ! -d $EOX_EXP_DIR ]
then
  echo $EOX_EXP_DIR not found...
  exit 1
fi

if [ $# -ne 1 ]
then
  echo Enter dump file name
  read DUMP_FILE_NAME
else
  DUMP_FILE_NAME=$1
fi

if [ ! -r $EOX_EXP_DIR/$DUMP_FILE_NAME ]
then
  echo $EOX_EXP_DIR/$DUMP_FILE_NAME not found
  exit 2
fi

DT=`date +"%m%d%Y"`
SCHEMA_NAME=transfer
LOG_FILE_NAME=${SCHEMA_NAME}_import_${DT}.log

# Import LIMS, DESCINFO2, and OPS tables into target TRANSFER schema.
impdp system/$s directory=eox_exp_dir DUMPFILE=$DUMP_FILE_NAME LOGFILE=${LOG_FILE_NAME} VERSION=LATEST PARFILE=eox_impdp_to_transfer.par

while [ ${EOX}1 == "1" ]
do
  echo Enter EOX#
  read EOX
done

if [ ${EOX}1 == "1" ]
then
  echo EOX is not set.
  exit 1
fi

sql / as sysdba <<+++
drop user geodcat${EOX} cascade;
drop user geodp${EOX} cascade;
exit;
+++

# Import GEODCAT to a GEODCAT### schema where ### is the expedition being processed.
SCHEMA_NAME=geodcat${EOX}
LOG_FILE_NAME=${SCHEMA_NAME}_import_${DT}.log
impdp system/$s directory=eox_exp_dir DUMPFILE=$DUMP_FILE_NAME LOGFILE=${LOG_FILE_NAME} VERSION=LATEST CONTENT=ALL SCHEMAS=GEODCAT REMAP_SCHEMA=GEODCAT:${SCHEMA_NAME}

# Import GEODP### where ### is the expedition being processed.
SCHEMA_NAME=geodp${EOX}
LOG_FILE_NAME=${SCHEMA_NAME}_import_${DT}.log
impdp system/$s directory=eox_exp_dir DUMPFILE=$DUMP_FILE_NAME LOGFILE=${LOG_FILE_NAME} VERSION=LATEST CONTENT=ALL SCHEMAS=${SCHEMA_NAME}

# TBD--revise to log this output to a single file for reference after data modifications and copying.
sql / as sysdba <<+++
set newpage 0
set linesize 100
set pagesize 0
set timing on
set time on
set feedback on
set heading on
set trimspool on
set trimout on
set tab off

/* These indexes help speed up later bulk data manipulation and merge activities. */
drop index transfer.new_result_idx;
drop index transfer.new_result_result_number;
drop index transfer.new_test_idx;
drop index transfer.new_test_test_number;
drop index transfer.new_sample_sample_number;

create bitmap index transfer.new_result_idx on transfer.new_result ( sample_number, test_number, result_number ) tablespace transfer parallel;
create unique index transfer.new_result_result_number on transfer.new_result (result_number) tablespace transfer parallel;
create bitmap index transfer.new_test_idx on transfer.new_test ( sample_number, test_number ) tablespace transfer parallel;
create unique index transfer.new_test_test_number on transfer.new_test (test_number) tablespace transfer parallel;
create unique index transfer.new_sample_sample_number on transfer.new_sample (sample_number) tablespace transfer parallel;

/* Review the content loaded in TRANSFER in finer detail than the import activity. */

-- Review the sample, test, result content just loaded to TRANSFER
alter session set current_schema=transfer;

select 'REVIEW TRANSFER.NEW_SAMPLE' from dual;
select s.group_name, s.x_expedition, s.status, count(*)
from transfer.new_sample s
where s.x_expedition in ('${EOX}', 'QAQC', '999')
group by s.group_name, s.x_expedition, s.status
order by s.x_expedition, s.group_name, s.status;

select 'REVIEW TRANSFER.NEW_TEST' from dual;
select t.group_name, s.x_expedition, t.status, count(*)
from transfer.new_sample s, transfer.new_test t
where s.sample_number=t.sample_number
and s.x_expedition in ('${EOX}', 'QAQC', '999')
group by t.group_name, s.x_expedition, t.status
order by s.x_expedition, t.group_name, t.status;

select 'REVIEW TRANSFER.NEW_RESULT' from dual;
select r.group_name, s.x_expedition, r.status, count(*)
from transfer.new_sample s, transfer.new_result r
where s.sample_number=r.sample_number
and s.x_expedition in ('${EOX}', 'QAQC', '999')
group by r.group_name, s.x_expedition, r.status
order by s.x_expedition, r.group_name, r.status;

-- Compare to content in LIMS
alter session set current_schema=lims;

select 'REVIEW LIMS.SAMPLE' from dual;
select s.group_name, s.x_expedition, s.status, count(*)
from lims.sample s
where s.x_expedition in ('${EOX}', 'QAQC', '999')
group by s.group_name, s.x_expedition, s.status
order by s.x_expedition, s.group_name, s.status;

select 'REVIEW LIMS.TEST' from dual;
select t.group_name, s.x_expedition, t.status, count(*)
from lims.sample s, lims.test t
where s.sample_number=t.sample_number
and s.x_expedition in ('${EOX}', 'QAQC', '999')
group by t.group_name, s.x_expedition, t.status
order by s.x_expedition, t.group_name, t.status;

select 'REVIEW LIMS.RESULT' from dual;
select r.group_name, s.x_expedition, r.status, count(*)
from lims.sample s, lims.result r
where s.sample_number=r.sample_number
and s.x_expedition in ('${EOX}', 'QAQC', '999')
group by r.group_name, s.x_expedition, r.status
order by s.x_expedition, r.group_name, r.status;

-- Gather stats for the TRANSFER schema
exec dbms_stats.gather_schema_stats('TRANSFER', cascade=>TRUE);

-- Show info about tables imported from data pump export
select table_name, num_rows, avg_row_len, blocks, empty_blocks, owner
from all_tables
where owner='TRANSFER'
order by owner, num_rows desc;

exit
+++

and its companion

eox_impdp_to_transfer.par
CONTENT=DATA_ONLY
TABLE_EXISTS_ACTION=TRUNCATE
# Fully load all tables required.
# Complex filtering was problematic, so handling it post import.
# Nota bene--DESCINFO2 tables on ship will be retired within 2023.
# LIMS
TABLES=LIMS.FILES, LIMS.SAMPLE, LIMS.TEST, LIMS.RESULT, LIMS.X_EXTERNAL_KEYS, LIMS.X_SCALE, LIMS.X_HOLE_TO_SCALE, LIMS.X_SAMPLE_DEPTH, LIMS.SCORS_AFFINE, LIMS.SCORS_AFFINE_DETAIL, LIMS.SCORS_SPLICE, LIMS.SCORS_SPLICE_DETAIL, LIMS.X_LIME_AUDIT_MAIN, LIMS.X_LIME_AUDIT_DETAIL, LIMS.GEOD_METADATA, LIMS.GEOD_DATA
REMAP_SCHEMA=LIMS:TRANSFER
REMAP_TABLE=LIMS.FILES:NEW_FILES, LIMS.SAMPLE:NEW_SAMPLE, LIMS.TEST:NEW_TEST, LIMS.RESULT:NEW_RESULT, LIMS.X_EXTERNAL_KEYS:NEW_X_EXTERNAL_KEYS, LIMS.X_SCALE:NEW_X_SCALE, LIMS.X_HOLE_TO_SCALE:NEW_X_HOLE_TO_SCALE, LIMS.X_SAMPLE_DEPTH:NEW_X_SAMPLE_DEPTH, LIMS.SCORS_AFFINE:NEW_SCORS_AFFINE, LIMS.SCORS_AFFINE_DETAIL:NEW_SCORS_AFFINE_DETAIL, LIMS.SCORS_SPLICE:NEW_SCORS_SPLICE, LIMS.SCORS_SPLICE_DETAIL:NEW_SCORS_SPLICE_DETAIL, LIMS.X_LIME_AUDIT_MAIN:NEW_X_LIME_AUDIT_MAIN, LIMS.X_LIME_AUDIT_DETAIL:NEW_X_LIME_AUDIT_DETAIL, LIMS.GEOD_METADATA:NEW_GEOD_METADATA, LIMS.GEOD_DATA:NEW_GEOD_DATA
# OPS
#TABLES=OPS.RIS_DATA
#REMAP_SCHEMA=OPS:TRANSFER
#REMAP_TABLE=OPS.RIS_DATA:NEW_RIS_DATA

3. Establish moratorium credentials and controls

For the new expedition data. - DBA

The credential is selected by the Director of the Staff Scientist group. Enquire with Leah Levay--or see the password safe for entries provided for FY2024. The moratorium name follows this pattern: EXP###--examples EXP399, EXP400T, EXP398, EXP398P, etc.

The account is shared by a science party for the moratorium period (typically 18 months). It enables specific support for SDRM and access to data in moratorium within the web-based LORE reporting system.

The moratorium name is generally selected and specified by the operator of this process.

eox_gen_in_moratorium_script.ksh [moratorium-group]

This script accepts a moratorium name, e.g. SHACKLETON, SANTORINI, etc. Restrictions on that name are listed below.

The script below (in_moratorium_[moratorium-group].ksh) is generated by this one. It writes the SQL that modifies the NEW_TEST, NEW_RESULT, and NEW_GEOD_DATA tables for moratorium concerns.

  • If a moratorium group is not provided, the script will prompt.

  • The script does not check these requirements:

    • The string must fit the GROUP_NAME field it populates: 10 characters or less.

    • The string must also be a valid name for an Oracle ROLE and for naming a Unix file; for example, it must not start with a number. By convention we stick with names using uppercase A to Z only, no symbols or special characters.
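
A minimal sketch of the kind of UPDATE the generated script emits, assuming moratorium group SANTORINI and expedition 398 (the selection predicate is illustrative--review the generated script for the real one):

update transfer.new_result r
   set r.group_name = 'SANTORINI'
 where r.sample_number in (select s.sample_number
                             from transfer.new_sample s
                            where s.x_expedition = '398');

Similar statements cover the NEW_TEST and NEW_GEOD_DATA tables.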

 

eox_gen_in_moratorium_script.ksh

 

in_moratorium_[moratorium-group].ksh

Running this script

  • Ensures the current expedition content is tagged for moratorium control.

  • Do review the script. Sometimes the EOX expedition number is cleared before running eox_gen_in_moratorium_script.ksh.

Upon running the script--wait.

  • The NEW_RESULT table typically contains 20-30 million rows. This update takes the longest.

  • The NEW_TEST and NEW_SAMPLE tables are much faster, being three orders of magnitude smaller.

  • Timing is turned on--the scripts will show how long they took to run.

The result table took 2-5 minutes to complete (with indexes in place) for each of ORTLIMS, LIMS19, RTLIMS, and LIMSHQ.

in_moratorium_GROUPNAME.ksh

 

crt_user_exp.sql

Review the script.

  • Make a copy of it specifically for the expedition you are processing--e.g. crt_user_398.sql

  • As of expedition 401, the user credential will apply LONG_PASSWORD requirements--edit the file accordingly. Use the credential provided by the EPM.

Run the script via SQL.
Provide an explicit path to the script to run it.
Pass the expedition as a parameter to the script. The pattern to follow, based on expedition 398 activity, is sketched below.
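
A hypothetical invocation (the script path follows the hosting location noted in step 2):

sql / as sysdba
SQL> @/backup/scripts/rci_work/exp/crt_user_398.sql 398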

Note that this script may be run at any time. It is free of any dependencies on other scripts here.

This credential needs to be distributed to the following email distributions for filing: EPM for the expedition in processing, it_ops@iodp.tamu.edu, programmers@iodp.tamu.edu, database@iodp.tamu.edu (Data Librarian), webmaster@iodp.tamu.edu

crt_user_exp.sql

 

The above script must be run for the test database instance selected, and for the production instance to which the data is merged.

4. Copy the content

Into the publicly accessible LIMS. - DBA

The data is first imported into a test database instance in the transfer schema. After the data has been remediated to support moratorium concerns, a snapshot of this data may be exported for re-use. The export will be re-imported into the production database.
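
A hedged sketch of such an export from the test instance (dump and log file names are illustrative; directory per the import scripting above):

expdp system/$s directory=eox_exp_dir SCHEMAS=TRANSFER DUMPFILE=transfer_398_moratorium.dmpdp LOGFILE=transfer_398_moratorium_export.log VERSION=LATEST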

After completing the above export from the selected test instance, complete these commands at the production instance (LIMSHQ).

 

chk_tablespace_free.ksh

Review the space available in the LABWARE and LABWAREIDX tablespaces.
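
The script body is not reproduced here; the check is along the lines of the dba_free_space query in the legacy runbook at the end of this page:

select tablespace_name, sum(bytes)/1024/1024 as free_mb
  from dba_free_space
 where tablespace_name in ('LABWARE', 'LABWAREIDX')
 group by tablespace_name;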

chk_tablespace_free.ksh

 

grp_cnt_transfer.ksh, grp_cnt_lims.ksh

Run these scripts to obtain row counts by moratorium group, expedition, and record status after the moratorium remediation is complete. This is a preparatory step before the data is merged. The output of these scripts is a set of text files. Review them, then file the info in the expedition-specific directory.
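
A minimal sketch of the kind of count these scripts report, mirroring the review queries in import_transfer.ksh (grp_cnt_transfer.ksh would target the TRANSFER NEW_* tables instead):

select s.group_name, s.x_expedition, s.status, count(*)
  from lims.sample s
 group by s.group_name, s.x_expedition, s.status
 order by s.x_expedition, s.group_name, s.status;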

grp_cnt_transfer.ksh

 

grp_cnt_lims.ksh

 

call transfer.eox_shore_merge_transfer_new_to_lims()

This procedure merges TRANSFER schema NEW_* table content into LIMS, including

  • scales, depth, samples, tests, results; and

  • two (recently added) GEODESC tables.

Please review the procedure before running it. To run it:
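
For example, from SQLcl or SQL*Plus on the target instance (per the call pattern in the header above):

sql / as sysdba
SQL> call transfer.eox_shore_merge_transfer_new_to_lims();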

 

Noted

  • Without a unique result_number index, the bulk merging of the NEW_RESULT content took upwards of 6 hours.

  • With a unique index on NEW_RESULT.RESULT_NUMBER the bulk merge was observed to complete in 4 hours (this for ~26 million rows).

  • Support is no longer provided for merging content from the DESCINFO2 schema. No new data is being generated for that product.

  • Script operation does briefly disable triggers on the LIMS.SAMPLE and LIMS.TEST tables. For this reason, it is best run outside of business hours, when samples and tests are not being modified. Prefer the script over manual trigger handling: it does not forget to re-enable the triggers.

  • The triggers being enabled/disabled are owned by the LIMS schema. To enable the TRANSFER schema to effect changes, simple procedures owned by the LIMS schema manage enable/disable support. The TRANSFER schema is granted permission to execute these LIMS-owned procedures.

grp_cnt_lims.ksh

Re-run this script for review purposes. Data formerly only in the TRANSFER schema should now be present in the same numbers in the LIMS schema.

5. Miscellaneous: Update the Database Overview, Ensure Moratorium User privileges are set

Configuration updates provide a summary of the expedition data for participants and future users. - DBA

call transfer.dbo_compute_exp('###')

Run this script to compute new expedition summaries for display in the database OVERVIEW application. Substitute the desired expedition number.
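
For example, for expedition 398:

sql / as sysdba
SQL> call transfer.dbo_compute_exp('398');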

Upon completion, review the database overview application from a browser: https://web.iodp.tamu.edu/OVERVIEW/
A new column of summary statistics should be visible for the new expedition.

 

DEV - The moratorium user must be modified within the https://web.iodp.tamu.edu/Auther/ application.

  • The moratorium user must be added to the account list (match the pattern of other moratorium accounts).

  • The moratorium user requires the roles: MORATORIUM USER and ALL_SPLAT.

DEV - When moratorium and all major sampling parties are complete for this data, the Auther configurations may be removed. This tends to lag behind moving data out of moratorium.

6. Test

Spot-check and test availability and completeness.

  • Confirm that moratorium access is functioning by running LORE reports. - TAS, DEV

    • Can the EXP### moratorium account log in to https://web.iodp.tamu.edu/LORE?

    • Can the EXP### moratorium account see LIMS data for the expedition--images, etc.? Spot-check.

    • Can the EXP### account actually download some of the file content?

  • Confirm that moratorium access and account permissions are functioning for GEODESC data products. - TAS, DEV

    • Can the EXP### moratorium account log in to https://web.iodp.tamu.edu/DataAccess?

    • Can the EXP### moratorium account see the GEODESC Final Product files via DataAccess?

    • Can EXP### actually download some of those files?

7. Notify interested parties

Notify interested parties that new moratorium data is available in LIMS

  • EPM

  • DBA - RCI

  • Data Librarian

  • Web Master

  • Programmers

  • IT OPS

 

 

 

Shore EOX process applied by the DBA prior to expedition 398.
Re-worked into the process above--polishing and efficiency improvements welcome.
The scripts below are equivalent to steps 2 through 5.

EOX load and merge - rci_work/exp

The collection of scripts in this location automates the shore components of EOX processing: retrieve the latest expedition content from the database dump, create and apply moratorium supports, merge the data into production, and provide statistics for quality assurance/quality control on the process.

These are the scripts typically used, in order:

# Shore EOX processing for a data load
# per Sunil Armarni from a Slack thread.
export EOX=#
chk_eox_count.ksh
select sum(bytes)/1024/1024 from dba_free_space where tablespace_name = 'TRANSFER'
import_transfer.ksh
transfer_upd_stat.ksh
data_pre_01_crt_unq.ksh
data_pre_01_chk_dup.ksh
eox_gen_in_moratorium_script.ksh <GROUP>
in_moratorium_<GROUP>.ksh
crt_user_exp.ksh $EOX
chk_tablespace_free.ksh
grp_cnt_transfer.ksh
data_pre_03_dis_trig.ksh
dis_trig_all_<YYYYMMDD_HHMMSS>.sql
chk_eox_count.ksh exp#
exec transfer.lims_expedition_delete('#')
chk_eox_count.ksh
grp_cnt_lims.ksh
data_pre_05_without_DI_v6.ksh # calls scors also
grp_cnt_lims.ksh
chk_scale_data.ksh
data_pre_05_scale_data.ksh
chk_scale_data.ksh
enb_trig_all_<YYYYMMDD_HHMMSS>.sql
merge_descinfo2_data_v2.ksh
insert_dbo_#.sql - N/A
dbo_compute_v4.sql exp#
data_post_06_upd_stat.ksh
