...

Restore selected database content from the routine full backup to the shore production transfer schema. - DBA

Info

Nota bene. The expectation is that the data staged will contain the new expedition, QAQC, and 999 (operational test expedition).

In some cases, data is included from a prior expedition.

  • If new content was added, those additions should be preserved. Run this process as-is with the extra expedition. Shipboard changes to legacy content will overwrite the older content.
  • If no new content was added, that reference material should be expunged; it is already present in the aggregated shore warehouse. A modification of TRANSFER.BOX_LIMS_EXPEDITION_DELETE() is in development for this particular use-case.

This is presently a common edge case with poor automation. Tread lightly and check your work.
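
To see which expeditions are present in the staged data before deciding, a quick check; a sketch, run after the import below has populated TRANSFER:

Code Block
select s.x_expedition, count(*)
from transfer.new_sample s
group by s.x_expedition
order by s.x_expedition;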

Once staged, the extract, transfer, and load scripts may be run.

  • DBA scripting for the EOX (and other) processes is hosted at OEM:/backup/scripts/rci_work/exp/
    Please create a subdirectory for expedition-specific outputs.
  • This scripting is designed to be run from an ODA command-prompt (example below).
  • Privilege to sudo to the ODA oracle user is required.
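
A typical way to reach that prompt; the host name and login are illustrative:

Code Block
ssh youruser@oda-host    # any ODA node hosting the target database instance
sudo su - oracle         # become the oracle OS user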

Other prerequisites.

  • What database instance are you connecting to? You must be at a command-line for that ODA instance.
  • As the OS oracle user, specify which database instance you are using with this syntax:

    Code Block
    . oraenv
    ORACLE_SID = [oracle] ? LIMSHQ (or RTLIMS or LIMS19)


From that command-line, the following shell and SQL scripts (bold headers) are run, in the order shown.

import_transfer.ksh

When executed, this script

  • Prompts for the local name (and extension) of the database export file. Upon <Enter>,
    shipboard content is imported into the TRANSFER schema, and indexes are created for the largest-rowcount tables: new_sample, new_test, new_result.
  • Then prompts for the expedition of interest, e.g. 398, 397T, etc. Upon <Enter>,
    the script imports reference copies of GEODCAT (renamed to include the expedition) and the GEODP### schema.

On completion of the schema imports above, a variety of SQL statements are run to show what data was brought back and how many rows.

  • Over multiple tests, this script imported expedition 398 content in 30 to 40 minutes.
    Additional data review scripts take another 10-15 minutes to run.
  • The script requires no TRANSFER schema preparation: all the required tables are present, and the import methodology truncates the tables before importing new content.
  • The content is imported directly from schemas exported on the ship (LIMS, DESCINFO2, OPS, GEODCAT, GEODP###, etc.).
    There is no duplication of keys incurred by the ETL cycle now in use (a quick check follows this list).
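
Should key duplication ever be suspected, it is quick to check directly; a sketch:

Code Block
-- Any rows returned indicate duplicated result keys in the staged content.
select result_number, count(*)
from transfer.new_result
group by result_number
having count(*) > 1;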


Code Block
titleimport_transfer.ksh
collapsetrue
EOX_EXP_DIR=`sqlplus -s / as sysdba << EOT
	set pages 0 feedback off
	SELECT directory_path FROM dba_directories WHERE directory_name = 'EOX_EXP_DIR';
	exit
EOT`
# DIRECTORY_PATH
# --------------------------------------------------------------------------------
# /backup/eox
if [ ! -d $EOX_EXP_DIR ]
then
	echo $EOX_EXP_DIR not found...
	exit 1
fi

if [ $# -ne 1 ]
then
	echo Enter dump file name
	read DUMP_FILE_NAME
else
	DUMP_FILE_NAME=$1
fi
if [ ! -r $EOX_EXP_DIR/$DUMP_FILE_NAME ]
then
	echo $EOX_EXP_DIR/$DUMP_FILE_NAME not found
	exit 2
fi

DT=`date +"%m%d%Y"`
SCHEMA_NAME=transfer
LOG_FILE_NAME=${SCHEMA_NAME}_import_${DT}.log
# Import LIMS, DESCINFO2, and OPS tables into target TRANSFER schema.
# $s is assumed to hold the system password, set in the operator's environment.
impdp system/$s directory=eox_exp_dir DUMPFILE=$DUMP_FILE_NAME LOGFILE=${LOG_FILE_NAME} VERSION=LATEST PARFILE=eox_impdp_to_transfer.par

# Prompt until an expedition number (e.g. 398) is provided.
while [ "${EOX}1" == "1" ]
do
  echo Enter EOX#
  read EOX
done

sql / as sysdba <<+++
drop user geodcat${EOX} cascade;
drop user geodp${EOX} cascade;
exit;
+++

# Import GEODCAT to a GEODCAT### schema, where ### is the expedition being processed.
SCHEMA_NAME=geodcat${EOX}
LOG_FILE_NAME=${SCHEMA_NAME}_import_${DT}.log
impdp system/$s directory=eox_exp_dir DUMPFILE=$DUMP_FILE_NAME LOGFILE=${LOG_FILE_NAME} VERSION=LATEST CONTENT=ALL SCHEMAS=GEODCAT REMAP_SCHEMA=GEODCAT:${SCHEMA_NAME}

# Import GEODP### where ### is the expedition being processed.
SCHEMA_NAME=geodp${EOX}
LOG_FILE_NAME=${SCHEMA_NAME}_import_${DT}.log
impdp system/$s directory=eox_exp_dir DUMPFILE=$DUMP_FILE_NAME LOGFILE=${LOG_FILE_NAME} VERSION=LATEST CONTENT=ALL SCHEMAS=${SCHEMA_NAME}

# TBD--revise to log this output to a single file for reference after data modifications and copying.
sql / as sysdba <<+++
set newpage 0
set linesize 100
set pagesize 0
set timing on
set time on
set feedback on
set heading on
set trimspool on
set trimout on
set tab off

/* These indexes help speed up later bulk data manipulation and merge activities. */
drop index transfer.new_result_idx;
drop index transfer.new_result_result_number;
drop index transfer.new_test_idx;
drop index transfer.new_test_test_number;
drop index transfer.new_sample_sample_number;
create bitmap index transfer.new_result_idx on transfer.new_result ( sample_number, test_number, result_number ) tablespace transfer parallel;
create unique index transfer.new_result_result_number on transfer.new_result (result_number) tablespace transfer parallel;
create bitmap index transfer.new_test_idx on transfer.new_test ( sample_number, test_number ) tablespace transfer parallel;
create unique index transfer.new_test_test_number on transfer.new_test (test_number) tablespace transfer parallel;
create unique index transfer.new_sample_sample_number on transfer.new_sample (sample_number) tablespace transfer parallel;

/* Review the content loaded in TRANSFER in finer detail than the import activity. */
-- Review the sample, test, result content just loaded to TRANSFER
alter session set current_schema=transfer;
select 'REVIEW TRANSFER.NEW_SAMPLE' from dual;
select s.group_name, s.x_expedition, s.status, count(*) from transfer.new_sample s where s.x_expedition in ('${EOX}', 'QAQC', '999') group by s.group_name, s.x_expedition, s.status order by s.x_expedition, s.group_name, s.status;
select 'REVIEW TRANSFER.NEW_TEST' from dual;
select t.group_name, s.x_expedition, t.status, count(*) from transfer.new_sample s, transfer.new_test t where s.sample_number=t.sample_number and s.x_expedition in ('${EOX}', 'QAQC', '999') group by t.group_name, s.x_expedition, t.status order by s.x_expedition, t.group_name, t.status;
select 'REVIEW TRANSFER.NEW_RESULT' from dual;
select r.group_name, s.x_expedition, r.status, count(*) from transfer.new_sample s, transfer.new_result r where s.sample_number=r.sample_number and s.x_expedition in ('${EOX}', 'QAQC', '999') group by r.group_name, s.x_expedition, r.status order by s.x_expedition, r.group_name, r.status;

-- Compare to content in LIMS
alter session set current_schema=lims;
select 'REVIEW LIMS.SAMPLE' from dual;
select s.group_name, s.x_expedition, s.status, count(*) from lims.sample s where s.x_expedition in ('${EOX}', 'QAQC', '999') group by s.group_name, s.x_expedition, s.status order by s.x_expedition, s.group_name, s.status;
select 'REVIEW LIMS.TEST' from dual;
select t.group_name, s.x_expedition, t.status, count(*) from lims.sample s, lims.test t where s.sample_number=t.sample_number and s.x_expedition in ('${EOX}', 'QAQC', '999') group by t.group_name, s.x_expedition, t.status order by s.x_expedition, t.group_name, t.status;
select 'REVIEW LIMS.RESULT' from dual;
select r.group_name, s.x_expedition, r.status, count(*) from lims.sample s, lims.result r where s.sample_number=r.sample_number and s.x_expedition in ('${EOX}', 'QAQC', '999') group by r.group_name, s.x_expedition, r.status order by s.x_expedition, r.group_name, r.status;

-- Gather stats for the TRANSFER schema
exec dbms_stats.gather_schema_stats('TRANSFER', cascade=>TRUE);

-- Show info about tables imported from data pump export
select table_name, num_rows, avg_row_len, blocks, empty_blocks, owner from all_tables where owner='TRANSFER' order by owner, num_rows desc;

exit
+++
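
A typical invocation, assuming the dump file (name illustrative) has already been staged in the EOX_EXP_DIR path (/backup/eox):

Code Block
cd /backup/scripts/rci_work/exp
./import_transfer.ksh exp398_transfer.dmpdp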

...

For the new expedition data. - DBA


...

The credential is selected by the Director of the Staff Scientist group. Enquire with Leah Levay, or see the password safe for entries provided for FY2024. The account name follows this pattern: EXP###. Examples: EXP399, EXP400T, EXP398, EXP398P, etc.

The account is shared by the science party for the moratorium period (typically 18 months). It enables specific support for SDRM and access to in-moratorium data within the web-based LORE reporting system. The role created with the same name ensures that the science party is granted the exception to see its moratorium data.

The moratorium name is generally selected and specified by the operator of this process.

eox_gen_in_moratorium_script.ksh [moratorium-group]

This script accepts a moratorium name, e.g. SHACKLETON, SANTORINI, etc. Restrictions on that name are listed below.

It generates another script (below) containing the SQL that modifies the NEW_TEST, NEW_RESULT, and NEW_GEOD_DATA tables for moratorium concerns.

  • If a moratorium group is not provided, the script will prompt for one.
  • The script does not check these requirements (a validation sketch follows this list):
    • The GROUP_NAME field that this string populates is varchar2(10): 10 characters or less. It is conventional to use a memorable uppercase name. Often the EPMs will provide both a name and a credential.
    • The string must also be a valid name for a ROLE and a valid Unix file name; for example, it should not start with a number. By convention we stick with names using uppercase A to Z only, no symbols or special characters.
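
Those checks are straightforward to script if ever desired; a minimal sketch, not part of the current script:

Code Block
# Accept only 1-10 uppercase letters A-Z: satisfies the varchar2(10) limit
# and yields a name valid for a ROLE and for a Unix file.
if ! echo "$GRP_NM" | grep -Eq '^[A-Z]{1,10}$'
then
	echo "Invalid moratorium group name: $GRP_NM"
	exit 1
fi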


Code Block
titleeox_gen_in_moratorium_script.ksh
collapsetrue
DT=`date +"%Y%m%d_%H%M%S"`
LOG1=upd1_result_grp_name_${DT}.txt
LOG2=upd2_result_grp_name_${DT}.txt
echo "Started at `date`" 
if [ $# -ne 1 ]
then
	echo Enter GRP_NM
	read GRP_NM
else
	GRP_NM=$1
fi
# The generated updates filter on x_expedition='${EOX}'. EOX may be inherited
# from the environment; prompt for it when it is not set.
if [ "${EOX}1" == "1" ]
then
	echo Enter EOX#
	read EOX
fi
KSH_SCRIPT=in_moratorium_${GRP_NM}.ksh
echo "sql / as sysdba <<+++" >${KSH_SCRIPT}
echo "alter session set current_schema=transfer;" >>${KSH_SCRIPT}
echo "set echo on" >>${KSH_SCRIPT} 
echo "set pagesize 0" >>${KSH_SCRIPT} 
echo "set linesize 132" >>${KSH_SCRIPT} 
echo "set timing on" >>${KSH_SCRIPT} 
echo "set heading off" >>${KSH_SCRIPT} 
echo "spool in_moratorium_${GRP_NM}.log" >>${KSH_SCRIPT} 
echo "update transfer.new_result set group_name='${GRP_NM}' where group_name='DEFAULT' and analysis not in ('BHA','BIT','DEPLETED','DRILLING','LATLONG','ORIGDEPTH','ROUTING','SCALINPUTS') and sample_number in (select sample_number from transfer.new_sample where x_expedition='${EOX}');" >>${KSH_SCRIPT}
echo "commit;" >>${KSH_SCRIPT}
echo "update transfer.new_test set group_name='${GRP_NM}' where group_name='DEFAULT' and analysis not in ('BHA','BIT','DEPLETED','DRILLING','LATLONG','ORIGDEPTH','ROUTING','SCALINPUTS') and sample_number in (select sample_number from transfer.new_sample where x_expedition='${EOX}');" >>${KSH_SCRIPT}
echo "commit;" >>${KSH_SCRIPT}
echo "update transfer.new_result set group_name='DEFAULT' where rowid in (select t.rowid from transfer.new_sample s, transfer.new_result t where s.sample_number=t.sample_number and s.x_expedition in ('QAQC','999'));" >>${KSH_SCRIPT}
echo "commit;" >>${KSH_SCRIPT}
echo "update transfer.new_test set group_name='DEFAULT' where rowid in (select t.rowid from transfer.new_sample s, transfer.new_test t where s.sample_number=t.sample_number and s.x_expedition in ('QAQC','999'));" >>${KSH_SCRIPT}
echo "commit;" >>${KSH_SCRIPT}
echo "select 'REVIEW CHANGED ROWS' from dual;" >>${KSH_SCRIPT}
echo "select count(*), r.group_name, s.x_expedition from transfer.new_sample s, transfer.new_result r where s.sample_number=r.sample_number group by r.group_name, s.x_expedition;" >>${KSH_SCRIPT}
echo "select count(*), t.group_name, s.x_expedition from transfer.new_sample s, transfer.new_test t where s.sample_number=t.sample_number group by t.group_name, s.x_expedition;" >>${KSH_SCRIPT}
echo "exit;" >>${KSH_SCRIPT} 
echo "+++" >>${KSH_SCRIPT} 
chmod 777 ${KSH_SCRIPT}
echo "Completed at `date`"

...

  • Make a copy of it specifically for the expedition you are processing, e.g. crt_user_398.sql
  • Modify the user credential. As of expedition 401, the LONG_PASSWORD profile requirements apply. If the EPM has provided a credential set, use that; edit the file accordingly.

Run the script via SQL, providing an explicit path to the script.
Pass the expedition as a parameter to the script. The pattern to follow, based on expedition 398 activity:
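
A sketch of the likely invocation, assuming SQL*Plus and a copy staged in an expedition subdirectory (paths illustrative):

Code Block
sqlplus / as sysdba
SQL> @/backup/scripts/rci_work/exp/398/crt_user_398.sql 398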

...

This credential needs to be distributed to the following email distributions for filing: the EPM for the expedition being processed, it_ops@iodp.tamu.edu, programmers@iodp.tamu.edu, database@iodp.tamu.edu (Data Librarian), webmaster@iodp.tamu.edu


...

Code Block
titlecrt_user_exp.sql
collapsetrue
set echo on
-- Replace "secret-secure" with the credential provided by the EPM.
create user exp&1 identified by "secret-secure" default tablespace labware profile LONG_PASSWORD;
grant connect to exp&1;
-- Replace GROUPNAME with the moratorium group name, e.g. SANTORINI.
create role "GROUPNAME";
grant "GROUPNAME" to exp&1;
exit;


The above script must be run for the test database instance selected, and for the production instance to which the data is merged.

4. Copy the content

Into the publicly accessible LIMS. - DBA

The data is first imported into a test database instance in the transfer schema. After the data has been remediated to support moratorium concerns, a snapshot of this data may be exported for re-use. The export will be re-imported into the production database.

Code Block
sql my_dba@instance
dp export -directory dmpdir -dumpfile ###-transfer.dmpdp -schemas transfer -filesize 15gb

After completing the above export from the selected test instance, complete these commands at the production instance (LIMSHQ).

Code Block
sql my_dba@limshq
call transfer.util_cleanout_transfer_new_tables();
dp import -directory dmpdir -dumpfile ###-transfer.dmpdp -schemas transfer -nameexpr table="like 'NEW__%'" -includemetadata false -includerows true
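
After the import into production completes, a quick sanity check that the NEW_* tables were repopulated; a sketch:

Code Block
select count(*) from transfer.new_sample;
select count(*) from transfer.new_test;
select count(*) from transfer.new_result;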


chk_tablespace_free.ksh

Review the space available in LABWARE, LABWAREIDX.

Code Block
titlechk_tablespace_free.ksh
collapsetrue
DT=`date +"%Y%m%d_%H%M%S"`
OUT=chk_tablespace_free_${DT}.txt
TMP=chk_tablespace_free_${DT}.tmp
sqlplus -s "/ as sysdba"  2>/dev/null 1>$TMP <<+++
set newpage 0
set linesize 150
set pagesize 0
set echo off
set feedback off
set tab off

column TABLESPACE_NAME        FORMAT A15    HEADING 'Tablespace';
column file_name              FORMAT A45    HEADING 'File Name ';
column allocated_mb           FORMAT 999,999,999,999  HEADING 'Allocated|Mbytes';
column free_mb                FORMAT 999,999,999,999 HEADING 'Free|Mbytes';
column free_mb_per            FORMAT 999.99 HEADING 'Free %     ';

SELECT
   a.tablespace_name,
   a.file_name,
   (a.bytes/1024/1024) allocated_mb,      -- bytes to megabytes, matching the heading
   (b.free_bytes/1024/1024) free_mb,
   (b.free_bytes/a.bytes)*100 free_mb_per
FROM
FROM
   dba_data_files a,
   (SELECT file_id, SUM(bytes) free_bytes
    FROM dba_free_space b GROUP BY file_id) b
WHERE
   a.file_id=b.file_id
ORDER BY
   a.tablespace_name;
exit;
+++
grep -v "^$" $TMP >$OUT
rm $TMP
egrep "LABWARE |LABWAREIDX |UNDOTBS1|TRANSFER|TEMP" $OUT
#cat $OUT
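
If LABWARE or LABWAREIDX is short of free space, a datafile can be added before merging. A sketch only; the file path and sizes are illustrative:

Code Block
alter tablespace labware add datafile '/u02/oradata/limshq/labware09.dbf'
  size 10g autoextend on next 1g maxsize 32g;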


grp_cnt_transfer.ksh, grp_cnt_lims.ksh

Run both of these scripts to obtain row counts by moratorium group, expedition, and record status after the moratorium remediation is complete. This is a preparatory step before data is merged. The output of each script is a text file; review them and file the info in the expedition-specific directory.

Code Block
titlegrp_cnt_transfer.ksh
collapsetrue
DT=`date +"%Y%m%d_%H%M%S"`
OUT=grp_cnt_transfer_${DT}.txt
sql -s "/ as sysdba"  2>>$OUT 1>>$OUT <<+++
alter session set current_schema=transfer;
--connect transfer/HQtransfer:dbashare.hhhhh;
set newpage 0
set linesize 100
set pagesize 0
set echo on
set timing on
set time on
set feedback off
set heading off
set tab off
select 'REVIEW TRANSFER.TEST' from dual;
select t.group_name, s.x_expedition, t.status, count(*) from transfer.new_sample s, transfer.new_test t where s.sample_number=t.sample_number group by t.group_name, s.x_expedition, t.status order by s.x_expedition, t.group_name, t.status;
select 'REVIEW TRANSFER.RESULT' from dual;
select r.group_name, s.x_expedition, r.status, count(*) from transfer.new_sample s, transfer.new_result r where s.sample_number=r.sample_number group by r.group_name, s.x_expedition, r.status order by s.x_expedition, r.group_name, r.status;
exit;
+++


Code Block
titlegrp_cnt_lims.ksh
collapsetrue
DT=`date +"%Y%m%d_%H%M%S"`
OUT=grp_cnt_lims_${DT}.txt
# The queries below filter on x_expedition='${EOX}'. EOX must be set in the
# environment (e.g. export EOX=398); prompt for it when it is not.
if [ "${EOX}1" == "1" ]
then
  echo Enter EOX#
  read EOX
fi
sql -s "/ as sysdba"  2>>$OUT 1>>$OUT <<+++
alter session set current_schema=lims;
set newpage 0
set linesize 100
set pagesize 0
set timing on
set time on
set feedback off
set heading off
set tab off
select 'REVIEW LIMS.TEST' from dual;
select t.group_name, s.x_expedition, t.status, count(*) from lims.sample s, lims.test t where s.sample_number=t.sample_number and s.x_expedition in ('${EOX}', 'QAQC', '999') group by t.group_name, s.x_expedition, t.status order by s.x_expedition, t.group_name, t.status;
select 'REVIEW LIMS.RESULT' from dual;
select r.group_name, s.x_expedition, r.status, count(*) from lims.sample s, lims.result r where s.sample_number=r.sample_number and s.x_expedition in ('${EOX}', 'QAQC', '999') group by r.group_name, s.x_expedition, r.status order by s.x_expedition, r.group_name, r.status;
exit;
+++


call transfer.eox_shore_merge_transfer_new_to_lims()

This procedure merges TRANSFER schema NEW_* table content into LIMS, including

  • scales, depth, samples, tests, results; and
  • two (recently added) GEODESC tables.

Please review the procedure. To run it:

Code Block
sql / as sysdba
SQL> call transfer.eox_shore_merge_transfer_new_to_lims();


Noted

  • Without a unique result_number index, the bulk merge of the NEW_RESULT content took upwards of 6 hours.
  • With a unique index on NEW_RESULT.RESULT_NUMBER, the bulk merge was observed to complete in 4 hours (for ~26 million rows).
  • Support is no longer provided for merging content from the DESCINFO2 schema. No new data is being generated for that product.
  • Script operation does briefly disable triggers on the LIMS.SAMPLE and LIMS.TEST tables. For this reason, it is best run outside of business hours, when samples and tests are not being modified. The script is preferred over manual trigger management because it does not forget to re-enable the triggers.
  • The triggers being enabled and disabled are owned by the LIMS schema. To enable the TRANSFER schema to effect these changes, simple procedures owned by the LIMS schema manage the enable/disable support; the TRANSFER schema is granted permission to execute these LIMS-owned procedures (a sketch follows this list).
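
A sketch of that grant pattern; the procedure name is illustrative (the trigger is one of those the process manages):

Code Block
-- Owned by LIMS: toggles a LIMS trigger on behalf of callers lacking ALTER ANY TRIGGER.
create or replace procedure lims.eox_set_sample_triggers(p_action in varchar2) as
begin
  -- p_action is expected to be 'ENABLE' or 'DISABLE'.
  execute immediate 'alter trigger lims.trg_name_change_ai ' || p_action;
end;
/
grant execute on lims.eox_set_sample_triggers to transfer;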

grp_cnt_lims.ksh

Re-run this script for review purposes. Data formerly only in the TRANSFER schema should now be present in the same numbers in the LIMS schema.

5. Miscellaneous: Update the Database Overview, Ensure Moratorium User privileges are set

Configuration updates provide a summary of the expedition data for participants and future users. - DBA

dbo_compute_v5.sql <exp>

Deprecated. While still functional, prefer the use of the script below. - DBA

call transfer.dbo_compute_exp('###')

Run this script to compute new expedition summaries for display in the database OVERVIEW application. Substitute the desired expedition number.

Code Block
sql / as sysdba
SQL>
set serveroutput on
call transfer.dbo_compute_exp('398');


Upon completion, review the database overview application from a browser: https://web.iodp.tamu.edu/OVERVIEW/
A new column of summary statistics should be visible for the new expedition.


DEV - The moratorium user must be configured within the https://web.iodp.tamu.edu/Auther/ application.

  • The moratorium user must be added to the account list (match the pattern of other moratorium accounts).
  • The moratorium user requires the roles: MORATORIUM USER and ALL_SPLAT.

DEV - When moratorium and all major sampling parties are complete for this data, the Auther configurations may be removed. This tends to lag behind moving data out of moratorium.

6. Test

Spot-check and test availability and completeness.


Ensure that GEODP### data is available. - DEV

Oracle grants and Auther configurations are not managed by this EOX process; check both to ensure full access to this legacy data. In theory (not yet practiced), the GEODESC project schemas may be expunged once post-cruise work and the moratorium period are complete.


  • Confirm that moratorium access is functioning by running LORE reports (a database-side sketch follows this list). - TAS, DEV
    • Can the EXP### moratorium account log in to https://web.iodp.tamu.edu/LORE?
    • Can the EXP### moratorium account see LIMS data for the expedition (images, etc.)? Spot-check.
    • Can the EXP### account actually download some of the file content?
  • Confirm that moratorium access and account permissions are functioning for GEODESC data products. - TAS, DEV
    • Can the EXP### moratorium account log in to https://web.iodp.tamu.edu/DataAccess?
    • Can the EXP### moratorium account see the GEODESC Final Product files via DataAccess?
    • Can EXP### actually download some of those files?
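
A database-side check of the account's role grants can complement the web checks; a sketch, with an illustrative account name:

Code Block
-- Expect CONNECT plus the moratorium role created alongside the account.
select grantee, granted_role
from dba_role_privs
where grantee = 'EXP398';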

7. Notify interested parties

...

  • EPM
  • DBA - RCI
  • Data Librarian
  • Web Master
  • Programmers
  • IT OPS

...




Shore EOX process applied by the DBA prior to expedition 398.
Re-worked below into the process above; polishing and efficiency improvements welcome.
The scripts below are

...

equivalent to steps 2 through 5.

EOX load and merge - rci_work/exp

...