...
This credential needs to be distributed to the following email addresses for filing: the EPM for the expedition being processed, it_ops@iodp.tamu.edu, programmers@iodp.tamu.edu, database@iodp.tamu.edu (Data Librarian), and webmaster@iodp.tamu.edu.
```sql
set echo on
create user exp&1 identified by "secret-secure" default tablespace labware profile COMPLEX_PASSWORD;
grant connect to exp&1;
create role "GROUPNAME";
grant "GROUPNAME" to exp&1;
exit;
```
4. Copy the content into the publicly accessible LIMS. - DBA
...
Review the space available in the LABWARE and LABWAREIDX tablespaces.
...
```bash
DT=`date +"%Y%m%d_%H%M%S"`
OUT=chk_tablespace_free_${DT}.txt
TMP=chk_tablespace_free_${DT}.tmp
sqlplus -s "/ as sysdba" 2>/dev/null 1>$TMP <<+++
set newpage 0
set linesize 150
set pagesize 0
set echo off
set feedback off
set tab off
column tablespace_name FORMAT A15 HEADING 'Tablespace'
column file_name FORMAT A45 HEADING 'File Name'
column allocated_mb FORMAT 999,999,999,999 HEADING 'Allocated|Mbytes'
column free_mb FORMAT 999,999,999,999 HEADING 'Free|Mbytes'
column free_mb_per FORMAT 999.99 HEADING 'Free %'
SELECT
  a.tablespace_name,
  a.file_name,
  (a.bytes/1024/1024) allocated_mb,
  (b.free_bytes/1024/1024) free_mb,
  (b.free_bytes/a.bytes)*100 free_mb_per
FROM
  dba_data_files a,
  (SELECT file_id, SUM(bytes) free_bytes
     FROM dba_free_space GROUP BY file_id) b
WHERE
  a.file_id = b.file_id
ORDER BY
  a.tablespace_name;
exit;
+++
grep -v "^$" $TMP >$OUT
rm $TMP
egrep "LABWARE |LABWAREIDX |UNDOTBS1|TRANSFER|TEMP" $OUT
#cat $OUT
```
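If the review shows LABWARE or LABWAREIDX running low, the tablespace can be grown before the merge. This is a sketch only; the datafile path and sizes are placeholders that must match the instance's storage layout:

```sql
-- Placeholder path and sizes: adjust to the instance before running.
alter tablespace LABWARE
  add datafile '/path/to/oradata/labware_extra01.dbf'
  size 8g autoextend on next 1g maxsize 32g;
```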
grp_cnt_transfer.ksh, grp_cnt_lims.ksh
Review these scripts, then run both to gather baseline row counts by moratorium group, expedition, and record status. This is a preparatory step before the data is merged.
```bash
DT=`date +"%Y%m%d_%H%M%S"`
OUT=grp_cnt_transfer_${DT}.txt
sql -s "/ as sysdba" 2>>$OUT 1>>$OUT <<+++
alter session set current_schema=transfer;
--connect transfer/HQtransfer:dbashare.hhhhh;
set newpage 0
set linesize 100
set pagesize 0
set echo on
set timing on
set time on
set feedback off
set heading off
set tab off
select 'REVIEW TRANSFER.TEST' from dual;
select t.group_name, s.x_expedition, t.status, count(*)
  from transfer.new_sample s, transfer.new_test t
 where s.sample_number=t.sample_number
 group by t.group_name, s.x_expedition, t.status
 order by s.x_expedition, t.group_name, t.status;
select 'REVIEW TRANSFER.RESULT' from dual;
select r.group_name, s.x_expedition, r.status, count(*)
  from transfer.new_sample s, transfer.new_result r
 where s.sample_number=r.sample_number
 group by r.group_name, s.x_expedition, r.status
 order by s.x_expedition, r.group_name, r.status;
exit;
+++
```
```bash
DT=`date +"%Y%m%d_%H%M%S"`
OUT=grp_cnt_lims_${DT}.txt
# EOX (the expedition number) is expected to be set before this here-document is expanded.
sql -s "/ as sysdba" 2>>$OUT 1>>$OUT <<+++
alter session set current_schema=lims;
set newpage 0
set linesize 100
set pagesize 0
set timing on
set time on
set feedback off
set heading off
set tab off
select 'REVIEW LIMS.TEST' from dual;
select t.group_name, s.x_expedition, t.status, count(*)
  from lims.sample s, lims.test t
 where s.sample_number=t.sample_number
   and s.x_expedition in ('${EOX}', 'QAQC', '999')
 group by t.group_name, s.x_expedition, t.status
 order by s.x_expedition, t.group_name, t.status;
select 'REVIEW LIMS.RESULT' from dual;
select r.group_name, s.x_expedition, r.status, count(*)
  from lims.sample s, lims.result r
 where s.sample_number=r.sample_number
   and s.x_expedition in ('${EOX}', 'QAQC', '999')
 group by r.group_name, s.x_expedition, r.status
 order by s.x_expedition, r.group_name, r.status;
exit;
+++
```
data_pre_03_dis_trig.ksh
Run this script to generate a pair of SQL scripts that disable and (later) re-enable the relevant triggers.
Future: retire this script by using "execute immediate" within the TRANSFER.EOX_SHORE_MERGE_TRANSFER_NEW_TO_LIMS() script to disable and re-enable the relevant triggers automatically--those on LIMS.SAMPLE, LIMS.TEST, and LIMS.RESULT (a hedged sketch follows the generator script below).
The LIMS.GEOD_DATA trigger does not need to be disabled: it is a primary-key trigger only, and data being merged should never arrive with an empty primary key.
```bash
DT=`date +"%Y%m%d_%H%M%S"`
sqlplus -s "/ as sysdba" 2>/dev/null 1>/dev/null <<+++
set newpage 0
set linesize 100
set pagesize 0
set echo off
set feedback off
set heading off
set tab off
spool dis_trig_all.sql
set pagesize 50000
set linesize 160
set trimspool on
set trimout on
select 'alter trigger '||owner||'.'||trigger_name||' disable;' from all_triggers
 where owner in ('LIMS') and table_name in ('SAMPLE')
 order by owner;
spool off
exit;
+++
sed 's/ disable;/ enable;/g' < dis_trig_all.sql >enb_trig_all.sql
mv dis_trig_all.sql dis_trig_all_${DT}.sql
mv enb_trig_all.sql enb_trig_all_${DT}.sql
```
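With respect to the Future note above, a hedged sketch of the in-procedure approach; it mirrors the generator's ALL_TRIGGERS query and is not the current implementation:

```sql
-- Hedged sketch: disable the merge-relevant LIMS triggers from inside PL/SQL.
-- The mirror-image loop with 'enable' would run after the merge completes.
begin
  for t in (select owner, trigger_name
              from all_triggers
             where owner = 'LIMS'
               and table_name in ('SAMPLE', 'TEST', 'RESULT')) loop
    execute immediate 'alter trigger ' || t.owner || '.' || t.trigger_name || ' disable';
  end loop;
end;
/
```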
dis_trig_all_<yyyymmdd_hhmmss>.sql
Run this generated script to disable the listed LIMS triggers.
- The data loads faster.
- The data is not modified (again) on the way in.
Only the triggers shown should be disabled. All others are primary-key generators, which have no effect on the data merge.
```sql
alter trigger LIMS.TRG_NAME_CHANGE_AI disable;
alter trigger LIMS.TRG_NAME_CHANGE_AIFER disable;
alter trigger LIMS.TRG_NAME_CHANGE_BEFORE disable;
alter trigger LIMS.TRG_NO_SAMPLE_NAME_BEFORE disable;
alter trigger LIMS.MAINTAIN_ORIG_LEN disable;
```
data_pre_05_without_DI_v6.ksh
This script is deprecated and has not been updated for current use; it is superseded by the procedure call below.
call transfer.eox_shore_merge_transfer_new_to_lims()
This procedure merges TRANSFER schema NEW_* table content into LIMS, including
- scales, depth, samples, tests, and results; and
- two (recently added) GEODESC tables.
Please review the procedure before running it (one way to pull its source is shown after the run example below). To run it:
```
sql / as sysdba

SQL> call transfer.eox_shore_merge_transfer_new_to_lims();
```
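One way to pull the procedure source for that review is the standard DBA_SOURCE dictionary view:

```sql
-- Review the procedure body before executing it.
select text
  from dba_source
 where owner = 'TRANSFER'
   and name  = 'EOX_SHORE_MERGE_TRANSFER_NEW_TO_LIMS'
 order by line;
```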
Notes:
- Without a unique index on RESULT_NUMBER, the bulk merge of the NEW_RESULT content took upwards of 6 hours.
- With a unique index on NEW_RESULT.RESULT_NUMBER (see the sketch after this list), the bulk merge has completed in as little as 30 minutes (observed once), but more typically takes about 4 hours for roughly 26 million rows.
- Although the data is available, the script does not merge DESCINFO2 data; that system is no longer used, so it is omitted for now.
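For reference, a hedged sketch of such a unique index; the index name is illustrative, and an equivalent index or constraint may already exist:

```sql
-- Illustrative name only: verify no equivalent index or constraint exists first.
create unique index transfer.new_result_result_number_ux
  on transfer.new_result (result_number);
```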
Future: The scripts are constructed around MERGE. Test in LIMS19 or RTLIMS whether the direct-path insert technique below is more performant for the bulk data moves. Since RESULT is the largest table and carries the most indexes, optimizing this table's activity offers the largest improvement in execution time.
```sql
-- Candidate technique to test: direct-path insert of only the missing RESULT rows.
alter table lims.result nologging;

insert /*+ append */ into lims.result
select *
  from transfer.new_result
 where result_number in (select result_number from transfer.new_result
                         minus
                         select result_number from lims.result);

commit;

alter table lims.result logging;
```
grp_cnt_lims.ksh
Re-run this script for review purposes. Data formerly only in the TRANSFER schema should now be present in the same numbers in the LIMS schema.
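As an additional hedged cross-check using the same MINUS pattern the notes above rely on, a zero count below means every NEW_RESULT row is now present in LIMS:

```sql
-- Zero means no NEW_RESULT rows are missing from LIMS.RESULT.
select count(*) as missing_results
  from (select result_number from transfer.new_result
        minus
        select result_number from lims.result);
```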
enb_trig_all_<yyyymmdd_hhmmss>.sql
If the row counts match, re-enable operational triggers.
Reiteration: no need to disable/enable the LIMS.GEOD_DATA primary key trigger. Content being merged already has keys.
```sql
alter trigger LIMS.TRG_NAME_CHANGE_AI enable;
alter trigger LIMS.TRG_NAME_CHANGE_AIFER enable;
alter trigger LIMS.TRG_NAME_CHANGE_BEFORE enable;
alter trigger LIMS.TRG_NO_SAMPLE_NAME_BEFORE enable;
alter trigger LIMS.MAINTAIN_ORIG_LEN enable;
```
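To confirm the re-enables took effect, the trigger status can be checked in the standard ALL_TRIGGERS dictionary view:

```sql
-- All listed triggers should report STATUS = 'ENABLED'.
select trigger_name, status
  from all_triggers
 where owner = 'LIMS'
   and trigger_name in ('TRG_NAME_CHANGE_AI', 'TRG_NAME_CHANGE_AIFER',
                        'TRG_NAME_CHANGE_BEFORE', 'TRG_NO_SAMPLE_NAME_BEFORE',
                        'MAINTAIN_ORIG_LEN');
```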
5. Update the Database Overview
...