...
Formerly these instructions were written so as to generate log files that you and your colleagues could inspect. We no longer do that in this process; since Feb 2023 (Exp 398), all TRANSFER procedures have been revised to record activity to TRANSFER.LOG, which provides some bread-crumbs for reviewing past activity. Several LIMS procedures and functions have also been revised to record activity to this same log.
Q: A database process is taking too long; I want another way to check what's happening.
A:
...
It is not always effective due to database security and the ACID transaction model (atomicity, consistency, isolation, durability). The degree of parallelism also changes how Oracle manages this information.
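The same caveats apply, but as one illustrative approach (a sketch only, not necessarily the method referenced above), v$session_longops reports progress for many long-running operations such as full scans and large sorts:
Code Block
    -- Sketch: progress of long-running operations, as reported by Oracle.
    -- Requires select privilege on the v$ views; not every statement appears here.
    select sid, serial#, opname, target, sofar, totalwork,
           round(sofar/totalwork*100, 1) as pct_done, time_remaining
      from v$session_longops
     where totalwork > 0
       and sofar < totalwork;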
Q: My command-line login is failing. What are some less usual causes?
A:
For command-line tools--like sql, sqlplus, rman:
If your password has spaces or symbol characters, you must quote the password, e.g.
Code Block
    > sql x@yz
    Password: "@ s3cr3t fr45e"
Discussion
Database Schemas
The database consists of distinct schemas. Each schema serves a different function:
- DESCINFO2. Records configuration of parameters, templates, value lists, and users for the descriptive information eco-system.
- LIMS. The sample catalog and repository of experimental results against those samples. Includes a catalog of files (ASMAN) associated with those samples.
- OPS. Repository of drilling operations information. Bathymetry and navigation content was removed from here as of Subic Bay Tie-up 353P Oct 23, 2014. The content is time and activity based, not sample based. Other workflows manage the bulk of this data outside Oracle. What is recorded here is a small subset of the total operational content we keep.
- TRANSFER. Contains scripts and tables for managing data transfer and cleanup processes. Some run here for BOX.
- GEODCAT. Carries taxonomy, templates, observable definitions globally available to GEODESC operators.
- GEODP###. Carries taxonomy, templates, observable definitions specific to the expedition/project.
Cleaning Process Architecture
End-of-expedition processing leaves data in LIMS and TRANSFER that may be removed. Beginning-of-expedition processing removes that content. The procedures that conduct the removal are owned by the TRANSFER schema.
An Annoyance: Monitoring for Archive Log Space Filling
Oracle treats the changes resulting from end-of-expedition processing like any other transactions. Due to the size of these transactions, it is likely that the 100 GiB reserved for archive logging will be consumed. When this occurs, the database blocks all activity until archive log space is freed; all attempts at new transactions will receive the message ORA-00257: archiver stuck.
To increase the amount of archive log space available, via a DBA account: alter system set db_recovery_file_dest_size=1000G scope=memory;
This method of releasing the archiver assumes there is actually some spare space on the system.
The ODA systems carry plenty of disk. The above will get Oracle back up and running with enough breathing room for you to connect and do the following. This query is useful to review active SQL for a list of users; modify the username list to see active SQL for other accounts. Given the SID and SERIAL# from the query below, OEM (Oracle Enterprise Manager) and other tools can be used to monitor and manage long-running SQL, or even SQL blocked by row-locking mechanisms.
Code Block
    select u.sid, u.serial#, s.rows_processed, s.disk_reads, s.buffer_gets,
           s.last_active_time, s.physical_read_bytes, s.physical_write_bytes,
           s.sql_id, u.username, substr(s.sql_text,1,50)
      from v$sql s, v$session u
     where s.sql_id = u.sql_id
       and u.username in ('MY_DBA','GUEST','JRS_xxx');
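As one illustration of "managing" a runaway statement with the SID and SERIAL# obtained above (an assumption on our part; OEM offers the equivalent action graphically), a DBA can terminate the offending session:
Code Block
    -- Illustrative only: terminating a session is disruptive; review in OEM first.
    -- Substitute the SID and SERIAL# returned by the query above.
    alter system kill session '123,45678' immediate;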
Because the alter system command above uses scope=memory, the setting will revert when the database is restarted. It is good practice to set it back to 100G when done; we run a long time before restarting the database.
Disk space for archive logging is freed on a weekly basis (Sun) by backing up Oracle transactions to tape, then removing them from disk. If this isn't soon enough, contact MCS and your DBA.
Monitoring Archive Log Generation
...
Run this SQL as the DBA to verify info about archive logging status.
|
Expect output like this
|
or this
|
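As a hedged sketch (the views used here are the standard v$recovery_file_dest and v$recovery_area_usage; the actual query may differ), recovery-area consumption by archive logs can be checked with:
Code Block
    -- Sketch: how full is the recovery area that holds the archive logs?
    select name,
           round(space_limit/1024/1024/1024) as limit_gib,
           round(space_used /1024/1024/1024) as used_gib,
           number_of_files
      from v$recovery_file_dest;

    -- Breakdown by file type (ARCHIVED LOG, BACKUP PIECE, ...).
    select file_type, percent_space_used, percent_space_reclaimable, number_of_files
      from v$recovery_area_usage;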
How to Invoke PL/SQL Scripts
All scripts are run from procedures owned by the TRANSFER account. The typical data cleanup and preening session is shown above and repeated here
|
Most scripts take a single string parameter for the expedition. The ASMAN scripts take a single string parameter indicating the catalog to be deleted--usually formed as 'EXP' + the expedition number, e.g. 'EXP339'. The DESCINFO and DROPS catalogs are kept per user request.
Run the scripts from SQL, SQL*Plus, or from within a SQLDeveloper worksheet. SQL and SQL*Plus have the advantage of being light-weight and provide facilities to spool all transactions that transpire for post-cleanup review and comparison. SQLDeveloper has the advantage of readily reviewing and verifying the scripts being invoked.
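For illustration only (the procedure name is taken from the sections below, '398' is a placeholder expedition, and the TRANSFER schema prefix is our assumption), a typical invocation from SQL*Plus looks like:
Code Block
    -- Sketch of the invocation pattern; verify names and signatures in TRANSFER.
    exec transfer.box_lims_expedition_delete('398')
    commit;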
The scripts are designed to provide feedback; however, the Oracle mechanisms are not real-time. To turn on useful feedback mechanisms, apply these commands from the SQL*Plus command line (mileage will vary if tried in the context of SQLDeveloper):
|
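The exact commands were not captured above; assuming the scripts report progress through dbms_output, a typical setup is:
Code Block
    -- Sketch: enable procedure output and timing in SQL*Plus.
    set serveroutput on size unlimited
    set timing on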
To capture the content of a SQL*Plus session, do something like this:
|
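A minimal sketch of session capture with spool (the file name is arbitrary):
Code Block
    -- Sketch: capture everything the session prints to a file for later review.
    spool eox_cleanup_398.log
    -- ... run the cleanup procedures and count checks here ...
    spool off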
The duration of the session is very dependent on the quantity of data collected. Be patient. It takes time to delete 14-20 million rows. Currently this takes about 30 minutes; not too long ago it took between 9 and 20 hours.
Special behavior for scripts is noted below: order of execution is important in some cases; some processes are fast, some are slow; some should be double-checked.
Data cleanup is routine, but will have variances. The additional scenarios provided below attempt to clarify the variances we encounter. Apply the scenario or combination of scenarios that best fits current expedition requirements.
You will see a number of procedures in TRANSFER that are not documented here. These processes have been in disuse for so long that additional thought, testing, and documentation should be applied to them when they come up again. Specific questions? Ask around. Read the code.
...
Irrevocably and quickly drop all content in the TRANSFER schema tables with "NEW" in their names. No parameter required.
|
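The actual step is the TRANSFER-owned procedure referenced above; purely as an illustration of what "irrevocably and quickly" means here (table name is a placeholder), truncation is DDL and cannot be rolled back:
Code Block
    -- Illustration only, not the production procedure: list candidate tables,
    -- then truncate each one returned.
    select table_name
      from all_tables
     where owner = 'TRANSFER'
       and table_name like '%NEW%';

    truncate table transfer.some_new_table;  -- placeholder; repeat per table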
Clean out the ASMAN file catalogs (this will also remove Autosaves from DESCINFO2)
This script removes the database records only. It is a separate step to clean out the file system for the catalog. You are responsible for the database cleanup. The MCS take care of the file system. Talk with each other.
|
Repeat box_asman_catalog_delete for as many catalogs as you desire to remove. Confirm removal by re-running the survey script. Commit when done. The DESCINFO and DROPS catalogs are to be preserved--specific request by OPS and DESC technicians.
|
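A sketch of the repeat-and-commit pattern described above (the TRANSFER schema prefix is our assumption; catalog names follow the 'EXP' + expedition-number convention):
Code Block
    -- Sketch: remove a catalog, repeat for each catalog to be removed, then commit.
    exec transfer.box_asman_catalog_delete('EXP398')
    commit;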
Clean out LIMS
This is a longer process, but no more than 30 minutes with current database size and hardware. 6-20 hours is thankfully a historical artifact. Progress of the script can still be checked by counting samples, tests, and particularly results.
|
Repeat box_lims_expedition_delete for as many expeditions as required. The smaller the quantity of data, the faster it will go.
See the detailed process for setting expedition-specific variables.
The various select count(*) variants are recommended as routine content checks before and after the removal step.
|
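A sketch of the before-and-after content checks (the table names lims.sample, lims.test, and lims.result are assumptions; adjust to the actual LIMS schema):
Code Block
    -- Sketch: coarse row counts before and after the delete, for comparison.
    select count(*) from lims.sample;
    select count(*) from lims.test;
    select count(*) from lims.result;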
This procedure is slow. Data we want to keep is intermixed with data to be deleted, so it is removed rows at a time. The database is a 24/7 tool--there is always some activity against it.
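As an illustration of the rows-at-a-time approach (a generic pattern, not the actual TRANSFER procedure; table and column names are placeholders), deleting in small committed batches keeps undo small and lets other 24/7 activity proceed:
Code Block
    -- Generic batched-delete sketch; not the production code.
    begin
      loop
        delete from lims.some_table            -- placeholder table
         where some_filter_column = '398'      -- placeholder predicate
           and rownum <= 10000;                -- small batch
        exit when sql%rowcount = 0;
        commit;                                -- release undo/locks between batches
      end loop;
      commit;
    end;
    /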
...
See the MAD container load process, and additional related documentation on the shore wiki and in Subversion. The names and masses of containers are delivered in a spreadsheet (or text file).
...
These are encoded in the TRANSFER script UTIL_LIMS_TEST_RESULT_DELETE.
|