

BOX - beginning of expedition
EOX - end of expedition

About EOX

The outgoing programmer(s) are responsible for all EOX activities.

The first goal is to ensure a good copy of the data gets back to HQ.
An equally important goal is to leave laboratory and data management systems in an operational state for hand-off to your counterparts.

Please read this entire document and the EOX ship - detail page (rationale and FAQ). Expect this exercise to take 3 to 4 hours. (I keep saying this only takes 15 minutes, but that really is just the individual database dump processes.) While the database dump itself can complete in about 20 minutes for 61 GiB, operational decisions and technical staff concerns with their portion of the data management will take time to address.

The process has been much simplified. Do not hurry. If you are "fried"--work together. 

0. Readiness assessment.

...

  • Are we done collecting data for this laboratory area?
  • Has all the data content that you want to go home been:
      uploaded to the database,
      copied to data1, OR
      copied to the EXPEDITION PROCEEDINGS volume?
  • Where is the data? A data disposition form is available in s:\data1.
    It is helpful to the data librarian to have that filled out.

If not, obtain estimates of when the data collection will be done and schedule accordingly.

1. Notify everyone. Database snapshot in progress.

  • Notify jr_developer and the shipboard technical staff that the EOX database backup is in progress.
  • The database is available for reports, but content cannot be changed while this activity is in progress.

2. Make the database read only.

...

. oraenv
ORACLE_SID = [LIMSJR] ? LIMSJR
sqlplus your-name_dba
sql>
alter tablespace labware read only;

At the SQL> prompt, run the last command.
Stop. Leave this window open for use below.

Technical note. This command prevents write access to any table content that resides in the storage facility under the "labware" table space. In our database implementation, this happens to be all tables owned by the LIMS schema. Use of Drill Report ("ops" schema) is not affected. Aspects of DescLogik software configuration are also still open to maintenance, though use of DescLogik itself is locked out by this change.
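If you want to confirm the change took effect, a quick status check from the same SQL*Plus session is sketched below (this assumes your DBA account can read the DBA_TABLESPACES dictionary view); expect STATUS to read READ ONLY here and ONLINE again after step 4.

-- Sanity check: the LABWARE tablespace should report READ ONLY at this point
select tablespace_name, status from dba_tablespaces where tablespace_name = 'LABWARE';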

3. Full snapshot as oracle@k1.

...

NOTE: While performing the EOX for expedition 384, it was discovered that the path represented by "data_pump_dir" did not exist.  You can discover what the data_pump_dir is supposed to be using this query (logged in as your DBA account):

select * from dba_directories;

The results include many directories you do not need, but DATA_PUMP_DIR should be among them. Currently it is /backup/LIMSJR/dpdump. Note that "backup" is singular (it used to be plural) and there is apparently some desire to change this, so keep that in mind. If you find that the folder does not exist, the command in this step will fail and you may need an MCS or a DBA on shore to help create it.
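To narrow the output to just the Data Pump entry, the same DBA_DIRECTORIES view can be filtered, for example:

-- Show only the Data Pump directory object and the path it points to
select directory_name, directory_path from dba_directories where directory_name = 'DATA_PUMP_DIR';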

You can also try creating the directory object yourself from a suitably privileged SQL*Plus session:

create directory data_pump_dir as '/backup/export';
grant read, write on directory data_pump_dir to transfer, system;

10/5/2021: The folder did not exist, as above, but the MCS suggested I try creating it myself (in PuTTY, logged in as oracle) using mkdir. That initially seemed to work, but it broke some kind of symbolic or virtual link, which caused the file to be written to the wrong physical location (with reduced disk space), so this is not recommended.
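Before running the export it can be worth confirming that the directory really exists and seeing which filesystem (and how much free space) backs it. These are ordinary Linux commands run as oracle on k1; the path is the one reported by DBA_DIRECTORIES.

# Confirm the dump directory exists and check the free space on the filesystem behind it
ls -ld /backup/LIMSJR/dpdump
df -h /backup/LIMSJR/dpdump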

. oraenv
ORACLE_SID = [LIMSJR] ? LIMSJR
# Note: the folder you cd to here should be the path behind the "data_pump_dir" directory object referenced in the next command. This comes from DBA_DIRECTORIES (see note above).
cd /backup/LIMSJR/dpdump
time expdp YOUR_DBA_ACCOUNT directory=data_pump_dir full=y logfile=limsjr_full-YOUR_EXP_NUMBER-export.log dumpfile=limsjr_full-YOUR_EXP_NUMBER-YOUR_INITIALS.dmpdp
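When the export finishes, a quick look at the log and the dump file (the placeholder names match the command above) can confirm it completed cleanly before you re-open the database in step 4.

# The end of the log should say whether the Data Pump job completed successfully or with errors
tail -5 limsjr_full-YOUR_EXP_NUMBER-export.log
# Confirm the dump file exists and note its size for the notification email
ls -lh limsjr_full-YOUR_EXP_NUMBER-YOUR_INITIALS.dmpdp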

...

4. Restore database read/write capability.

From the prompt [oracle@k1 ~]$

run these commands, supplying the appropriate credential

sqlplus YOUR_DBA_ACCOUNT
sql>
alter tablespace labware read write;
exit

Stop. Leave this window open for use below.
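To confirm the tablespace is writable again, the status query from step 2 can be re-run (same assumption about access to DBA_TABLESPACES); STATUS should now read ONLINE.

-- The LABWARE tablespace should report ONLINE again after this step
select tablespace_name, status from dba_tablespaces where tablespace_name = 'LABWARE';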

5. Compress the exports for transport. Cleanup. 

From the prompt [oracle@k1 ~]$
run these commands

cd /backup/LIMSJR/dpdump
time bzip2 --best limsjr*YOUR_EXP_NUMBER*.dmpdp
# once the above completes, then run this
ls -l
md5sum *.bz2

Cut and paste the digest keys and the file listing into a notification email. This enables remote users to verify the integrity of the files.

Please also log these in a text file under \AD\support\'expedition-named subdirectory'\; see other expeditions for examples.
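For reference, recipients can check the digests with md5sum's -c mode. A minimal sketch, assuming the digest lines from the email are saved to a file next to the .bz2 files (the filename here is just a placeholder):

# Verify the downloaded archives against the digests sent in the notification email
md5sum -c limsjr_eox_digests.md5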

Statistics. Exp 367: 22 min to compress both files (~5 GiB to < 1 GiB). Exp 345: 22 min 53 sec for the bzip step. Exp 351: full export, 137 min 46 sec for ~8 GiB of content. Exp 360: 67 min 27 sec for a ~5 GiB file. Expect 45-60 min for high-recovery expeditions.

Exp 396: we had trouble initially because we ran out of disk space.  This was apparently due to my creating the /backup/LIMSJR/dpdump folder under the Oracle account.  This broke some kind of link-up that gives the backup folder lots of additional drive space.  Don't do that.

...

6. Notify everyone. Database snapshot process is complete.

 Notify jr_developer and expedition technical staff that the process is done.

Speak with the MCS. They will pick up the data from here.

oemjr:/backup/export
    limsjr*.dmpdp.bz2
    limsjr*.log

...

Inquire with the MCS. Ensure the (above) database content and the (below) ASMAN content are being taken to media for transport to HQ.

...

Checklist / Overview

  1. Deliver the end-of-expedition report (the one you have been writing throughout the expedition) to the LO. Post a copy to the development team via Slack. See examples from prior technical reports for formats and variations.
  2. Establish with MCS, LOs, CoChiefs when the final database snapshot will be taken for EOX.
  3. Spot check data copy efforts. What data didn’t get into the database? To data1?
  4. Honor the moratorium. Once content is uploaded to LIMS and raw files have gone to data1, expedition-specific data may be cleaned off of instrument hosts and workstations. It is good practice to confer with the technical staff who manage this for their labs; there is variance between crews as to how these procedures are carried out.
  5. Conduct end of expedition procedures for backing up the database.
  6. Provide courier services if called upon to do so.
  7. Confirm with the MCS what is to be included in the backup going to shore and that it does cover all the information you are aware of that should go to shore.
  8. Ensure all your code changes are checked in.
  9. Clean the development office. Assist in the general cleaning efforts. 
  10. Be ready to move out. Your replacement will be here soon.
  11. Provide assistance and information to oncoming developers.
    Oncoming personnel expect functional laboratory and compute systems for successful execution of the new expedition.
    Assist with those goals.

Out of Scope

These checklist items are included to raise situational awareness, but they are not the responsibility of the offgoing developers. They do, however, depend on what you do: actors other than the developers are responsible for these equally critical activities. The sooner they are done, the sooner the content is accessible. Under routine circumstances we promote a two-week turn-around for making shipboard-gathered content available in shore systems.

  1. Retrieve data content from tape. Distribute it to Publications, Data Librarian, DBA, Operations, and public access storage locations. - Systems
  2. Restore (routinely selected) database content from the full backup to the shore production transfer schema. - DBA
  3. Establish moratorium credentials and controls for the expedition data. - DBA
  4. Copy the content into the publicly accessible LIMS. - DBA
  5. Update the Database Overview configuration to summarize the added content. - DBA