- add_odp_msl(db_cx, odpMSLDataFile)
- Use the MSL susceptibility values returned by the ODP Janus web database.
- buildPseudoThellierTable(db_cx)
- Go through the ARM data and process all of the pseudo-Thellier data.
See Lisa's pseudot program or pmag-py/pseudot.py for more info.
Parameters:
db_cx --- sqlite database connection
Returns:
nothing
Side Effects:
Adds a new table 'pseudot' to the database.
Right now, this only reads from the mag table.
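The pseudo-Thellier step regresses NRM lost during AF demagnetization against ARM gained at the same AF levels; the slope is a relative paleointensity estimate. A minimal sketch of that slope calculation (pseudo_thellier_slope is a hypothetical helper, not a function in this module; see Lisa's pseudot program for the real procedure):

```python
# Hypothetical sketch: least-squares slope of NRM lost vs. ARM gained,
# both measured at the same AF demagnetization steps.
def pseudo_thellier_slope(nrm_remaining, arm_gained):
    """Return the slope of NRM lost against ARM gained."""
    n = len(nrm_remaining)
    nrm0 = nrm_remaining[0]
    x = arm_gained
    y = [nrm0 - m for m in nrm_remaining]  # NRM lost at each AF step
    mean_x = sum(x) / float(n)
    mean_y = sum(y) / float(n)
    num = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    den = sum((xi - mean_x) ** 2 for xi in x)
    return num / den
```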
- build_ams(db_cx, depthLookup, k15fileName)
- Parse Jeff Gee's kappabridge format file.
Warning: only 2 digits are used for the year, so this is
not Y2.1K compliant. People at the end of this century
will hate us.
id - Unique database id; has no other meaning
samplename - The user specified name in the k15 file
user - Who did the measuring on the kappabridge
datemeasured - Local time that the sample was measured on the KLY-2
cruise - The cruise identifier (e.g. bp04 == bpsio-04)
corenum - Number of the core for that cruise. 1 is the first core collected.
coretype - g == gravity, m == multicore, b == boxcore, p == piston, t == trigger
corehalf - w == working, a == archive half
section - section 1 is at the top (closest to the water/bottom interface)
sectionoffset - cm offset down from the top of that section
the number for pmag cubes refers to the top side of the cube
depth - Depth below the water/bottom interface in cm
counts - This MAY be the average value read by the kappabridge in counts; no range multiplier has been applied
sampleholder - SI value of the sample holder
k1 .. k15 - The raw data: the 15 Jelinek positions from the kappabridge
s1 .. s6 - The results from k15_s
sigma - a.k.a. s[7]; the sigma for the six s values
##### HEXT SECTION #####
bulksusc REAL
F REAL
F12 REAL
F23 REAL
tau1 REAL
dec1 REAL
inc1 REAL
tau2 REAL
dec2 REAL
inc2 REAL
tau3 REAL
dec3 REAL
inc3 REAL
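The column list above can be summarized as a CREATE TABLE statement. A sketch of the implied schema, assuming the types shown (the Hext columns are listed with REAL above; the types on the other columns are guesses from their descriptions):

```python
import sqlite3

# Sketch of the 'ams' table schema implied by the field list above.
# Column names follow the docstring; exact types are assumptions.
k_cols = ", ".join("k%d REAL" % i for i in range(1, 16))
s_cols = ", ".join("s%d REAL" % i for i in range(1, 7))
AMS_SCHEMA = """
CREATE TABLE ams (
    id INTEGER PRIMARY KEY,
    samplename TEXT, user TEXT, datemeasured TEXT,
    cruise TEXT, corenum INTEGER, coretype TEXT, corehalf TEXT,
    section INTEGER, sectionoffset REAL, depth REAL,
    counts REAL, sampleholder REAL,
    %s,
    %s, sigma REAL,
    bulksusc REAL, F REAL, F12 REAL, F23 REAL,
    tau1 REAL, dec1 REAL, inc1 REAL,
    tau2 REAL, dec2 REAL, inc2 REAL,
    tau3 REAL, dec3 REAL, inc3 REAL
)
""" % (k_cols, s_cols)

db_cx = sqlite3.connect(":memory:")
db_cx.execute(AMS_SCHEMA)
```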
- build_ams_geo(db_cx)
- Builds an ams_geo table: the ams data rotated into the geographic reference frame.
The ams_geo table does not have the k15 or s values, since those cannot be rotated yet.
This applies the rotation from the sections table to rotate to
north and to cope with a swapped archive/working half.
db_cx - open sqlite connection that has a valid ams table
FIX: make this use a dictionary rather than a list for handling the new key,value pairs
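The FIX note suggests carrying rows as dictionaries rather than lists of pairs. A minimal sketch of the idea, assuming rows currently travel as [(key, value), ...] lists (row_as_dict is a hypothetical helper):

```python
# Hypothetical helper: convert a [(key, value), ...] row into a dict
# so updates become O(1) key assignments instead of list scans.
def row_as_dict(pairs):
    """Build a dict from a list of (key, value) tuples."""
    return dict(pairs)

row = row_as_dict([("dec1", 12.5), ("inc1", 45.0)])
row["dec1"] = 102.5  # rotating to geographic is then a simple key update
```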
- build_cals7k2(db_cx, filename)
- Add the Korte and Constable (CALS7K.2) model data for the Santa Barbara location.
Parameters:
db_cx --- open sqlite database connection.
- build_coreloc(db_cx)
- build_depth_age(db_cx, filename)
- Add the Berger et al. (2004) depth-to-age table. Depth is in cm.
- build_mag(db_cx, depthLookup, magfileName)
- Create the magnetometer data table: NRM, AF demag, etc.
- build_mag_geo(db_cx)
- Takes the mag table and builds a mag_geo table with all the
cores rotated according to the values in the sections table.
db_cx - open sqlite connection that has a valid mag table
FIX: make this use a dictionary rather than a list for handling the new key,value pairs
- build_sections(db_cx)
- Records how long each core section is. Cores are split into sections to fit into D-tubes.
Here is what I did for the TTN136B cruise:
http://schwehr.org/TTN136B/Data/core_liner_lengths.txt
id - SQL unique id number in this table
cruise - bp04 for BPSIO-04, unique id for the cruise
corenum - The number of the core in that cruise. Core "1" is the first core
section - Which section of the core. 1 is the top at the bottom/water interface. 2 is deeper, etc
sectopdepth - Add this number to sectionoffset to get the actual depth of a sample (units are cm)
sectionlength - Length of this piece of mud in cm.
Section lengths are calculated from the core description
sheets plus the total length from processCore.
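The sectopdepth description above amounts to a single addition. A sketch, where sample_depth is a hypothetical helper, not a function in this module:

```python
# Per the sections table docstring: sample depth (cm) below the
# water/bottom interface is the section's top depth plus the sample's
# offset within that section.
def sample_depth(sectopdepth_cm, sectionoffset_cm):
    """Return the sample depth in cm."""
    return sectopdepth_cm + sectionoffset_cm
```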
- build_weights(db_cx, weightsfileName)
- Read in a weights file of sample name, weight in grams, and date and time.
The weights table is available here:
http://schwehr.org/Gaviota/bpsio-Aug04/core-sampling/weights.dat
- findListKey(list, key)
- For a list of tuples where the first item in each tuple is the key,
return the first tuple whose key matches.
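A minimal sketch of what findListKey appears to do, assuming first-match semantics and that the no-match behavior (returning None here) is an open question:

```python
# Sketch of findListKey: scan a [(key, value), ...] list and return the
# first tuple whose first item equals key.
def findListKey(lst, key):
    """Return the first matching tuple, or None if no key matches."""
    for item in lst:
        if item[0] == key:
            return item
    return None  # assumption: no-match behavior is not documented
```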
- help_pysqlite()
- Open a web browser on OSX with the pysqlite web page
- help_sqlite()
- Open a web browser on OSX with the sqlite web page
- makeSqlDate(date, time, ampm)
- Convert ' 1/3/05 4:33 PM' -> '2005-03-01 16:33:00'
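A sketch of the conversion above using time.strptime. Note that the docstring example maps ' 1/3/05' to '2005-03-01', which implies day/month/year input ordering; that ordering is an assumption carried into this sketch:

```python
import time

# Sketch of makeSqlDate: ' 1/3/05', '4:33', 'PM' -> '2005-03-01 16:33:00'
# (assumes day/month/year ordering, per the docstring example).
def makeSqlDate(date, tm, ampm):
    """Convert a d/m/yy date plus 12-hour time to an SQL datetime string."""
    parsed = time.strptime(
        "%s %s %s" % (date.strip(), tm.strip(), ampm.strip()),
        "%d/%m/%y %I:%M %p")
    return time.strftime("%Y-%m-%d %H:%M:%S", parsed)
```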
- removeByName(list, key)
- Assuming a list of [(key,value),(key,value),...],
remove the first tuple with that key name and
return the new list.
Very inefficient.
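A sketch of removeByName's behavior as described (first match removed, new list returned; the original notes it is inefficient, which a linear scan like this is for large lists):

```python
# Sketch of removeByName: drop the first (key, value) pair whose key
# matches, leaving later duplicates in place.
def removeByName(lst, key):
    """Return a new list with the first matching pair removed."""
    out = []
    removed = False
    for k, v in lst:
        if not removed and k == key:
            removed = True
            continue
        out.append((k, v))
    return out
```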
- replaceByName(list, key, newVal)
- Assuming a list of [(key,value),(key,value),...],
replace the value for the first tuple with that key.
Ordered-dictionary land.
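A sketch of replaceByName, assuming (as with removeByName) that only the first matching key is affected:

```python
# Sketch of replaceByName: swap in newVal for the first pair whose key
# matches, preserving list order -- the list is used as an ordered dict.
def replaceByName(lst, key, newVal):
    """Return a new list with the first matching value replaced."""
    out = []
    replaced = False
    for k, v in lst:
        if not replaced and k == key:
            out.append((k, newVal))
            replaced = True
        else:
            out.append((k, v))
    return out
```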