- Brief review of Chris’ changes over the last month
- Brief review of Pete’s work on Radioactive Decay and Volatilization
- AI - Chris to incorporate Radioactive Decay into AqFates
- Implementing temperature correction for case -1 Henry's Law Constants
- AI - Pete to add logic to do temperature correction for HLC across the board
- Sorting out particle radius determination for Volatilization
- AI - Pete and Jenna to spend ~1hr investigating new equation for volatilization. If unavailable/impractical, will use existing equation, but will spend another ~1hr fixing it.
- Sorting out the density adjustment when incorporating a constituent into a composite
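The temperature correction for Henry's Law Constants mentioned above is commonly done with a van 't Hoff relation; the sketch below shows the general form only. The function name and the placeholder enthalpy coefficient are illustrative assumptions, not values from these notes — sign conventions and coefficients are compound-specific and would need to be confirmed during implementation.

```python
import math

def hlc_at_temperature(h_ref, t, t_ref=298.15, delta_h_over_r=2400.0):
    """Adjust a Henry's Law constant from t_ref to t (both Kelvin) using a
    van 't Hoff relation.  delta_h_over_r is a compound-specific coefficient
    in Kelvin; 2400.0 here is only a placeholder, not a real value."""
    # H(T) = H_ref * exp(-C * (1/T - 1/T_ref)); for C > 0 the (dimensionless,
    # gas/aqueous) constant increases with temperature.
    return h_ref * math.exp(-delta_h_over_r * (1.0 / t - 1.0 / t_ref))
```

At the reference temperature the function returns the reference constant unchanged, which is a convenient sanity check for whatever form is ultimately adopted.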
from collections.abc import Hashable

try:
    import numpy as np
    has_np = True
except ImportError:
    has_np = False

def hash_any(value, hv=None):
    # Accumulate a hash over nested, possibly unhashable values.
    # NOTE: the body below `hv = hv or 0` was truncated in the source;
    # this completion is illustrative.
    hv = hv or 0
    if has_np and isinstance(value, np.ndarray):
        hv ^= hash(value.tobytes())
    elif isinstance(value, Hashable):
        hv ^= hash(value)
    else:
        for item in value:
            hv = hash_any(item, hv)
    return hv
##Test Reuse via Multiple Inheritance
This pattern allows reuse of the same set of "core" tests across many concrete implementations of a class hierarchy. The core tests exercise shared functionality, while concrete test classes provide the specifics for each concrete implementation. In conjunction with the get_props decorator, this pattern provides a very high degree of test reuse (and therefore increased test coverage) without sacrificing specificity for concrete implementations. A new concrete test class gains all of the base tests with only a relatively small amount of property configuration.
##get_props Decorator
Evaluates the contents of TESTING_PROPERTIES on a test-by-test basis and provides a self.method_name.props attribute on the method containing a dictionary of properties.
The TESTING_PROPERTIES dictionary can be extended/amended by subclasses to provide specific properties on a class-wide and/or test-by-test basis.
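A minimal sketch of how these two pieces can fit together. The decorator internals, class names, and property keys below are illustrative assumptions, not the project's actual implementation:

```python
import functools
import unittest

def get_props(func):
    """Illustrative get_props sketch: resolve TESTING_PROPERTIES for this
    test and expose the result as a .props attribute on the method."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        props = dict(self.TESTING_PROPERTIES.get('defaults', {}))     # class-wide
        props.update(self.TESTING_PROPERTIES.get(func.__name__, {}))  # per-test
        wrapper.props = props  # readable inside the test as self.<name>.props
        return func(self, *args, **kwargs)
    return wrapper

class CoreTests:
    """Core tests shared by every concrete implementation (mixin)."""
    TESTING_PROPERTIES = {'defaults': {'expected_len': 0}}

    def make_obj(self):
        raise NotImplementedError('concrete test classes must override')

    @get_props
    def test_len(self):
        assert len(self.make_obj()) == self.test_len.props['expected_len']

class ListTests(CoreTests, unittest.TestCase):
    """Concrete test class: amends TESTING_PROPERTIES for its implementation."""
    TESTING_PROPERTIES = {'defaults': {'expected_len': 3}}

    def make_obj(self):
        return [1, 2, 3]
```

Each additional concrete implementation only needs to subclass the core tests, supply its factory method, and override the relevant properties; every core test then runs against it automatically.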
Hi all,
Luke and I just discussed the workflow for calculating Derived Parameters that require input from multiple Data Products – as is the case for calculating the L2 DOCONCS from DOSTA and CTD inputs (see section 2.2.2 of [DOCONCS DataProduct spec] [2]).
While the combinatorial functionality required at the Coverage Model level is already in place, there are some gaps in the resource, association, and preload layers of the system required to orchestrate this type of DataProduct. Here are the gaps that would need to be filled to realize an AggregateDataProduct (working name):
- Resource stuff:
- New resource – AggregateDataProduct: extends DataProduct
- Adds fields:
- complex_coverage_type or
quality_flag: !!python/tuple
- 15
- variability: BOTH
  _derived_from_name: quality_flag
  display_name: ''
  description: ''
  extension: {}
  name: quality_flag
  reference_urls: ''
  precision: ''
#!/usr/bin/env python
"""
@package
@file memory_trials.py
@author Christopher Mueller
@brief
"""
from coverage_model import *
#!/usr/bin/env python
import gevent
import gc
from load_datasets import *
from coverage_model import AbstractCoverage, ViewCoverage, ParameterContext, ParameterFunctionType, create_guid

def get_mem():
    import resource
    # Peak RSS scaled by 1024**2; note ru_maxrss units are platform-dependent
    # (kilobytes on Linux, bytes on OS X)
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / (1024.0**2)
import gevent
import gc
from load_datasets import *

def get_mem():
    import resource
    # Peak RSS scaled by 1024**2 (ru_maxrss units are platform-dependent)
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / (1024.0**2)

data_product_id, stream_id, route, stream_def_id, dataset_id = create_data_product('ctd_parsed_param_dict')
populate_data_product(data_product_id, 1)
##Other
- Consider a technology (e.g. Vagrant) for enabling a common development environment among developers
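For reference, a Vagrant-based common environment can be as small as a single Vagrantfile checked into the repository root; everything below (box name, provisioning command) is a placeholder sketch, not a project decision:

```ruby
# Minimal illustrative Vagrantfile -- box and provisioning are placeholders.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"        # base image (placeholder)
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update -y                      # shared setup steps would go here
  SHELL
end
```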
##Coverage Model
- Current usage of attributes in HDF may be a bad choice for a few reasons and should be reevaluated and/or replaced with another mechanism for storing attributes:
- After many ingestion events, metadata files end up very large and mostly 'unallocated' (i.e. empty) due to B-Tree change history
- May leak memory (unverified)
- May not be particularly fast (unverified)